---
abstract: 'The ABSTRACT is to be in fully-justified italicized text, at the top of the left-hand column, below the author and affiliation information. Use the word “Abstract” as the title, in 12-point Times, boldface type, centered relative to the column, initially capitalized. The abstract is to be in 10-point, single-spaced type. Leave two blank lines after the Abstract, then begin the main text. Look at previous ICCV abstracts to get a feel for style and length.'
author:
- |
First Author\
Institution1\
Institution1 address\
[firstauthor@i1.org]{}
- |
Second Author\
Institution2\
First line of institution2 address\
[secondauthor@i2.org]{}
bibliography:
- 'egbib.bib'
title: LaTeX Author Guidelines for ICCV Proceedings
---
Introduction
============
Please follow the steps outlined below when submitting your manuscript to the IEEE Computer Society Press. This style guide now has several important modifications (for example, you are no longer warned against the use of sticky tape to attach your artwork to the paper), so all authors should read this new version.
Language
--------
All manuscripts must be in English.
Dual submission
---------------
Please refer to the author guidelines on the ICCV 2019 web page for a discussion of the policy on dual submissions.
Paper length
------------
Papers, excluding the references section, must be no longer than eight pages in length. The references section will not be included in the page count, and there is no limit on the length of the references section. For example, a paper of eight pages with two pages of references would have a total length of 10 pages. [**There will be no extra page charges for ICCV 2019.**]{}
Overlength papers will simply not be reviewed. This includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this LaTeX guide already sets figure captions and references in a smaller font. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in eight pages if it is reviewed in eleven.
The ruler
---------
The LaTeX style defines a printed ruler which should be present in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document using a non-LaTeX document preparation system, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. (LaTeX users may uncomment the `\iccvfinalcopy` command in the document preamble.) Reviewers: note that the ruler measurements do not align well with lines in the paper — this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. Just use fractional references (e.g. this line is $095.5$), although in most cases one would expect that the approximate location will be adequate.
Mathematics
-----------
Please number all of your sections and displayed equations. It is important for readers to be able to refer to any particular equation. Just because you didn’t refer to it in the text doesn’t mean some future reader might not need to refer to it. It is cumbersome to have to use circumlocutions like “the equation second from the top of page 3 column 1”. (Note that the ruler will not be present in the final copy, so is not an alternative to equation numbers). All authors will benefit from reading Mermin’s description of how to write mathematics: <http://www.pamitc.org/documents/mermin.pdf>.
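For LaTeX users, the standard pattern is a labelled `equation` environment referenced with `\eqref`; the label and formula below are only illustrative, and `\eqref` requires the `amsmath` package:

```latex
\begin{equation}
  E = m c^2
  \label{eq:energy}
\end{equation}
% Later in the text:
% Substituting into Eq.~\eqref{eq:energy} gives ...
```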
Blind review
------------
Many authors misunderstand the concept of anonymizing for blind review. Blind review does not mean that one must remove citations to one’s own work—in fact it is often impossible to review a paper unless the previous citations are known and available.
Blind review means that you do not use the words “my” or “our” when citing previous work. That is all. (But see below for techreports.)
Saying “this builds on the work of Lucy Smith \[1\]” does not say that you are Lucy Smith; it says that you are building on her work. If you are Smith and Jones, do not say “as we show in \[7\]”, say “as Smith and Jones show in \[7\]” and at the end of the paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
> An analysis of the frobnicatable foo filter.
>
> In this paper we present a performance analysis of our previous paper \[1\], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me.
>
> \[1\] Removed for blind review
An example of an acceptable paper:
> An analysis of the frobnicatable foo filter.
>
> In this paper we present a performance analysis of the paper of Smith \[1\], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me.
>
> \[1\] Smith, L and Jones, C. “The frobnicatable foo filter, a fundamental contribution to human knowledge”. Nature 381(12), 1-213.
If you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work. In such cases, include the anonymized parallel submission [@Authors14] as additional material and cite it as
> \[1\] Authors. “The frobnicatable foo filter”, F&G 2014 Submission ID 324, Supplied as additional material [fg324.pdf]{}.
Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report. For conference submissions, the paper must stand on its own, and not [*require*]{} the reviewer to go to a techreport for further details. Thus, you may say in the body of the paper “further details may be found in [@Authors14b]”. Then submit the techreport as additional material. Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which is widely known to be restricted to a single institution. For example, let’s say it’s 1969, you have solved a key problem on the Apollo lander, and you believe that the ICCV70 audience would like to hear about your solution. The work is a development of your celebrated 1968 paper entitled “Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties”, by Zeus [*et al.*]{}.
You can handle this paper like any other. Don’t write “We show how to improve our previous work \[Anonymous, 1968\]. This time we tested the algorithm on a lunar lander \[name of lander removed for blind review\]”. That would be silly, and would immediately identify the authors. Instead write the following:
> We describe a system for zero-g frobnication. This system is new because it handles the following cases: A, B. Previous systems \[Zeus et al. 1968\] didn’t handle case B properly. Ours handles it by including a foo term in the bar integral.
>
> ...
>
> The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don’t you know. It displayed the following behaviours which show how well we solved cases A and B: ...
As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors. A reviewer might think it likely that the new paper was written by Zeus [*et al.*]{}, but cannot make any decision based on that guess. He or she would have to be sure that no other authors could have been contracted to solve problem B.
FAQ\
[**Q:**]{} Are acknowledgements OK?\
[**A:**]{} No. Leave them for the final copy.\
[**Q:**]{} How do I cite my results reported in open challenges? [**A:**]{} To conform with the double blind review policy, you can report results of other challenge participants together with your results in your paper. For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\
Miscellaneous
-------------
Compare the following:\
--------------------- -------------------
`$conf_a$` $conf_a$
`$\mathit{conf}_a$` $\mathit{conf}_a$
--------------------- -------------------
See The TeXbook, p165.
The space after [*e.g.*]{}, meaning “for example”, should not be a sentence-ending space. So `\eg` is correct, a manually typed [*e.g.*]{} is not. The provided `\eg` macro takes care of this.
When citing a multi-author paper, you may save space by using “et alia”, shortened to “[*et al.*]{}” (not “[*et. al.*]{}” as “[*et*]{}” is a complete word). However, use it only when there are three or more authors. Thus, the following is correct: “Frobnication has been trendy lately. It was introduced by Alpher [@Alpher02], and subsequently developed by Alpher and Fotheringham-Smythe [@Alpher03], and Alpher [*et al.*]{} [@Alpher04].”
This is incorrect: “... subsequently developed by Alpher [*et al.*]{} [@Alpher03] ...” because reference [@Alpher03] has just two authors. If you use the `\etal` macro provided, then you need not worry about double periods when used at the end of a sentence, as in Alpher [*et al.*]{}.
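For reference, macros of this kind can be defined as follows. These are illustrative, simplified definitions, not necessarily those in the ICCV style file, which ships its own `\eg` and `\etal` (handling edge cases such as a following period):

```latex
% Illustrative only; the ICCV style file provides its own versions.
\newcommand{\eg}{e.g.\ }          % "\ " forces an interword space,
                                  % not a sentence-ending space
\newcommand{\etal}{\emph{et al.}} % italicized, with a single period
```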
For this citation style, keep multiple citations in numerical (not chronological) order, so prefer [@Alpher03; @Alpher02; @Authors14] to [@Alpher02; @Alpher03; @Authors14].
Formatting your paper
=====================
All text must be in a two-column format. The total allowable width of the text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54 cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the first page) should begin 1.0 inch (2.54 cm) from the top edge of the page. The second and following pages should begin 1.0 inch (2.54 cm) from the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4 paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the page.
Margins and page numbering
--------------------------
All printed material, including text, illustrations, and charts, must be kept within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm) high.
Type-style and fonts
--------------------
Wherever Times is specified, Times Roman may also be used. If neither is available on your word processor, please use the font closest in appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of the first page. The title should be in Times 14-point, boldface type. Capitalize the first letter of nouns, pronouns, verbs, adjectives, and adverbs; do not capitalize articles, coordinate conjunctions, or prepositions (unless the title begins with such a word). Leave two blank lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title and printed in Times 12-point, non-boldface type. This information is to be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use double-spacing. All paragraphs should be indented 1 pica (approx. 1/6 inch or 0.422 cm). Make sure your text is fully justified—that is, flush left and flush right. Please do not place any additional blank lines between paragraphs.
Figure and table captions should be 9-point Roman type as in Figures \[fig:onecol\] and \[fig:short\]. Short captions should be centred.
Callouts should be 9-point Helvetica, non-boldface type. Initially capitalize only the first word of section titles and first-, second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, [**1. Introduction**]{}) should be Times 12-point boldface, initially capitalized, flush left, with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, [ **1.1. Database elements**]{}) should be Times 11-point boldface, initially capitalized, flush left, with one blank line before, and one after. If you require a third-order heading (we discourage it), use 10-point Times, boldface, initially capitalized, flush left, preceded by one blank line, followed by a period and your text on the same line.
Footnotes
---------
Please use footnotes[^1] sparingly. Indeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence). If you wish to use a footnote, place it at the bottom of the column on the page on which it is referenced. Use Times 8-point type, single-spaced.
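If a footnote really is needed, the plain LaTeX command suffices; the style file takes care of the 8-point, single-spaced formatting. The sentence below is just an example:

```latex
Avoid footnotes where possible.\footnote{A footnote placed at the
  bottom of the column in which it is referenced.}
```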
References
----------
List and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper. When referenced in the text, enclose the citation number in square brackets, for example [@Authors14]. Where appropriate, include the name(s) of editors of referenced books.
Method Frobnability
-------- ------------------------
Theirs Frumpy
Yours Frobbly
Ours Makes one’s heart Frob
: Results. Ours is better.
Illustrations, graphs, and photographs
--------------------------------------
All graphics should be centered. Please ensure that any point you wish to make is resolvable in a printed copy of the paper. Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print. Many readers (and reviewers), even of an electronic copy, will choose to print your paper in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic.
When placing figures in LaTeX, it’s almost always best to use `\includegraphics`, and to specify the figure width as a multiple of the line width as in the example below
    \usepackage[dvips]{graphicx} ...
    \includegraphics[width=0.8\linewidth]{myfile.eps}
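A complete figure environment built around this command might look as follows; the file name is the guide’s placeholder and the caption text is illustrative:

```latex
\begin{figure}[t]
  \centering
  \includegraphics[width=0.8\linewidth]{myfile.eps}
  \caption{Example of a short caption, which should be centered.}
  \label{fig:onecol}
\end{figure}
```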
Color
-----
Please refer to the author guidelines on the ICCV 2019 web page for a discussion of the use of color in your document.
Final copy
==========
You must include your signed IEEE copyright release form when you submit your finished paper. We MUST have this form before your paper can be published in the proceedings.
Please direct any questions to the production editor in charge of these proceedings at the IEEE Computer Society Press: Phone (714) 821-8380, or Fax (714) 761-1784.
[^1]: This is what a footnote looks like. It often distracts the reader from the main flow of the argument.
---
abstract: 'We give a short proof of the Cwikel–Lieb–Rozenblum (CLR) bound on the number of negative eigenvalues of Schrödinger operators. The argument, which is based on work of Rumin, leads to remarkably good constants and applies to the case of operator-valued potentials as well. Moreover, we obtain the general form of Cwikel’s estimate about the singular values of operators of the form $f(X) g(-i\nabla)$.'
address: 'Rupert L. Frank, Department of Mathematics, Princeton University, Princeton, NJ 08544, USA'
author:
- 'Rupert L. Frank'
title: 'Cwikel’s theorem and the CLR inequality'
---
Introduction
============
Among the most beautiful theorems in spectral theory is Cwikel’s result about trace ideal properties of operators on $L_2({\mathbb{R}}^d)$ of the form $f(X)g(-i\nabla)$. Here $f(X)$ denotes multiplication by the function $f$ in position space and $g(-i\nabla)$ denotes multiplication by $g$ in momentum space. Cwikel’s theorem says that $f\in L_q({\mathbb{R}}^d)$ and $g\in L_{q,w}({\mathbb{R}}^d)$ implies $f(X) g(-i\nabla)\in \mathfrak{S}_{q,w}(L_2({\mathbb{R}}^d))$ for $q>2$. (We recall the definition of weak $L_q$ and weak $\mathfrak S_q$ spaces below.) This was conjectured by Simon in [@Si] and proved by Cwikel in [@Cw]; see also the review [@BiKaSo] for some extensions of this result.
An immediate consequence of Cwikel’s theorem is the famous Cwikel–Lieb–Rozenblum bound on the number $N(0,-\Delta+V)$ of negative eigenvalues (counting multiplicities) of Schrödinger operators $-\Delta+V$ in $L_2({\mathbb{R}}^d)$, $d\geq 3$, namely, $$\label{eq:clr}
N(0,-\Delta+V) \leq {\mathrm{const}\ }\int_{{\mathbb{R}}^d} V(x)_-^{d/2} \,dx \,.$$ Here $V(x)_-=\max\{-V(x),0\}$ denotes the negative part. The meaning of this bound is that the semi-classical approximation, $$\iint_{{\mathbb{R}}^d\times{\mathbb{R}}^d} \chi_{\{ p^2 + V(x)<0 \}} \frac{dx\,dp}{(2\pi)^d} = (2\pi)^{-d} |\{ p\in{\mathbb{R}}^d:\ |p|<1\}| \int_{{\mathbb{R}}^d} V(x)_-^{d/2} \,dx \,,$$ is, indeed, a uniform upper bound on $N(0,-\Delta+V)$ up to a universal constant depending only on the dimension. Different proofs of \[eq:clr\] were given in [@Ro; @Li; @Fe; @LiYa; @Co]; see also the reviews [@LaWe2; @H2].
One of our goals here is to provide a new and simple proof of Cwikel’s theorem and the CLR inequality. Our starting point is the remarkable paper [@Ru1] by Rumin which contains, among other things, the inequality $$\label{eq:ruminrd}
\operatorname{Tr}\gamma^{1/2}(-\Delta)\gamma^{1/2} \geq {\mathrm{const}\ }\int_{{\mathbb{R}}^d} \gamma(x,x)^{d/(d-2)} \,dx$$ for operators $0\leq\gamma\leq (-\Delta)^{-1}$ on $L_2({\mathbb{R}}^d)$, $d\geq 3$. As we shall see, this is a very powerful inequality (for instance, for $\gamma$ of rank one, it reduces to Sobolev’s inequality). Surprisingly, its proof is elementary and uses not much more than the triangle inequality for the Hilbert–Schmidt norm. It also yields a rather good value for the constant. In this paper we shall derive the CLR inequality from \[eq:ruminrd\] and we shall extend \[eq:ruminrd\] to $L_2({\mathbb{R}}^d)\otimes\mathcal G$ with constants independent of the dimension of the auxiliary Hilbert space $\mathcal G$. Both results are new and go beyond [@Ru1; @Ru]. Our results in the operator-valued case improve upon previous results of [@H1] (who follows [@Cw] and has larger constants) and [@FrLiSe1] (who can only deal with $(-\Delta)^s$ for $0<s\leq 1$). Moreover, we show that a modification of Rumin’s proof of \[eq:ruminrd\] yields an easy proof of Cwikel’s theorem mentioned at the beginning. This is the topic of Section \[sec:cwik\].
Besides its simplicity and its good constants, another advantage of Rumin’s inequality is that it is not limited to the Laplacian (or its powers) on ${\mathbb{R}}^d$, but has extensions to a large class of abstract operators. Roughly speaking, the only assumption is the existence of a density of states, and the energy dependence of this density of states determines the way in which $\gamma(x,x)$ enters the right side of \[eq:ruminrd\]. This generality of [@Ru1; @Ru] was of crucial importance for the results in [@FrOl; @FrLeLiSe]. In this paper we do not aim at highest possible generality, but we do include a new theorem about operators $T$ on arbitrary measure spaces $X$. We prove that a diagonal heat kernel bound $\exp(-tT)(x,x) \leq C t^{-\nu/2}$ with $\nu>2$ implies a CLR inequality $N(0,T+V) \leq C_\nu' C \int_X V_-^{\nu/2} \,dx$; see Theorem \[cwikelgen\]. This improves earlier results in [@LeSo; @FrLiSe2] who needed the additional assumption that $\exp(-tT)$ is positivity preserving.
In addition to deriving the CLR inequality from \[eq:ruminrd\] we are able to answer the following conceptual question about \[eq:ruminrd\]. Namely, besides the new inequality \[eq:ruminrd\] Rumin’s papers [@Ru1; @Ru] contain a new proof of the inequality $$\label{eq:ltdens}
\operatorname{Tr}\gamma^{1/2}(-\Delta)\gamma^{1/2} \geq {\mathrm{const}\ }\int_{{\mathbb{R}}^d} \gamma(x,x)^{(d+2)/d} \,dx$$ for operators $\gamma$ on $L_2({\mathbb{R}}^d)$ satisfying $0\leq\gamma\leq 1$. Inequality \[eq:ltdens\] is due to Lieb and Thirring [@LiTh] and plays an important role in their proof of stability of matter. It is well known that \[eq:ltdens\] is equivalent to an inequality about eigenvalue sums of Schrödinger operators, namely, $$\label{eq:ltev}
\operatorname{Tr}\left(-\Delta+V\right)_- \leq {\mathrm{const}\ }\int_{{\mathbb{R}}^d} V(x)_-^{(d+2)/2} \,dx \,.$$ By ‘equivalence’ we mean that there is a duality principle between \[eq:ltdens\] and \[eq:ltev\] and that the optimal constant for one inequality determines that for the other inequality. Given the striking similarity between \[eq:ruminrd\] and \[eq:ltdens\] it is natural to ask whether there is an inequality for Schrödinger operators which is equivalent to \[eq:ruminrd\]. We are able to answer this question completely (Lemma \[dual\]) and see that \[eq:ruminrd\] is ‘essentially’ equivalent to the CLR inequality. More precisely, we prove that \[eq:ruminrd\] is equivalent to a bound on the Birman–Schwinger operator $(-\Delta)^{-1/2} V_- (-\Delta)^{-1/2}$ in the weak trace ideal $\mathfrak S_{d/2,w}(L_2({\mathbb{R}}^d))$, however, not with its standard (quasi-)norm but with an equivalent expression (Lemma \[equiv\]).
For the impatient reader who wants to see immediately and without going through various dualities how \[eq:ruminrd\] implies the CLR bound we finish this introduction with a short derivation of \[eq:clr\]. For fixed ${\varepsilon}>0$ we know that the spectrum of $-\Delta+V$ in the interval $(-\infty,-{\varepsilon})$ is finite if $V_-\in L_{d/2}({\mathbb{R}}^d)$. Let $\psi_1,\ldots,\psi_N$ be linearly independent functions which span the corresponding spectral subspace. Our goal will be to prove an upper bound on $N$ in terms of $V$, independently of ${\varepsilon}$. We may assume that the functions are normalized so that $\langle\sqrt{-\Delta}\psi_j,\sqrt{-\Delta}\psi_k\rangle =\delta_{jk}$. Note that with this normalization, the $\psi_j$’s are linear combinations of eigenfunctions but, in general, not eigenfunctions. Since they span the spectral subspace of $-\Delta+V$ corresponding to $(-\infty,-{\varepsilon})$ we know, however, that $\gamma=\sum_j |\psi_j\rangle\langle\psi_j|$ satisfies $$\label{eq:clrproof1}
0 \geq \operatorname{Tr}\gamma^{1/2}(-\Delta+V)\gamma^{1/2} \,.$$ Because of the normalization of the $\psi_j$’s we also know that $$\sqrt{-\Delta}\gamma\sqrt{-\Delta} \leq 1
\qquad\text{and}\qquad
\operatorname{Tr}\gamma^{1/2}(-\Delta)\gamma^{1/2} = N \,.$$ Thus, we infer from \[eq:ruminrd\] that $$N \geq K_d \int_{{\mathbb{R}}^d} \gamma(x,x)^{d/(d-2)} \,dx$$ for some constant $K_d$ and, therefore, that $$\begin{aligned}
\operatorname{Tr}V\gamma & \geq - \int_{{\mathbb{R}}^d} V(x)_- \gamma(x,x)\,dx
\geq - \left( \int_{{\mathbb{R}}^d} V_-^{d/2} \,dx \right)^{2/d} \left( \int_{{\mathbb{R}}^d} \gamma(x,x)^{d/(d-2)} \,dx \right)^{(d-2)/d} \\
& \geq - N^{(d-2)/d} K_d^{-(d-2)/d} \left( \int_{{\mathbb{R}}^d} V_-^{d/2} \,dx \right)^{2/d} \,.\end{aligned}$$ We insert this bound into \[eq:clrproof1\] and get $$0 \geq \operatorname{Tr}\gamma^{1/2}(-\Delta+V)\gamma^{1/2} = N + \operatorname{Tr}V\gamma
\geq N - N^{(d-2)/d} K_d^{-(d-2)/d} \left( \int_{{\mathbb{R}}^d} V_-^{d/2} \,dx \right)^{2/d} \,.$$ Thus, $$N \leq K_d^{-(d-2)/2} \int_{{\mathbb{R}}^d} V_-^{d/2} \,dx \,,$$ independently of ${\varepsilon}$, which proves \[eq:clr\].
Cwikel’s theorem {#sec:cwik}
================
To state our main result we recall that $L_{p,w}({\mathbb{R}}^d)$ denotes the space of functions $a$ on ${\mathbb{R}}^d$ for which the (quasi-)norm $$\| a\|_{p,w}^{p} = \sup_{\tau>0} \tau^{p}\, |\{ |a|>\tau \} |$$ is finite. We shall prove
\[main\] Let $0\leq a\in L_{p,w}({\mathbb{R}}^d)$ and $0\leq b\in L_p({\mathbb{R}}^d)$ for some $p>1$. Then for all $\mu>0$ $$\operatorname{Tr}\left( a(-i\nabla)^{1/2} b(X) a(-i\nabla)^{1/2} - \mu\right)_+ \leq \mu^{-p+1} \ \frac{(p+1)^{p-1}}{(p-1)^p} \ (2\pi)^{-d} \ \| a\|_{p,w}^{p} \| b\|_p^p \,.$$
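A standard example may help to fix ideas about the weak-$L_p$ condition in the theorem (it is not needed for the proof): the function $a(x)=|x|^{-d/p}$ belongs to $L_{p,w}({\mathbb{R}}^d)$ but not to $L_p({\mathbb{R}}^d)$, since for every $\tau>0$

```latex
\left|\left\{ |x|^{-d/p} > \tau \right\}\right|
  = \left|\left\{ |x| < \tau^{-p/d} \right\}\right|
  = \left|\left\{ |x| < 1 \right\}\right| \tau^{-p} ,
```

so $\|a\|_{p,w}^p$ equals the volume of the unit ball, while $\int_{{\mathbb{R}}^d} |a|^p \,dx = \int_{{\mathbb{R}}^d} |x|^{-d} \,dx$ diverges both at the origin and at infinity.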
Before proving this result we shall show that Cwikel’s theorem is an easy consequence of it. We recall that ${\mathfrak{S}}_{q,w}(\mathcal H)$ is the space of compact operators $K$ on a separable Hilbert space $\mathcal H$ satisfying $$\| K \|_{q,w}^q = \sup_{\kappa>0} \kappa^{q}\, n(\kappa, (K^* K)^{1/2} ) <\infty \,.$$ Here $n(\kappa,(K^* K)^{1/2})$ denotes the number of eigenvalues of $(K^* K)^{1/2}$ larger than $\kappa$, counting multiplicities.
\[Cwikel’s theorem\] \[cwikel\] If $f\in L_{q}({\mathbb{R}}^d)$ and $g\in L_{q,w}({\mathbb{R}}^d)$ for some $q>2$, then $f(X)\,g(-i\nabla)\in{\mathfrak{S}}_{q,w}( L_2({\mathbb{R}}^d) )$ with $$\| f(X)\, g(-i\nabla) \|_{q,w}^q \leq \left( \frac{q}{q-2} \right)^{q/2} \left( \frac{q+2}{q-2} \right)^{(q-2)/2} (2\pi)^{-d} \ \| f \|_{q}^{q} \ \| g\|_{q,w}^q \,.$$
In order to deduce Corollary \[cwikel\] from Theorem \[main\] we use the following lemma, which shows that the quantity bounded in Theorem \[main\] is indeed equivalent to the norm in a weak Schatten class.
\[Equivalent quasi-norms\] \[equiv\] Let $K$ be a compact operator on a separable Hilbert space $\mathcal H$ and let $q>2$. Then $K\in {\mathfrak{S}}_{q,w}(\mathcal H)$ iff $$|K |_q' := \left( \sup_{\mu>0} \mu^{q/2-1} \operatorname{Tr}(K^*K-\mu)_+ \right)^{1/q} <\infty\,.$$ Moreover, $$|K|_q' \leq \left(\frac{2}{q-2}\right)^{1/q} \|K\|_{q,w}
\leq \left(\frac{q}{q-2}\right)^{1/2} |K|_q' \,.$$
Since $(E-\mu)_+ = \int_\mu^\infty \chi_{(\sigma,\infty)}(E) \,d\sigma$ we have $$\operatorname{Tr}(K^*K-\mu)_+ = \int_\mu^\infty n(\sqrt\sigma,(K^*K)^{1/2}) \,d\sigma \,.$$ If $\|K\|_{q,w}$ is finite, this is bounded by $$\int_\mu^\infty n(\sqrt\sigma,(K^*K)^{1/2}) \,d\sigma \leq \|K\|_{q,w}^q \int_\mu^\infty \sigma^{-q/2} \,d\sigma = (q/2-1)^{-1} \|K\|_{q,w}^q \mu^{-q/2+1} \,.$$ Thus, $|K|_q' \leq (q/2-1)^{-1/q} \|K\|_{q,w}$. Conversely, since $\chi_{(\kappa^2,\infty)}(E) \leq (\kappa^2-\mu)^{-1} (E-\mu)_+$ for any $\mu<\kappa^2$ we have $$n(\kappa,(K^*K)^{1/2}) \leq (\kappa^2-\mu)^{-1} \operatorname{Tr}(K^*K-\mu)_+ \,.$$ If $|K |_q'$ is finite, this is bounded by $$(\kappa^2-\mu)^{-1} \operatorname{Tr}(K^*K-\mu)_+ \leq (\kappa^2-\mu)^{-1} \mu^{-q/2+1} \left( |K|_q' \right)^q \,.$$ We optimize the right side by choosing $\mu=(1-2/q)\kappa^2$ and obtain $$n(\kappa,(K^*K)^{1/2}) \leq \frac{q}{2} \left(1-\frac2q\right)^{-q/2+1} \kappa^{-q} \left( |K|_q' \right)^q \,,$$ that is, $\|K\|_{q,w} \leq \left(q/2\right)^{1/q} \left(1-2/q\right)^{-1/2+1/q} |K|_q'$, as claimed.
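The optimal $\mu$ in the last step comes from a one-line calculus exercise: minimizing $h(\mu)=(\kappa^2-\mu)^{-1}\mu^{-q/2+1}$ over $\mu\in(0,\kappa^2)$,

```latex
\frac{d}{d\mu} \log h(\mu)
  = \frac{1}{\kappa^2-\mu} - \frac{q/2-1}{\mu} = 0
\qquad\Longleftrightarrow\qquad
\mu = \left( 1 - \frac{2}{q} \right) \kappa^2 .
```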
After applying unitaries, we may assume that $f$ and $g$ are non-negative. We put $K=f(X)g(-i\nabla)$. Applying Theorem \[main\] with $a=g^2$, $b=f^2$ and $p=q/2$ we infer that $$\left( | K |_q'\right)^q = \sup_{\mu>0} \mu^{q/2-1} \operatorname{Tr}(K^*K-\mu)_+ \leq \frac{\left(p+1\right)^{p-1}}{\left(p-1\right)^p} (2\pi)^{-d} \ \| a\|_{p,w}^{p} \| b\|_p^p \,.$$ Lemma \[equiv\] allows us to turn this into a bound for $\|K\|_{q,w}$, which is the statement of Corollary \[cwikel\].
We now turn to the proof of Theorem \[main\]. The variational principle for sums of eigenvalues allows us to reformulate it in a dual form, in which we shall actually prove it. The precise statement is the following. (As usual, we write $p'=p/(p-1)$.)
\[Duality\] \[dual\] Let $A$ be a non-negative operator in $L_2(X)$ (where $X$ is a sigma-finite measure space) with $\ker A=\{0\}$ and let $p>1$. Then the following inequalities are equivalent,
1. $\operatorname{Tr}\left( A^{1/2} b A^{1/2} - \mu\right)_+ \leq D \mu^{-p+1} \int_X b^p \,dx$ for every $0\leq b\in L_p(X)$ and $\mu>0$,
2. $\operatorname{Tr}\gamma^{1/2}A^{-1}\gamma^{1/2} \geq K \int_X \gamma(x,x)^{p'} \,dx$ for every operator $0\leq \gamma\leq A$,
in the sense that the optimal constants $D$ and $K$ are related by $$\left( p\ D\right)^{p'} \left(p'\ K \right)^{p} = 1 \,.$$
This is a consequence of the variational characterization for the expression on the left side of (i), namely, $$\operatorname{Tr}\left( A^{1/2} b A^{1/2} - \mu\right)_+ = \sup_{0\leq\delta\leq 1} \operatorname{Tr}\delta^{1/2} \left( A^{1/2} b A^{1/2} - \mu\right)\delta^{1/2} \,.$$ To prove that (ii) implies (i) we change variables from $\delta$ to $\gamma=A^{1/2}\delta A^{1/2}$. Then the conditions $0\leq\delta\leq 1$ imply that $0\leq\gamma\leq A$, and therefore by (ii), $$\operatorname{Tr}\delta^{1/2} \left( A^{1/2} b A^{1/2} - \mu\right)\delta^{1/2}
=\operatorname{Tr}\gamma^{1/2} \left(b - \mu A^{-1} \right)\gamma^{1/2}
\leq \int_X \left( b \rho - \mu K \rho^{p'} \right) \,dx \,,$$ where $\rho(x)=\gamma(x,x)$. Maximizing the right side over all functions $\rho\geq 0$ (i.e., ignoring the fact that $\rho$ was related to $\gamma$) we find that $$\int_X \left( b \rho - \mu K \rho^{p'} \right) \,dx \leq (K\mu)^{-p+1} \frac{(p-1)^{p-1}}{p^p} \int_X b^p \,dx \,,$$ i.e., (i) holds and the optimal constant satisfies $D \leq K^{-p+1} \frac{(p-1)^{p-1}}{p^p}$. The proof of the converse implication is similar and is omitted.
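The relation between the optimal constants stated in the lemma follows from the bound just derived (with equality in the optimal case) by a direct computation, using $p'=p/(p-1)$ and $(p-1)\,p'=p$:

```latex
D = K^{-p+1}\,\frac{(p-1)^{p-1}}{p^{p}}
\;\Longleftrightarrow\;
p\,D = \bigl( p' K \bigr)^{-(p-1)}
\;\Longleftrightarrow\;
\bigl( p\,D \bigr)^{p'} \bigl( p' K \bigr)^{p} = 1 .
```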
We now prove the dual form of Theorem \[main\]. As we mentioned in the introduction, the proof follows closely some ideas of Rumin [@Ru1; @Ru].
\[rumin\] Let $a\in L_{p,w}({\mathbb{R}}^d)$ with $p>1$ and assume that $a>0$ a.e. Then for any operator $\gamma$ on $L_2({\mathbb{R}}^d)$ satisfying $0 \leq \gamma \leq a(-i\nabla)$, we have $$\operatorname{Tr}\gamma^{1/2}a(-i\nabla)^{-1}\gamma^{1/2} \geq \frac{p-1}{p+1}\ R_{d,p}^{\mathrm{sc}}\ \| a\|_{p,w}^{-p'} \int_{{\mathbb{R}}^d} \gamma(x,x)^{p'} \,dx$$ where $R_{d,p}^{\mathrm{sc}}= (2\pi)^{d/(p-1)} \left(\frac{p-1} p \right)^{p/(p-1)}$.
The superscript ‘sc’ in $R_{d,p}^{\mathrm{sc}}$ stands for ‘semi-classical’. This will be further explored in Section \[sec:concl\].
It is part of the assertion that the assumption $\operatorname{Tr}\gamma^{1/2}a(-i\nabla)^{-1}\gamma^{1/2}<\infty$ implies that the diagonal $\gamma(x,x)$ makes sense for a.e. $x\in{\mathbb{R}}^d$ and belongs to $L_{p'}({\mathbb{R}}^d)$. Note that this diagonal value is well-defined if $\gamma$ is a finite rank operator. Given the bound from the lemma in this case, which is independent of the (finite) rank, the extension to general $\gamma$ can be carried out, for instance, by monotone convergence. We omit the details since the finite rank version is all we need for the proof of Theorem \[main\].
Since $E^{-1}=\int_0^\infty \chi_{(0,\tau]} (E) \tau^{-2} \,d\tau$, the spectral theorem together with Fubini’s theorem implies that $$\label{eq:repr}
\operatorname{Tr}\gamma^{1/2}a(-i\nabla)^{-1}\gamma^{1/2} = \int_0^\infty \operatorname{Tr}\gamma_\tau \, \frac{d\tau}{\tau^2} = \int_{{\mathbb{R}}^d} \int_0^\infty \rho_\tau(x)\, \frac{d\tau}{\tau^{2}} \,dx \,,$$ where $\gamma_\tau = \chi_{(0,\tau]}(a(-i\nabla))\, \gamma\, \chi_{(0,\tau]}(a(-i\nabla))$ and where $\rho_\tau(x)=\gamma_\tau(x,x)$ is its density.
Our next goal is to find a pointwise lower bound on $\rho_\tau$ in terms of $\rho$. To do this, let $\Omega\subset{\mathbb{R}}^d$ be any set of finite measure and note that $$\begin{aligned}
\left( \int_\Omega \rho(x) \,dx \right)^{1/2} \!\! = \| \gamma^{1/2}\chi_\Omega \|_2
\leq \| \gamma^{1/2} \chi_{(0,\tau]}(a(-i\nabla)) \ \chi_\Omega \|_2 + \| \gamma^{1/2} \chi_{(\tau,\infty)}(a(-i\nabla)) \ \chi_\Omega \|_2,\end{aligned}$$ where $\|\cdot\|_2$ is the Hilbert–Schmidt norm. The first term on the right side is $$\| \gamma^{1/2} \chi_{(0,\tau]}(a(-i\nabla)) \ \chi_\Omega \|_2 = \| \gamma_\tau^{1/2} \ \chi_\Omega \|_2
= \left( \int_\Omega \rho_\tau(x) \,dx \right)^{1/2} \,,$$ and the second term, since $\gamma\leq a(-i\nabla)$, is bounded from above by $$\begin{aligned}
\| \gamma^{1/2} \chi_{(\tau,\infty)}(a(-i\nabla))\ \chi_\Omega \|_2 & \leq \| a(-i\nabla)^{1/2} \chi_{(\tau,\infty)}(a(-i\nabla))\ \chi_\Omega \|_2 \\
& = |\Omega|^{1/2} \left( \int_{{\mathbb{R}}^d} a(p) \chi_{\{a>\tau\}}(p) \frac{dp}{(2\pi)^d} \right)^{1/2} \,.\end{aligned}$$ Since $a(p)=\int_0^\infty \chi_{\{ a >\sigma \}}(p) \,d\sigma$ and since $| \{a> t\} | \leq \|a\|_{p,w}^p t^{-p}$, we find $$\begin{aligned}
\int_{{\mathbb{R}}^d} a(p) \chi_{\{a>\tau\}}(p) \,dp & = \int_0^\infty \int_{{\mathbb{R}}^d} \chi_{\{a>\sigma\}}(p) \chi_{\{a>\tau\}}(p) \,dp\,d\sigma
= \int_0^\infty | \{ a> \max\{\sigma,\tau\} \} | \,d\sigma \\
& \leq \|a\|_{p,w}^p \int_0^\infty \min\{\sigma^{-p},\tau^{-p}\} \,d\sigma = \frac p{p-1} \| a\|_{p,w}^p \, \tau^{-p+1} \,.\end{aligned}$$ Thus, we have shown that $$\left( \int_\Omega \rho(x) \,dx \right)^{1/2}
\leq \left( \int_\Omega \rho_\tau(x) \,dx \right)^{1/2} + |\Omega|^{1/2} (2\pi)^{-d/2} \left(\frac p{p-1} \right)^{1/2} \| a\|_{p,w}^{p/2} \, \tau^{-(p-1)/2} \,.$$ Since this is valid for any $\Omega$, Lebesgue’s differentiation theorem implies that $$\rho(x)^{1/2} \leq \rho_\tau(x)^{1/2} + (2\pi)^{-d/2} \left(\frac p{p-1} \right)^{1/2} \| a\|_{p,w}^{p/2} \, \tau^{-(p-1)/2} \quad \text{a.e.} \,,$$ and therefore $$\rho_\tau(x) \geq \left( \rho(x)^{1/2} - (2\pi)^{-d/2} \left(\frac p{p-1} \right)^{1/2} \| a\|_{p,w}^{p/2} \, \tau^{-(p-1)/2} \right)_+^2 \quad \text{a.e.} \,.$$
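As a quick numerical sanity check of the $\sigma$-integration above (purely illustrative, not part of the argument), one can compare a direct quadrature of $\int_0^\infty \min\{\sigma^{-p},\tau^{-p}\}\,d\sigma$ with the closed form $\frac p{p-1}\,\tau^{-p+1}$; the sample values of $p$ and $\tau$ below are arbitrary.

```python
import math

def lhs(p, tau, n=200_000):
    """Quadrature of \\int_0^infty min(sigma^-p, tau^-p) dsigma.

    On (0, tau] the integrand is the constant tau^-p (integrated exactly);
    the tail \\int_tau^infty sigma^-p dsigma is mapped to a bounded interval
    via u = 1/sigma and evaluated by a midpoint rule."""
    head = tau * tau ** (-p)
    b = 1.0 / tau
    h = b / n
    tail = sum(((i + 0.5) * h) ** (p - 2) * h for i in range(n))
    return head + tail

def rhs(p, tau):
    return p / (p - 1) * tau ** (-p + 1)

for p, tau in [(3.0, 2.0), (2.5, 0.7)]:
    assert abs(lhs(p, tau) - rhs(p, tau)) / rhs(p, tau) < 1e-3
```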
Finally, we insert this bound into \eqref{eq:repr} and compute for a.e. $x$ $$\begin{aligned}
& \int_0^\infty \left( \rho(x)^{1/2} - (2\pi)^{-d/2} \left(\frac p{p-1} \right)^{1/2} \| a\|_{p,w}^{p/2} \, \tau^{-(p-1)/2} \right)_+^2 \, \frac{d\tau}{\tau^{2}} \\
& \quad = \rho(x)^{p/(p-1)} (2\pi)^{d/(p-1)} \left(\frac{p-1} p \right)^{p/(p-1)} \| a\|_{p,w}^{-p/(p-1)} \frac{p-1}{p+1} \,.
\qedhere\end{aligned}$$
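The $\tau$-integration performed in the last display can also be checked numerically. In the sketch below, `kappa` is a stand-in for the full constant $(2\pi)^{-d/2}(p/(p-1))^{1/2}\|a\|_{p,w}^{p/2}$, and the substitution $u=1/\tau$ (so that $d\tau/\tau^2 = du$) turns the integral into one over a bounded interval; the numerical values of $p$, $\rho$, `kappa` are arbitrary.

```python
import math

def tau_integral(p, rho, kappa, n=400_000):
    """Midpoint quadrature of
       \\int_0^infty ( sqrt(rho) - kappa * tau^{-(p-1)/2} )_+^2 dtau/tau^2
    after the substitution u = 1/tau."""
    umax = (math.sqrt(rho) / kappa) ** (2.0 / (p - 1))  # integrand vanishes beyond
    h = umax / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        v = math.sqrt(rho) - kappa * u ** ((p - 1) / 2.0)
        total += max(v, 0.0) ** 2 * h
    return total

def closed_form(p, rho, kappa):
    """rho^{p/(p-1)} kappa^{-2/(p-1)} (p-1)^2/(p(p+1)); expanding kappa
    regroups this into the expression displayed at the end of the proof."""
    return (rho ** (p / (p - 1)) * kappa ** (-2.0 / (p - 1))
            * (p - 1) ** 2 / (p * (p + 1)))

for p, rho, kappa in [(3.0, 2.0, 0.7), (2.0, 1.3, 1.1)]:
    num, exact = tau_integral(p, rho, kappa), closed_form(p, rho, kappa)
    assert abs(num - exact) / exact < 1e-3
```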
\[Proof of Theorem \[main\]\] If $a>0$ a.e., then Lemmas \[dual\] and \[rumin\] imply that $$\operatorname{Tr}\left( a(-i\nabla)^{1/2} b(X) a(-i\nabla)^{1/2} - \mu\right)_+
\leq \mu^{-p+1} \left( \frac{p+1}{p-1} \right)^{p-1} D_{d,p}^{\mathrm{sc}}\ \| a\|_{p,w}^{p} \| b\|_p^p \,,$$ where $$D_{d,p}^{\mathrm{sc}}= \frac{(p-1)^{p-1}}{p^p} \ \left( R_{d,p}^{\mathrm{sc}}\right)^{-p+1} \,.$$ For general non-negative $a$ we apply the bound to $a_{\varepsilon}=\max\{a,{\varepsilon}\tilde a\}$, where $\tilde a$ is a fixed, positive function in $L_{p,\infty}({\mathbb{R}}^d)$. Since $$\begin{aligned}
& \operatorname{Tr}\left( a(-i\nabla)^{1/2} b(X) a(-i\nabla)^{1/2} - \mu\right)_+
= \operatorname{Tr}\left( b(X)^{1/2} a(-i\nabla) b(X)^{1/2} - \mu\right)_+ \\
& \quad \leq \operatorname{Tr}\left( b(X)^{1/2} a_{\varepsilon}(-i\nabla) b(X)^{1/2} - \mu\right)_+
= \operatorname{Tr}\left( a_{\varepsilon}(-i\nabla)^{1/2} b(X) a_{\varepsilon}(-i\nabla)^{1/2} - \mu\right)_+ \,,\end{aligned}$$ the assertion follows from the bound for $a_{\varepsilon}$ and the fact that $\lim \| a_{\varepsilon}\|_{p,w} = \|a\|_{p,w}$.
Generalizations
===============
An operator-valued version of Cwikel’s theorem
----------------------------------------------
The works [@La; @LaWe1] have made clear that good constants in CLR and related inequalities in higher dimensions can be derived from operator-valued versions of these inequalities in lower dimensions. In the case of Cwikel’s theorem this strategy was implemented in [@H1]. The constant in the CLR inequality for Schrödinger operators with matrix-valued potentials was improved in [@FrLiSe1]. In this subsection we show that Rumin’s proof can also be modified to yield an operator-valued version of Cwikel’s theorem. This extension is not straightforward and leads, unfortunately, to a somewhat worse constant than that in Corollary \[cwikel\].
Another thing that we show in this subsection is that the structure of ${\mathbb{R}}^d$ is not really relevant for Cwikel’s theorem. Indeed, our theorem holds on a general pair of measure spaces, with the role of the Fourier transform being played by a general unitary operator with bounded integral kernel. Results in this spirit have already appeared in [@BiKaSo], but it is not clear whether their techniques also apply in the operator-valued case.
We begin with some notations. In this subsection, let $(X,dx)$ and $(Y,dy)$ be sigma-finite measure spaces and let $\mathcal H$ and $\mathcal G$ be separable Hilbert spaces. We denote by $L_{p}(X,{\mathfrak{S}}_p(\mathcal H))$ the space of all measurable functions $f$ on $X$ with values in the compact operators in $\mathcal H$ such that $$\| f\|_{L_p({\mathfrak{S}}_p)}^p = \int_X \| f(x)\|_{{\mathfrak{S}}_p(\mathcal H)}^p \,dx <\infty \,.$$ Similarly, $L_{p,w}(Y,{\mathfrak{B}}(\mathcal G))$ is the space of all measurable functions $g$ on $Y$ with values in the bounded operators on $\mathcal G$ such that $$\| g\|_{L_{p,w}({\mathfrak{B}})}^p = \sup_{\tau>0} \tau^p \, |\{ y\in Y:\ \|g(y)\|_{{\mathfrak{B}}(\mathcal G)} >\tau \} | \,.$$
\[Operator-valued version of Cwikel’s theorem\] \[cwikelop\] Let $\Phi: L_2(X,\mathcal H)\to L_2(Y,\mathcal G)$ be a unitary operator, which maps $L_1(X,\mathcal H)$ boundedly into $L_\infty(Y,\mathcal G)$. Let $q>2$. If $f\in L_{q}(X,{\mathfrak{S}}_q(\mathcal H))$, $g\in L_{q,w}(Y,{\mathfrak{B}}(\mathcal G))$, then $f\,\Phi^*\,g\in{\mathfrak{S}}_{q,w}( L_2(Y,\mathcal G),L_2(X,\mathcal H) )$ with $$\| f\, \Phi^* \, g \|_{q,w}^q \leq \frac{q}2 \left( \frac{q}{q-2} \right)^{q-1} C^2 \ \| f \|_{L_{q}({\mathfrak{S}}_q)}^{q} \ \| g\|_{L_{q,w}({\mathfrak{B}})}^q \,,$$ where $C = \|\Phi\|_{L_1\to L_\infty}$.
This constant is worse than that of Corollary \[cwikel\] by a factor of $\frac{q}2 \left( \frac{q}{q+2} \right)^{(q-2)/2}>1$. It is still better, by a factor of $2^{2q-5} q$, than the constant $$\left( \frac{q}{2} \right)^q \left( \frac 8{q-2} \right)^{q-2} \frac q{q-2}\ C^2$$ from [@H1] (which is the same as in [@Cw] in the scalar case).
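The two comparison factors quoted above can be confirmed by direct computation. The sketch below checks, for a few values of $q>2$, that the ratio of the constant from [@H1] to the one in Theorem \[cwikelop\] is exactly $2^{2q-5}q$, and that the loss factor relative to Corollary \[cwikel\] exceeds $1$ (the common factor $C^2$ and the norms are set to $1$).

```python
def new_const(q):
    """Constant of Theorem [cwikelop] (with C = norms = 1)."""
    return (q / 2) * (q / (q - 2)) ** (q - 1)

def hundertmark_const(q):
    """Constant from [H1], as displayed above (with C = norms = 1)."""
    return (q / 2) ** q * (8 / (q - 2)) ** (q - 2) * q / (q - 2)

def loss_factor(q):
    """Claimed loss relative to Corollary [cwikel]."""
    return (q / 2) * (q / (q + 2)) ** ((q - 2) / 2)

for q in [2.5, 3.0, 4.0, 7.0, 12.0]:
    ratio = hundertmark_const(q) / new_const(q)
    assert abs(ratio - 2 ** (2 * q - 5) * q) / ratio < 1e-12
    assert loss_factor(q) > 1
```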
The heart of the proof is the following analogue of Lemma \[rumin\]. Namely, if $p>1$ and $a\in L_{p,w}(Y,{\mathfrak{B}}(\mathcal G))$ with $a(y)\geq 0$ and $\ker a(y)=\{0\}$ for a.e. $y\in Y$, then for any operator $\gamma$ on $L_2(X,\mathcal H)$ satisfying $0 \leq \gamma \leq \Phi^*a\Phi$, $$\label{eq:cwikelopgoal2}
\operatorname{Tr}\gamma^{1/2} \Phi^* a^{-1} \Phi \gamma^{1/2} \geq \frac{(p-1)^{(2p-1)/(p-1)}}{p^{2p/(p-1)}}\ C^{-2/(p-1)} \ \| a\|_{L_{p,w}({\mathfrak{B}})}^{-p'} \int_X \operatorname{Tr}_{\mathcal H} \gamma(x,x)^{p'} \,dx \,.$$ Here again $C=\|\Phi\|_{L_1\to L_\infty}$.
Accepting \eqref{eq:cwikelopgoal2} for the moment, we briefly explain how to finish the proof of Theorem \[cwikelop\]. First, via a straightforward extension of Lemma \[dual\] we infer from \eqref{eq:cwikelopgoal2} that $$\label{eq:cwikelopgoal}
\operatorname{Tr}\left( a^{1/2} \Phi b \Phi^* a^{1/2} - \mu\right)_+ \leq \mu^{-p+1} \left( \frac{p}{p-1}\right)^p C^2 \ \| a\|_{L_{p,w}({\mathfrak{B}})}^{p} \| b\|_{L_p({\mathfrak{S}}_p)}^p \,,$$ provided that $a(y)$ and $b(x)$ are non-negative for a.e. $x$ and $y$. This implies Theorem \[cwikelop\] in the same way as Theorem \[main\] implied Corollary \[cwikel\].
We now turn to the proof of \eqref{eq:cwikelopgoal2}. We write, similarly as before, $$\label{eq:reprop}
\operatorname{Tr}\gamma^{1/2} \Phi^* a^{-1}\Phi\gamma^{1/2} = \int_{X} \int_0^\infty \operatorname{Tr}_\mathcal H \gamma_\tau(x,x) \ \frac{d\tau}{\tau^{2}} \,dx$$ with $\gamma_\tau = P_\tau \gamma P_\tau$ and $P_\tau = \Phi^*\chi_{(0,\tau]}(a)\Phi$.
For any Hilbert–Schmidt operator $H$ in $\mathcal H$, any set $\Omega\subset X$ of finite measure and any ${\varepsilon}>0$, we apply the Schwarz inequality to find $$\begin{aligned}
\int_\Omega \operatorname{Tr}_{\mathcal H} H^* \gamma(x,x) H \,dx & = \operatorname{Tr}_{L_2(X,\mathcal H)} \chi_\Omega H^* \gamma H \chi_\Omega \\
& \leq (1+{\varepsilon}) \operatorname{Tr}_{L_2(X,\mathcal H)} \chi_\Omega H^* P_\tau \gamma P_\tau H \chi_\Omega \\
& \qquad + (1+{\varepsilon}^{-1}) \operatorname{Tr}_{L_2(X,\mathcal H)} \chi_\Omega H^* P_\tau^\bot \gamma P_\tau^\bot H \chi_\Omega \,.\end{aligned}$$ (Here $H \chi_\Omega$ is short for $\chi_\Omega\otimes H$ and $P_\tau^\bot$ for $1-P_\tau$.) For the first term on the right side, we notice that $$\operatorname{Tr}_{L_2(X,\mathcal H)} \chi_\Omega H^* P_\tau \gamma P_\tau H \chi_\Omega = \int_\Omega \operatorname{Tr}_{\mathcal H} H^* \gamma_\tau(x,x) H \,dx \,.$$ In order to bound the second term we recall the fact that $\gamma\leq \Phi^* a\Phi$ and that $C=\|\Phi\|_{L_1\to L_\infty}<\infty$, which yields $$\begin{aligned}
\operatorname{Tr}_{L_2(X,\mathcal H)} \chi_\Omega H^* P_\tau^\bot \gamma P_\tau^\bot H \chi_\Omega
& \leq \operatorname{Tr}_{L_2(X,\mathcal H)} \chi_\Omega H^* \Phi^* a \chi_{\{a>\tau\}} \Phi H \chi_\Omega \\
& = \int_\Omega \int_Y \operatorname{Tr}_\mathcal H H^* \Phi(y,x)^* a(y) \chi_{\{a(y)>\tau\}} \Phi(y,x) H \,dy\,dx \\
& \leq \int_\Omega \int_Y \| a(y) \chi_{\{a(y)>\tau\}} \|_{{\mathfrak{B}}} \| \Phi(y,x) \|_{{\mathfrak{B}}}^2 \operatorname{Tr}_\mathcal H H^* H \,dy\,dx \\
& \leq |\Omega|\, C^2 \operatorname{Tr}_{\mathcal H} H^* H \ \int_Y \| a(y)\|_{{\mathfrak{B}}} \ \chi_{\{\|a(y)\|_{{\mathfrak{B}}}>\tau\}} \,dy \,.\end{aligned}$$ Here we used the fact that $\| a(y) \chi_{\{a(y)>\tau\}} \|_{{\mathfrak{B}}}= \| a(y)\|_{{\mathfrak{B}}} \, \chi_{\{\|a(y)\|_{{\mathfrak{B}}}>\tau\}}$. Now the same weak $L_p$ bound as in the proof of Lemma \[rumin\] leads to $$\operatorname{Tr}_{L_2(X,\mathcal H)} \chi_\Omega H^* P_\tau^\bot \gamma P_\tau^\bot H \chi_\Omega
\leq |\Omega|\, C^2 \operatorname{Tr}_{\mathcal H} H^* H \ \frac p{p-1} \| a\|_{L_{p,w}({\mathfrak{B}})}^p \tau^{-p+1} \,.$$ To summarize, we have shown that $$\begin{aligned}
\int_\Omega \operatorname{Tr}_{\mathcal H} H^* \gamma(x,x) H \,dx \leq & (1+{\varepsilon}) \int_\Omega \operatorname{Tr}_{\mathcal H} H^* \gamma_\tau(x,x) H \,dx \\
& + (1+{\varepsilon}^{-1}) |\Omega|\, C^2 \operatorname{Tr}_{\mathcal H} H^* H \ \frac p{p-1} \| a\|_{L_{p,w}({\mathfrak{B}})}^p \tau^{-p+1} \,.\end{aligned}$$ Since this is valid for any $\Omega$ and for any $H$, we have for a.e. $x\in X$ the operator inequality $$\gamma(x,x) \leq (1+{\varepsilon}) \gamma_\tau(x,x) + (1+{\varepsilon}^{-1}) C^2 \frac p{p-1} \| a\|_{L_{p,w}({\mathfrak{B}})}^p \tau^{-p+1} \,.$$ We now use the fact that an operator inequality $A\geq B$ implies $\operatorname{Tr}f(A)\geq \operatorname{Tr}f(B)$ for $f$ non-decreasing. In our case $f(t)=t_+$, the positive part, and therefore $$\operatorname{Tr}_{\mathcal H} \gamma_\tau(x,x) \geq (1+{\varepsilon})^{-1} \operatorname{Tr}_{\mathcal H} \left( \gamma(x,x) - (1+{\varepsilon}^{-1}) C^2 \frac p{p-1} \| a\|_{L_{p,w}({\mathfrak{B}})}^p \tau^{-p+1} \right)_+ \,.$$ It remains to do the $\tau$ integration, $$\int_0^\infty \operatorname{Tr}_{\mathcal H} \gamma_\tau(x,x) \,\frac{d\tau}{\tau^2} \geq
\frac{{\varepsilon}^{1/(p-1)}}{(1+{\varepsilon})^{p/(p-1)}}
\left( \frac{p-1}{p} \right)^{p/(p-1)} C^{-2/(p-1)} \| a\|_{L_{p,w}({\mathfrak{B}})}^{-p'} \operatorname{Tr}_{\mathcal H} \gamma(x,x)^{p'} \,,$$ and to optimize in ${\varepsilon}$ by choosing ${\varepsilon}=(p-1)^{-1}$. This, together with \eqref{eq:reprop}, proves \eqref{eq:cwikelopgoal2} and completes the proof.
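As a sanity check on the choice ${\varepsilon}=(p-1)^{-1}$ (illustrative only), the factor ${\varepsilon}^{1/(p-1)}(1+{\varepsilon})^{-p/(p-1)}$ appearing in the last display can be maximized numerically over ${\varepsilon}>0$, and one can verify that the optimized value reproduces the constant stated in \eqref{eq:cwikelopgoal2}.

```python
def factor(p, eps):
    """The epsilon-dependent prefactor from the last display."""
    return eps ** (1 / (p - 1)) * (1 + eps) ** (-p / (p - 1))

for p in [1.5, 2.0, 3.0, 5.0]:
    best = 1 / (p - 1)                                  # claimed optimal choice
    grid = [best * t for t in (0.25, 0.5, 0.9, 1.1, 2.0, 4.0)]
    assert all(factor(p, e) <= factor(p, best) + 1e-15 for e in grid)
    # combined with ((p-1)/p)^{p/(p-1)}, the optimized factor reproduces the
    # constant (p-1)^{(2p-1)/(p-1)} / p^{2p/(p-1)} of the goal inequality
    lhs = factor(p, best) * ((p - 1) / p) ** (p / (p - 1))
    rhs = (p - 1) ** ((2 * p - 1) / (p - 1)) / p ** (2 * p / (p - 1))
    assert abs(lhs - rhs) < 1e-12
```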
The CLR inequality for general Schrödinger-like operators
---------------------------------------------------------
Next, we show that for a large class of ‘kinetic energies’ $T$ the number $N(0,T+V)$ of negative eigenvalues (counting multiplicities) of the Schrödinger-type operator $T+V$ can be bounded in terms of an integral of the potential $V$. We shall see how the exponent with which $V$ enters into this bound is determined by $T$. The improvement of this result as compared to those in [@LeSo; @FrLiSe2] is that we do not require the potential to be scalar and that we do not require $\exp(-tT)$ to be positivity preserving.
Again, throughout this subsection we assume that $X$ is a sigma-finite measure space and $\mathcal{H}$ a separable Hilbert space.
\[cwikelgen\] Let $T$ be a non-negative operator in $L_2(X,\mathcal H)$ with $\ker T=\{0\}$. Assume that there are constants $\nu>2$ and $A<\infty$ such that for every $E>0$, every $\Omega\subset X$ of finite measure and every $\phi\in\mathcal H$, $$\label{eq:ass}
\operatorname{Tr}_{L_2(X)} \chi_\Omega \left(\phi, T^{-1} \chi_{(0,E]}(T) \phi \right)_{\mathcal H} \chi_\Omega \leq A E^{(\nu-2)/2} |\Omega| \|\phi\|_{\mathcal H}^2 \,.$$ Then for any measurable function $V$ on $X$, taking values in the self-adjoint compact operators on $\mathcal H$, $$N(0,T+V) \leq C_\nu \, A \int_X \operatorname{Tr}_{\mathcal{H}} V(x)_-^{\nu/2} \,dx$$ with $$C_\nu = \frac\nu 2 \left( \frac{\nu}{\nu-2} \right)^{\nu-2} \,.$$ If $\dim\mathcal H=1$, then $C_\nu$ can be replaced by $$C_\nu = \left( \frac{\nu(\nu+2)}{(\nu-2)^2} \right)^{(\nu-2)/2} \,.$$
Roughly speaking, assumption \eqref{eq:ass} means that $T^{-1} \chi_{(0,E]}(T)$ has an integral kernel (taking values in the bounded operators on $\mathcal H$) which on the diagonal satisfies the bound $$\| T^{-1} \chi_{(0,E]}(T)(x,x) \|_{\mathfrak B(\mathcal H)} \leq A E^{(\nu-2)/2} \,.$$ We discuss the equivalence of this assumption with more standard assumptions in Lemma \[ass\] below.
Before turning to the proof of Theorem \[cwikelgen\] we illustrate it by an example.
Let $T=(-\Delta)^s$, $0<s<d/2$, in $L_2({\mathbb{R}}^d)$. Then by explicit diagonalization via Fourier transform one sees that \eqref{eq:ass} holds with $\nu=d/s$ and $$A= \int_{{\mathbb{R}}^d} |p|^{-2s} \chi_{\{|p|<1\}} \frac{dp}{(2\pi)^d}
= \frac{\omega_d}{(2\pi)^{d}} \frac{d}{d-2s} \,.$$ Thus Theorem \[cwikelgen\] implies that $$\label{eq:clrrdop}
N(0,(-\Delta)^s +V) \leq \frac{d}{2s} \left( \frac{d}{d-2s} \right)^{(d-2s)/s} \frac{\omega_d}{(2\pi)^{d}} \frac{d}{d-2s} \int_{{\mathbb{R}}^d} \operatorname{Tr}_\mathcal H V(x)_-^{d/2s}\,dx$$ in the operator-valued case and $$\label{eq:clrrdscal}
N(0,(-\Delta)^s +V) \leq \left( \frac{d (d+2s)}{(d-2s)^2} \right)^{(d-2s)/2s} \frac{\omega_d}{(2\pi)^{d}} \frac{d}{d-2s} \int_{{\mathbb{R}}^d} V(x)_-^{d/2s}\,dx$$ in the scalar case. These constants are rather good. In the cases which are most relevant in applications the bounds are about a factor of two worse than the best available bounds. Indeed, for $d=3$, \eqref{eq:clrrdscal} gives 0.196 for $s=1$ (to be compared with 0.116 from [@Li]) and 0.228 for $s=1/2$ (to be compared with 0.103 from [@Da]) and \eqref{eq:clrrdop} gives 0.228 for $s=1$ (to be compared with 0.174 from [@FrLiSe1]). We emphasize again that the methods of [@Li; @Da; @FrLiSe1] are restricted to $s\leq 1$. The above constants are the best ones available for $1<s<d/2$; see the comparison with the constant from [@Cw; @H1] after Theorem \[cwikelop\].
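The numerical values quoted here are easy to reproduce. A short computation (illustrative; constants rounded to three decimals, with $\omega_d=\pi^{d/2}/\Gamma(d/2+1)$ the volume of the unit ball):

```python
import math

def omega(d):
    """Volume of the unit ball in R^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def const_op(d, s):
    """Constant in the operator-valued bound: (d/2s)(d/(d-2s))^{(d-2s)/s} * common."""
    common = omega(d) / (2 * math.pi) ** d * d / (d - 2 * s)
    return (d / (2 * s)) * (d / (d - 2 * s)) ** ((d - 2 * s) / s) * common

def const_scalar(d, s):
    """Constant in the scalar bound: (d(d+2s)/(d-2s)^2)^{(d-2s)/2s} * common."""
    common = omega(d) / (2 * math.pi) ** d * d / (d - 2 * s)
    return (d * (d + 2 * s) / (d - 2 * s) ** 2) ** ((d - 2 * s) / (2 * s)) * common

# the three values quoted in the text for d = 3
assert round(const_scalar(3, 1.0), 3) == 0.196
assert round(const_scalar(3, 0.5), 3) == 0.228
assert round(const_op(3, 1.0), 3) == 0.228
```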
By the variational principle and the Birman–Schwinger principle, $$N(0,T+V) \leq N(0,T-V_-) = n(1,T^{-1/2} V_- T^{-1/2}) \,.$$ Thus, by the same argument as in the proof of Lemma \[equiv\], Theorem \[cwikelgen\] will follow if we can show that $$\operatorname{Tr}\left( T^{-1/2} V_- T^{-1/2} -\mu \right)_+ \leq \mu^{-\nu/2 +1} A D \int_X \operatorname{Tr}_\mathcal H V(x)_-^{\nu/2} \,dx \,.$$ Here, $D= (\nu/(\nu-2))^{(\nu-2)/2}$ in the general case, which can be improved to $D=(2/\nu)((\nu+2)/(\nu-2))^{(\nu-2)/2}$ for $\dim\mathcal H=1$. By the argument of Lemma \[dual\] the latter inequality is, in turn, equivalent to the inequality $$\operatorname{Tr}\gamma^{1/2} T \gamma^{1/2} \geq A^{-2/(\nu-2)} K \int_X \operatorname{Tr}_{\mathcal H} \gamma(x,x)^{\nu/(\nu-2)} \,dx$$ for every operator $0\leq\gamma\leq T^{-1}$. Here $$K= \frac{2^{2/(\nu-2)}(\nu-2)^2}{\nu^{2(\nu-1)/(\nu-2)}}$$ in the general case, which can be improved to $K=(\nu-2)^2/(\nu(\nu+2))$ for $\dim\mathcal H=1$. In the scalar case $\dim\mathcal H=1$, this bound follows from [@Ru1] (with the improved constant of [@Ru]) and the modifications to treat the general case are similar to our arguments in the proof of Theorem \[cwikelop\].
For the sake of completeness, we briefly sketch the proof. We introduce $P_E=\chi_{(E,\infty)}(T)$ and $P_E^\bot=\chi_{(0,E]}(T)$. The key is, as before, the bound $$\operatorname{Tr}_{ L_2(X,\mathcal H)} \chi_\Omega H^* P_E^\bot \gamma P_E^\bot H \chi_\Omega
\leq \operatorname{Tr}_{ L_2(X,\mathcal H)} \chi_\Omega H^* P_E^\bot T^{-1} P_E^\bot H \chi_\Omega$$ for any set $\Omega\subset X$ of finite measure and any Hilbert–Schmidt operator $H$ on $\mathcal H$. By assumption \eqref{eq:ass} the right side is bounded by $A E^{(\nu-2)/2} |\Omega| \operatorname{Tr}_{\mathcal H} H^*H$. This implies, as before, $$\gamma(x,x) \leq (1+{\varepsilon}) \left( P_E\gamma P_E \right) (x,x) + \left(1+{\varepsilon}^{-1}\right) A E^{(\nu-2)/2}$$ for every ${\varepsilon}>0$. In the special case $\dim\mathcal H=1$ the bound can be somewhat improved using the argument of Lemma \[rumin\] to $$\sqrt{\gamma(x,x)} \leq \sqrt{\left( P_E\gamma P_E \right) (x,x)} + A^{1/2} E^{(\nu-2)/4} \,.$$ With these bounds at hand the proof is completed as before by integration over $E$.
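In the scalar case the final integration over $E$ can be made explicit. The sketch below (illustrative; `g` stands for the diagonal value $\gamma(x,x)$, and the sample values are arbitrary) verifies numerically that $\int_0^\infty \big( \sqrt{g} - \sqrt{A}\,E^{(\nu-2)/4} \big)_+^2 \,dE = \frac{(\nu-2)^2}{\nu(\nu+2)}\, A^{-2/(\nu-2)} g^{\nu/(\nu-2)}$, which is exactly the improved constant $K$ quoted above for $\dim\mathcal H=1$.

```python
import math

def e_integral(nu, A, g, n=400_000):
    """Midpoint quadrature of \\int_0^infty (sqrt(g) - sqrt(A) E^{(nu-2)/4})_+^2 dE.
    The integrand vanishes for E >= (g/A)^{2/(nu-2)}."""
    emax = (g / A) ** (2.0 / (nu - 2))
    h = emax / n
    total = 0.0
    for i in range(n):
        E = (i + 0.5) * h
        v = math.sqrt(g) - math.sqrt(A) * E ** ((nu - 2) / 4.0)
        total += max(v, 0.0) ** 2 * h
    return total

def closed_form(nu, A, g):
    K = (nu - 2) ** 2 / (nu * (nu + 2))     # the dim H = 1 constant from the text
    return K * A ** (-2.0 / (nu - 2)) * g ** (nu / (nu - 2))

for nu, A, g in [(3.0, 0.5, 2.0), (4.0, 1.2, 0.8)]:
    assert abs(e_integral(nu, A, g) - closed_form(nu, A, g)) / closed_form(nu, A, g) < 1e-2
```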
We now give sufficient conditions for assumption \eqref{eq:ass}, which can be verified in applications. Similar results are contained in [@Ru1].
\[ass\] Let $T$ be a non-negative operator in $L_2(X,\mathcal H)$, let $\Omega\subset X$ have finite measure and let $\phi\in\mathcal H$. If, for some constants $\nu>0$ and $C'$ and all $t>0$ $$\label{eq:assheat}
\operatorname{Tr}_{L_2(X)} \chi_\Omega \left(\phi, \exp(-tT) \phi \right)_{\mathcal H} \chi_\Omega \leq C' t^{-\nu/2} \,,$$ then for all $E>0$ $$\label{eq:assproj}
\operatorname{Tr}_{L_2(X)} \chi_\Omega \left(\phi, \chi_{(0,E]}(T) \phi \right)_{\mathcal H} \chi_\Omega \leq B' E^{\nu/2}$$ with $B'=C'( 2e/\nu)^{\nu/2}$. Moreover, if \eqref{eq:assproj} holds for some constants $\nu>2$ and $B'$ and all $E>0$, then for all $E>0$ $$\label{eq:asslem}
\operatorname{Tr}_{L_2(X)} \chi_\Omega \left(\phi, T^{-1} \chi_{(0,E]}(T) \phi \right)_{\mathcal H} \chi_\Omega \leq A' E^{(\nu-2)/2}$$ with $A'=B'\nu/(\nu-2)$.
To prove the first assertion of the lemma we use the bound $\chi_{(0,E]}(\lambda) \leq e^{t E} e^{-t\lambda}$. Thus \eqref{eq:assheat} implies $$\operatorname{Tr}_{L_2(X)} \chi_\Omega \left(\phi, \chi_{(0,E]}(T) \phi \right)_{\mathcal H} \chi_\Omega \leq C' t^{-\nu/2} e^{t E}$$ for all $t>0$. We optimize the right side by choosing $t=\nu/(2E)$.
To prove the second assertion we write $$\lambda^{-1} \chi_{(0,E]}(\lambda) = \int_0^\infty \chi_{(0,\min\{s,E\}]}(\lambda) \frac{ds}{s^2} \,.$$ Thus \eqref{eq:assproj} implies $$\operatorname{Tr}_{L_2(X)} \chi_\Omega \left(\phi, T^{-1} \chi_{(0,E]}(T) \phi \right)_{\mathcal H} \chi_\Omega
\leq B' \int_0^\infty \min\{s,E\}^{\nu/2} \frac{ds}{s^2}
= B' \frac{\nu}{\nu-2} E^{(\nu-2)/2} \,,$$ as claimed.
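Both steps of the proof are elementary enough to verify numerically (illustrative only; the sample values of $\nu$ and $E$ are arbitrary):

```python
import math

# Step 1: the minimum over t > 0 of t^{-nu/2} e^{tE} is attained at t = nu/(2E)
# and equals (2e/nu)^{nu/2} E^{nu/2}, i.e. the factor B'/C' of the lemma.
def upper(nu, E, t):
    return t ** (-nu / 2) * math.exp(t * E)

nu, E = 3.0, 1.7
t_star = nu / (2 * E)
assert abs(upper(nu, E, t_star) - (2 * math.e / nu) ** (nu / 2) * E ** (nu / 2)) < 1e-9
assert all(upper(nu, E, t_star * f) >= upper(nu, E, t_star) for f in (0.3, 0.8, 1.2, 3.0))

# Step 2: \int_0^infty min(s,E)^{nu/2} s^{-2} ds = nu/(nu-2) E^{(nu-2)/2} for nu > 2.
def s_integral(nu, E, n=200_000, cutoff=500.0):
    h1 = E / n
    head = sum(((i + 0.5) * h1) ** (nu / 2 - 2) * h1 for i in range(n))
    h2 = (cutoff - E) / n
    tail = sum(E ** (nu / 2) * (E + (i + 0.5) * h2) ** (-2) * h2 for i in range(n))
    tail += E ** (nu / 2) / cutoff          # exact remainder beyond the cutoff
    return head + tail

for nu, E in [(5.0, 1.7), (4.0, 0.9)]:
    exact = nu / (nu - 2) * E ** ((nu - 2) / 2)
    assert abs(s_integral(nu, E) - exact) / exact < 1e-3
```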
Assumption \eqref{eq:assheat} is a standard assumption in works on ultra-contractivity. In the work of Levin and Solomyak [@LeSo] (see also [@FrLiSe2]) it was used to extend the proof of Li and Yau [@LiYa] to general Dirichlet forms generating submarkovian semi-groups. The important difference, however, is that here, as in [@Ru1; @Ru], we do *not* need the heat kernel to be positivity preserving and a contraction on $L_1$.
One application of Lemma \[ass\] concerns magnetic Schrödinger operators. That is, take $X={\mathbb{R}}^d$, $\mathcal H={\mathbb{C}}$ and $T=(-i\nabla +A)^2$ for some $A\in L_{2,{{\rm loc}}}({\mathbb{R}}^d,{\mathbb{R}}^d)$. While we do not know how to verify \eqref{eq:ass} directly, we know from the diamagnetic inequality that \eqref{eq:assheat} holds with $C'= (4\pi)^{-d/2} |\Omega| \|\phi\|_{\mathcal H}^2$. Thus, in dimension $d\geq 3$, \eqref{eq:asslem} holds with $A'= (e/(2\pi d))^{d/2} (d/(d-2)) |\Omega| \|\phi\|_{\mathcal H}^2$ and \eqref{eq:ass} holds with $A= (e/(2\pi d))^{d/2} (d/(d-2))$. While this constant is worse than that without magnetic field, it is independent of the magnetic field, as it should be.
Concluding remarks {#sec:concl}
==================
In this final section we discuss the problem of finding the optimal (i.e., largest possible) constant $K_{s,d}$ in Rumin’s inequality $$\label{eq:const}
\operatorname{Tr}\gamma^{1/2}(-\Delta)^s\gamma^{1/2} \geq K_{s,d} \int_{{\mathbb{R}}^d} \gamma(x,x)^{d/(d-2s)} \,dx$$ for operators $\gamma$ on $L_2({\mathbb{R}}^d)$ satisfying $0\leq\gamma\leq(-\Delta)^{-s}$. We assume throughout that $2s<d$.
Lemma \[rumin\] (with $a(\xi)=|\xi|^{-2s}$ and $p=d/2s$) implies that this inequality holds and that the optimal constant satisfies $$K_{s,d} \geq \frac{d-2s}{d+2s} \ (2\pi)^{2ds/(d-2s)} \left( \frac{d-2s}{d} \right)^{d/(d-2s)} \omega_d^{-2s/(d-2s)} \,.$$ Here $\omega_d=|\{\xi\in{\mathbb{R}}^d:\ |\xi|<1\}|$. In the following subsections we derive two upper bounds for $K_{s,d}$ and discuss a non-obvious symmetry.
The semi-classical constant
---------------------------
Here we show that $$\label{eq:constbdsc}
K_{s,d} \leq (2\pi)^{2ds/(d-2s)} \left( \frac{d-2s}{d} \right)^{d/(d-2s)} \omega_d^{-2s/(d-2s)} \,.$$ Note that this upper bound differs from the constant in Lemma \[rumin\] only by a factor of $(d-2s)/(d+2s)$. There are two ways to prove \eqref{eq:constbdsc}. The first one consists in noting that a Weyl-type semi-classical formula yields a lower bound on the optimal constant $D_{s,d}$ in the inequality $$\operatorname{Tr}\left((-\Delta)^{-s/2} V_- (-\Delta)^{-s/2} - \mu \right)_+ \leq D_{s,d} \ \mu^{-d/2s+1} \int_{{\mathbb{R}}^d} V(x)_-^{d/2s} \,dx$$ and then using Lemma \[dual\] to convert this into an upper bound on $K_{s,d}$. Since this is standard, we explain a less known, but more direct approach. Instead of finding the best constant $K_{s,d}$ in \eqref{eq:const} we look for the best constant $K_{s,d}'$ in the inequality $$\label{eq:constsc}
\iint_{{\mathbb{R}}^d\times{\mathbb{R}}^d} |p|^{2s} M(p,x) \frac{dp\,dx}{(2\pi)^d}
\geq K_{s,d}' \int_{{\mathbb{R}}^d} \left( \int_{{\mathbb{R}}^d} M(p,x) \frac{dp}{(2\pi)^{d}} \right)^{d/(d-2s)} \,dx$$ for all functions $M$ on ${\mathbb{R}}^d\times{\mathbb{R}}^d$ satisfying $0\leq M(p,x) \leq |p|^{-2s}$ for all $x$ and $p$. Using coherent states it is easy to verify that $K_{s,d}\leq K_{s,d}'$. It is elementary to compute the optimal constant $K_{s,d}'$. It is given by the right side of \eqref{eq:constbdsc}. Optimizers $M$ are of the form $M(p,x) = |p|^{-2s}\chi_{\{|p|<R(x)\}}$ for an arbitrary function $R$.
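One way to see that the trial functions $M(p,x)=|p|^{-2s}\chi_{\{|p|<R\}}$ attain the constant of \eqref{eq:constbdsc} is a direct computation using the closed-form radial integrals $\int_{|p|<R}dp=\omega_d R^d$ and $\int_{|p|<R}|p|^{-2s}\,dp = d\,\omega_d R^{d-2s}/(d-2s)$. A numerical transcription of this algebra (illustrative; the ratio must in particular be independent of $R$):

```python
import math

def omega(d):
    """Volume of the unit ball in R^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def ratio_for_trial(d, s, R):
    """LHS / rho^{d/(d-2s)} per unit volume in x, for M(p,x) = |p|^{-2s} 1_{|p|<R}."""
    lhs = omega(d) * R ** d / (2 * math.pi) ** d  # \int |p|^{2s} M dp/(2pi)^d
    rho = d * omega(d) * R ** (d - 2 * s) / ((d - 2 * s) * (2 * math.pi) ** d)
    return lhs / rho ** (d / (d - 2 * s))

def K_sc(d, s):
    """Right side of the semiclassical upper bound."""
    return ((2 * math.pi) ** (2 * d * s / (d - 2 * s))
            * ((d - 2 * s) / d) ** (d / (d - 2 * s))
            * omega(d) ** (-2 * s / (d - 2 * s)))

for d, s in [(1, 0.25), (2, 0.5), (3, 1.0)]:
    vals = [ratio_for_trial(d, s, R) for R in (0.5, 1.0, 3.0)]
    assert all(abs(v - K_sc(d, s)) / K_sc(d, s) < 1e-12 for v in vals)
```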
The Sobolev constant
--------------------
Applying \eqref{eq:const} to an operator $\gamma=\alpha |\psi\rangle\langle\psi|$ of rank one with $\alpha = \|(-\Delta)^{s/2}\psi\|^{-2}$ we obtain $$\label{eq:sob}
\left\|(-\Delta)^{s/2}\psi\right\|^2 \geq K_{s,d}^{(d-2s)/d} \left( \int_{{\mathbb{R}}^d} |\psi|^{2d/(d-2s)} \,dx \right)^{(d-2s)/d} \,.$$ This is Sobolev’s inequality. The best constant in this inequality for general $s$ has been determined by Lieb [@Li2] (in a dual formulation). Using this value, we infer that $$\label{eq:constbdsob}
K_{s,d} \leq (4\pi)^{ds/(d-2s)} \left( \frac{\Gamma((d+2s)/2)}{\Gamma((d-2s)/2)} \right)^{d/(d-2s)} \left( \frac{\Gamma(d/2)}{\Gamma(d)} \right)^{2s/(d-2s)} \,.$$ Numerically, it is easy to determine which one of the upper bounds \eqref{eq:constbdsc} and \eqref{eq:constbdsob} is better. It seems like \eqref{eq:constbdsob} is better for $d=1$ and \eqref{eq:constbdsc} is better for $d\geq 3$. In $d=2$, \eqref{eq:constbdsob} is better for $s<1/2$ and \eqref{eq:constbdsc} is better for $s>1/2$. We also remark that the constants on the right sides of \eqref{eq:constbdsc} and \eqref{eq:constbdsob} are asymptotically equal as $s\to 0$ and as $s\to d/2$.
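The numerical comparison of the two upper bounds is a one-liner once both constants are coded (using $\omega_d=\pi^{d/2}/\Gamma(d/2+1)$); the sample points below, which are illustrative rather than exhaustive, reproduce the pattern just described:

```python
import math

def omega(d):
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def K_semiclassical(d, s):
    return ((2 * math.pi) ** (2 * d * s / (d - 2 * s))
            * ((d - 2 * s) / d) ** (d / (d - 2 * s))
            * omega(d) ** (-2 * s / (d - 2 * s)))

def K_sobolev(d, s):
    g = math.gamma
    return ((4 * math.pi) ** (d * s / (d - 2 * s))
            * (g((d + 2 * s) / 2) / g((d - 2 * s) / 2)) ** (d / (d - 2 * s))
            * (g(d / 2) / g(d)) ** (2 * s / (d - 2 * s)))

# d = 1: the Sobolev bound is the smaller (hence better) upper bound
assert K_sobolev(1, 0.25) < K_semiclassical(1, 0.25)
# d = 3: the semiclassical bound wins
assert K_semiclassical(3, 1.0) < K_sobolev(3, 1.0)
# d = 2: crossover at s = 1/2
assert K_sobolev(2, 0.25) < K_semiclassical(2, 0.25)
assert K_semiclassical(2, 0.75) < K_sobolev(2, 0.75)
```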
Conformal invariance
--------------------
Lieb [@Li2] has shown that (or an equivalent version thereof) is conformally invariant in the following sense. If $h$ is a conformal transformation of ${\mathbb{R}}^d\cup\{\infty\}$ and if $\phi(x) = J_h(x)^{(d-2s)/2d} \psi(h(x))$, where $J_h$ is the Jacobian of $h$, then $$\left\|(-\Delta)^{s/2}\phi\right\|^2 = \left\|(-\Delta)^{s/2} \psi\right\|^2
\qquad\text{and}\qquad
\int_{{\mathbb{R}}^d} |\phi|^{2d/(d-2s)} \,dx = \int_{{\mathbb{R}}^d} |\psi|^{2d/(d-2s)} \,dx \,.$$
Similarly, we now argue that \eqref{eq:const} is conformally invariant under replacing $\gamma(x,y)$ by $J_h(x)^{(d-2s)/2d} \gamma(h(x),h(y)) J_h(y)^{(d-2s)/2d}$. We first observe that \eqref{eq:const} is equivalent to the following inequality. For any sequence of functions $(\psi_j)\subset \dot H^s({\mathbb{R}}^d)$ satisfying $\langle(-\Delta)^{s/2}\psi_j,(-\Delta)^{s/2}\psi_k\rangle = \delta_{j,k}$ and for any sequence of numbers $(\lambda_j)$ satisfying $0\leq\lambda_j\leq 1$, we have $$\sum_j \lambda_j \geq K_{s,d} \int_{{\mathbb{R}}^d} \left( \sum_j \lambda_j |\psi_j|^2 \right)^{d/(d-2s)} \,dx \,.$$ This equivalence follows by expanding the trace class operator $(-\Delta)^{s/2}\gamma(-\Delta)^{s/2}=\sum_j \lambda_j |f_j\rangle\langle f_j|$ into its eigenfunctions and setting $\psi_j=(-\Delta)^{-s/2}f_j$.
If we now let $\phi_j(x) = J_h(x)^{(d-2s)/2d} \psi_j(h(x))$, then, by polarization of the above identity, $$\langle(-\Delta)^{s/2}\phi_j,(-\Delta)^{s/2}\phi_k\rangle
= \langle(-\Delta)^{s/2}\psi_j,(-\Delta)^{s/2}\psi_k\rangle \,,$$ and clearly $$\int_{{\mathbb{R}}^d} \left( \sum_j \lambda_j |\phi_j|^2 \right)^{d/(d-2s)} \,dx = \int_{{\mathbb{R}}^d} \left( \sum_j \lambda_j |\psi_j|^2 \right)^{d/(d-2s)} \,dx \,.$$ This proves that Rumin’s inequality is invariant under replacing $\gamma(x,y)$ by $J_h(x)^{(d-2s)/2d} \gamma(h(x),h(y)) J_h(y)^{(d-2s)/2d}$ for any conformal transformation.
One consequence of this conformal invariance is that the inequality has an equivalent formulation on the sphere ${\mathbb{S}}^d$ via stereographic projection as in [@Li2]. In light of previous results about conformally invariant trace inequalities [@Mo] it is natural to wonder about the sharp constant in \eqref{eq:const}.
[FrLeLiSe]{}
M. Sh. Birman, G. E. Karadzhov, M. Z. Solomyak, *Boundedness conditions and spectrum estimates for the operators b(X)a(D) and their analogs*, In: Estimates and asymptotics for discrete spectra of integral and differential equations, Adv. Soviet. Math. **7**, Amer. Math. Soc., 1991, 85–106.
J. G. Conlon, *A new proof of the Cwikel–Lieb–Rosenbljum bound*. Rocky Mountain J. Math. **15** (1985), no. 1, 117–122.
M. Cwikel, *Weak type estimates for singular values and the number of bound states of Schrödinger operators*. Ann. Math. **106** (1977), 93–102.
I. Daubechies, *An uncertainty principle for fermions with a generalized kinetic energy*. Comm. Math. Phys. **90** (1983), 511–520.
C. L. Fefferman, *The uncertainty principle*. Bull. Amer. Math. Soc. **9** (1983), no. 2, 129–206.
R. L. Frank, M. Lewin, E. H. Lieb, R. Seiringer, *A positive density analogue of the Lieb–Thirring inequality*. Duke Math. J., to appear. Preprint (2011): arXiv:1108.4246
R. L. Frank, E. H. Lieb, R. Seiringer, *Number of bound states of Schrödinger operators with matrix-valued potentials*. Lett. Math. Phys. **82** (2007), 107–116.
R. L. Frank, E. H. Lieb, R. Seiringer, *Equivalence of Sobolev inequalities and Lieb–Thirring inequalities*. In: XVIth International Congress on Mathematical Physics, Proceedings of the ICMP held in Prague, August 3-8, 2009, P. Exner (ed.), 523–535, World Scientific, Singapore, 2010.
R. L. Frank, R. Olofsson, *Eigenvalue bounds for Schrödinger operators with a homogeneous magnetic field*. Lett. Math. Phys. **97** (2011), no. 3, 227–241.
D. Hundertmark, *On the number of bound states for Schrödinger operators with operator-valued potentials*. Ark. Mat. **40** (2002), 73–87.
D. Hundertmark, *Some bound state problems in quantum mechanics.* In: Spectral theory and mathematical physics: a Festschrift in honor of Barry Simon’s 60th birthday, 463–496, Proc. Sympos. Pure Math. **76**, Part 1, Amer. Math. Soc., Providence, RI, 2007.
A. Laptev, *Dirichlet and Neumann eigenvalue problems on domains in Euclidean spaces*. J. Funct. Anal. **151** (1997), no. 2, 531–545.
A. Laptev, T. Weidl, *Sharp Lieb–Thirring inequalities in high dimensions*. Acta Math. **184** (2000), 87–111.
A. Laptev, T. Weidl, *Recent results on Lieb–Thirring inequalities*. Journées ‘Équations aux Dérivées Partielles’ (La Chapelle sur Erdre, 2000), Exp. No. XX, Univ. Nantes, Nantes, 2000.
D. Levin, M. Solomyak, *The Rozenblum–Lieb–Cwikel inequality for Markov generators*. J. Anal. Math. **71** (1997), 173–193.
P. Li, S. T. Yau, *On the Schrödinger equation and the eigenvalue problem*. Comm. Math. Phys. **88** (1983), no. 3, 309–318.
E. H. Lieb, [*Bounds on the eigenvalues of the Laplace and Schrödinger operators*]{}, Bull. Amer. Math. Soc. [**82**]{} (1976), 751–752. and: *The number of bound states of one body Schrödinger operators and the Weyl problem*. Proc. A.M.S. Symp. Pure Math. **36** (1980), 241–252.
E. H. Lieb, *Sharp constants in the Hardy–Littlewood–Sobolev and related inequalities*. Ann. of Math. (2) **118** (1983), no. 2, 349–374.
E. H. Lieb, W. Thirring, *Inequalities for the moments of the eigenvalues of the Schrödinger Hamiltonian and their relation to Sobolev inequalities*. Studies in Mathematical Physics, 269–303. Princeton University Press, Princeton, NJ, 1976.
C. Morpurgo, *Sharp inequalities for functional integrals and traces of conformally invariant operators*. Duke Math. J. **114** (2002), no. 3, 477–553.
G. V. Rozenblum, *Distribution of the discrete spectrum of singular differential operators*. Soviet Math. Dokl. **13** (1972), 245–249, and Soviet Math. (Iz. VUZ) **20** (1976), 63–71.
M. Rumin, *Spectral density and Sobolev inequalities for pure and mixed states*. Geom. Funct. Anal. **20** (2010), 817–844.
M. Rumin, *Balanced distribution-energy inequalities and related entropy bounds*. Duke Math. J. **160** (2011), no. 3, 567–597.
B. Simon, *Analysis with weak trace ideals and the number of bound states of Schrödinger operators*. Trans. Amer. Math. Soc. **224** (1976), 367–380.
[^1]: © 2012 by the author. This paper may be reproduced, in its entirety, for non-commercial purposes.\
U.S. National Science Foundation grant PHY-1068285 is acknowledged. The author is grateful to A. Laptev, M. Lewin, E. Lieb, R. Seiringer and T. Weidl for helpful discussions.
---
abstract: 'Based on quantitative complementarity relations (QCRs), we analyze the multipartite correlations in four-qubit cluster-class states. It is proven analytically that the average multipartite correlation $E_{ms}$ is entanglement monotone. Moreover, it is also shown that the mixed three-tangle is a correlation measure compatible with the QCRs in this kind of quantum states. More arrestingly, with the aid of the QCRs, a set of hierarchy entanglement measures is obtained rigorously in the present system.'
author:
- 'Yan-Kui Bai and Z. D. Wang'
title: 'Multipartite entanglement in four-qubit cluster-class states'
---
introduction
============
Entanglement, first noted by Einstein and Schrödinger, is one of the most important features of a many-body quantum system. Nowadays, it is a crucial physical resource widely used in quantum information processing (QIP), as in quantum communication [@eke91; @ben93] and quantum computation [@ben00; @rau01; @llb01]. Therefore, the characterization of entanglement, especially at a quantitative level, is fundamentally important. Compared with bipartite entanglement, which is now well understood in many aspects, the characterization of multipartite entanglement is still very challenging though a lot of effort has been made (c.f. [@hhh07]).
It is widely accepted that a good entanglement measure should be non-negative, invariant under local unitary (LU) transformation, and nonincreasing on average under local operations and classical communications (LOCC), i.e., entanglement monotone [@ved97]. Recently, based on quantitative complementarity relations (QCRs) [@qcrs3], an average multipartite correlation measure $E_{ms}$ was introduced, which was proved to satisfy the first two conditions [@byw07]. Based on extensive numerical analysis, it was conjectured that $E_{ms}$ also has the entanglement monotone property and thus may be able to characterize the multipartite entanglement in a four-qubit pure state [@byw07]. However, the analytical proof of the conjecture is extremely difficult for a general quantum state. In this sense, it seems helpful to look into the conjecture in certain cases, which, on one hand, allows us to obtain exact results, and, on the other hand, gives us useful information beyond bipartite entanglement.
Cluster states, which are typically multipartite entangled states, are utilized in quantum error-correcting codes [@dsw02] and tests of quantum nonlocality [@ogu05]. Moreover, they are also a universal resource in one-way quantum computation [@rau01]. In optical systems, a four-qubit cluster state has been prepared and applied to the Grover search algorithm [@natr2; @kie05]. More recently, a six-photon cluster state was also produced [@cyl07]. So, in order to make better use of the cluster state, it is quite desirable to explore quantitatively the entanglement in this kind of system.
In this paper, we analyze the multipartite quantum correlations in four-qubit cluster-class states. Here, by a cluster-class state, we mean the output state of a cluster state under stochastic LOCC (SLOCC [@ben01; @dur00]). For this class of quantum states, we prove exactly that the average multipartite correlation $E_{ms}$ is entanglement monotone. Moreover, it is shown that the three- and four-qubit correlations $t_3$ and $t_4$ are also entanglement monotone when setting $t_3$ to be a mixed three-tangle. More intriguingly, a set of hierarchy entanglement measures is thus obtained rigorously in the system. The paper is organized as follows. In Sec. II, the entanglement monotone property of multipartite correlations in the cluster-class states is proven exactly. In Sec. III, we address several relevant key issues and give a brief conclusion.
multipartite quantum correlations in four-qubit cluster-class states
====================================================================
Before analyzing these quantum correlations, we first recall the QCRs and the definition of average multipartite quantum correlation. As an essential principle of quantum mechanics, complementarity often refers to mutually exclusive properties. The quantitative version of the complementarity relation in an $N$-qubit pure state is also provided and formulated as [@qcrs3] $\tau_{k(R_k)}+S^{2}_{k}=1$, where the linear entropy $\tau_{k(R_k)}$ characterizes the total quantum correlation of qubit $k$ with the remaining qubits $R_k$ and $S^{2}_{k}$ is a measure of single-particle property. For an $N$-qubit pure state, the linear entropy is contributed by the different levels of quantum correlation, i.e., $\{t_2,t_{3},...,t_N\}$, in which $t_m$ represents the genuine $m$-qubit correlation for $m=2,3,...,N$ [@czz06; @byw07]. Based on the QCRs, an average multipartite correlation measure in a four-qubit pure state is introduced [@byw07]: $$\label{1}
E_{ms}(\Psi_{4})=\frac{M}{4}=\frac{M_A+M_B+M_C+M_D}{4},$$ where $M$ is the sum of the single residual correlations and $M_k$ is defined as $M_k=\tau_{k(R_k)}-\sum_{l\in R_k} C_{kl}^2$ (here, the square of the concurrence quantifies the two-qubit correlation). It is conjectured that $E_{ms}$ is entanglement monotone and can characterize the multipartite entanglement in the system. However, the proof of this property is extremely difficult for a generic quantum state, although numerical analysis supports the conjecture.
Due to their important applications in quantum information processing (QIP), cluster states have attracted increasing attention in recent years. As shown in Fig. 1, these states are associated with graphs where each vertex represents a qubit prepared in the initial state $({|0\rangle}+{|1\rangle})/\sqrt{2}$ and each edge represents a controlled phase gate applied between two qubits [@rau01]. In this paper, we consider the multipartite quantum correlations in four-qubit cluster-class states, which are related to the cluster states by SLOCC. In the following, we analyze the entanglement monotone property of the average multipartite correlation $E_{ms}$ and of the three- and four-qubit correlations $t_3$ and $t_4$ in this class of quantum states.
Average multipartite quantum correlation and entanglement monotone
------------------------------------------------------------------
In one-dimensional (1D) lattices, the four-qubit cluster state can be written as ${|\mathcal{C}_{4}^{(1)}\rangle}=({|0000\rangle}+{|0011\rangle}+{|1100\rangle}-{|1111\rangle})/2$ after an LU transformation. The entanglement monotone property requires that the correlation $E_{ms}$ does not increase on average under LOCC. It is known that any local operation can be implemented by a sequence of two-outcome positive operator-valued measures (POVMs) such as $\{A_1,A_2\}$ satisfying $A_1^{\dagger}A_1+A_2^{\dagger}A_2=I$ [@dur00]. According to the singular-value decomposition [@dur00], the POVM operators can be written as $A_1=U_1\mbox{diag}\{\alpha,\beta\}V$ and $A_2=U_2\mbox{diag}\{\sqrt{1-\alpha^2},\sqrt{1-\beta^2}\}V$, respectively, where $U_i$ and $V$ are unitary matrices, and $\alpha$ and $\beta$ are real numbers in the range $(0,1)$. Due to the LU invariance of $E_{ms}$, we need only consider the diagonal matrices. The output state of ${|\mathcal{C}_4^{(1)}\rangle}$ under a general POVM operator (i.e., the SLOCC operation) has the form $$\label{2}
{|\Psi^{(1)}\rangle}=a{|0000\rangle}+b{|0011\rangle}+c{|1100\rangle}-d{|1111\rangle},$$ where the normalized parameters $a,b,c$, and $d$ are complex numbers and we refer to ${|\Psi^{(1)}\rangle}$ as the cluster-class state [@note1]. Furthermore, since the form of this quantum state is not changed under the next POVM, the entanglement monotone property of $E_{ms}(\Psi^{(1)})$ will be satisfied only if the quantity is nonincreasing under the first level of the POVM.
For the quantum state ${|\Psi^{(1)}\rangle}$, the two-qubit reduced density matrix of subsystem $AB$ reads $$\label{3}
\rho_{AB}=\left(\begin{array}{cccc}
|a|^{2}+|b|^{2} & 0 & 0 & ac^{*}-bd^{*} \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
a^{*}c-b^{*}d & 0 & 0 & |c|^{2}+|d|^{2} \\
\end{array}\right).$$ Note that the two-qubit quantum correlation may be defined as $t_2(\rho_{AB})=C^2(\rho_{AB})$, where the concurrence $C(\rho_{AB})=\mbox{max}[0,(\sqrt{\lambda_1}-\sqrt{\lambda_2}-
\sqrt{\lambda_3}-\sqrt{\lambda_4})]$ with $\lambda_{i}$ the eigenvalues, in decreasing order, of the matrix $\rho_{AB}(\sigma_y\otimes\sigma_y)\rho_{AB}^{\ast}(\sigma_y\otimes\sigma_y)$ [@woo97]. After a simple calculation, we get $C_{AB}=2|a^{*}c-b^{*}d|$. Similarly, we have $C(\rho_{CD})=2|a^{*}b-c^{*}d|$ and $C(\rho_{ij})=0$ for the other subsystems. The linear entropy of qubit $A$, $\tau_{A(R_A)}$ ($=4\det\rho_{A}$) [@san00], quantifies the total quantum correlation between the two subsystems $A$ and $BCD$. So, the multipartite correlation related to qubit $A$, i.e., the residual correlation, is $$\label{4}
M_A(\Psi^{(1)})=\tau_{A(R_A)}-C_{AB}^2=4|ad+bc|^2.$$ With a similar derivation, we can obtain $M_B=M_C=M_D=M_A$, which means that the single residual correlation $M_k(\Psi^{(1)})$ is invariant under permutations of qubits and the average correlation $E_{ms}(\Psi^{(1)})=M_A(\Psi^{(1)})$.
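As a quick numerical cross-check (a sketch of our own, not taken from the paper; the random coefficients and function names are ours), the closed form $C_{AB}=2|a^{*}c-b^{*}d|$ can be verified against Wootters' formula applied directly to the matrix of Eq. (3):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Random normalized cluster-class coefficients a, b, c, d
rng = np.random.default_rng(1)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
a, b, c, d = v / np.linalg.norm(v)

# rho_AB of |Psi^(1)> = a|0000> + b|0011> + c|1100> - d|1111>,
# Eq. (3): support on {|00>, |11>} only
rho_AB = np.zeros((4, 4), dtype=complex)
rho_AB[0, 0] = abs(a)**2 + abs(b)**2
rho_AB[3, 3] = abs(c)**2 + abs(d)**2
rho_AB[0, 3] = a * c.conjugate() - b * d.conjugate()
rho_AB[3, 0] = rho_AB[0, 3].conjugate()

assert np.isclose(concurrence(rho_AB), 2 * abs(np.conj(a) * c - np.conj(b) * d))
```

The assertion holds for any normalized coefficients because $|a^{*}c-b^{*}d|\leq\sqrt{(|a|^2+|b|^2)(|c|^2+|d|^2)}$ by the Cauchy-Schwarz inequality.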
Under the POVM $\{A_1,A_2\}$ performed on subsystem $A$, the two quantum states ${|\Phi_{1}^{(1)}\rangle}=A_1{|\Psi^{(1)}\rangle}/\sqrt{p_1}$ and ${|\Phi_{2}^{(1)}\rangle}=A_2{|\Psi^{(1)}\rangle}/\sqrt{p_2}$ are obtained with probabilities $p_i=\mbox{tr}[A_i{{|\Psi^{(1)}\rangle}{\langle \Psi^{(1)}|}}A_i^{\dagger}]$ for $i=1,2$. Note that the linear entropy and the concurrence are invariant under determinant-one SLOCC operations (i.e., for the quantum states ${|\Psi^{(1)}\rangle}$, ${|\Phi_1^{(1)}\rangle}$, and ${|\Phi_2^{(1)}\rangle}$, the two measures are invariant if the POVM operator satisfies $\mbox{det}(A_i)=1$) [@vdd01]; we thus obtain $M_A(\Phi_1^{(1)})=\frac{\alpha^2\beta^2}{p_1^2}M_A(\Psi^{(1)})$ and $M_A(\Phi_2^{(1)})=\frac{(1-\alpha^2)(1-\beta^2)}{p_2^2}M_A(\Psi^{(1)})$. With a deduction similar to that in Ref. [@dur00], we can derive the following relation: $$\label{5}
M_A(\Psi^{(1)})-p_1M_A(\Phi_1^{(1)})-p_2M_A(\Phi_2^{(1)})\geq 0.$$ Combining the permutation invariance of the $M_k(\Psi^{(1)})$, we can draw the conclusion that the single residual correlation $M_A(\Psi^{(1)})=E_{ms}(\Psi^{(1)})$ is entanglement monotone and can characterize the multipartite entanglement in the system.
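Equation (5) can be spot-checked numerically. The sketch below is our own construction (the paper gives only the analytic argument); it uses the closed form $M_A=4|ad+bc|^2$ of Eq. (4) and samples random states and diagonal POVM parameters:

```python
import numpy as np

def M_A(a, b, c, d):
    # Single residual correlation of |Psi^(1)>, Eq. (4): M_A = 4|ad + bc|^2
    return 4 * abs(a * d + b * c)**2

rng = np.random.default_rng(7)
for _ in range(1000):
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    a, b, c, d = v / np.linalg.norm(v)
    alpha, beta = rng.uniform(0.01, 0.99, size=2)
    # Outcome probabilities of the diagonal POVM diag(alpha, beta) on qubit A
    p1 = alpha**2 * (abs(a)**2 + abs(b)**2) + beta**2 * (abs(c)**2 + abs(d)**2)
    p2 = 1 - p1
    # Outcome states A_i|Psi>/sqrt(p_i): the POVM rescales (a, b) and (c, d)
    phi1 = np.array([alpha * a, alpha * b, beta * c, beta * d]) / np.sqrt(p1)
    phi2 = np.array([np.sqrt(1 - alpha**2) * a, np.sqrt(1 - alpha**2) * b,
                     np.sqrt(1 - beta**2) * c, np.sqrt(1 - beta**2) * d]) / np.sqrt(p2)
    delta = M_A(a, b, c, d) - p1 * M_A(*phi1) - p2 * M_A(*phi2)
    assert delta >= -1e-12   # Eq. (5): nonincreasing on average
```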
For this kind of quantum state, the contour plot of $E_{ms}$ versus the non-normalized real parameters $a'$ and $d'$ is depicted in Fig. 2.1, where the parameters $b'=c'=0.5$ are fixed. In the regions near $(a'=d'=0)$ and $(a',d'\gg0.5)$, the multipartite entanglement takes larger values, as the quantum state ${|\Psi^{(1)}\rangle}$ tends to the Greenberger-Horne-Zeilinger (GHZ) state. In the regions $(a'\gg b',c',d')$ and $(d'\gg a',b',c')$, $E_{ms}$ takes smaller values, as the quantum state approaches a product state. In particular, when the real parameters satisfy $a'=d'$ and $b'=c'$, the multipartite entanglement reaches the maximum $E_{ms}=1$. In this case, the quantum state can be rewritten as $$\label{6}
{|\Pi_{4}\rangle}=({|00\rangle}\otimes{|\varphi\rangle}+{|11\rangle}\otimes
{|\varphi^{\bot}\rangle})/\sqrt{2},$$ where ${|\varphi\rangle}=(a'{|00\rangle}+b'{|11\rangle})/\sqrt{a^{'2}+b^{'2}}$ and ${|\varphi^{\bot}\rangle}=(b'{|00\rangle}-a'{|11\rangle})/\sqrt{a^{'2}+b^{'2}}$. This state is a generalized Bell state, i.e., a maximally entangled bipartite state between subsystems $AB$ and $CD$. When ${|\varphi\rangle}$ is a product state, ${|\Pi_{4}\rangle}$ is a GHZ state. When ${|\varphi\rangle}$ is a Bell state, ${|\Pi_{4}\rangle}$ is the cluster state ${|\mathcal{C}_{4}^{(1)}\rangle}$.
In two-dimensional lattices, the four-qubit cluster-class state has the form $$\label{7}
{|\Psi^{(2)}\rangle}=a{|0000\rangle}-b{|0111\rangle}-c{|1010\rangle}+d{|1101\rangle},$$ where the parameters $a,b,c$, and $d$ are also complex. This kind of quantum state is related to the box cluster state (${|\mathcal{C}_{4}^{(2)}\rangle}=({|0000\rangle}-{|0111\rangle}-{|1010\rangle}+{|1101\rangle})/2$) via SLOCC. For the cluster-class state, we can obtain the concurrences $C_{AC}^{2}=4(|ac|-|bd|)^2$ and $C_{ij}^{2}=0$ for the other subsystems. Unlike in the 1D case, the single residual correlation $M_{k}(\Psi^{(2)})$ is not permutation invariant and does not satisfy the entanglement monotone property. As an example, we consider the quantum state ${|\Psi^{(2)}\rangle}$, where the non-normalized coefficients $a'=b'=2$, $c'=0.2$, and $d'=3$. After a simple calculation, we have $M_A=0.5643$ and $M_C=0.2915$. Under the POVM performed on qubit $A$ (here $\alpha=0.9$ and $\beta=0.2$), the change of the residual correlation is $\Delta
M_C=M_C(\Psi^{(2)})-p_1M_C(\Phi_1)-p_2M_C(\Phi_2)=-0.1151$.
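This counterexample can be reproduced directly. In the sketch below (our own code; the helper name is ours), the POVM element $A_1=\mbox{diag}\{\alpha,\beta\}$ on qubit $A$ rescales the $A=0$ amplitudes $(a,b)$ by $\alpha$ and the $A=1$ amplitudes $(c,d)$ by $\beta$:

```python
import numpy as np

def M_C_psi2(a, b, c, d):
    # M_C = tau_{C(R_C)} - C_AC^2 for |Psi^(2)> = a|0000> - b|0111> - c|1010> + d|1101>
    tau_C = 4 * (abs(a)**2 + abs(d)**2) * (abs(b)**2 + abs(c)**2)
    C_AC2 = 4 * (abs(a * c) - abs(b * d))**2
    return tau_C - C_AC2

# Non-normalized coefficients of the counterexample in the text
a, b, c, d = np.array([2.0, 2.0, 0.2, 3.0]) / np.linalg.norm([2.0, 2.0, 0.2, 3.0])
alpha, beta = 0.9, 0.2

M_A = 4 * (a**2 + b**2) * (c**2 + d**2) - 4 * (abs(a * c) - abs(b * d))**2
print(round(M_A, 4), round(M_C_psi2(a, b, c, d), 4))   # 0.5643 0.2915

# Outcome states of the POVM diag(alpha, beta) on qubit A
p1 = alpha**2 * (a**2 + b**2) + beta**2 * (c**2 + d**2)
p2 = 1 - p1
phi1 = np.array([alpha * a, alpha * b, beta * c, beta * d]) / np.sqrt(p1)
phi2 = np.array([np.sqrt(1 - alpha**2) * a, np.sqrt(1 - alpha**2) * b,
                 np.sqrt(1 - beta**2) * c, np.sqrt(1 - beta**2) * d]) / np.sqrt(p2)

dM_C = M_C_psi2(a, b, c, d) - p1 * M_C_psi2(*phi1) - p2 * M_C_psi2(*phi2)
print(round(dM_C, 3))   # -0.115, i.e. M_C increases on average under this POVM
```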
However, the average multipartite correlation $$\label{8}
E_{ms}(\Psi^{(2)})=3(|a|^2+|c|^2)(|b|^2+|d|^2)+4|abcd|$$ is entanglement monotone, which can be proven as follows. First, we consider the POVM $\{A_1,A_2\}$ performed on subsystem $A$. Due to the LU invariance of $E_{ms}$, we need only consider the diagonal matrices in the singular-value decomposition form; the output states ${|\Phi_1\rangle}$ and ${|\Phi_2\rangle}$ are then obtained with probabilities $p_1$ and $p_2$, respectively. The correlation $E_{ms}(\Psi^{(2)})$ can be separated into two components, $\zeta_1=(\tau_{A(R_A)}-2C_{AC}^2)/4$ and $\zeta_2=(\tau_{B(R_B)}+\tau_{C(R_C)}+\tau_{D(R_D)})/4$, which the POVM affects differently. The component $\zeta_1$ is invariant under determinant-one SLOCC. With this property, we can derive $\Delta
\zeta_1=\zeta_1(\Psi^{(2)})-p_1\zeta_1(\Phi_1)-p_2\zeta_1(\Phi_2)=
[1-\frac{\alpha^2\beta^2}{p_1}-\frac{(1-\alpha^2)(1-\beta^2)}{p_2}]\zeta_1(\Psi^{(2)})$, where $\zeta_1(\Psi^{(2)})=(|ad|+|bc|)^2-(|ac|-|bd|)^2$ (in the general case, this quantity is not guaranteed to be non-negative). For the component $\zeta_2$, the change is $\Delta\zeta_2=\zeta_2(\Psi^{(2)})-p_1\zeta_2(\Phi_1)-p_2\zeta_2(\Phi_2)=\sum_{k\neq
A}[\tau(\rho_k)-p_1\tau(\rho_k^1)-p_2\tau(\rho_k^2)]$, which is equivalent to the changes of the linear entropies induced by the mixed state decomposition of subsystems $\rho_{k}$ for $k=B,C,D$ [@note2]. After some tedious calculation, the change of the average multipartite correlation is $$\begin{aligned}
\label{9}
\Delta_{A} E_{ms}&=&\Delta\zeta_1+\Delta\zeta_2\nonumber\\
&=&(\alpha^2-\beta^2)^2[4|abcd|(|a|^2+|b|^2)(|c|^2+|d|^2)\nonumber\\
&&+3(|bc|^2-|ad|^2)^2]/p_1p_2,\end{aligned}$$ which is obviously a non-negative number. This means that the correlation $E_{ms}(\Psi^{(2)})$ does not increase on average under the POVM performed on qubit $A$. But, since the quantities $\zeta_1$ and $\zeta_2$ are not invariant under permutations of qubits, we still need to consider the POVMs performed on subsystems $B,C$, and $D$. After a similar analysis, we can derive the change of the correlation under the POVM on qubit $C$ as $\Delta_{C}
E_{ms}=(\alpha^2-\beta^2)^2[4|abcd|(|a|^2+|d|^2)(|b|^2+|c|^2)
+3(|ab|^2-|cd|^2)^2]/p_1p_2$, which is also non-negative. For the POVM on subsystem $B$, one can separate the correlation $E_{ms}$ into two components $\kappa_1=(\tau_{B(R_B)})/4$ and $\kappa_2=(\sum_{k\neq B}\tau_{k(R_k)}-2C_{AC}^{2})/4$ (the non-negativity of $\kappa_2$ is guaranteed by the monogamy relation [@osb06]). $\kappa_1$ is nonincreasing due to the determinant-one SLOCC invariance, and $\kappa_2$ is nonincreasing because of the concave and convex properties of the linear entropy and the concurrence, respectively. Therefore, $E_{ms}$ is also nonincreasing under this POVM. The case of the POVM on subsystem $D$ is similar. According to the above analysis, we can draw the conclusion that the correlation $E_{ms}(\Psi^{(2)})$ is entanglement monotone and can characterize the multipartite entanglement in the system.
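Both the closed form of Eq. (8) and the non-negativity of Eq. (9) can be confirmed numerically. The following sketch is our own (the helper names are not from the paper): it compares Eq. (8) against the direct average $(M_A+M_B+M_C+M_D)/4$ built from linear entropies and $C_{AC}^2$, and checks $\Delta_A E_{ms}$ against the closed form of Eq. (9) for random states and POVM parameters:

```python
import numpy as np

def E_ms_psi2(a, b, c, d):
    # Eq. (8): closed form of E_ms for |Psi^(2)>
    return 3 * (abs(a)**2 + abs(c)**2) * (abs(b)**2 + abs(d)**2) + 4 * abs(a * b * c * d)

def E_ms_from_residuals(a, b, c, d):
    # (M_A + M_B + M_C + M_D)/4 built from tau_{k(R_k)} and C_AC^2
    A, B, C, D = abs(a)**2, abs(b)**2, abs(c)**2, abs(d)**2
    C_AC2 = 4 * (abs(a * c) - abs(b * d))**2
    M_A = 4 * (A + B) * (C + D) - C_AC2
    M_B = 4 * (A + C) * (B + D)
    M_C = 4 * (A + D) * (B + C) - C_AC2
    M_D = 4 * (A + C) * (B + D)
    return (M_A + M_B + M_C + M_D) / 4

rng = np.random.default_rng(3)
for _ in range(200):
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    a, b, c, d = v / np.linalg.norm(v)
    assert np.isclose(E_ms_psi2(a, b, c, d), E_ms_from_residuals(a, b, c, d))

    # Diagonal POVM on qubit A; Delta_A E_ms checked against Eq. (9)
    al, be = rng.uniform(0.01, 0.99, size=2)
    p1 = al**2 * (abs(a)**2 + abs(b)**2) + be**2 * (abs(c)**2 + abs(d)**2)
    p2 = 1 - p1
    phi1 = np.array([al * a, al * b, be * c, be * d]) / np.sqrt(p1)
    phi2 = np.array([np.sqrt(1 - al**2) * a, np.sqrt(1 - al**2) * b,
                     np.sqrt(1 - be**2) * c, np.sqrt(1 - be**2) * d]) / np.sqrt(p2)
    delta = E_ms_psi2(a, b, c, d) - p1 * E_ms_psi2(*phi1) - p2 * E_ms_psi2(*phi2)
    eq9 = (al**2 - be**2)**2 * (
        4 * abs(a * b * c * d) * (abs(a)**2 + abs(b)**2) * (abs(c)**2 + abs(d)**2)
        + 3 * (abs(b * c)**2 - abs(a * d)**2)**2) / (p1 * p2)
    assert np.isclose(delta, eq9) and delta >= -1e-10
```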
In Fig.2.2, the change of $E_{ms}(\Psi^{(2)})$ with the non-normalized real parameters $a'$ and $d'$ ($b'=c'=0.5$ are fixed) is plotted. When $(a'\gg b',c',d')$ and $(d'\gg a',b',c')$, $E_{ms}\approx0$ and the quantum states tend to the four-qubit product state. When $(a',d'\approx 0)$ and $(a',d'\gg b',c')$, the multipartite entanglement has rather large values ($E_{ms}\approx
0.75$), where the quantum state approximates the product of a single-qubit state and a three-qubit GHZ state. The maximum $E_{ms}=1$ appears at the point $a'=d'=0.5$, where the quantum state is just the box cluster state ${|\mathcal{C}_4^{(2)}\rangle}$.
Finally, we address the entanglement monotone property of $E_{ms}$ in a three-dimensional cluster-class state, which is a trivial case. This state has the form $$\label{10}
{|\Psi^{(3)}\rangle}=a{|0000\rangle}+b{|1111\rangle},$$ and is related to the four-qubit GHZ state via an SLOCC operation. The quantum state ${|\Psi^{(3)}\rangle}$ is invariant under permutations of qubits and all of its two-qubit concurrences are zero. Under the next level of the POVM, the same properties still hold. So, the single residual correlation $M_k=\tau_{k(R_k)}$ is entanglement monotone and satisfies $M_A=M_B=M_C=M_D$. It is obvious that the average correlation $E_{ms}(\Psi^{(3)})=M_k=4|ab|^2$ is also entanglement monotone and can characterize the multipartite entanglement in the system.
Three- and four-qubit entanglement measures
-------------------------------------------
In a four-qubit pure state ${|\Psi\rangle}_{ABCD}$, there are five multipartite correlation parameters (cf. the Venn diagram in [@byw07]), i.e., one genuine four-qubit correlation $t_4({|\Psi\rangle}_{ABCD})$ and four three-qubit correlations $t_3(\rho_{ijk})$. According to the QCRs, we have a set of equations [@byw07] $$\label{11}
t_4({|\Psi\rangle})+\sum_{i<j\neq k}t_3(\rho_{ijk})=M_k,$$ where $M_k$ is the single-residual correlation related to qubit $k$, and the subscripts $i,j,k=A,B,C,D$. Note that these four equations are unable to determine completely the five correlation parameters. In fact, at least one additional independent relation for either $t_3$ or $t_4$ is needed in this case.
As is known, the mixed three-tangle is a good entanglement measure for a three-qubit mixed state; it is defined as [@won01] $$\label{12}
\tau_{3}(\rho_{ijk})=\mbox{min}\sum_{\{p_{x},\phi_{x}\}}p_{x}\tau(\phi_{x}),$$ where $\tau$ is the pure state three-tangle [@ckw00] and the minimum runs over all pure state decompositions of $\rho_{ijk}$. However, it is shown in Ref. [@byw07] that $\tau_3$ is not compatible with the QCRs in some specific four-qubit pure states \[for example, the quantum state ${|\psi\rangle}_{ABCD}=({|0000\rangle}+{|1011\rangle}+{|1101\rangle}+{|1110\rangle})/2$ [@ver02]\]. So, for the cluster-class states, it is necessary to check whether or not $\tau_3$ correctly quantifies $t_3$ in the QCRs. If it does, we can obtain the genuine four-qubit correlation $t_4$ from Eq. (11).
For the cluster-class state ${|\Psi^{(1)}\rangle}$ in 1D lattices, the three-qubit reduced density matrices have the form $\rho_{ijk}=p_1{{|0\rangle}{\langle 0|}}_i\otimes{{|\phi\rangle}{\langle \phi|}}_{jk}+p_2{{|1\rangle}{\langle 1|}}_i\otimes{{|\psi\rangle}{\langle \psi|}}_{jk}$, in which ${|\phi\rangle}$ and ${|\psi\rangle}$ are two-qubit entangled states. If one uses the mixed three-tangle to quantify the three-qubit correlation, the relation $t_3(\rho_{ijk})=\tau_3(\rho_{ijk})=0$ is obtained. Substituting this relation into Eq. (11), one can solve for the genuine four-qubit correlation $t_4=M_k=4|ad+bc|^2$. According to the analysis in Sec. IIA, we know that the quantity $\tau_4=t_4$ satisfies all three requirements of an entanglement measure. Therefore, for the cluster-class state ${|\Psi^{(1)}\rangle}$, a set of correlation measures $\{\tau_2,\tau_3,\tau_4\}$, all of which are entanglement monotone \[we define $\tau_2(\rho_{ij})=C_{ij}^2$\], can characterize the genuine two-, three-, and four-qubit entanglement in the system. For the cluster-class state ${|\Psi^{(3)}\rangle}$ in 3D lattices, the case is similar. Its three-qubit reduced density matrix is $\rho_{ijk}=|a|^2{{|000\rangle}{\langle 000|}}+|b|^2{{|111\rangle}{\langle 111|}}$ and the corresponding three-tangle $\tau_3$ is zero. After using $\tau_3$ to quantify the correlation $t_3$, one can solve for the correlation $t_4=\tau_4=M_k=4|ab|^2$, which is also entanglement monotone. So, the correlation measures $\{\tau_2,\tau_3,\tau_4\}$ can characterize the different levels of entanglement in the cluster-class state ${|\Psi^{(3)}\rangle}$.
In the cluster-class state ${|\Psi^{(2)}\rangle}$, the situation is non-trivial. If one uses the mixed three-tangle to quantify the correlation $t_3$, it is straightforward to find that $\tau_3(\rho_{ABC})=0$ and $\tau_3(\rho_{ACD})=0$. Substituting the two zero $t_3$s into Eq. (11), one can obtain the other three multipartite correlations $t_4(\Psi^{(2)})=16|abcd|$, $t_3(\rho_{ABD})=4(|ad|-|bc|)^2$, and $t_3(\rho_{BCD})=4(|ab|-|cd|)^2$. At this stage, we need to consider *whether or not the mixed three-tangle $\tau_3$ is compatible with the QCRs in this system and whether the correlation $t_4$ is appropriate to characterize the genuine four-qubit entanglement*.
We first analyze the compatibility of $\tau_3$ with the QCRs in the system. The decomposition of $\rho_{ABD}$ into its eigenstates can be written as $$\label{13}
\rho_{ABD}=p{{|\psi_1\rangle}{\langle \psi_1|}}+(1-p){{|\psi_2\rangle}{\langle \psi_2|}},$$ where ${|\psi_1\rangle}=(a{|000\rangle}+d{|111\rangle})/\sqrt{p}$, ${|\psi_2\rangle}=(b{|011\rangle}+c{|100\rangle})/\sqrt{1-p}$, and $p=|a|^2+|d|^2$. It is well known that any other decomposition can be obtained with a unitary transformation on the eigenvectors [@los06]. Hence, the vectors of any decomposition of $\rho_{ABD}$ are linear combinations of ${|\psi_1\rangle}$ and ${|\psi_2\rangle}$, i.e., $$\begin{aligned}
\label{14}
{|Z(q,\phi)\rangle}&=&\sqrt{q}{|\psi_1\rangle}-e^{i\phi}\sqrt{1-q}{|\psi_2\rangle}\\
&=&\tilde{a}{|000\rangle}-e^{i\phi}\tilde{b}{|011\rangle}-e^{i\phi}\tilde{c}{|100\rangle}
+\tilde{d}{|111\rangle},\nonumber\end{aligned}$$ where $\tilde{a}=a\gamma$, $\tilde{b}=b\eta$, $\tilde{c}=c\eta$, and $\tilde{d}=d\gamma$, with $\gamma=\sqrt{q/p}$ and $\eta=\sqrt{(1-q)/(1-p)}$. For this pure state, the reduced density matrix of qubits $AB$ is $$\label{15}
\rho_{AB}(Z)=\left(\begin{array}{cccc}
|\tilde{a}|^2 & 0 & -\tilde{a}\tilde{c}^{*}e^{-i\phi} & 0 \\
0 & |\tilde{b}|^2 & 0 & -\tilde{b}\tilde{d}^{*}e^{i\phi} \\
-\tilde{a}^{*}\tilde{c}e^{i\phi} & 0 & |\tilde{c}|^2 & 0 \\
0 & -\tilde{b}^{*}\tilde{d}e^{-i\phi} & 0 & |\tilde{d}|^2 \\
\end{array}\right)$$ and its concurrence is zero (in fact, $\rho_{AB}$ is a mixture of two product states). Similarly, for the quantum state $\rho_{AD}(Z)$, we can obtain $C_{AD}=0$ as well. So, in any pure state decomposition of $\rho_{ABD}$, the entanglements of subsystems $AB$ and $AD$ are both zero. Then, according to the definition of the mixed state three-tangle, we have the following relation: $$\begin{aligned}
\label{16}
\tau_3(\rho_{ABD}) &=& \mbox{min}\sum_{\{p_x,Z_x\}}p_x\tau(Z_x(q,\phi)) \nonumber\\
&=& \mbox{min}\sum_{\{p_x,Z_x\}}p_x[\tau_{A(R_A)}^{(x)}-(C_{AB}^{(x)})^2
-(C_{AD}^{(x)})^2]\nonumber\\
&=&\mbox{min}\sum_{\{p_x,Z_x\}}p_x\tau_{A(R_A)}^{(x)}\nonumber\\
&=&C_{A:BD}^2(\rho_{ABD})\nonumber\\&=&4(|ad|-|bc|)^2,\end{aligned}$$ where we have replaced the basis $\{{|00\rangle},{|11\rangle}\}_{BD}$ with $\{{|\tilde{0}\rangle},{|\tilde{1}\rangle}\}_{BD}$ in calculating the last equality. This value coincides with the correlation $t_3(\rho_{ABD})$ obtained using the QCRs. For the quantum state $\rho_{BCD}$, we can get $\tau_3(\rho_{BCD})=4(|ab|-|cd|)^2=t_3(\rho_{BCD})$ after a similar derivation. Therefore, in the cluster-class state ${|\Psi^{(2)}\rangle}$, the mixed three-tangle $\tau_3$ correctly quantifies the correlation $t_3$ and is compatible with the QCRs.
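The key step, namely that every decomposition vector ${|Z(q,\phi)\rangle}$ has vanishing concurrences $C_{AB}$ and $C_{AD}$, is easy to verify numerically. The sketch below (our own code; helper names are ours) builds $|Z\rangle$ on qubits $A,B,D$ for random $a,b,c,d,q,\phi$ and applies Wootters' formula to the two reduced density matrices:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def rho_pair(psi, keep):
    """Two-qubit reduced density matrix of a 3-qubit pure state (qubits 0, 1, 2)."""
    t = psi.reshape(2, 2, 2)
    drop = ({0, 1, 2} - set(keep)).pop()
    m = np.moveaxis(t, drop, -1).reshape(4, 2)   # traced qubit last
    return m @ m.conj().T

rng = np.random.default_rng(11)
for _ in range(100):
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    a, b, c, d = v / np.linalg.norm(v)
    p = abs(a)**2 + abs(d)**2
    q, phi = rng.uniform(), rng.uniform(0, 2 * np.pi)
    g, e = np.sqrt(q / p), np.sqrt((1 - q) / (1 - p))   # gamma, eta of Eq. (14)

    # |Z> = a~|000> - e^{i phi} b~|011> - e^{i phi} c~|100> + d~|111> on qubits A, B, D
    Z = np.zeros(8, dtype=complex)
    Z[0b000] = g * a
    Z[0b011] = -np.exp(1j * phi) * e * b
    Z[0b100] = -np.exp(1j * phi) * e * c
    Z[0b111] = g * d

    assert concurrence(rho_pair(Z, (0, 1))) < 1e-10   # C_AB = 0
    assert concurrence(rho_pair(Z, (0, 2))) < 1e-10   # C_AD = 0
```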
With the QCRs, we solve the genuine four-qubit correlation $t_4(\Psi^{(2)})=16|abcd|$, which is obviously non-negative. The LU-invariant property is guaranteed by the corresponding property of the correlations $M_k$ and $t_3$ in Eq. (11). Before using $t_4(\Psi^{(2)})$ to characterize the genuine four-qubit entanglement in the system, we should prove first that it is entanglement monotone. Since the correlation $t_4$ is invariant under the permutations of qubits, we only need consider the POVM $\{A_1,A_2\}$ performed on the subsystem $A$ in which the diagonal matrices are $\mbox{diag} \{\alpha,\beta\}$ and $\mbox{diag}
\{\sqrt{1-\alpha^2},\sqrt{1-\beta^2}\}$, respectively. After the POVM, two output states are available with probabilities $p_1$ and $p_2$, respectively, and the change of the correlation is $\Delta
t_4(\Psi^{(2)})=(1-\frac{\alpha^2\beta^2}{p_1}-\frac{(1-\alpha^2)
(1-\beta^2)}{p_2})t_4(\Psi^{(2)})$. Due to the non-negativity of the two factors in $\Delta t_4$ [@dur00], the correlation $t_4(\Psi^{(2)})=\tau_4$ is entanglement monotone. Therefore, the set of correlation measures $\{\tau_2,\tau_3,\tau_4\}$ is able to characterize the entanglements of two, three, and four qubits in the cluster-class state ${|\Psi^{(2)}\rangle}$, namely they can be good entanglement measures for the corresponding multi-body systems.
In Fig.3, the variations of the two-, three-, and four-qubit entanglements with the non-normalized parameters $a'$ and $b'$ are plotted. The behaviors of $C_{AC}^2$ and $\tau_3(\rho_{ABD})$ are the same and both attain the maximum $0.4999$ when $(a'=0,b'=0.7)$ and $(a'=0.7,b'=0)$. The value of $\tau_3(\rho_{BCD})$ tends to 1 when $(a'=b'\approx 0)$ and $(a'=b'\gg 0.5)$, because the quantum state $\rho_{BCD}$ approximates the pure GHZ state in these regions. The genuine four-qubit entanglement $\tau_4$ will be 1 when $a'=b'=0.5$. At this point, the quantum state is just the box cluster state ${|\mathcal{C}^{(2)}_{4}\rangle}$.
Based on the above analysis, we conclude not only that the mixed three-tangle $\tau_3$ is a correlation measure compatible with the QCRs, but also that the set of hierarchical measures $\{\tau_2,\tau_3,\tau_4\}$ can, respectively, quantify the two-, three-, and four-qubit entanglement in the cluster-class states, as listed in Table I.
-----------------------------------------------------------------------------------------------------
parameter $\backslash$ state   ${|\Psi^{(1)}\rangle}$   ${|\Psi^{(2)}\rangle}$   ${|\Psi^{(3)}\rangle}$
-------------------------- ------------------------ ------------------------ ------------------------
$\tau_4$ $4|ad+bc|^2$ $16|abcd|$ $4|ab|^2$
$\tau_3(\rho_{ABD})$ $0$ $4(|ad|-|bc|)^2$ $0$
$\tau_3(\rho_{BCD})$ $0$ $4(|ab|-|cd|)^2$ $0$
$\tau_2(\rho_{AB})$ $4|a^{*}c-b^{*}d|^2$ $0$ $0$
$\tau_2(\rho_{AC})$ $0$ $4(|ac|-|bd|)^2$ $0$
$\tau_2(\rho_{CD})$ $4|a^{*}b-c^{*}d|^2$ $0$ $0$
-----------------------------------------------------------------------------------------------------
: Entanglement measures in different four-qubit cluster-class states.
discussion and conclusion
=========================
For the cluster-class state ${|\Psi^{(2)}\rangle}$, the single residual correlation $M_{C}$ is not entanglement monotone, as we showed in Sec. IIA. Here, we explain the reason. This residual correlation can be written as $M_{C}=\tau_4+\tau_3(\rho_{BCD})$ in terms of the analysis in Sec. IIB. Although the two components are both entanglement monotone under the POVMs performed on subsystems $B,C$, and $D$, the POVM on subsystem $A$ affects them differently. Due to its invariance under qubit permutations, $\tau_4$ is still monotone under this POVM. For the reduced density matrix $\rho_{BCD}$, the effect of the POVM on qubit $A$ is equivalent to a mixed state decomposition of $\rho_{BCD}$. Because the mixed three-tangle is a convex function, the parameter $\tau_3(\rho_{BCD})$ is nondecreasing under this POVM. Therefore, when the decrease of $\tau_4$ is less than the increase of $\tau_3$, the residual correlation $M_C$ will not be monotone. In the example in Sec. IIA, the changes of the three- and four-qubit correlations are $\Delta \tau_3(\rho_{BCD})=-0.1964$ and $\Delta
\tau_4=0.08127$, respectively, which results in $\Delta
M_C=-0.1151$. It should be pointed out that, for quantum states that do not have three-qubit correlations under LOCC (like the cluster-class states ${|\Psi^{(1)}\rangle}$ and ${|\Psi^{(3)}\rangle}$), the residual correlation $M_k$ could be entanglement monotone.
In this paper, we prove analytically that $E_{ms}$ is entanglement monotone for the four-qubit cluster-class states, and thus it can characterize the multipartite entanglement in the system. For general four-qubit states, $E_{ms}$ is conjectured to be entanglement monotone according to the numerical analysis in Ref. [@byw07]. Moreover, for a type of four-qubit state, numerical analysis of Bell inequalities [@syu03; @end05] shows a property similar to that of $E_{ms}$, which also supports the conjecture. A proof or disproof for an arbitrary $N$-qubit case is still awaited. At present, we know that, for a kind of quantum state whose two-qubit concurrences remain zero under the POVMs, the average correlation $E_{ms}=\frac{\sum_k\tau_{k(R_k)}}{N}$ is entanglement monotone. A trivial example is the $N$-qubit GHZ-class state ${|\mathcal{G}\rangle}_N=a{|00\cdots0\rangle}_N+b{|11\cdots1\rangle}_N$. A nontrivial example is a type of six-qubit cluster-class state ${|\Psi_6\rangle}=a{|000000\rangle}+b{|000111\rangle}+c{|111000\rangle}-d{|111111\rangle}$, where the parameters $a,b,c$, and $d$ are complex numbers; the corresponding cluster state was recently prepared by Lu *et al.* in a photonic system [@cyl07].
In the four-qubit cluster-class states, the mixed three-tangle $\tau_3$ is shown to be a compatible measure for quantifying the correlation $t_3$ in the QCRs. With this evaluation, the genuine four-qubit entanglement measure $\tau_4$ can be obtained. Based on this pure cluster state entanglement, we are able to introduce a mixed state entanglement measure by the convex roof extension [@uhl00], $$\label{17}
\tau_4(\rho_{ABCD})=\mbox{min}\sum_{\{p_x, \phi_x^{(\mathcal{C})}\}}
p_x\tau_4(\phi_x^{(\mathcal{C})}),$$ where an extra restriction is that the general vector ${|\phi_x^{(\mathcal{C})}\rangle}$ in the pure state decomposition has the form of cluster-class states. As an example, we analyze the quantum state $\rho_{ABCD}=1/2({{|\psi_1\rangle}{\langle \psi_1|}}+{{|\psi_2\rangle}{\langle \psi_2|}})$, in which ${|\psi_1\rangle}=({|0000\rangle}+{|1111\rangle})/\sqrt{2}$ and ${|\psi_2\rangle}=({|0011\rangle}+{|1100\rangle})/\sqrt{2}$. The general decomposition vector ${|Z(q_k,\varphi_k)\rangle}=\sqrt{q_k}{|\psi_1\rangle}-e^{i\varphi_{k}}\sqrt{1-q_k}{|\psi_2\rangle}$ has the form of the cluster-class state ${|\Psi^{(1)}\rangle}$. After choosing $q_1=q_2=0.5$, $\varphi_{1}=0$ and $\varphi_2=\pi$, we can obtain $\tau_4(\rho_{ABCD})=0$ in terms of the formula in Eq. (17). Furthermore, via the mixed state parameter $\tau_4$, one can solve the five-qubit correlation $t_5$ with the help of the QCRs, which can possibly be entanglement monotone in a kind of five-qubit pure state.
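The decomposition quoted above can be checked explicitly. In the sketch below (our own construction), the two vectors with $q=1/2$ and $\varphi=0,\pi$ are written in the $(a,b,c,d)$ parametrization of ${|\Psi^{(1)}\rangle}=a{|0000\rangle}+b{|0011\rangle}+c{|1100\rangle}-d{|1111\rangle}$, for which $\tau_4=4|ad+bc|^2$:

```python
import numpy as np

def tau4_psi1(a, b, c, d):
    # Genuine four-qubit entanglement of a 1D cluster-class state: tau_4 = 4|ad + bc|^2
    return 4 * abs(a * d + b * c)**2

# In the parametrization a|0000> + b|0011> + c|1100> - d|1111>:
psi1 = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)   # (|0000> + |1111>)/sqrt(2)
psi2 = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)    # (|0011> + |1100>)/sqrt(2)

# Decomposition vectors Z(q, phi) with q = 1/2 and phi = 0, pi
Zp = (psi1 - psi2) / np.sqrt(2)
Zm = (psi1 + psi2) / np.sqrt(2)

# {Zp, Zm} is a unitary rotation of {psi1, psi2}, so it decomposes the same rho
rho_eig = 0.5 * (np.outer(psi1, psi1.conj()) + np.outer(psi2, psi2.conj()))
rho_Z = 0.5 * (np.outer(Zp, Zp.conj()) + np.outer(Zm, Zm.conj()))
assert np.allclose(rho_eig, rho_Z)

# Both vectors carry zero tau_4, so the convex roof of Eq. (17) vanishes
assert np.isclose(tau4_psi1(*Zp), 0.0) and np.isclose(tau4_psi1(*Zm), 0.0)
```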
In conclusion, we have explored the multipartite quantum correlations in four-qubit cluster-class states. It is shown that the average multipartite correlation $E_{ms}$ is entanglement monotone in these systems, partly supporting our previous conjecture [@byw07]. Moreover, we find a set of hierarchical measures $\{\tau_2,\tau_3,\tau_4\}$ that can characterize the different levels of entanglement in the cluster-class states. The entanglement monotone property of $E_{ms}$ in a general $N$-qubit pure state is still an open problem, which is worth studying in the future.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors would like to thank Dong Yang and Heng Fan for many useful discussions and suggestions. The work was supported by the RGC of Hong Kong under HKU Grants No. 7051/06P, 7012/06P, and 3/05C, the URC fund of HKU, and NSF-China Grant No. 10429401.
[99]{}
A. K. Ekert, Phys. Rev. Lett. **67**, 661 (1991).
C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. **70**, 1895 (1993).
C. H. Bennett and D. P. DiVincenzo, Nature **404**, 247 (2000).
R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. **86**, 5188 (2001).
S.-S. Li, G.-L. Long, F.-S. Bai, S.-L. Feng, and H.-Z. Zheng, Proc. Natl. Acad. Sci. USA **98**, 11847 (2001).
R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, arXiv:quant-ph/0702225; M. B. Plenio and S. Virmani, Quantum Inf. Comput. **7**, 1 (2007).
V. Vedral, M. B. Plenio, M. A. Rippin, and P. L. Knight, Phys. Rev. Lett. **78**, 2275 (1997).
M. Jakob and J. A. Bergou, arXiv:quant-ph/0302075; X. Peng, X. Zhu, D. Suter, J. Du, M. Liu, and K. Gao, Phys. Rev. A **72**, 052109 (2005); T. E. Tessier, Found. Phys. Lett. **18**, 107 (2005).
Y.-K. Bai, D. Yang, and Z. D. Wang, Phys. Rev. A **76**, 022336 (2007).
D. Schlingemann and R. F. Werner, Phys. Rev. A **65**, 012308 (2001).
O. Gühne, G. Tóth, P. Hyllus, and H. J. Briegel, Phys. Rev. Lett. **95**, 120405 (2005).
P. Walther, K. J. Resch, T. Rudolph, E. Schenck, H. Weinfurter, V. Vedral, M. Aspelmeyer, and A. Zeilinger, Nature **434**, 169 (2005); P. Walther, F. Tiefenbacher, P. Böhi, R. Kaltenbaek, T. Jennewein, and A. Zeilinger, Nature **445**, 65 (2007).
N. Kiesel, C. Schmid, U. Weber, G. Tóth, O. Gühne, R. Ursin, and H. Weinfurter, Phys. Rev. Lett. **95**, 210502 (2005); G. Vallone, E. Pomarico, P. Mataloni, F. De Martini, and V. Berardi, Phys. Rev. Lett. **98**, 180502 (2007); K. Chen, C.-M. Li, Q. Zhang, Y.-A. Chen, A. Goebel, S. Chen, A. Mair, and J.-W. Pan, Phys. Rev. Lett. **99**, 120503 (2007).
C.-Y. Lu, X.-Q. Zhou, O. Gühne, W.-B. Gao, J. Zhang, Z.-S. Yuan, A. Goebel, T. Yang, and J.-W. Pan, Nature Physics **3**, 91 (2007).
C. H. Bennett, S. Popescu, D. Rohrlich, J. A. Smolin, and A. V. Thapliyal, Phys. Rev. A **63**, 012307 (2000).
W. Dür, G. Vidal, and J. I. Cirac, Phys. Rev. A **62**, 062314 (2000).
J.-M. Cai, Z.-W. Zhou, X.-X. Zhou, and G.-C. Guo, Phys. Rev. A **74**, 042338 (2006).
Although the parameters $a,b,c$, and $d$ are real in the general case, we can enlarge them into complex numbers without loss of generality.
S. Hill and W. K. Wootters, Phys. Rev. Lett. **78**, 5022 (1997); W. K. Wootters, Phys. Rev. Lett. **80**, 2245 (1998).
E. Santos and M. Ferrero, Phys. Rev. A **62**, 024101 (2000).
F. Verstraete, J. Dehaene, and B. De Moor, Phys. Rev. A **64**, 010101(R) (2001).
After the POVM $\{A_1,A_2\}$, the output states of the subsystem $\rho_{B}$ will be $\rho_B^i=\mbox{Tr}_{ACD}[A_i{{|\Psi^{(2)}\rangle}{\langle \Psi^{(2)}|}}A_i^\dagger]/p_i$, for $i=1,2$, for which the relation $\rho_B=p_1\rho_B^1+p_2\rho_B^2$ holds. The cases for $\rho_C$ and $\rho_D$ are similar.
T. J. Osborne and F. Verstraete, Phys. Rev. Lett. **96**, 220503 (2006).
A. Wong and N. Christensen, Phys. Rev. A **63**, 044301 (2001).
V. Coffman, J. Kundu, and W. K. Wootters, Phys. Rev. A **61**, 052306 (2000).
F. Verstraete, J. Dehaene, B. De Moor, and H. Verschelde, Phys. Rev. A **65**, 052112 (2002).
R. Lohmayer, A. Osterloh, J. Siewert, and A. Uhlmann, Phys. Rev. Lett. **97**, 260502 (2006).
S. Yu, Z.-B. Chen, J.-W. Pan, and Y.-D. Zhang, Phys. Rev. Lett. **90**, 080401 (2003).
J. Endrejat and H. Büttner, Phys. Rev. A **71**, 012305 (2005).
A. Uhlmann, Phys. Rev. A **62**, 032307 (2000).
---
abstract: 'We perform a study of the evolution of helical quantum turbulence at different temperatures by solving numerically the Gross-Pitaevskii and the stochastic Ginzburg-Landau equations, using up to $4096^3$ grid points with a pseudospectral method. We show that for temperatures close to the critical temperature the fluid described by these equations can act as a classical viscous flow, with the decay of the incompressible kinetic energy and the helicity becoming exponential. The transition from this behavior to the one observed at zero temperature is smooth as a function of temperature. Moreover, the presence of strong thermal effects can inhibit the development of a proper turbulent cascade. We provide ansätze for the effective viscosity and friction as a function of the temperature.'
author:
- 'Patricio Clark Di Leoni$^{1,2}$, Pablo D. Mininni$^1$, & Marc E. Brachet$^3$'
bibliography:
- 'ms.bib'
title: Finite temperature effects in helical quantum turbulence
---
Introduction {#introduction}
============
In experiments with superfluids and Bose-Einstein condensates (BECs) a highly disorganized and chaotic behavior, known as quantum turbulence, can be observed [@Vinen02; @Henn09; @Barenghi14]. At zero temperature quantum flows are characterized by their lack of viscosity, and by having all of their vorticity concentrated along vortex filaments with quantized circulation [@Feynman55; @Donnelly]. But at finite temperatures dissipative effects creep in. Landau and Tisza’s two fluid model [@Landau], where a mixture of superfluid and normal fluid coexist and interact (with the ratio between the two determined by the temperature), is perhaps the simplest way to represent the finite temperature dynamics of superfluids and BECs.
Based on the two fluid model, the Hall-Vinen-Bekarevich-Khalatnikov (HVBK) model [@Hall56; @Bekarevich61] adds a term accounting for the “mutual friction” between the normal and superfluid components. This model has been successful in, for example, explaining the Taylor-Couette instability in liquid helium [@Barenghi87]. It has also been used to study turbulent flows: @Roche09 found that there is a strong locking between both fluid components and that both develop a turbulent cascade, @Shukla15 found the existence of both an inverse and a forward cascade in the two-dimensional case, and shell models based on the HVBK model were developed and used to study the mutual friction terms [@Wacks11; @Boue15], intermittency [@Boue13], and scaling exponents [@Shukla16]. An alternative to the HVBK model is the vortex filament model [@Schwarz85], which, as the name implies, takes the vortex filaments into account explicitly by modeling them as classical Eulerian vortices of negligible width which evolve under the Biot-Savart law. As mutual friction can also be added to this model, it has been used to study quantum turbulence at finite temperatures [@Khomenko15; @Khomenko16]. But two important aspects of quantum turbulence are omitted in these two models. One is compressibility effects, and thus sound waves, which are absent from both. The other is vortex reconnection, which is omitted completely in the HVBK model, as the fluid is averaged over volumes larger than the vortex width, and is introduced only phenomenologically in the vortex filament model.
There is another family of models to study finite temperature effects based on extensions of the Gross-Pitaevskii equation (GPE). At zero or near zero temperatures the GPE, for which quantized vortices are exact solutions that can reconnect with no extra [*ad-hoc*]{} assumptions, is a very successful model for BECs [@Proukakis08]. Moreover, a hydrodynamic analogy can be easily obtained from the GPE by means of the Madelung transformation, and it has been shown that at the larger scales its turbulent solutions match those of classical turbulence [@Nore97b; @Clark17]. There are various ways of generalizing the GPE for studying finite-temperature effects [@Berloff14]. These include solving the spectrally truncated version of the equations [@Davis01; @Connaughton05], coupling them with a Boltzmann equation describing the evolution of the thermalized modes as in the Zaremba-Nikuni-Griffin model [@Zaremba99], or simply adding a phenomenological dissipation term [@Pitaevskii59; @Choi98]. Previous studies of these models have concentrated on understanding the thermalization processes [@Davis01; @Krstulovic11a; @Krstulovic11b; @Shukla13], on investigating single vortex decay [@Kobayashi06; @Jackson09; @Rooney10; @Allen14; @Rooney16], or on modelling traps with several vortices [@Rooney13; @Stagg15] in configurations similar to experiments of BECs [@Neely13; @Moon15; @Kim16; @Seo17]. However, few studies have focused on the properties of the turbulent motions and on how finite temperature effects come into play in this regime.
In this context, it is worth noting that the study of quantum turbulence has garnered much interest in recent years. Two of the main areas of work have been establishing the differences between classical and quantum turbulence [@Barenghi14; @Paoletti08], and understanding the dynamics of Kelvin waves [@Fonda14; @Clark15a]. The usual picture of quantum turbulence (see, for example, [@Vinen02]) goes as follows: while at the larger scales the nonlinear energy transfer in quantum flows is mediated by the interaction between vortices and reconnection processes [@Meichle12], and the turbulent flow resembles that of a classical fluid, at scales smaller than the mean intervortex length Kelvin waves are believed to be the ones responsible for the energy transfer, thus generating Kelvin wave turbulence [@Kozik04; @Lvov10; @Boue11; @Boue15; @Clark15a]. Nonlinear interaction of Kelvin waves leads to the creation of phonons [@Vinen03], which are finally responsible for the depletion of incompressible kinetic energy in quantum turbulence [@Nore97a; @Clark17]. Additionally, it was recently shown that at zero temperature helical quantum turbulence (i.e., for flows with non-zero large-scale helicity) develops a dual cascade of energy and of helicity reminiscent of the dual cascade observed in classical helical flows, and that the emission of phonons also results in the depletion of helicity [@Clark17]. The presence of such a dual cascade, where both energy and helicity are being transferred from the larger to the smaller scales to be finally dissipated, has a strong impact on the evolution and decay of turbulence. In classical flows, this dual cascade has received significant attention (see, e.g., [@Brissaud73; @Chen03; @Teitelbaum09; @Moffatt14]), as have the effects of helicity on the evolution and statistical properties of turbulence.
As a result, understanding how helical flows and their dual cascade are affected by the interaction with the effective thermal dissipation in finite temperature models will be the first main objective of the present work.
Indeed, the overall purpose of this paper is to study finite temperature effects on a helical quantum flow in high resolution numerical simulations of the truncated Gross-Pitaevskii equation, with the thermal states being generated by the Stochastic Ginzburg-Landau method [@Berloff14]. Our results show that for high temperatures the quantum fluid described by this model can behave as a classical viscous flow, with the decay of energy and of helicity becoming exponential in time, and with the development of the dual turbulent cascade being hindered. The transition from the zero to the high temperature behavior is smooth as a function of the temperature, as long as the temperature is smaller than the critical one. As a second objective, we will profit from the high spatial resolution of our simulations to provide an ansatz for the effective viscosity as a function of the temperature. The structure of the paper is as follows. In Sec. \[model\] we outline the physical model used, and describe the simulations we performed. The main results are presented in Sec. \[results\]. Finally, closing comments are presented in Sec. \[conclusions\].
The finite temperature model {#model}
============================
In this section we first present a brief summary of some key concepts and definitions of the zero temperature model (the GPE), used in this work as the starting point for the finite temperature model. Then, we explain how to generate finite temperature states using the Stochastic Ginzburg-Landau equation (SGLE), following the method outlined in [@Krstulovic11a; @Krstulovic11b], and how to use these states in quantum turbulence simulations solving the GPE. Finally, we give details of a large number of high resolution simulations performed for the present study.
The Gross-Pitaevskii equation
-----------------------------
At zero (or near zero) temperatures, a field of weakly interacting bosons can be appropriately described by the GPE, $$i \hbar \frac{\partial \Psi}{\partial t}
=
- \frac{\hbar^2}{2m} \nabla^2 \Psi
+ g \vert \Psi \vert^2 \Psi,
\label{gpe}$$ where $\Psi$ is the wavefunction of the condensate, $m$ is the mass of the bosons, and $g$ is proportional to the bosons' scattering length. The GPE conserves the total energy $$E = \int_V dV \left( \frac{\hbar^2}{2m} \vert \nabla \Psi \vert^2 + \frac{g}{2}
\vert \Psi \vert^4 \right),$$ the momentum $${\bf P} = \int_V dV \frac{i\hbar}{2} \left( \Psi \nabla \bar{\Psi} -
\bar{\Psi} \nabla \Psi \right),$$ (where the overbar denotes complex conjugate), and the total number of particles $$\mathcal{N} = \int_V dV \vert\Psi\vert^2 .$$
A hydrodynamical description of the flow can be recovered via the Madelung transformation $$\Psi ({\bf r},t) = \sqrt{\frac{\rho ({\bf r},t)}{m}} e^{i m \phi
({\bf r},t)/\hbar},$$ where $\rho({\bf r},t)$ is the fluid mass density, and $\phi({\bf r},t)$ is the velocity potential. Applying this transformation to the GPE yields the equations for an ideal barotropic fluid plus an extra term with the gradient of the so-called quantum pressure. This hydrodynamical description is useful to separate the total energy into different components [@Nore97a]. These are respectively the kinetic energy $$E_k = \int_V dV \frac12 \rho \vert {\bf v} \vert^2,$$ (which in turn can be separated into an incompressible component $E^i_k$ and a compressible one $E^c_k$ using a Helmholtz decomposition of the velocity field), the quantum energy $$E_q = \int_V dV \frac{\hbar^2}{2 m^2} (\nabla \sqrt{\rho})^2,$$ and the internal (or potential) energy $$E_p = \int_V dV \frac{g}{2 m^2} \rho^2.$$
![[*(Color online)*]{} Condensate fraction $\mathcal{N}_0/\langle\mathcal{N}\rangle$ of constant total density scans as a function of the temperature at two different spatial resolutions. The simulations with $N^3$ grid points, with $N=1024$, are marked with (blue) circles, while the simulations with $N=128$ are marked with (green) triangles. The solid black line indicates the usual ideal BEC theory prediction for the condensate fraction as a function of temperature.[]{data-label="condf"}](fig1.pdf){width="8.5cm"}
By linearising Eq. around $\Psi = \Psi_0$ (constant), one can obtain the Bogoliubov dispersion relation $\omega_B(k) = c k (1 + \xi^2 k^2/2)^{1/2}$, where $c=[g \vert \Psi_0 \vert^2/m]^{1/2}$ is the speed of sound and $\xi = [\hbar^2 / (2m \vert \Psi_0 \vert^2 g) ]^{1/2}$ is the healing length. The GPE can also sustain Kelvin waves, which are helical perturbations that travel along the quantum vortices. As stated in Sec. \[introduction\], Kelvin waves play a major role in zero temperature quantum turbulence, where they are responsible for the energy transfer at scales smaller than the intervortex distance.
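The Bogoliubov dispersion relation is straightforward to evaluate numerically. The helper below is purely illustrative (the default values merely echo the dimensionless units used later, $c = 2U$ and $\xi \approx 0.0011\,L$); it interpolates between the phonon branch $\omega_B \approx ck$ for $k\xi \ll 1$ and the free-particle branch $\omega_B \propto k^2$ for $k\xi \gg 1$:

```python
import numpy as np

def bogoliubov_omega(k, c=2.0, xi=0.0011):
    """Bogoliubov dispersion omega_B(k) = c k sqrt(1 + xi^2 k^2 / 2).
    c is the speed of sound and xi the healing length; the defaults are
    illustrative choices, not fitted values."""
    k = np.asarray(k, dtype=float)
    return c * k * np.sqrt(1.0 + 0.5 * (xi * k) ** 2)
```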
![[*(Color online)*]{} From top to bottom, spectra of mass fluctuations, of the incompressible kinetic energy, and of the compressible kinetic energy, for several initial conditions of the GPE at different temperatures. All simulations have $N=1024$ linear spatial resolution.[]{data-label="massk"}](fig2a.pdf "fig:"){width="8.5cm"} ![[*(Color online)*]{} From top to bottom, spectra of mass fluctuations, of the incompressible kinetic energy, and of the compressible kinetic energy, for several initial conditions of the GPE at different temperatures. All simulations have $N=1024$ linear spatial resolution.[]{data-label="massk"}](fig2b.pdf "fig:"){width="8.5cm"} ![[*(Color online)*]{} From top to bottom, spectra of mass fluctuations, of the incompressible kinetic energy, and of the compressible kinetic energy, for several initial conditions of the GPE at different temperatures. All simulations have $N=1024$ linear spatial resolution.[]{data-label="massk"}](fig2c.pdf "fig:"){width="8.5cm"}
One last aspect of the GPE dynamics of relevance for this work is the concept of helicity. In classical fluids the helicity is defined as $$H = \int_V dV {\bm v} \cdot {\bm \omega} ,$$ where ${\bm \omega}$ stands for the vorticity field. Helicity is a measure of the mean alignment between velocity and vorticity (and thus, of the depletion of nonlinearities), of the topological complexity of vorticity field lines, as well as a measure of the departure from mirror symmetry of the flow [@Moffatt69; @Moffatt92; @Moffatt92b; @Moffatt14]. In classical turbulence the presence of helicity can have multiple consequences, such as the depletion of the nonlinearities and energy transfers [@Kraichnan73], the slowing down of the onset of dissipation [@Andre77], and it can even affect the evolution of convective storms [@Lilly86]. It has also been shown that the helicity, just like the energy, develops a turbulent cascade where it is transferred from the larger to the smaller scales [@Brissaud73]. Moreover, the form of the cascade implies that it is a [*dual*]{} cascade, meaning that both energy and helicity have simultaneously non-zero transfer rates in the inertial range. In quantum fluids, Kelvin waves are helical and thus $H$ could in principle be used as a proxy to quantify the excitation of Kelvin waves at small scales. However, both ${\bm v}$ and ${\bm \omega}$ are singular along the vortex lines of a quantum fluid, where all the vorticity is concentrated. To overcome this problem, many authors have chosen to work with a definition of helicity based on its topological interpretation [@Scheeler14]. These geometric decompositions can result in zero net helicity [@Hanninen16] but recover a classical non-zero value at large scales [@Salman17; @Kedia17]. Other authors have chosen to work with filtered fields [@Zuccher15].
Here we will use the [*regularized*]{} helicity introduced in [@Clark16], where the velocity field is regularized before being used to compute $H$. This method was shown to give results compatible with other methods in the literature to estimate the helicity of a quantum flow, and was used successfully to study helical quantum turbulence at zero temperature in massive numerical simulations in [@Clark17], where the existence of a dual cascade of energy and helicity was confirmed for the quantum case.
![[*(Color online)*]{} Volume rendering of the density field for the simulation with $N=4096$ and $T=0.64T_\lambda$. Similar to the zero temperature ABC flow [@Clark17], large structures and regions of quiescence not present in the initial conditions are spontaneously formed within the flow. At the large scales, the flow resembles the structure of a classical ABC flow. The possibility of seeing these large scale structures formed by the quantized vortices (the smallest structures in the flow) in such detail is in part due to the large scale separation, a product of the high resolution used in the simulation. At low resolution there is not enough scale separation between the large scales and the thermal fluctuations for such structures to develop.[]{data-label="fullbox_dvr"}](fig3.jpg){height="7cm"}
The Stochastic Ginzburg Landau equation
---------------------------------------
The spatially truncated version of different conservative systems of partial differential equations can achieve, after long time integration, states of thermodynamic equilibrium known as thermalized states, where energy is equipartitioned among all the possible spatial modes [@Lee52; @Kraichnan89]. A common way to truncate a system is via a Galerkin projector. Given a Fourier series expansion of the wavefunction $$\Psi({\bm r},t) = \sum^{\infty}_{k=-\infty} \hat{\Psi}_{\bm
k}(t) e^{i{{\bm k}\cdot \bm{r}}},$$ where $\hat\Psi_{\bm k}$ are the Fourier coefficients and ${\bm k}$ are the wavevectors, the projector has the form $${P}_{k_G} [ \Psi ({\bm r},t)] = \sum_{|k|\leq k_G} \hat{\Psi}_{\bm
k}(t) e^{i{{\bm k}\cdot \bm{r}}} .
\label{galerkin}$$ Applying it to Eq. would give the so-called Fourier (or Galerkin) truncated version of the GPE.
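The projector above amounts to zeroing all Fourier modes with $|{\bm k}| > k_G$. A minimal sketch for a periodic 3-D field, using FFTs (function name and grid conventions are illustrative, not the paper's implementation):

```python
import numpy as np

def galerkin_project(psi, k_G):
    """Galerkin projector P_{k_G}: keep only Fourier modes of the periodic
    field psi with |k| <= k_G (integer wavenumbers on a cubic grid)."""
    N = psi.shape[0]
    k = np.fft.fftfreq(N) * N
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    mask = np.sqrt(kx**2 + ky**2 + kz**2) <= k_G
    return np.fft.ifftn(np.fft.fftn(psi) * mask)
```

Applying the same mask at every time step of a pseudospectral integration yields the Fourier (Galerkin) truncated dynamics.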
![[*(Color online)*]{} Isosurfaces of the density field for the simulation with $N=4096$ and $T=0.64T_\lambda$. Contrary to the zero temperature case [@Clark17], it is not possible to discern individual vortices now. But their presence in the flow is still evident when looking at the fine-grain structures. Also, the formation of the vortex bundles observed in the zero temperature case is hampered in this case.[]{data-label="fullbox_iso"}](fig4.jpg){height="7cm"}
The studies of @Davis01 and of @Connaughton05 showed that if the Fourier truncated version of the GPE is integrated for long enough, the system indeed reaches a thermodynamic equilibrium. The statistical properties of this state are given by the microcanonical ensemble defined with fixed energy $E$, momentum ${\bf P}$, and number of particles $\mathcal{N}$. Moreover, if $E$ is varied, a phase transition akin to that of BECs can be observed, where the zero-wavenumber $A_0 = \langle\Psi\rangle$ mode becomes equal to zero for finite $E$. But there are two problems with generating thermal states in this way. One is that the truncated GPE takes a very long time to converge to the equilibrium state, making it computationally expensive. The other is that the temperature is not easily accessed nor controlled in this way, given the complicated expression for the entropy in the microcanonical state of the system. In order to overcome these problems, @Krstulovic11a [@Krstulovic11b] suggested using a Langevin process to generate grand-canonical states with distribution probability $\mathbb{P}_{\rm st}$ given by a Boltzmann weight $\mathbb{P}_{\rm st}=e^{-\beta F}/\mathcal{Z}$, where $\mathcal{Z}$ denotes the grand partition function and $$F = E - \mu
\mathcal{N} - {\bf W}\cdot{\bf P},$$ is a free energy, with $\beta$ the inverse temperature, $\mu$ the chemical potential, and ${\bf W}$ a vector related to the counterflow velocity. These grand-canonical states are faster to generate than microcanonical states, and allow easy access to, and control of, the temperature in the equilibrium.
![[*(Color online)*]{} Mass density correlation function $C(d) = \left< (\rho({\bf x} +d\hat{x}) - \rho_0)(\rho({\bf x})
- \rho_0) \right>$ for the simulation with $N=4096$ and $T=0.64T_\lambda$ at $t\approx 1$, with the displacement $d$ normalized in units of the healing length $\xi$. Note the slow decay of the correlation up to distances $\approx 10^3 \xi$.[]{data-label="fullbox_corr"}](fig5.pdf){width="8.5cm"}
The Langevin process that generates these states has a Ginzburg-Landau equation of the type $$\begin{gathered}
\hbar \frac{\partial A_{\bf k}}{\partial t} = - \frac{\partial
F}{\partial A^*_{\bf k}}
+ \sqrt{\frac{2 \hbar}{\beta}} \hat{\xi} ({\bf k},t), \label{sglespec1}
\\
\langle \xi ({\bf r},t) \bar{\xi} ({\bf r}',t')\rangle = \delta(t-t')
\delta({\bf r} - {\bf r}'),\label{sglespec2}\end{gathered}$$ where $A_{\bf k}$ are the Fourier modes of the wavefunction, and $\hat{\xi}({\bf k},t)$ is the Fourier transform of the Gaussian delta-correlated noise $\xi({\bf r},t)$. In [@Krstulovic11b] it is shown that the stationary probability of the solutions of Eq. is indeed $\mathbb{P}_{\rm st}$. Thus, the grand-canonical states are simply generated by integrating the Langevin Eq. in time until statistical convergence is obtained.
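The Langevin integration can be sketched with a simple explicit Euler-Maruyama step. The paper integrates the 3-D equation with an implicit Euler scheme; the 1-D version below, with no counterflow (${\bf W}=0$) and illustrative parameters, is only a minimal sketch of the stochastic dynamics that drives the system toward the grand-canonical equilibrium:

```python
import numpy as np

def sgle_step(psi, dt, beta, mu=1.0, g=1.0, hbar=1.0, m=1.0, dx=1.0, rng=None):
    """One explicit Euler-Maruyama step of the 1-D SGLE with W = 0.
    The noise approximates a complex delta-correlated field on the grid.
    Pass a persistent rng for a proper stochastic trajectory."""
    if rng is None:
        rng = np.random.default_rng(0)
    # periodic second derivative (finite differences, for simplicity)
    lap = (np.roll(psi, 1) + np.roll(psi, -1) - 2.0 * psi) / dx**2
    drift = (hbar / (2.0 * m)) * lap + (mu / hbar) * psi \
        - (g / hbar) * np.abs(psi) ** 2 * psi
    noise = (rng.standard_normal(psi.shape)
             + 1j * rng.standard_normal(psi.shape)) / np.sqrt(2.0 * dx)
    return psi + dt * drift + np.sqrt(2.0 * dt / (hbar * beta)) * noise
```

In the zero-temperature limit ($\beta \to \infty$) the noise vanishes and the dynamics reduces to a gradient descent toward the uniform condensate $|\Psi|^2 = \mu/g$.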
In physical space, the Langevin equation reads $$\begin{aligned}
\hbar \frac{\partial \Psi}{\partial t} = &\left[ \frac{\hbar^2}{2m}
\nabla^2 \Psi + \mu \Psi - g \vert \Psi \vert^2 \Psi - i \hbar {\bf
W}\cdot \nabla \Psi \right]
\nonumber
\\
&+ \sqrt{\frac{2\hbar}{\beta}} \xi .
\label{sgle}\end{aligned}$$ This equation will be referred to as the Stochastic Ginzburg-Landau equation (SGLE). The chemical potential $\mu$ controls the total number of particles $\mathcal{N}$. Different solutions obtained by varying $\beta$ will have different ratios of condensed fraction $|A_0|^2/\mathcal{N} = \mathcal{N}_0/\langle\mathcal{N}\rangle$, except below a critical $\beta$ (or, in terms of temperature, above the transition temperature $T_\lambda$) where this ratio will be equal to zero.
The thermal states obtained from the SGLE can then be fed to the GPE, in combination with an initial condition for the large-scale flow, to simulate a quantum turbulent flow at finite temperature. The total initial condition for the GPE $\Psi$ then has the form $$\Psi = \Psi_{\mathrm{flow}} \times \Psi_{\mathrm{SGLE}},
\label{inipsi}$$ where $\Psi_{\mathrm{flow}}$ is an initial wavefunction describing the flow, and $\Psi_{\mathrm{SGLE}}$ is a thermal solution of the SGLE which accounts for the occupation numbers of the different energy levels in the thermal state at a given temperature.
Although for simplicity the projector defined in Eq. is not explicitly written in Eqs. and , in the following we will indeed solve the truncated versions of each equation. It is also worth noting that [*every*]{} time one solves a system of partial differential equations numerically, one is actually solving truncated equations. Depending on the numerical method used for spatial discretization, the integration may or may not preserve the conservation properties of the truncated system. The method used here, and described next, preserves all quantities conserved by the Galerkin truncated Eqs. and in the continuum-time case (i.e., before time discretization).
![[*(Color online)*]{} Evolution of the incompressible kinetic energy (top) and of the helicity (bottom) as a function of time at different temperatures. All simulations have $N=1024$ linear resolution, except the one indicated with the solid black line which has $N=4096$. The early “inviscid-like” behavior seen at low temperature, in which energy and helicity remain approximately constant, is lost as the temperature is increased.[]{data-label="eht"}](fig6.pdf){width="8.5cm"}
![[*(Color online)*]{} Evolution of the incompressible kinetic energy (top) and of the helicity (bottom) as a function of time and at different temperatures in semi-logarithmic scale. All simulations have $N=1024$, except the one indicated with the solid black line which has $N=4096$. At the highest temperatures quantities decay exponentially in time.[]{data-label="ehtlog"}](fig7.pdf){width="8.5cm"}
Initial conditions and numerical simulations
--------------------------------------------
To solve numerically Eqs. and in three dimensions we used GHOST [@Mininni11], which uses a pseudospectral method combined with a fourth order Runge-Kutta scheme to solve Eq. , and an implicit Euler scheme to solve Eq. . Boundary conditions are periodic, each side of the simulation box is of size $2\pi L$ (where $L$ is a characteristic scale of the flow), and the “2/3 rule” is used for dealiasing. A hybrid OpenMP-MPI scheme is used for the parallelization. Multiple simulations were done at three different spatial resolutions $N^3$, with linear resolutions $N=128$, $N=1024$, and $N=4096$. In all cases, the speed of sound is chosen to be twice the characteristic flow velocity. For the simulation at the largest resolution ($N=4096$), a total of 8192 processors were used with 4096 MPI jobs and 2 threads per MPI job, and over 16 million CPU hours were used for the integration.
All quantities are made dimensionless using a characteristic length, a speed, and a mass. Quantities with units can be recovered at any time by setting $L=L'/(2\pi)$, $U=c'/2$, and $M=M'/(2\pi)^3$, where $L'$ is the characteristic length of the physical system, $c'$ is the speed of sound, and $M'$ is the fluid or gas mass (note all primed quantities have units). With this choice, the length of the simulation domain is equal to $2\pi L$, the speed of sound $c$ is equal to $2U$, and the mean density $\rho_0$ is equal to $1 \, M/L^3$. The healing length $\xi$ is such that $k_{\max}\xi=1.5$, where $k_{\max} = N/3$ (in units of $1/L$) is the largest resolved wavenumber in each simulation (the equivalent of $k_G$ in Eq. ). In the highest resolved simulation, the healing length then is $\xi \approx 0.0011 L$. As a reference, in superfluid $^4$He experiments the characteristic system size is $L' \approx 10^{-2}$ m, the speed of sound is $c'\approx 230$ m/s, the fluid density is $\approx 125$ kg/m$^3$ (thus $M' \approx 1.25 \times 10^{-4}$ kg), and the healing length is $\xi' \approx 10^{-8} \, \textrm{m} \approx 10^{-6} L$ [@Barenghi14]. Even with the massive resolution considered, the scale separation of our highest resolved simulation is thus insufficient to match superfluid $^4$He; it is, however, much better suited for comparisons with BECs, for which $L' \approx 10^{-4}$ m, $c'\approx 2\times10^{-3}$ m/s, and $\xi \approx 5 \times 10^{-7} \, \textrm{m} \approx 0.005 L$ [@White14]. For the sake of simplicity, in the following all quantities are quoted using $L=M=U=1$; units can be added later using the procedure explained above. Finally, temperatures in the following will always be expressed explicitly in units of the transition temperature $T_\lambda$. More details on how units can be handled in GPE and SGLE simulations can be found in [@Nore97a; @Krstulovic11a; @Krstulovic11b].
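The unit bookkeeping above can be summarized in a short helper (a hypothetical illustration, not part of the GHOST code), which recovers the quoted value $\xi \approx 0.0011\,L$ for the $N=4096$ run:

```python
import math

def simulation_scales(N, L_prime):
    """Scales used in the runs: maximum wavenumber k_max = N/3 from the 2/3
    dealiasing rule, healing length fixed by k_max * xi = 1.5 (xi in units
    of L), and the conversion L = L'/(2 pi) back to physical units."""
    k_max = N / 3.0
    xi = 1.5 / k_max                   # healing length in units of L
    L = L_prime / (2.0 * math.pi)      # physical characteristic length
    return {"k_max": k_max, "xi": xi, "xi_physical": xi * L}
```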
Simulations with $N=128$ and with $N=1024$ were performed at different temperatures, while only one simulation at a fixed temperature was performed at $N=4096$. All simulations were performed with no counterflow, so ${\bf W}$ in Eq. is always set to zero, and the normal and superfluid components are in all cases in perfect coflow.
![[*(Color online)*]{} Confirmation of exponential decay for the high temperature simulations, akin to that of a viscous classical fluid: incompressible kinetic energy exponential decay rate $-(1/E^i_k) dE^i_k/dt$ for different temperatures in $N=1024$ runs (top), and same for the helicity $-(1/H) dH/dt$ (bottom). While simulations at low temperature display oscillations at early times and growth at late times, at the highest temperatures these quantities remain approximately constant for long times, allowing us to estimate an exponential decay rate.[]{data-label="exponential"}](fig8.pdf){width="8.5cm"}
In order to get a helical flow at large scales, for the flow initial condition $\Psi_{\mathrm{flow}}$ we used a superposition of two quantum Arnold-Beltrami-Childress (ABC) flows [@Clark17]. The velocity field is a superposition of an ABC flow at $k=1$ and of an ABC flow at $k=2$: ${\bf v}_{\rm ABC}={\bf v}_{\rm ABC}^{(1)}+{\bf v}_{\rm ABC}^{(2)}$, with $$\begin{aligned}
{\bf v}_{\rm ABC}^{(k)} = & \left[ B \cos(k y) + C \sin(k z) \right]
{\bf i}
+ \left[ C \cos(k z) + \right. \nonumber \\
{}& \left. A \sin(k x) \right] {\bf j} +
\left[ A \cos(k x) + B \sin(k y) \right] {\bf k}
\label{ABC}\end{aligned}$$ with $(A,B,C)=(0.9,1,1.1)/\sqrt{3}$, and where ${\bf i}$, ${\bf
j}$, and ${\bf k}$ are the three Cartesian vectors. The wavefunction that generates this flow after a Madelung transformation is obtained by the following procedure, detailed in [@Clark16]. First, we set $\Psi_{\rm flow}=\Psi_{\rm ABC}^{(1)} \times \Psi_{\rm ABC}^{(2)}$, with $\Psi_{\rm ABC}^{(k)}= \Psi_{A,k}^{x,y,z} \times \Psi_{B,k}^{y,z,x}
\times \Psi_{C,k}^{z,x,y}$, and with $\Psi_{A,k}^{x,y,z} =\exp\{i [A \sin(k x)\,m/\hbar] y
+i [A \cos(k x)\,m/\hbar] z\}$, where $[a]$ stands for the nearest integer to $a$. In order to minimize the amount of energy in acoustic modes at the initial condition, we then evolve $\Psi_{\mathrm{flow}}$ using the advected real Ginzburg-Landau equation (ARGLE), whose stationary solutions are solutions of the GPE with a minimal amount of phonons. The ARGLE explicitly reads $$\begin{aligned}
\partial_t \Psi =& \frac{\hbar}{2 m} \nabla^2 \Psi
+(\frac{g\rho_0}{m}-g|\Psi |^2
-\frac{m {\bf v}_{\rm ABC}^2}{2 \hbar})\Psi \nonumber \\
& -i {\bf v}_{\rm ABC} \cdot \nabla \Psi.
\end{aligned}$$ More information on the ARGLE can be found in [@Nore97a], while the details of the quantum ABC flow are discussed in [@Clark16; @Clark17]. The resulting flow has maximal helicity, and was used in [@Clark17] to study helical quantum turbulence at zero temperature.
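The classical ABC velocity field above is simple to generate on a grid. The sketch below builds the two-mode superposition used for the initial conditions, with the $(A,B,C)$ values quoted in the text (the construction of the wavefunction and the ARGLE relaxation are not reproduced here; function names are illustrative):

```python
import numpy as np

def abc_velocity(X, Y, Z, k, A, B, C):
    """Classical ABC velocity field at wavenumber k (a Beltrami flow,
    with vorticity everywhere parallel to the velocity)."""
    vx = B * np.cos(k * Y) + C * np.sin(k * Z)
    vy = C * np.cos(k * Z) + A * np.sin(k * X)
    vz = A * np.cos(k * X) + B * np.sin(k * Y)
    return vx, vy, vz

def two_mode_abc(X, Y, Z):
    """Superposition of k=1 and k=2 ABC flows with (A,B,C)=(0.9,1,1.1)/sqrt(3)."""
    A, B, C = np.array([0.9, 1.0, 1.1]) / np.sqrt(3.0)
    v1 = abc_velocity(X, Y, Z, 1, A, B, C)
    v2 = abc_velocity(X, Y, Z, 2, A, B, C)
    return tuple(a + b for a, b in zip(v1, v2))
```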
Once $\Psi_{\mathrm{flow}}$ has been computed, we solve Eq. to obtain a thermal solution at a given temperature, and finally we compute the initial conditions for the GPE using Eq. .
![[*(Color online)*]{} Estimation of the effective viscosity from the energy decay rate, $\nu_\textrm{eff} = -(\tilde{L}^2/E^i_k) dE^i_k/dt$ in the vicinity of $t\approx 1$, as a function of the temperature. Two choices for the characteristic scale are shown: the lengthscale of the initial ABC flow $\tilde{L}=L_0$ (top), and the correlation length of the incompressible velocity field $\tilde{L}=L_i$ in the vicinity of $t\approx 1$ (bottom). The (blue) circles indicate the simulations with $N=1024$, and the (green) triangle the simulation with $N=4096$.[]{data-label="scaling"}](fig9a.pdf "fig:"){width="8.5cm"} ![[*(Color online)*]{} Estimation of the effective viscosity from the energy decay rate, $\nu_\textrm{eff} = -(\tilde{L}^2/E^i_k) dE^i_k/dt$ in the vicinity of $t\approx 1$, as a function of the temperature. Two choices for the characteristic scale are shown: the lengthscale of the initial ABC flow $\tilde{L}=L_0$ (top), and the correlation length of the incompressible velocity field $\tilde{L}=L_i$ in the vicinity of $t\approx 1$ (bottom). The (blue) circles indicate the simulations with $N=1024$, and the (green) triangle the simulation with $N=4096$.[]{data-label="scaling"}](fig9b.pdf "fig:"){width="8.5cm"}
Numerical results {#results}
=================
Temperature scans
-----------------
![[*(Color online)*]{} Evolution of the total energy and its different components for the simulation with $T=0.64 T_\lambda$ and $N=4096$.[]{data-label="evoltot"}](fig10.pdf){width="8.5cm"}
In order to characterize the system we first perform a temperature scan solving the SGLE with the chemical potential $\mu$ adjusted to keep the total density $\rho_0=1$. In Fig. \[condf\] we show the condensate fraction $\mathcal{N}_0/\langle \mathcal{N} \rangle$ (with $\mathcal{N}_0 = |A_0|^2$) at late times in the evolution, as a function of the temperature $T$. As reported before in @Krstulovic11a [@Krstulovic11b], the typical behavior for second order transitions can be observed, with $\mathcal{N}_0/\langle \mathcal{N} \rangle \approx 0$ for $T>T_\lambda$, and $\mathcal{N}_0/\langle \mathcal{N} \rangle$ growing as in a phase transition for $T<T_\lambda$. The value of the critical temperature $T_\lambda$ was determined from this analysis. The scans were performed at two different linear resolutions $N=128$ and $N=1024$. The results from both coincide, showing the simulations are well converged. Also shown in the figure is the usual prediction for the condensate fraction coming from ideal BEC theory [@Pathria] where $\mathcal{N}_0/\langle \mathcal{N}
\rangle = 1 - (T/T_\lambda)^{3/2}$. This prediction does not match our results exactly, as it is derived for non-interacting bosons, which is not the case here. Nonetheless, the behaviors are similar.
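The ideal-gas prediction shown in Fig. \[condf\] amounts to a one-line function (a hypothetical helper for illustration):

```python
def ideal_condensate_fraction(t):
    """Ideal-gas BEC prediction for the condensate fraction,
    N0/<N> = 1 - (T/T_lambda)^(3/2) below T_lambda and zero above;
    t = T/T_lambda."""
    return 1.0 - t ** 1.5 if t < 1.0 else 0.0
```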
As explained above, these thermal states were coupled to solutions of the ARGLE to generate initial conditions for the GPE at different temperatures. In Fig. \[massk\] we show the spectrum of the mass fluctuations $\rho(k)$ of the initial condition for five different temperatures, as well as the incompressible kinetic energy spectrum $E_k^i(k)$, and the compressible kinetic energy spectrum $E_k^c(k)$. In all cases, the increasing amplitude of high wavenumber (small scale) modes as $T$ is increased (but especially in $E_k^c(k)$, associated with phonon excitations) accounts for the increasing thermal effects. Note however that the low wavenumber (large scale) spectrum of $E_k^i(k)$, associated with the initial ABC flow, remains largely unaffected by the thermal fluctuations, a result of the sufficient scale separation in these runs.
{width="90.00000%"}
{width="90.00000%"}
Dynamical evolution
-------------------
We now focus on understanding finite temperature effects on the evolution of the GPE. We thus show results from six different simulations. Five of them were done at a linear resolution of $N=1024$, with temperatures ranging from zero to $T=0.63T_\lambda$, while the sixth simulation was performed at a linear resolution of $N=4096$ and at $T=0.64T_\lambda$.
### Large-scale flow structure
We begin by showing two visualizations of the density field for the simulation with $N=4096$ and $T=0.64T_\lambda$ at time $t\approx1$. In Fig. \[fullbox\_dvr\], a volumetric rendering of mass density is shown using VAPOR [@Clyne07]. Similarly to the zero temperature quantum ABC flow [@Clark17], large vortex bundles are formed within the flow, and regions of quiescence (with almost no vorticity) appear. At the larger scales the structure of the flow looks similar to that of a classical ABC flow, as expected. Moreover, although the thermal fluctuations blur the small scales, the large scale flow is clearly discernible. In Fig. \[fullbox\_iso\] isosurfaces of the density field are shown. As in Fig. \[fullbox\_dvr\], and contrary to the zero temperature case where it is easy to spot individual vortices (see [@Clark17]), the thermal noise lumps the vortices inside the bundles, making it difficult to discern individual structures from visual inspection, although traces of their presence are evident.
To further confirm the coexistence of large-scale correlations (associated with the flow) with small-scale thermal fluctuations and vortices, we show in Fig. \[fullbox\_corr\] the spatial correlation function of mass density fluctuations $$C(d) = \left< (\rho({\bf x} +d\hat{x}) - \rho_0)(\rho({\bf x}) -
\rho_0) \right> / \rho_0^2 ,$$ for the simulation with $N=4096$ and $T=0.64T_\lambda$ at time $t\approx1$, where $d$ is the spatial displacement (which in Fig. \[fullbox\_corr\] is normalized in units of the healing length $\xi$). The function $C(d)$ is also proportional, by the Wiener-Khinchin theorem, to the Fourier transform of the internal energy spectrum. Note that $C(d)$ decays rapidly with $d/\xi$ over a distance proportional to the vortex core size, thus further confirming the presence of quantized vortices in the flow. Then, $C(d)$ remains almost constant out to very large distances ($d\approx 10^3 \xi$), confirming the presence of a large-scale structure in the system.
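The Wiener–Khinchin route to $C(d)$ mentioned above can be sketched in one dimension on a periodic grid; a hedged illustration with names of our own choosing, not the authors' production diagnostic:

```python
import numpy as np

def density_correlation(rho, rho0):
    """C(d) = <(rho(x+d) - rho0)(rho(x) - rho0)> / rho0^2 on a periodic
    grid, via the Wiener-Khinchin theorem: the circular autocorrelation
    equals the inverse FFT of the power spectrum of the fluctuations."""
    drho = rho - rho0
    power = np.abs(np.fft.fft(drho)) ** 2
    return np.fft.ifft(power).real / (drho.size * rho0 ** 2)

# Uncorrelated (thermal-like) fluctuations decorrelate immediately:
# C(0) equals the mean squared fluctuation, C(d > 0) stays near zero.
rng = np.random.default_rng(1)
rho = 1.0 + 0.1 * rng.standard_normal(4096)
C = density_correlation(rho, 1.0)
```

In the paper the analogous 3-D correlation instead stays nearly constant out to $d\approx10^3\xi$, the signature of the large-scale flow.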
### Energy and helicity decay
In the zero temperature case nonlinear interactions of Kelvin waves lead to the emission of phonons [@Lvov10; @Vinen02], which deplete the incompressible kinetic energy [@Nore97a] and the helicity [@Clark17]. The presence of thermal noise adds a new depletion mechanism. In order to study it, we show in Figs. \[eht\] and \[ehtlog\] the evolution of the incompressible kinetic energy $E^i_k$ and of the helicity $H$ for five different temperatures, in linear and in semi-logarithmic scales respectively.
As expected, for all temperatures both $E^i_k$ and $H$ decay in time. At very early times a short transient can be seen (due to the system correcting frustration effects coming from the initial conditions), after which the different dynamical mechanisms come into play. This transient is similar for all the runs, and almost independent of the temperature. After this transient, at low temperatures both the incompressible energy and the helicity decay very slowly or remain approximately constant (see in particular the case with $T=0$ in Fig. \[ehtlog\]), up until $t \approx 3$. This is similar to what is observed in freely decaying classical turbulence: in that case the early “inviscid-like” phase corresponds to the build up of the turbulent cascade while dissipation remains negligible, a phase which (in the classical case) ends when small scale excitations reach the viscous dissipation scale. In classical turbulent flows, the presence of helicity is known to extend the duration of this “inviscid-like” phase (see, e.g., [@Teitelbaum09] and references therein). As explained in [@Clark17], in the quantum case and for $T$ close to zero this inviscid phase corresponds to the time during which vortices interact and the Kelvin wave cascade builds up; after $t\approx 4$ the emission of phonons becomes prominent and the incompressible kinetic energy and the helicity start being depleted. Note that during this phase both energy and helicity are transferred towards smaller scales, as will be confirmed later by the energy and helicity spectra.
Unlike the simulations at low temperature, the simulations at the highest temperatures go directly from the short initial transient to a seemingly exponential decay, without an inviscid-like phase in between (see Fig. \[ehtlog\]). At late times ($t>6$), all simulations show similar exponential decay rates (see Fig. \[ehtlog\]) as a significant fraction of the energy has already thermalized, with the exception of the simulation with $T=0.64 T_\lambda$ and $N=1024$, which has a higher initial temperature and thus can reach thermal equilibrium faster. The exponential decay observed in these runs is reminiscent of what is observed in the free decay of low Reynolds number classical flows. To further verify this we show estimations of the exponential decay rates $-(1/E^i_k) d E^i_k /dt$ and $-(1/H)dH/dt$ in Fig. \[exponential\]. For the higher temperatures these quantities remain close to constant for long periods of time, confirming an exponential decay. This is not the case for the lower temperatures, where oscillations and a late growth of these quantities are present. Moreover, the exponential decay behavior at high temperatures is compatible with weak nonlinearities and can be used, as explained below, to estimate an effective eddy viscosity of the flow by assuming a governing equation for the velocity of the Stokes form, $\partial{\bf v}/\partial t \approx
\nu_\textrm{eff}\nabla^2{\bf v}$.
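The decay-rate diagnostic $-(1/E)\,dE/dt$ above can be estimated from a sampled time series by finite differences; a minimal sketch (the sampling and names are assumptions of ours, not from the paper):

```python
import numpy as np

def decay_rate(t, E):
    """Instantaneous exponential decay rate -(1/E) dE/dt from samples.
    A plateau in the result indicates E(t) ~ exp(-rate * t), i.e. the
    viscous-like regime seen at the highest temperatures."""
    return -np.gradient(E, t) / np.asarray(E)

# For a purely exponential signal the estimated rate is flat.
t = np.linspace(0.0, 5.0, 501)
rate = decay_rate(t, np.exp(-0.7 * t))
```

`np.gradient` uses second-order central differences in the interior, so the plateau is recovered to high accuracy away from the endpoints.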
![[*(Color online)*]{} Spectra of the incompressible kinetic energy and of the helicity at $T=0.64T_\lambda$ in the simulation with $N=4096$ at different times. A turbulent scaling law is shown as a reference. Although nonlinear excitations develop, the range of scales compatible with the turbulent scaling is short and at late times the spectrum decays rapidly.[]{data-label="spectra"}](fig13.pdf){width="8.5cm"}
So far, these results indicate several things: the effects of the thermal states generated with the SGLE upon the quantum turbulent flow can be modeled, at least for global quantities and in the simplest scenario, using an effective viscous dissipation. This effect can be, at the highest temperatures considered, strong enough (even at the highest resolution) that nonlinear interactions and Kelvin wave turbulence cannot fully develop, such that (pseudo) viscous effects dominate the dynamics. As observed from Fig. \[exponential\], the rate of change of the energy at zero temperature is almost negligible at $t\approx 1$, but increases and becomes considerable in the other cases. We can thus use this fact to construct an ansatz for the effective viscosity as a function of temperature. In a freely decaying classical flow an eddy viscosity can be estimated as $\nu_\textrm{eff} = -(\tilde{L}^2/E^i_k) dE^i_k/dt$, where $\tilde{L}$ is some large-scale correlation length. Here we have several choices for a characteristic length $\tilde{L}$: a fixed length $L_0$ given by the length scale of the large-scale flow at $t=0$; the integral scale (i.e., the correlation length) of the incompressible velocity field $$L_i = \frac{2\pi}{E_k^i} \int{ \frac{E_k^i(k)}{k} dk } ,$$ where $E_k^i(k)$ is the spectrum of the incompressible kinetic energy; the intervortex distance $\ell$; or the healing length $\xi$. We verified that the behavior with temperature of $\nu_\textrm{eff}$ with all these choices for $\tilde{L}$ is qualitatively similar, except for a prefactor, and thus show in Fig. \[scaling\] two estimations of $\nu_\textrm{eff}$ based on large-scale correlation lengths: the fixed length $L_0$ and the integral length $L_i$. The viscosity estimates are close to zero for $T=0$, grow linearly with temperature up to $T/T_\lambda \approx 0.3$, and then either keep growing at a lower rate or decrease for larger temperatures, depending on the choice of $\tilde{L}$.
Moreover, the estimations of $\nu_\textrm{eff}$ for the $N=1024$ and the $4096$ runs at the highest temperature are similar.
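The two ingredients of the viscosity estimate, the integral scale $L_i$ and the rate $-(\tilde{L}^2/E)\,dE/dt$, can be sketched numerically; an illustrative implementation under our own naming and gridding assumptions (a small trapezoid helper is used to stay version-agnostic):

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy trapz/trapezoid renames)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def integral_scale(k, Ek):
    """Integral (large-scale correlation) length of the velocity field:
    L_i = (2*pi / E) * Int E(k)/k dk, with E = Int E(k) dk."""
    return 2.0 * np.pi * trapezoid(Ek / k, k) / trapezoid(Ek, k)

def effective_viscosity(t, E, L):
    """Eddy-viscosity estimate nu_eff = -(L^2 / E) dE/dt for a
    characteristic length L (e.g. the fixed scale L0 or L_i)."""
    return -(L ** 2) * np.gradient(E, t) / np.asarray(E)
```

For a flat spectrum on $[1,10]$ the integral scale reduces to $2\pi\ln(10)/9$, a convenient sanity check.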
These results can be interpreted as follows. The viscosity of the normal fluid $\nu_{n}$ can be expected to be proportional to the mean free path $\lambda_m$ times the sound velocity, i.e., $\nu_{n} \sim \lambda_m c$. When we increase the resolution keeping $\xi k_{\rm max}$ fixed (as done here), as well as $c$ and the temperature $T$, the mean free path, which depends only on the temperature, is constant in units of $\xi$, i.e., $$\lambda_m \sim \xi f(T/T_\lambda) ,$$ where $f(T/T_\lambda)$ is a dimensionless function. Therefore $\nu_{n}$ should scale as the inverse of the spatial resolution, $\nu_{n} \sim 1/N$. But this argument holds only as long as the mean free path is smaller than the box size, $\lambda_m<2 \pi$, while the mean free path diverges as $T\to 0$. Thus, at a given temperature the viscosity of the normal fluid should first remain constant with resolution, and then, after a certain critical resolution, go to zero as $1/N$. This is for the normal fluid alone; its contribution to the total flow should scale as $\rho_n/\rho \sim T$. Thus, we can expect an effective viscosity measured on the total fluid to scale as $$\nu_{\rm eff}\sim \nu_{n} \rho_n/\rho \sim \nu_{n} T/T_\lambda ,$$ which should first grow like $T$ and then decrease when the mean free path becomes smaller than the box size. Further confirmation of this scaling would require a direct measurement of the mean free path; we discuss possible methods to achieve this goal in the conclusions.
Finally, it is important to note that, as in the zero temperature case, the Galerkin truncated GPE conserves the total energy, and that our spatial discretization method is also conservative (although time discretization introduces errors, as discussed next). So, while the incompressible kinetic energy is depleted, the other components of the energy can be expected to grow. As an illustration, the evolution of the total energy, the incompressible kinetic energy, the compressible kinetic energy, the quantum energy, and the potential energy for the simulation with $N=4096$ and $T=0.64T_\lambda$ is shown in Fig. \[evoltot\]. Note that a fraction of the total energy is indeed lost due to numerical errors, resulting from the fact that the great cost of such a high resolution simulation did not allow us to use a very small time step. Nonetheless, energy is conserved up to 95% at $t\approx1$ (which is when most of the physics we are interested in occurs) and up to 82% at the very end of the simulation.
### Spatial spectra
Finally, we study the effect of temperature on the evolution of the spatial spectra of the two components of the kinetic energy (compressible and incompressible), and of the helicity. This should give further confirmation that, for large enough temperatures, the nonlinear cascade of energy and of helicity is strongly arrested. The results for the simulations with $N=1024$ are shown in Fig. \[specscan\] (with compensated versions of the spectra shown in Fig. \[compensated\]), while the results for the simulation with $N=4096$ are shown in Fig. \[spectra\]. Note that the compensated spectra from the simulations with $N=1024$ shown in Fig. \[compensated\] are expected to be flat in regions that follow Kolmogorov-like scaling; animations showing the evolution of each spectrum can also be found in [@SI].
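Compensating a spectrum by the Kolmogorov factor makes any $k^{-5/3}$ range appear as a plateau; a toy sketch (function names and the model spectrum are our own illustration, not the paper's data):

```python
import numpy as np

def compensate_kolmogorov(k, Ek):
    """k^(5/3)-compensated spectrum: flat wherever E(k) ~ k^(-5/3),
    i.e. over ranges with Kolmogorov-like scaling."""
    return k ** (5.0 / 3.0) * Ek

def model_spectrum(k, k_th, amp_th):
    """Toy spectrum: a Kolmogorov-like k^(-5/3) range plus a thermalized
    ~k^2 component whose amplitude grows with temperature."""
    return k ** (-5.0 / 3.0) + amp_th * (k / k_th) ** 2

k = np.linspace(1.0, 64.0, 64)
flat = compensate_kolmogorov(k, k ** (-5.0 / 3.0))  # identically ~1
```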
While, as shown in Fig. \[massk\], the initial spectra at small wavenumbers (large scales) are relatively similar for all temperatures, differences can already be seen in Fig. \[specscan\] at $t=2.5$ for the simulations at different temperatures. In particular, the simulations with the largest temperatures have less power at small wavenumbers (for all quantities $E_k^i$, $H$, and $E_k^c$), and more power at large wavenumbers (especially for $E_k^c$), as can be expected from the larger thermal fluctuations. As the flow evolves and nonlinear interactions take place (see the spectra at $t=5$), the low temperature simulations develop a range of wavenumbers compatible with a Frisch-Brissaud dual cascade of energy and of helicity towards small scales [@Brissaud73] (which corresponds to Kolmogorov-like scaling for both spectra), previously observed in zero temperature simulations [@Clark17]. However, the simulations with the highest temperatures do not develop a broad spectrum, and although excitations grow at intermediate and small wavenumbers in $E_k^i(k)$ and $H(k)$, the spectra drop faster, confirming the damping discussed in the previous section, in agreement with the effect expected from a large effective viscosity. The compensated spectra shown in Fig. \[compensated\] confirm this. The $N=4096$ simulation (see Fig. \[spectra\]) also shows this damped behavior, and as estimated from the results in Fig. \[scaling\] it has an effective viscosity of the same order as the simulation with $N=1024$ at a similar temperature.
All spectra at all temperatures have a pronounced change (or knee) at around $k\approx10$. At low temperatures, this bump (which in the case of the spectrum of $E_k^i$ is followed by a range of wavenumbers with decreasing amplitude as $k$ increases) can be associated with a bottleneck produced by the Kelvin wave cascade at scales smaller than the mean intervortex distance [@Clark17]. This is seen more clearly in the compressible kinetic energy spectra. For the simulations at the highest temperatures this second range is swallowed up by the presence of the thermalized modes. As can be expected, the flat portion of the spectra between the cascading part at small wavenumbers and the thermalized part at large wavenumbers is wider in the $N=4096$ simulation (Fig. \[spectra\]) compared to the ones at $N=1024$ (Fig. \[specscan\]). The spectra of the helicity fluctuate around zero with fast changes in sign above this wavenumber in all cases, a result of the depletion of helicity by phonons (as the spectra are plotted in logarithmic scale only positive values are shown; the missing parts correspond to negative values). The spectrum of compressible kinetic energy grows as $\sim k^2$, as can be expected for a thermalized state, and its amplitude increases with increasing temperature.
Conclusions
===========
Modeling quantum flows at nonzero temperature is key to understanding recent experimental results on quantum turbulence. However, models for quantum flows at finite temperature are limited: some are derived from phenomenological models, others are obtained from coarse approximations, and in many cases their dynamics have not been fully characterized. Here we presented a study of helical quantum turbulence at various temperatures using very large resolution simulations and a model based on the Gross-Pitaevskii equation with thermal states generated by the stochastic Ginzburg-Landau equation.
Our results show that in this model, in the presence of thermal noise, a quantum flow can behave as a viscous classical flow, with exponential decay of the incompressible kinetic energy and of the helicity. A smooth transition between the behavior at zero temperature and at large temperatures (below the critical temperature) was reported. Moreover, the (pseudo) viscous effects can strongly quench the formation of a turbulent cascade, even at the largest spatial resolution considered. However, when the temperature is not too high, a dual cascade of energy and of helicity (as also observed in classical turbulence and in quantum flows at zero temperature) can be recovered.
We presented a phenomenological estimation of the effective viscosity in this model, which shows linear scaling with increasing temperature, and a saturation for very high temperatures. An argument based on the mean free path accounts for this behavior, and opens the door to better estimations of the effective viscosity by directly measuring this length scale. This can be done by studying the spatio-temporal spectrum of the flow as a function of the temperature, which gives access to the spectrum of phonons in the system [@Clark15a]. However, as this spectrum is computationally expensive to obtain, it can only be computed at lower resolutions or using a different flow configuration, and is thus left for future work.
The authors acknowledge financial support from Grant No. ECOS-Sud A13E01, and computing hours in the CURIE supercomputer granted by Project TGCC-GENCI No. T20162A711. P.C.dL. acknowledges funding from the European Research Council under the European Community’s Seventh Framework Program, ERC Grant Agreement No. 339032.
[^1]: Postprint version of the manuscript published in Phys. Rev. A [**97**]{}, 043629 (2018).
---
abstract: 'We determine the rank of the fundamental group of those hyperbolic 3–manifolds fibering over the circle whose monodromy is a sufficiently high power of a pseudo-Anosov map. Moreover, we show that any two generating sets with minimal cardinality are Nielsen equivalent.'
address: |
Department of Mathematics\
University of Chicago\
5734 S University Avenue\
Chicago, Ill 60637\
USA
author:
- Juan Souto
bibliography:
- 'link.bib'
title: |
The rank of the fundamental group of certain\
hyperbolic 3–manifolds fibering over the circle
---
Introduction
============
Probably the most basic invariant of a finitely generated group is its [*rank*]{}, ie the minimal number of elements needed to generate it. In general the rank of a group is not computable. For instance, there are examples, due to Baumslag, Miller and Short [@Baumslag-Miller-Short], of hyperbolic groups showing that there is no uniform algorithm solving the rank problem. Everything changes in the setting of 3–manifold groups, and recently Kapovich and Weidmann [@Kapovich-Weidmann-rank] gave an algorithm determining $\operatorname{rank}(\pi_1(M))$ when $M$ is a 3–manifold with hyperbolic fundamental group. However, it is not possible to give a priori bounds on the complexity of this algorithm and hence it seems difficult to use it to obtain precise results in concrete situations. The goal of this note is to determine the rank of the fundamental group of a particularly nice class of 3–manifolds.
Let $\Sigma_g$ be the closed (orientable) surface of genus $g\ge 2$, $F\co \Sigma_g\to\Sigma_g$ a mapping class and $$M(F)=\Sigma_g\times[0,1]/(x,1)\simeq(F(x),0)$$ the corresponding mapping torus. By construction, $\pi_1(M(F))$ is an HNN-extension of $\pi_1(\Sigma_g)$ and hence, considering generating sets of $\pi_1(\Sigma_g)$ with $$\operatorname{rank}(\pi_1(\Sigma_g))=2g$$ elements and adding a further element corresponding to the extension, we obtain generating sets of $\pi_1(M(F))$ with $2g+1$ elements. We will say that the generating sets so obtained are [*standard*]{}. In this note we prove:
\[main\] Let $\Sigma_g$ be the closed surface of genus $g\ge 2$, $F\in\operatorname{Map}(\Sigma_g)$ a pseudo-Anosov mapping class and $M(F^n)$ the mapping torus of $F^n$. There is $n_F$ such that for all $n\ge n_F$ $$\operatorname{rank}(\pi_1(M(F^n)))=2g+1.$$ Moreover for any such $n$ any generating set of $\pi_1(M(F^n))$ with minimal cardinality is Nielsen equivalent to a standard generating set.
Recall that two (ordered) generating sets ${\mathcal S}=(g_1,\dots,g_r)$ and ${\mathcal S}'=(g_1',\dots,g_r')$ are [*Nielsen equivalent*]{} if they belong to the same class of the equivalence relation generated by the following three moves: $$\begin{array}{ll}
\hbox{Inversion of}\ g_i&\hspace{1pt}
\left\{\begin{array}{ll}g_i'=g_i^{-1} & \\ g_k'=g_k & k\neq
i\end{array}\right. \\
\hbox{Permutation of}\ g_i\ \hbox{and}\ g_j\ \hbox{with}\ i\neq j&
\left\{\begin{array}{ll}g_i'=g_j & \\ g_j'=g_i & \\ g_k'=g_k & k\neq
i,j\end{array}\right. \\
\hbox{Twist of}\ g_i\ \hbox{by}\ g_j\ \hbox{with}\ i\neq j &\hspace{1pt}
\left\{\begin{array}{ll}g_i'=g_ig_j & \\ g_k'=g_k & k\neq
i\end{array}\right.
\end{array}$$ It is due to Zieschang [@Zieschang] that any two generating sets of $\pi_1(\Sigma_g)$ with cardinality $2g$ are Nielsen equivalent. This implies that any two standard generating sets of a mapping torus $M(F)$ are also Nielsen equivalent. We deduce:
Let $\Sigma_g$ be the closed surface of genus $g\ge 2$, $F\in\operatorname{Map}(\Sigma_g)$ a pseudo-Anosov mapping class and $M(F^n)$ the mapping torus of $F^n$. There is $n_F$ such that any two minimal generating sets of $M(F^n)$ are Nielsen equivalent for all $n\ge n_F.\qed$
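The three elementary Nielsen moves defined above can be made concrete on tuples of free-group words; a toy model with an integer encoding of our own ($\pm i$ for $g_i^{\pm1}$), purely illustrative and not from the paper:

```python
def reduce_word(w):
    """Freely reduce a word (list of nonzero ints, -i inverse of i)."""
    out = []
    for x in w:
        if out and out[-1] == -x:
            out.pop()  # cancel adjacent letter/inverse pair
        else:
            out.append(x)
    return out

def inversion(S, i):
    """Replace g_i by g_i^{-1}."""
    T = list(S)
    T[i] = [-x for x in reversed(S[i])]
    return T

def permutation(S, i, j):
    """Exchange g_i and g_j (i != j)."""
    T = list(S)
    T[i], T[j] = S[j], S[i]
    return T

def twist(S, i, j):
    """Replace g_i by g_i g_j (i != j)."""
    T = list(S)
    T[i] = reduce_word(S[i] + S[j])
    return T

# A twist is undone within the moves: twist by g_j, invert g_j, twist
# again, invert g_j back -- recovering the original tuple.
S = [[1], [2]]
assert inversion(twist(inversion(twist(S, 0, 1), 1), 0, 1), 1) == S
```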
In Section \[sec:nielsen\] we recall the relation between Nielsen equivalence classes of generating sets of the fundamental group of a manifold $M$ and free homotopy classes of graphs in $M$. Choosing such a graph with minimal length we obtain a link between the algebraic problem on the rank of $\pi_1(M)$ and the geometry of the manifold. In Section \[sec:meat\] we prove Proposition \[meat\], which is essentially a generalization of the fact that paths in hyperbolic space ${\mathbb H}^3$ which consist of large geodesic segments meeting at large angles are quasi-geodesic. Hyperbolic geometry comes into the picture through a theorem of Thurston, who proved that the mapping torus $M(F)$ of a pseudo-Anosov mapping class admits a metric of constant negative curvature; equivalently, there is a discrete torsion-free subgroup $\Gamma\subset\operatorname{PSL}_2{\mathbb C}=\operatorname{Isom}_+({\mathbb H}^3)$ with $M(F)$ homeomorphic to ${\mathbb H}^3/\Gamma$. The geometry of the manifolds $M(F^n)$ is well understood; we review very briefly the facts needed to prove Theorem \[main\].
The method of proof of Theorem \[main\] is suggested by the proof of a result of White [@White], who proved that the rank of the fundamental group of a hyperbolic 3–manifold yields an upper bound for its injectivity radius. Similar ideas appear also in the work of Delzant [@Delzant] on subgroups of hyperbolic groups with two generators, in the proof of a recent result of Ian Agol relating rank and Heegaard genus of some 3–manifolds, and in the work of Kapovich and Weidmann [@Kapovich-Weidmann-rank]. It should be said that in fact most arguments here are found in some form in the papers of Kapovich and Weidmann, and that the main result of this note cannot come as a surprise to these authors. It should also be mentioned that a more general result in the spirit of Theorem \[main\], but in the setting of Heegaard splittings, is due to Bachmann and Schleimer [@Bachmann-Schleimer].
Recently Ian Biringer has obtained, using methods similar to those in this paper, the following extension of Theorem \[main\]:
For every $\epsilon$ positive, the following holds for all but finitely many examples: If $M$ is a hyperbolic 3–manifold fibering over ${\mathbb S}^1$ with fiber $\Sigma_g$ and with $\operatorname{inj}(M)\ge\epsilon$ then $\operatorname{rank}(\pi_1(M))=2g+1$ and any two generating sets of $\pi_1(M)$ are Nielsen equivalent.
Other related results can be found in Namazi and Souto [@Hossein] and Souto [@3rank].
I would like to thank Ian Agol, Michel Boileau, Yo’av Moriah and Richard Weidmann for many very helpful and motivating conversations. I also thank Ian Biringer and the referee for useful comments enhancing the exposition. This paper was written while the author was a member of the Laboratoire de mathématiques Emile Picard at the Université Paul Sabatier.
Nielsen equivalence of generating sets and carrier graphs {#sec:nielsen}
=========================================================
Let $M$ be a hyperbolic 3–manifold.
A map $f\co X\to M$ of a connected graph $X$ into $M$ is a [*carrier graph*]{} if the homomorphism $f_*\co \pi_1(X)\to\pi_1(M)$ is surjective. Two carrier graphs $f\co X\to M$ and $g\co Y\to M$ are [*equivalent*]{} if there is a homotopy equivalence $h\co X\to Y$ such that $f$ and $g\circ h$ are freely homotopic.
To every generating set ${\mathcal S}=(g_1,\dots,g_r)$ of $\pi_1(M)$ one can associate an equivalence class of carrier graphs as follows: Let ${\mathbb F}_{\mathcal S}$ be the free non-abelian group generated by the set ${\mathcal S}$, $\phi_{\mathcal S}\co {\mathbb F}_{\mathcal S}\to\pi_1(M)$ the homomorphism given by mapping the free basis ${\mathcal S}\subset{\mathbb F}_{\mathcal S}$ to the generating set ${\mathcal S}\subset\pi_1(M)$, and $X_{\mathcal S}$ a graph with $\pi_1(X_{\mathcal S})={\mathbb F}_{\mathcal S}$. The homomorphism $\phi_{\mathcal S}\co {\mathbb F}_{\mathcal S}\to\pi_1(M)$ determines a free homotopy class of maps $f_{\mathcal S}\co X_{\mathcal S}\to M$, ie a carrier graph, and any two carrier graphs obtained in this way are equivalent. The equivalence class so determined is said to be the [*equivalence class of carrier graphs associated to ${\mathcal S}$*]{}.
\[Nielsen\] Let ${\mathcal S}$ and ${\mathcal S}'$ be finite generating sets of $\pi_1(M)$ with the same cardinality. Then the following are equivalent:
1. ${\mathcal S}$ and ${\mathcal S}'$ are Nielsen equivalent.
2. There is a free basis $\bar{\mathcal S}$ of ${\mathbb F}_{{\mathcal S}'}$ with ${\mathcal S}=\phi_{{\mathcal S}'}(\bar{\mathcal S})$.
3. There is an isomorphism $\psi\co {\mathbb F}_{\mathcal S}\to{\mathbb F}_{{\mathcal S}'}$ with $\phi_{\mathcal S}=\phi_{{\mathcal S}'}\circ\psi$.
4. ${\mathcal S}$ and ${\mathcal S}'$ have the same associated equivalence classes of carrier graphs.
The implications (1) $\Rightarrow$ (2) $\Leftrightarrow$ (3) $\Leftrightarrow$ (4) are almost tautological. The implication (2) $\Rightarrow$ (1) follows from a theorem of Nielsen, who proved that any two free bases of a free group are Nielsen equivalent (see for example Collins et al [@CGKZ]).
The natural bijection given by \[Nielsen\] between the set of Nielsen equivalence classes of generating sets of $\pi_1(M)$ and the set of equivalence classes of carrier graphs $f\co X\to M$ plays a central role in the proof of Theorem \[main\].
Given a carrier graph $f\co X\to M$ and a path $I$ in $X$ we say that its length is the length, with respect to the hyperbolic metric, of the path $f(I)$ in $M$. Measuring the minimal length of a path joining two points in $X$ we obtain a semi-distance $d_{f\co X\to M}$ on $X$ and we define the [*length*]{} $l_{f\co X\to M}(X)$ of the carrier graph $f\co X\to M$ as the sum of the lengths of the edges of $X$ with respect to $d_{f\co X\to M}$. The semi-distance $d_{f\co X\to M}$ induced on $X$ is not always a distance since there may be some edges of length $0$ but minimality of the generating set ensures that by collapsing these edges we obtain an equivalent carrier graph on which the induced semi-distance is in fact a distance. Moreover, this collapsing process does not change the length of the carrier graph. From now on we will assume without further remark that the semi-distance $d_{f\co X\to M}$ is in fact a distance.
A carrier graph $f\co X\to M$ has [*minimal length*]{} if $$l_{f\co X\to M}(X)\le l_{f'\co X'\to M}(X')$$ for every equivalent carrier graph $f'\co X'\to M$.
If $M$ is closed then it follows from the Arzela–Ascoli Theorem that every equivalence class of carrier graphs contains a carrier graph with minimal length:
\[minimal\] If $M$ is a closed hyperbolic 3–manifold, then every equivalence class of carrier graphs contains a carrier graph with minimal length. Moreover, every such minimal length carrier graph is trivalent, hence it has $3(\operatorname{rank}(\pi_1(M))-1)$ edges; the images in $M$ of its edges are geodesic segments, the angle between any two adjacent edges is $\frac{2\pi}3$, and every simple closed path in $X$ represents a non-trivial element in $\pi_1(M). \qed$
See White [@White Section 2] for a proof of \[minimal\].
Quasi-convex subgraphs {#sec:meat}
======================
Recall that a map $\phi\co X_1\to X_2$ between two metric spaces is an $(L,A)$–quasi-isometric embedding if $$\frac 1Ld_{X_1}(x,y)-A\le d_{X_2}(\phi(x),\phi(y))\le
Ld_{X_1}(x,y)+A$$ for all $x,y\in X_1$. An $(L,A)$–quasi-isometric embedding $\phi\co {\mathbb R}\to X$ is said to be a quasi-geodesic. Observe that an $(L,0)$–quasi-isometric embedding is nothing more than an $L$–bi-Lipschitz embedding. Before going further, we state here, for further reference, the following well-known fact:
\[constant\] There are constants $l_0,A>0$ such that for all $L\ge l_0$ the following holds:
- Every path in hyperbolic space ${\mathbb H}^3$ which consists of geodesic segments of length at least $L$ meeting at angles of at least $\frac\pi 4$ is an $A$–bi-Lipschitz embedding.
- If $K\subset{\mathbb H}^3$ is convex then every geodesic ray $\gamma\co [0,\infty)\to{\mathbb H}^3$ with $\gamma(0)\in K$ meets the boundary ${\partial}{\mathcal N}_L(K)$ of the neighborhood ${\mathcal N}_L(K)$ of radius $L$ around $K$ at an angle of at least $\frac\pi 4$.
It is surprising that the author could not find any reference in the literature for the second claim of \[constant\]. Here is a proof. Choose $l_0$ and $A$ as in the first claim of the lemma. Up to increasing $l_0$ once, we may also assume that the image of every $A$–bi-Lipschitz embedding $\phi\co [0,T]\to{\mathbb H}^3$ is within at most distance $\frac 12l_0$ of the geodesic segment joining $\phi(0)$ and $\phi(T)$. Given a convex set $K\subset{\mathbb H}^3$, $L\ge l_0$, and a ray $\gamma$ as in the lemma which exits ${\mathcal N}_L(K)$, let $t_0$ be the unique time with $\gamma(t_0)\in{\partial}{\mathcal N}_L(K)$ and let $p\in K$ be the point closest to $\gamma(t_0)$. If the angle between $\gamma$ and ${\partial}{\mathcal N}_L(K)$ is less than $\frac\pi 2$, then the curve obtained by juxtaposition of $\gamma[0,t_0]$ and the geodesic segment $[\gamma(t_0),p]$ consists of two geodesic segments of length at least $l_0$ with a corner of angle at least $\frac\pi 4$. In particular, by the first part of the lemma, it is an $A$–bi-Lipschitz embedding and hence, by the choice of $l_0$, its image is within distance $\frac 12l_0$ of the geodesic segment $[\gamma(0),p]$. However, by convexity of $K$ we have that the latter segment is contained in $K$; a contradiction.
If $f\co X\to M$ is a carrier graph in a hyperbolic 3–manifold $M$ we denote by $\tilde f\co \tilde X\to{\mathbb H}^3$ the lift of $f$ to a map between the universal covers of $X$ and $M$. We will be mainly interested in manifolds whose fundamental group is not free; in this case, the map $\tilde f$ cannot be an embedding. However, subgraphs of $X$ may well be quasi-isometrically embedded.
A connected subgraph $Y\subset X$ of a carrier graph $f\co X\to M$ is [*$A$–quasi-convex*]{} for some $A>0$ if:
- The restriction $\tilde f\vert_{\tilde Y}\co \tilde Y\to{\mathbb H}^3$ of the map $\tilde f$ to the universal cover $\tilde Y$ of $Y$ is an $(A,A)$–quasi-isometric embedding.
- Every point in $\tilde Y$ is at distance at most $A$ from the axis of some element of $\pi_1(Y)$.
- The translation length of $f_*(\gamma)$ in ${\mathbb H}^3$ is at least $\frac 1A$ for every non-trivial $\gamma\in\pi_1(Y)$.
Recall that a discrete subgroup $G$ of $\operatorname{PSL}_2{\mathbb C}$ is [*convex–cocompact*]{} if there is a convex $G$–invariant subset $C\subset{\mathbb H}^3$ of hyperbolic space with $C/G$ compact. The smallest such convex subset of ${\mathbb H}^3$ is the [*convex hull*]{} $CH(G)$ of $G$, and it is well-known that $CH(G)$ is the closure of the union of all axes of elements of $G$.
If $Y$ is a graph and $g\co Y\to M$ is a map whose lift $\tilde g\co \tilde Y\to{\mathbb H}^3$ is a quasi-isometric embedding then the image $g_*(\pi_1(Y))$ is a free convex–cocompact subgroup. Intuitively, considering $A$–quasi-convex graphs amounts to considering uniformly convex–cocompact free subgroups. More precisely, if $Y\subset X$ is $A$–quasi-convex and $\gamma\in\pi_1(Y)$ is non-trivial then the image $\tilde f(\operatorname{Axis}(\gamma))$ is an $(A,A)$–quasi-geodesic and hence it is at uniformly bounded distance from the axis $\operatorname{Axis}(f_*(\gamma))$ of $f_*(\gamma)$. In particular, there is a $d$ depending only on $A$ with $$\tilde f(\tilde
Y)\subset{\mathcal N}_d(CH(f_*(\pi_1(Y))))\subset{\mathcal N}_{2d}(\tilde f(\tilde Y)).$$ This fact, together with the last condition in the definition of $A$–quasi-convex, implies:
\[good-convex\] For all $A$ there is $d$ such that for every hyperbolic manifold $M$ and every $A$–quasi-convex subgraph $Y$ of a minimal length carrier graph $f\co X\to M$ there is an $f_*(\pi_1(Y))$–invariant convex subset $\bar C(Y)$ with $$\tilde f(\tilde Y)\subset\bar
C(Y)\subset{\mathcal N}_d(\tilde f(\tilde Y)),$$ and such that $d_{{\mathbb H}^3}(x,\gamma
x)\ge l_0$ for all $x\in{\partial}\bar C(Y)$ and $\gamma\in
f_*(\pi_1(Y))$. Here $l_0$ is the constant provided by .
The following result is the main technical point of the proof of .
\[meat\] For all $A,s>0$ there is $L$ such that the following holds: if $M$ is a hyperbolic 3–manifold, $f\co X\to M$ is a minimal length carrier graph with $s$ edges, and $Y_1,\dots,Y_k$ are disjoint connected $A$–quasi-convex subgraphs of $X$, then either
- $\tilde f\co \tilde X\to{\mathbb H}^3$ is a quasi-isometric embedding and hence $\pi_1(M)$ is free, or
- the graph $X\setminus\cup_i Y_i$ contains an edge of length at most $L$.
The author suggests that the reader prove this proposition him- or herself. In fact, a proof by picture takes two not particularly complicated drawings, and this is clearly much more economical than the proof written below.
As mentioned by the referee, is a particular case of the main technical result of Kapovich and Weidmann [@Kapovich-Weidmann-freely], and it can also be derived from their [@Kapovich-Weidmann-rank Theorem 2.5].
Let $l_0$ and $d$ be the constants provided by Lemmas \[constant\] and \[good-convex\]. We are going to show that $\tilde f\co \tilde
X\to{\mathbb H}^3$ is a quasi-isometric embedding whenever every edge in $X\setminus\cup_i Y_i$ has length at least $6l_0+4d$. Seeking a contradiction, assume that this is not the case. Then there is an infinite geodesic ray $\gamma\co [0,\infty)\to\tilde X$ whose image $\tilde f(\gamma)$ is not a quasi-geodesic. If there is some $t\in(0,\infty)$ such that $\gamma(t,\infty)$ is disjoint from the union of the preimages of the graphs $Y_i$, then $\tilde
f(\gamma(t,\infty))$ consists of a perhaps short starting segment and geodesic segments of length at least $6l_0+4d$ meeting with angle $\frac{2\pi}3$; implies that $\tilde
f(\gamma(t,\infty))$, and hence $\tilde f(\gamma)$, is a quasi-geodesic ray, contradicting our assumption. Similarly, if there is $t\in(0,\infty)$ such that $\gamma(t,\infty)$ is contained in a preimage $\tilde Y_i$ of some $Y_i$ then the assumption that $\tilde
f\vert_{\tilde Y_i}$ is a quasi-isometric embedding implies again that $\tilde f(\gamma(t,\infty))$ is a quasi-geodesic, again contradicting our assumption. This implies that the curve $\gamma$ has to enter and leave the union of the preimages of the $Y_i$ infinitely often.
Let $a_1<b_1<a_2<b_2<a_3<\dots$ be such that $\gamma(a_j,b_j)$ is contained in, and $\gamma(b_j,a_{j+1})$ is disjoint from, the preimage of $\cup_i Y_i$ for all $j\ge 1$. Let also $Z_j$ be the component of the preimage of $\cup_iY_i$ containing $\gamma(a_j,b_j)$. For every $j$, the path $\gamma(b_j,a_{j+1})$ consists of edges which by assumption have length at least $6l_0+4d$. Let $\gamma(b_j,c_j)$ be the first edge of this path. We claim that most of the length of $\gamma(b_j,c_j)$ is outside of ${\mathcal N}_{2l_0}(\bar C(\tilde f(Z_j)))$. In fact, by , every point in the boundary of ${\mathcal N}_{2l_0}(\bar C(\tilde f(Z_j)))$ is at distance at most $2l_0+d$ from $\tilde f(Z_j)$; the assumption that $f\co X\to M$ is a minimal length graph implies that $\tilde f(\gamma(b_j,c_j))$ spends time at most $2l_0+d$ within ${\mathcal N}_{2l_0}(\bar C(\tilde f(Z_j)))$. Let $b_j^+$ be the exit time. Then $\tilde f(\gamma(b_j^+,c_j))$ is a geodesic segment of length at least $4l_0+3d$ which, by , makes an angle of at least $\frac\pi 4$ with the boundary of ${\mathcal N}_{2l_0}(\bar C(\tilde f(Z_j)))$. A similar discussion applies not when exiting but when entering ${\mathcal N}_{2l_0}(\bar C(\tilde f(Z_{j+1})))$; let $a_{j+1}^-$ be the entry time.
Setting $I_1=\gamma(a_1,b_1^+)$, $J_1=\gamma(b_1^+,a_2^-)$, $I_2=\gamma(a_2^-,b_2^+)$, $J_2=\gamma(b_2^+,a_3^-)$,... we obtain a decomposition of $\gamma(a_1,\infty)$ into segments with the following properties:
- $\tilde f(I_j)\subset{\mathcal N}_{2l_0}(\bar C(Z_j))$ and is an $A'$–quasi-geodesic for some $A'$ and all $j$.
- For all $j$, $\tilde f(J_j)$ is a path consisting of geodesic segments of length at least $2l_0$, with angles at least $\frac {2\pi}3$ at the vertices, with endpoints in the boundaries of ${\mathcal N}_{2l_0}(\bar C(Z_j))$ and ${\mathcal N}_{2l_0}(\bar C(Z_{j+1}))$, and such that the angles with these boundaries at the endpoints are at least $\frac\pi 4$.
Before going further we observe that for all $j$ we have $\tilde f(a_j^-)\neq\tilde f(b_j^+)$ because the homomorphism $f_*\vert_{\pi_1(Y_i)}$ is injective for all $i$. Assume now that the distance between $\tilde f(a_j^-)$ and $\tilde f(b_j^+)$ is less than $l_0$. Then, by , the images in $M$ of $\tilde f(a_j^-)$ and $\tilde f(b_j^+)$, and hence the images of the segments $\tilde f(a_j^-,a_j)$ and $\tilde f(b_j,b_j^+)$, are different. This implies that we can equivariantly replace the segment $\tilde f(a_j^-,a_j)$ by the geodesic segment $[\tilde f(a_j^-),\tilde f(b_j^+)]$, obtaining a new carrier graph $f'\co X'\to M$ with length $$\begin{aligned}
l_{f'\co X'\to M'}(X')&\le l_{f\co X\to M}(X)-l(\tilde f(a_j^-,a_j))+l([\tilde f(a_j^-),\tilde f(b_j^+)])\\
&\le l_{f\co X\to M}(X)-2l_0+l_0<l_{f\co X\to M}(X)\end{aligned}$$ This contradicts the minimality of $l_{f\co X\to M}(X)$ and proves that the distance between the points $\tilde f(a_j^-)$ and $\tilde
f(b_j^+)$ of $I_j$ is at least $l_0$. Let $I_j'$ be the geodesic segment joining the endpoints of $I_j$; since $I_j$ is an $A'$–quasi-geodesic for all $j$, the homotopy from $I_j$ to $I_j'$ has length bounded by some constant $A''$. Then the path $\gamma$ is properly homotopic to the path $\gamma'$ obtained as the juxtaposition of the segments $I_1'\cup J_1\cup I_2'\cup J_2\cup\dots$. This path now consists of geodesic segments of length at least $l_0$ meeting with angles at least $\frac\pi 4$. implies that $\gamma'$ is a quasi-geodesic. Then the same holds for $\gamma$ because the homotopy from $\gamma$ to $\gamma'$ has length at most $A''$. This yields the desired contradiction.
Some facts on the geometry of mapping tori {#sec:manifolds}
==========================================
As mentioned in the introduction, the following is the starting point of our considerations:
Let $\Sigma_g$ be the closed surface of genus $g\ge 2$ and $F\in\operatorname{Map}(\Sigma_g)$ a pseudo-Anosov mapping class. Then the mapping torus $$M(F)=\Sigma_g\times[0,1]/(x,1)\simeq(F(x),0)$$ admits a hyperbolic metric.
The manifold $M(F)$ fibers over the circle with fiber $\Sigma_g$ and monodromy $F$. Let $\pi\co \pi_1(M(F))\to{\mathbb Z}$ be the homomorphism given by this fibering and observe that $M(F^n)$ is homeomorphic, and hence isometric by Mostow’s rigidity theorem, to the cover of $M(F)$ corresponding to the kernel of the composition of $\pi$ and the canonical homomorphism ${\mathbb Z}\to{\mathbb Z}/n{\mathbb Z}$. Let $M'$ be the infinite cover of $M(F)$ corresponding to the kernel of $\pi$; in the sequel we will always consider $M'$ with the unique hyperbolic metric such that the covering $M'\to M(F)$ is Riemannian. Before going further we observe the following fact that we state here for further reference:
\[lifting\] For every $D$ there is $n_D$ such that the following holds for all $n\ge n_D$: Every subset $K\subset M(F^n)$ of diameter at most $D$ lifts homeomorphically to $M'$. $\qed$
Many of the arguments used in the present paper rely on properties of finitely generated subgroups of the fundamental group of $M'$.
\[covering\] Every proper subgroup $G$ of $\pi_1(M')\simeq\pi_1(\Sigma_g)$ of rank at most $2g$ is free and convex–cocompact.
The manifold $M'$ is homeomorphic to $\Sigma_g\times{\mathbb R}$. In particular, every proper subgroup of $\pi_1(M')\simeq\pi_1(\Sigma_g)$ is either free or isomorphic to the fundamental group of a closed surface which covers $\Sigma_g$ with degree at least 2. Any such surface has genus greater than $g$ and hence its fundamental group has rank greater than $2g$. This proves that the group $G$ is free. A result due to Thurston in this case and to Agol [@Agol] and Calegari–Gabai [@Calegari-Gabai] in much more generality asserts that ${\mathbb H}^3/G$ is homeomorphic to the interior of a handlebody. Now, Canary’s generalization of Thurston’s covering theorem [@Canary-covering] implies that $G$ is convex–cocompact.
Proof of {#sec:proofmain}
=========
As the kind reader may have deduced from the title of this section, we prove here . But first, as a warm-up, we show the result of White mentioned in the introduction:
For all $r$ there is $R$ such that every closed hyperbolic 3–manifold $M$ with $\operatorname{rank}(\pi_1(M))\le r$ has $\operatorname{inj}(M)\le R$.
Let $f\co X\to M$ be a minimal length carrier graph in the class of a minimal generating set of $\pi_1(M)$; observe that $X$ has at most $s=3(r-1)$ edges. Denote by $X^{<t}$ the (possibly empty) subgraph of $X$ consisting of the union of all the edges with length less than $t$. Every simple closed circuit in $X^{<t}$ represents a non-trivial element in $\pi_1(M)$ by and has length at most $3t(r-1)$. In particular, it suffices to show that there is $t_r$ depending only on $r$ such that some component $Y$ of $X^{<t_r}$ is not a tree.
Let $l_0$ be the constant provided by . Since $M$ is closed, $\pi_1(M)$ is not free and in particular $\tilde f\co \tilde X\to{\mathbb H}^3$ cannot be a quasi-isometric embedding. In particular, $X^{<l_0}$ is not empty by and . If every component $Y$ of $X^{<l_0}$ is a tree then $\operatorname{diam}(\tilde Y)=\operatorname{diam}(Y)\le 3(r-1)l_0$ and hence the map $$\tilde f\vert_{\tilde Y}\co \tilde Y\to{\mathbb H}^3$$ is a $(3(r-1)l_0,3(r-1)l_0)$–quasi-isometric embedding. We obtain from a constant $l_1=l_1(r)$ depending only on $r$ such that $X^{<l_0}$ is a proper subgraph of $X^{<l_1}$. If again every connected component of $X^{<l_1}$ is a tree then we get $l_2=l_2(r)$ depending only on $r$ such that $X^{<l_1}$ is a proper subgraph of $X^{<l_2}$. This process can be repeated at most $3(r-1)$ times since this is the number of edges in $X$; this concludes the proof of White’s Theorem.
As we see, the proof of White’s Theorem in fact yields that every generating set $(g_1,\dots,g_r)$ is Nielsen equivalent to a generating set $(g_1',\dots,g_r')$ such that the translation length of $g_1'$ is uniformly bounded. The idea of the proof of is to show that every generating set of $\pi_1(M(F^n))$ is Nielsen equivalent to a generating set such that the translation lengths of all but one of the elements are uniformly bounded.
Let $\Sigma_g$ be the closed surface of genus $g\ge 2$, $F\in\operatorname{Map}(\Sigma_g)$ a pseudo-Anosov mapping class and $M(F^n)$ the mapping torus of $F^n$. There is $n_F$ such that $\operatorname{rank}(\pi_1(M(F^n)))=2g+1$ for all $n\ge n_F$. Moreover, for any such $n$, any generating set of $\pi_1(M(F^n))$ with minimal cardinality is Nielsen equivalent to a standard generating set.
For all $n$ let ${\mathcal S}_n$ be a generating set of $\pi_1(M(F^n))$ with minimal cardinality and $f_n\co X_n\to M(F^n)$ a minimal length carrier graph in the equivalence class determined by ${\mathcal S}_n$. As remarked in the introduction $\operatorname{rank}(\pi_1(M(F^n)))\le 2g+1$ and hence $X_n$ has at most $6g$ edges. As in the proof of White’s Theorem, we denote by $X_n^{<t}$ the subgraph of $X_n$ consisting of all the edges of $X_n$ of length less than $t$.
For every $D$ there are $n_D$ and $A_D$ such that the following holds for all $n\ge n_D$: every subgraph $Y_n$ of $X_n$ of length less than $D$ such that the image of $\pi_1(Y_n)$ is convex–cocompact is $A_D$–quasi-convex.
To begin with, observe that the injectivity radius of the manifold $M(F^n)$ is bounded from below by $\operatorname{inj}(M(F))$ for all $n$. In particular, the last condition in the definition of $A$–quasi-convex is automatically satisfied for every $A$ with $$A^{-1}\le\operatorname{inj}(M(F)).$$ Seeking a contradiction, assume that for some $D$ there are sequences $A_i,n_i\to\infty$ such that for all $i$ there is a subgraph $Y_{n_i}$ of $X_{n_i}$ which has length less than $D$ and fails to be $A_i$–quasi-convex but such that $(f_{n_i})_*(\pi_1(Y_{n_i}))$ is convex–cocompact. Composing the map $f_{n_i}\co X_{n_i}\to
M(F^{n_i})$ with the covering $M(F^{n_i})\to M(F)$ we obtain from the Arzelà–Ascoli Theorem that, up to conjugacy in $\pi_1(M(F))$ and passing to a subsequence, we may assume that the subgroups $(f_{n_i})_*(\pi_1(Y_{n_i}))$ and $(f_{n_j})_*(\pi_1(Y_{n_j}))$ are conjugate for all $i,j$. In particular, the desired contradiction follows if we show that the map $\pi_1(Y_{n_i})\to
(f_{n_i})_*(\pi_1(Y_{n_i}))$ is an isomorphism.
By there is $i_D$ such that for all $i\ge i_D$ the graph $Y_{n_i}$ lifts to $M'$. In particular, we obtain from that $(f_{n_i})_*(\pi_1(Y_{n_i}))$ is a free subgroup of $\pi_1(M')$ whose rank is at most that of $\pi_1(Y_{n_i})$. Minimality of the generating set ensures that $$\operatorname{rank}((f_{n_i})_*(\pi_1(Y_{n_i})))=\operatorname{rank}(\pi_1(Y_{n_i})).$$ We are done, since every surjective homomorphism between two free groups of the same rank is an isomorphism.
We use now an argument similar to the one in the proof of White’s Theorem to show:
There are $n_1$ and $t$ such that for all $n\ge n_1$ there is a connected component $Y_n$ of $X_n^{<t}$ such that the image of $\pi_1(Y_n)$ into $\pi_1(M(F^n))$ is not convex–cocompact.
As in the proof of White’s Theorem we obtain a first constant $t_1$ such that for all $n$ at least one of the components $Y_{n,t_1}^1,\dots,Y_{n,t_1}^{k(n,t_1)}$ of $X_n^{<t_1}$ is not a tree. If for all $n$ the image of the fundamental group of one of these components fails to be convex–cocompact then we are done with $t=t_1$. Assume that there is a subsequence $(n_i)_i$ such that the image of $\pi_1(Y_{n_i,t_1}^j)$ is convex–cocompact for all $j$ and $i$. By claim 1 there is a constant $A_1$ such that $Y_{n_i,t_1}^j$ is $A_1$–quasi-convex for all $i,j$. In particular, we obtain from a constant $t_2$ such that $X_{n_i}^{<t_1}$ is a proper subgraph of $X_{n_i}^{<t_2}$ for all $i$. If again the image of the fundamental group of every connected component of $X_{n_i}^{<t_2}$ is convex–cocompact for infinitely many $i$, say for all $i$, then we can repeat the process. The bound on the number of edges of $X_n$ ensures that after at most $6g$ steps we find the desired subgroup.
We can now conclude the proof of . Let $S_n$ be a generating set of $\pi_1(Y_n)$ where $Y_n$ is the connected subgraph of $X_n$ provided by claim 2, extend it to a generating set $\bar S_n$ of $\pi_1(X_n)$ and let $\bar{\mathcal S}_n$ be the generating set of $\pi_1(M(F^n))$ obtained as the image of $\bar S_n$ under the homomorphism $$(f_n)_*\co \pi_1(X_n)\to\pi_1(M(F^n)).$$ By , $\bar{\mathcal S}_n$ is Nielsen equivalent to the minimal generating set ${\mathcal S}_n$ we started with. In particular, $\bar{\mathcal S}_n$ is minimal as well. The claim of follows once we prove that $\bar{\mathcal S}_n$ is a standard generating set of $\pi_1(M(F^n))$ and hence has $2g+1$ elements. Observe that since $\bar S_n$ has $\operatorname{rank}(\pi_1(M(F^n)))\le 2g+1$ elements, it suffices to show that the generating set $S_n$ of $\pi_1(Y_n)$ has $2g$ elements and that its image under $(f_n)_*$ generates the subgroup $\pi_1(M')$ of $\pi_1(M(F^n))$ corresponding to the fiber $\Sigma_g$. This is what we prove next: The graph $Y_n$ is contained in $X_n^{<t}$, where $t$ is as in claim 2, and therefore it has diameter at most $6gt$. By there is $n_1$ such that $Y_n$ lifts to $M'$ for all $n\ge n_1$; in particular, $\pi_1(Y_n)$ does not surject onto $\pi_1(M(F^n))$ and hence one has $$\label{eq-rank1}
\operatorname{rank}(\pi_1(Y_n))\le\operatorname{rank}(\pi_1(M(F^n)))-1\le 2g.$$ On the other hand, since the image of $\pi_1(Y_n)$ in $\pi_1(M')\simeq\pi_1(\Sigma_g)$ is not convex–cocompact, we deduce from that $\pi_1(Y_n)$ surjects onto $\pi_1(M')$; thus $$\label{eq-rank2}
2g=\operatorname{rank}(\pi_1(M'))\le\operatorname{rank}(\pi_1(Y_n)).$$ This concludes the proof of .
---
abstract: |
We propose a novel geometric structure, called ${\mathrm{Nest}}(P)$, which is induced by $P$ and is an arrangement of $\Theta(n^2)$ segments, each of which is parallel to an edge of $P$. This structure admits several interesting and nontrivial properties, which follow from two fundamental properties in geometry, namely convexity and parallelism. Moreover, we give a perfect application of this structure to the following geometric optimization problem: Given a convex polygon $P$ with $n$ edges, compute the parallelograms in $P$ with maximal area. We design an $O(n\log^2n)$ time algorithm for computing all these parallelograms, which improves over a previously known quadratic time algorithm.
Concretely, we show that ${\mathrm{Nest}}(P)$ captures the essential nature of the maximal area parallelograms, and the optimization problem we consider reduces to answering $O(n)$ location queries on ${\mathrm{Nest}}(P)$. Moreover, using a few nontrivial algorithmic tricks, we answer each of these queries in $O(\log^2n)$ time. This avoids an explicit construction of ${\mathrm{Nest}}(P)$, which would take $\Omega(n^2)$ time.
author:
- Kai Jin
bibliography:
- 'MAP.bib'
title: 'Maximal Parallelograms in Convex Polygons and a Novel Geometric Structure [^1] [^2]'
---
\[theorem\][Fact]{}
![Two examples of ${\mathrm{Nest}}(P)$. The edges of the given polygon $P$ are labeled by $1$ to $n$. The other line segments in the figure are the edges from ${\mathrm{Nest}}(P)$.[]{data-label="fig:examples"}](NestP.pdf){width="\textwidth"}
![Two examples of ${\mathrm{Nest}}(P)$. The edges of the given polygon $P$ are labeled by $1$ to $n$. The other line segments in the figure are the edges from ${\mathrm{Nest}}(P)$.[]{data-label="fig:examples"}](NestP2.pdf){width=".85\textwidth"}
Introduction {#sect:introduction}
============
Assume $P$ is a convex polygon with $n$ sides. In this paper, we introduce a new geometric structure, called ${\mathrm{Nest}}(P)$, which is associated with the convex polygon $P$ as shown in Figures \[fig:examples\] and \[fig:examples-regular\]. This structure is interesting because it enjoys several (six, indeed) properties which are nontrivial to prove but extremely succinct to state. More interestingly, by carefully applying all of its properties, we design a practical and efficient algorithm for solving the following geometric problem: Compute all the parallelograms in $P$ with maximum area.
To introduce ${\mathrm{Nest}}(P)$, we build and investigate a few geometric objects. Let $\partial P$ denote $P$’s boundary. First, we define a set ${\mathcal{T}}^P$ of tuples in $\partial P^3 = \partial P\times\partial P\times\partial P$ and a simple geometric function $f$ defined on ${\mathcal{T}}^P$. The set ${\mathcal{T}}^P$ is well-defined on $P$ but its definition is based on several cascading notations (see Equation \[eqn:def\_T\]). We abbreviate ${\mathcal{T}}^P$ as ${\mathcal{T}}$ when $P$ is clear. The function $f$ maps $(X_1,X_2,X_3)$ to the unique point $Y$ such that $YX_1X_2X_3$ forms a parallelogram. Next, we define $\Theta(n^2)$ blocks and $2n$ sectors, each of which is a subregion of $f({\mathcal{T}})$. (Note: $f(S)$ is short for $\{f(X_1,X_2,X_3)\mid (X_1,X_2,X_3)\in S\}$ for any subset $S$ of ${\mathcal{T}}$.) Briefly, a block is defined as the image under $f$ of those tuples $(X_1,X_2,X_3)$ in ${\mathcal{T}}$ for which the points $X_3,X_1$ are restricted to some specific edges or vertices of $P$ (see Equation \[def:block\]), whereas a sector is defined as the image under $f$ of those tuples $(X_1,X_2,X_3)$ in ${\mathcal{T}}$ for which the point $X_2$ is restricted to some specific edge or vertex of $P$ (see Equation \[def:sector\]). Finally, ${\mathrm{Nest}}(P)$ is the union of the boundaries of all the blocks and sectors.
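Since $YX_1X_2X_3$ is a parallelogram exactly when the diagonals $YX_2$ and $X_1X_3$ bisect each other, the map $f$ has the explicit coordinate form $Y=X_1+X_3-X_2$. The following minimal sketch illustrates this (the function name and the representation of points as coordinate tuples are ours, not from the paper):

```python
def fourth_vertex(x1, x2, x3):
    """Return the point Y such that Y, X1, X2, X3 (in this cyclic order)
    form a parallelogram.  The diagonals Y-X2 and X1-X3 share their
    midpoint, so Y = X1 + X3 - X2.  Points are (x, y) tuples."""
    return (x1[0] + x3[0] - x2[0], x1[1] + x3[1] - x2[1])
```

For example, `fourth_vertex((1, 0), (1, 1), (0, 1))` returns `(0, 0)`, completing the unit square.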
We then prove the following properties of ${\mathcal{T}}$ under $f$. First, $f$ is a bijection from ${\mathcal{T}}^*$ to $f({\mathcal{T}}^*)$, where ${\mathcal{T}}^*$ denotes the subset of ${\mathcal{T}}$ consisting of the tuples mapped to $\partial P$ under $f$. Second, the intersection of any two blocks lies in the interior of $P$. Third, the intersection between any sector and $\partial P$ is continuous, which means that it is either empty or a (continuous) boundary-portion of $P$. Moreover, the $2n$ intersections between the sectors and $\partial P$ are pairwise disjoint and satisfy a monotonicity property. Furthermore, $f({\mathcal{T}})$ has an annular shape and its inner boundary interleaves $\partial P$. See more in Theorem \[theorem:nestp\]. All these properties follow from two basic properties in geometry, convexity and parallelism, and together they manifest good relations between ${\mathrm{Nest}}(P)$ and $\partial P$. Moreover, since ${\mathrm{Nest}}(P)$ is induced by $P$, they are indeed properties of the convex polygon $P$, and hence may be interesting in convex geometry.
Next, we apply ${\mathrm{Nest}}(P)$ to solve the aforementioned geometric optimization problem. To find the maximum area parallelograms (MAPs), our algorithm first computes all the locally maximal area parallelograms (LMAPs) and then selects the largest among them. An LMAP has an area larger than or equal to those of all its nearby parallelograms that lie in $P$. (See a rigorous definition in Definition \[def:LMAP\].) Concretely, we show that ${\mathrm{Nest}}(P)$ captures the essential information relevant to finding the LMAPs and we reduce the problem of computing the LMAPs to $O(n)$ location queries on ${\mathrm{Nest}}(P)$. Moreover, we avoid building ${\mathrm{Nest}}(P)$ (which would take $\Theta(n^2)$ time) and answer each of these queries in $O(\log^2n)$ time, thus obtaining an $O(n\log^2n)$ time algorithm. Besides, we also prove that there are in total $O(n)$ LMAPs.
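The final selection step is elementary: given the $O(n)$ LMAP candidates, the MAPs are the candidates of maximum area, and each area is a single cross product. A minimal sketch (the function names and the representation of a parallelogram as its four vertices in cyclic order are our assumptions, not the paper's notation):

```python
def parallelogram_area(a, b, c, d):
    """Area of the parallelogram with vertices a, b, c, d in cyclic order,
    via the cross product of two adjacent edge vectors: |cross(b-a, d-a)|."""
    ux, uy = b[0] - a[0], b[1] - a[1]
    vx, vy = d[0] - a[0], d[1] - a[1]
    return abs(ux * vy - uy * vx)

def select_maps(candidates):
    """Keep the maximum-area parallelograms among the LMAP candidates."""
    best = max(parallelogram_area(*p) for p in candidates)
    return [p for p in candidates if parallelogram_area(*p) == best]
```

With floating-point coordinates the final equality test should of course use a tolerance; the exact comparison above is only safe for integer inputs.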
This paper can be divided into three parts. One part (from Section \[sect:block-sector\] to Section \[sect:fT\]) is dedicated to defining and studying ${\mathrm{Nest}}(P)$ and proving its properties; one part (Section \[sect:LMAPs-properties\] and Section \[sect:reduction\]) to studying the LMAPs and designing the main algorithm for computing them; and the last part (Section \[sect:algorithms\] and Section \[sect:alg\_B\]) to solving the location queries on ${\mathrm{Nest}}(P)$. To make the structure clearer, we note that the last part does not depend on the second, and the second is based on some previous results on LMAPs proved by the same author in [@arxiv:n2]. [^3]
In our opinion, the discovery of ${\mathrm{Nest}}(P)$ and the proof of its structural properties are delightful and are our major contributions. Some previous reviewers commented that ${\mathrm{Nest}}(P)$ is beautiful and as interesting as some well-known geometric structures like Voronoi diagrams and zonotopes. We hope this structure may find more applications in the future, perhaps in other disciplines such as industrial design, art design, or physics.
In fact, ${\mathrm{Nest}}(P)$ admits other interesting properties. For example, if we travel along ${\mathrm{Nest}}(P)$ (whose segments are actually connected and directional; see the leftmost picture in Figure \[fig:examples-regular\]) for one cycle (starting and ending at the same node), the total distance is 3 times the perimeter of $P$, no matter which path we choose. This property is immediate from the definition of ${\mathrm{Nest}}(P)$. Moreover, we mention that most of the properties of ${\mathrm{Nest}}(P)$ still hold when $P$ is a convex curve. These will not be proved or applied in this paper.
![Examples of ${\mathrm{Nest}}(P)$ for regular $n$-side polygon for $n=5,6,7$.[]{data-label="fig:examples-regular"}](NestP5678.pdf "fig:"){width=".6\textwidth"}\
#### Related Work, Motivations & Applications of computing the MAPs {#related-work-motivations-applications-of-computing-the-maps .unnumbered}
The problem of computing the MAPs belongs to the polygon inclusion problems, the classic geometric optimization problems of searching for extremal figures with special properties inside a polygon. Several such problems have been studied in the literature, and are listed in [@arxiv:n2]; for example, the potato-peeling problem [@potate-focs84] and the maximum $k$-gon problem [@Boyce82kgon]. Many of the best known algorithms in this area require at least quadratic time; e.g., the best one for computing the maximum rectangle in a convex polygon $P$ takes $O(n^3)$ time; the best one for the maximum similar copy of a triangle inside $P$ takes $O(n^2\log n)$ time; and the best one for computing the maximum square and equilateral triangle inscribed in $P$ takes $O(n^2)$ time.
Although our problem is as natural as many related problems studied in the literature, and is of intrinsic geometric interest in itself, it has a special motivation. The Heilbronn triangle problem is a minimax problem which concerns placing $m$ points in a convex region in order to avoid small triangles formed by these $m$ points. Its simplest case, namely $m=4$, reduces to finding the MAP in the region. Also, computing the MAPs has an application in shape approximation: The MAP serves as a $2/\pi$-approximation of the largest centrally symmetric body in a convex polygon. Moreover, by finding the MAP, we can bring the body into a “good position” by an affine transformation, to avoid almost degenerate, i.e., needle-like or fat, bodies. In fact, the maximum volume parallelepiped in convex bodies has been extensively studied in pure mathematics. See details in the introduction of [@arxiv:n2].
#### Two important notes {#subsect:techover .unnumbered}
1\. A noteworthy step in the proofs of the properties of ${\mathcal{T}}$ under $f$ is the invention of the regions called “bounding-quadrants of the blocks” in Section \[sect:bounding-quadrants\]. Each such region is a relaxation of a corresponding block and is a quadrant of the plane.
2\. The previous result [@arxiv:n2] includes an interesting property of the LMAPs (called clamping bounds) and an $O(n^2)$ time algorithm for computing the LMAPs. This property will be summarized in Section \[sect:LMAPs-properties\], where we reveal the connections between ${\mathcal{T}}$ and the LMAPs.
Summary and future work
=======================
As a summary of the last two sections, we get:
\[theorem:preprocess\] In $O(n\log^2n)$ time, we can compute the information (\[def:information\]) for every vertex $V$.
Our main result is the following:
Given an $n$-sided convex polygon $P$, all the LMAPs in $P$ can be computed in $O(n\log^2n)$ time. Moreover, the number of LMAPs is bounded by $O(n)$.
The first claim is a corollary of Theorem \[theorem:reduction\] and Theorem \[theorem:preprocess\]. The number of LMAPs is $O(n)$ because each of the three routines outputs $O(n)$ parallelograms.
[@arxiv:n2] gave an alternative method for bounding the number of LMAPs. It uses another interesting property of the LMAPs, which states that all LMAPs interleave each other. This property easily implies that the number is bounded by $O(n)$.
##### Bottleneck and open problems. {#bottleneck-and-open-problems. .unnumbered}
The bottleneck of our algorithm lies in the preprocessing procedures. We note, however, that these procedures are amenable to parallelization.
In fact, we believe that the lower bound for computing the LMAPs is **not** $\Omega(n\log^2n)$, which would mean that our algorithm is not optimal. Toward the goal of designing a better sequential algorithm, it remains to study whether the preprocessing procedures can be improved to $O(n\log n)$ time using the tentative prune-and-search technique [@TPStechnique].
Moreover, it would be interesting to know whether there is a space subdivision associated with a three-dimensional convex polyhedron that is similar to ${\mathrm{Nest}}(P)$. Can we discover similar results in other geometric spaces?
Besides, because ${\mathrm{Nest}}(P)$ admits rich properties, can it find more applications?
##### Acknowledgements. {#acknowledgements. .unnumbered}
The author thanks God for his grace and mercy. I was very lucky to find this novel structure, even though it took me years to write this paper and [@arxiv:n2]. Through this research, I have a feeling that everything is rotating and everything is perfect.
The author thanks Haitao Wang for fruitful discussions and for his considerate advice and help in writing this paper; thanks Andrew C. Yao, Jian Li, and Danny Chen for their guidance; and thanks Matias Korman, Wolfgang Mulzer, Donald Sheehy, Kevin Matulef, and anonymous reviewers from past conferences for many precious suggestions. Last but not least, the author appreciates the developers of Geometer’s Sketchpad${}^\circledR$.
[^1]: Supported by the National Basic Research Program of China Grant 2007CB807900, 2007CB807901, and the National Natural Science Foundation of China Grant 61033001, 61061130540, 61073174.
[^2]: This work is mainly done during my Ph. D. in the Institute for Interdisciplinary Information Sciences at Tsinghua University.
[^3]: [@arxiv:n2] is the full version of the conference paper [@JinM11], but contains some new results of LMAPs not stated (or not stated explicitly) in the conference paper. All notations in [@arxiv:n2] are consistent with this paper.
---
author:
- |
David Stutz$^1$, Matthias Hein$^2$, Bernt Schiele$^1$\
$^1$ Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken\
$^2$ University of Tübingen, Tübingen\
`{david.stutz,schiele}@mpi-inf.mpg.de`, `matthias.hein@uni-tuebingen.de`\
bibliography:
- 'bibliography.bib'
title: 'Confidence-Calibrated Adversarial Training: Towards Robust Models Generalizing Beyond the Attack Used During Training'
---
---
author:
- 'Takashi Kato[^1]'
- 'Akihiko Takahashi[^2]'
- 'Toshihiro Yamada[^3]'
date: 'December 31, 2012'
title: 'An Asymptotic Expansion Formula for Up-and-Out Barrier Option Price under Stochastic Volatility Model'
---
This paper derives a new semi closed-form approximation formula for pricing an up-and-out barrier option under a certain type of stochastic volatility model, including the SABR model, by applying a rigorous asymptotic expansion method developed by Kato, Takahashi and Yamada [@1]. We also demonstrate the validity of our approximation method through numerical examples.\
\
[**Keywords**]{}: Barrier Option, Up-and-Out Call Option, Asymptotic Expansion, Stochastic Volatility Model
Introduction
============
Numerical computation schemes for pricing barrier options have been a topic of great interest in mathematical finance and stochastic analysis. One of the tractable approaches to the evaluation of barrier options is to derive an analytical approximation. However, from the mathematical viewpoint, deriving an approximation formula by applying stochastic analysis is not an easy task, since the Malliavin calculus approach as in Takahashi and Yamada [@3] cannot be directly applied. Recently, Kato, Takahashi and Yamada [@1] provided a new asymptotic expansion method for the Cauchy–Dirichlet problem by developing a rigorous perturbation scheme for a partial differential equation (PDE), and, as an example, derived an approximation formula for a down-and-out call option price under a stochastic volatility model. In this paper, we give a new asymptotic expansion formula for an up-and-out call option price under a stochastic volatility model which is widely used in trading practice. Moreover, we show the validity of our formula through numerical experiments.
Asymptotic expansion formula for up-and-out barrier option prices
=================================================================
Consider the following stochastic differential equation (SDE) in a stochastic volatility model:$$\begin{aligned}
\label{SVdrift}
dS_{t}^{\varepsilon}&=&(c-q) S_{t}^{\varepsilon} dt+\sigma_t^{\varepsilon} S_{t}^{\varepsilon} dB_{t}^1, \\ S_0^{\varepsilon}&=&S,\\
d\sigma_{t}^{\varepsilon}&=&\varepsilon \lambda (\theta-\sigma_t^{\varepsilon})dt\\
&&+\varepsilon \nu \sigma_{t}^{\varepsilon}(\rho dB_{t}^1+\sqrt{1-\rho^2} dB_{t}^2),
\\ \sigma_{0}^{\varepsilon}&=&\sigma, \end{aligned}$$ where $S, \sigma , c, q >0$, $\varepsilon \in [0,1)$, $\lambda, \theta, \nu>0$, $\rho \in [-1,1]$ and $B=(B^1,B^2)$ is a two-dimensional standard Brownian motion. This model is motivated by the pricing of currency options: $c$ and $q$ represent the domestic and the foreign interest rate, respectively, and the process $S^\varepsilon$ denotes the price of the underlying currency. Our purpose is to evaluate an [*up-and-out*]{} barrier option with time-to-maturity $T-t$ and upper barrier $H (>S)$, whose initial value is represented under a risk-neutral probability measure as follows: $$\begin{aligned}
&&C_\mathrm {Barrier}^{SV,\varepsilon}(T-t, S)\\
&&\ \ \ \ \ \ \ = {\mathop {\rm E}}\left[e^{-c(T-t)} {f}(S^{\varepsilon } _{T-t} ) 1_{\{\tau _{(0,H)}(S^{\varepsilon }) > T-t\}}\right],\end{aligned}$$ where $f$ stands for a call option payoff function $f(s) = \max \{ s - K, 0 \}$ for some $K > 0$. Here, the stopping time $\tau _{(0, H)}(S^{\varepsilon })$ is defined as $$\begin{aligned}
\tau _{(0, H)}(S^{\varepsilon }) = \inf \{ t\in [0, T] ; S^{\varepsilon }_t\notin (0, H) \}
\ \ (\inf \emptyset := \infty). \end{aligned}$$
Note that $C_\mathrm {Barrier}^{SV,\varepsilon}(T-t, S)$ has no closed-form solution, so we must rely on a numerical method, such as Monte Carlo simulation, in order to calculate $C_\mathrm {Barrier}^{SV,\varepsilon}(T-t, S)$. However, when $\varepsilon=0$, $C_\mathrm {Barrier}^{SV,0}(T-t, S)$ corresponds to the up-and-out barrier option price in the Black–Scholes model, which is known in closed form. Then, for $\varepsilon > 0$, we are able to derive a [*semi closed-form expansion*]{} around $C_\mathrm {Barrier}^{SV,0}(T-t, S)$ as $\varepsilon \downarrow 0$. This is our main result, and in the remainder of this section we describe our approximation method for $C_\mathrm {Barrier}^{SV,\varepsilon}(T-t, S)$.
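For instance, a crude Monte Carlo estimate of $C_\mathrm {Barrier}^{SV,\varepsilon}$ can be obtained with an Euler scheme and discrete barrier monitoring. The sketch below is illustrative only: the parameter defaults mirror Case 1 of the numerical section, the step and path counts are far smaller than the production settings used there, and discrete monitoring biases the estimate upward relative to the continuous barrier.

```python
import numpy as np

def mc_up_and_out_call(S0=100.0, K=100.0, H=120.0, sigma0=0.2, c=0.0, q=0.0,
                       rho=-0.5, eps_nu=0.1, eps_lam=0.0, theta=0.0,
                       T=1.0, n_steps=400, n_paths=40000, seed=0):
    """Euler Monte Carlo for the up-and-out call under the SV model above."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, np.log(S0))      # X_t = log S_t
    sig = np.full(n_paths, sigma0)
    alive = np.ones(n_paths, dtype=bool)  # not yet knocked out
    log_H = np.log(H)
    for _ in range(n_steps):
        dB1 = np.sqrt(dt) * rng.standard_normal(n_paths)
        dB2 = np.sqrt(dt) * rng.standard_normal(n_paths)
        dW = rho * dB1 + np.sqrt(1.0 - rho**2) * dB2  # vol driver, corr. rho
        x = x + (c - q - 0.5 * sig**2) * dt + sig * dB1
        sig = sig + eps_lam * (theta - sig) * dt + eps_nu * sig * dW
        alive &= (x < log_H)              # knocked out once S_t >= H
    payoff = np.where(alive, np.maximum(np.exp(x) - K, 0.0), 0.0)
    return np.exp(-c * T) * payoff.mean()
```

Because the path can cross $H$ between monitoring dates, a discretely monitored scheme systematically overprices the continuously monitored option; benchmark-quality runs therefore need very fine time grids, as in the $100{,}000$-step simulations of the numerical section.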
Applying Itô’s formula, we can derive the SDE for the logarithmic process of $S^\varepsilon _t$ as $$\begin{aligned}
dX_{t}^{\varepsilon}&=&(c-q -\frac{1}{2}(\sigma_t^{\varepsilon})^2) dt+\sigma_t^{\varepsilon} dB_{t}^1, \\ X_{0}^{\varepsilon}&=&x \ := \ \log S. \end{aligned}$$ Then we can rewrite $C_\mathrm {Barrier}^{SV,\varepsilon}(T-t, S)$ as $$\begin{aligned}
&&C_\mathrm {Barrier}^{SV,\varepsilon}(T-t, e^x)\\
&&\ \ \ \ \ \ \ = {\mathop {\rm E}}\left[e^{-c(T-t)} \bar{f}(X^{\varepsilon } _{T-t} ) 1_{\{\tau _{D}(X^{\varepsilon }) > T-t\}}\right], \end{aligned}$$ where $\bar{f}(x)=\max \{ e^x - K, 0 \}$ and $D = (-\infty, \log H )$. Note that $$\begin{aligned}
\tau _{D}(X^{\varepsilon }) = \inf \{t\in [0, T] \ ; \ X^\varepsilon _t\notin D\} = \tau _{(0, H)}(S^{\varepsilon }). \end{aligned}$$
Let $u^{\varepsilon}(t, x)=C_\mathrm {Barrier}^{SV,\varepsilon}(T-t,e^x)$ for $t\in [0, T]$ and $x\in {\bf R}$. Then $u^{\varepsilon}(t, x)$ satisfies the following PDE: $$\begin{aligned}
\left\{
\begin{array}{ll}
\left(\frac{\partial}{\partial t}+\mathscr {L}^{\varepsilon}-c \right)u^{\varepsilon}(t,x) = 0, & (t, x)\in (0, T]\times D, \\
u^{\varepsilon}(T,x) = \bar{f}(x), & x \in \bar{D}, \\
u^{\varepsilon}(t,\log H) = 0, & t\in [0, T],
\end{array}
\right. \end{aligned}$$ where $$\begin{aligned}
\mathscr {L}^{\varepsilon}&=&\left(c-q -\frac{1}{2}\sigma^2 \right)\frac{\partial}{\partial x}+\frac{1}{2}\sigma^2\frac{\partial^2}{\partial x^2}\\
&&+\varepsilon \rho \nu\sigma^2 \frac{\partial^2}{\partial x \partial \sigma}+\varepsilon \lambda (\theta-\sigma)\frac{\partial}{\partial \sigma}
+\varepsilon^2 \frac{1}{2}\nu^2\sigma^2 \frac{\partial^2}{\partial \sigma^2}.\label{Gene_ep}\end{aligned}$$ As mentioned above, when $\varepsilon = 0$, we can obtain the explicit value of $u^0(t, x)$. In this case, $u^0(t, x) = C^{BS}_\mathrm{Barrier}(T-t ,e^x,\sigma,H)$ represents the price of the up-and-out barrier call option under the Black–Scholes model. We have $$\begin{aligned}
C_{\mathrm{Barrier}}^{BS}=C_{\mathrm{Vanilla}}^{BS}-C,\end{aligned}$$ where $$\begin{aligned}
C_{\mathrm{Vanilla}}^{BS}&=&e^x e^{-qT}N(d_1)-Ke^{-cT}N(d_2),\\
C&=&e^x e^{-qT}N(x_1)-Ke^{-cT}N(x_2)\\
&&-e^x e^{-qT} \left( \frac{H}{e^x} \right)^{2\lambda}[ N(-y)-N(-y_1) ]\\
&&+ Ke^{-cT} \left( \frac{H}{e^x} \right)^{2\lambda-2}\\
&&\ \ \times [ N(-y+\sigma \sqrt{T})-N(-y_1+\sigma \sqrt{T}) ]\end{aligned}$$ with $$\begin{aligned}
x_1
&=&
\frac{x- \log H +(c-q)T+1/2 \sigma^2 {T} }{\sigma \sqrt{T}},
\\
x_2&=&x_1-\sigma \sqrt{T},\\
\lambda&=&\frac{(c-q)}{\sigma^2}+ \frac{1}{2}, \\
y&=&
\frac{2 \log H -x-\log K +(c-q)T+1/2 \sigma^2 {T} }{\sigma \sqrt{T}},\\
y_1&=&
\frac{\log H- x +(c-q)T+1/2 \sigma^2 {T} }{\sigma \sqrt{T}}.\end{aligned}$$ See Hull [@2] for the details.\
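For concreteness, the decomposition above can be sketched in code. The block below is a minimal Python implementation of $C_{\mathrm{Barrier}}^{BS}=C_{\mathrm{Vanilla}}^{BS}-C$ for $K < H$; the Black–Scholes arguments $d_1, d_2$, which the text does not restate, are assumed to take their standard form.

```python
from math import erf, exp, log, sqrt

def N(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def up_and_out_call(S, K, H, sigma, c, q, T):
    """Black-Scholes up-and-out call, C_Barrier = C_Vanilla - C (K < H < inf)."""
    assert K < H and S < H
    x = log(S)
    v = sigma * sqrt(T)
    # standard Black-Scholes arguments (assumed, not restated in the text)
    d1 = (log(S / K) + (c - q) * T + 0.5 * sigma**2 * T) / v
    d2 = d1 - v
    vanilla = S * exp(-q * T) * N(d1) - K * exp(-c * T) * N(d2)
    lam = (c - q) / sigma**2 + 0.5
    x1 = (x - log(H) + (c - q) * T + 0.5 * sigma**2 * T) / v
    x2 = x1 - v
    y = (2 * log(H) - x - log(K) + (c - q) * T + 0.5 * sigma**2 * T) / v
    y1 = (log(H) - x + (c - q) * T + 0.5 * sigma**2 * T) / v
    # C: the up-and-in part subtracted from the vanilla price
    C = (S * exp(-q * T) * N(x1) - K * exp(-c * T) * N(x2)
         - S * exp(-q * T) * (H / S)**(2 * lam) * (N(-y) - N(-y1))
         + K * exp(-c * T) * (H / S)**(2 * lam - 2)
           * (N(-y + v) - N(-y1 + v)))
    return vanilla - C
```

With the parameters of Case 1 in the numerical section ($S=100$, $K=100$, $H=120$, $\sigma=0.2$, $c=q=0$, $T=1$), this function reproduces the [**AE zeroth**]{} value $1.105$ reported there.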
We can represent $u^0(t, x) = \bar{P}^D_t\bar{f}(x)$ by using the semigroup $(\bar{P}^D_t)_t$ defined as $$\begin{aligned}
{\bar P}_{s}^{D}g(x)&=&\int_{-\infty}^{\log H} e^{-c s} (1-e^{-\frac{2(\log H-x)(\log H-y)}{\sigma^2s}})\nonumber\\
&&\ \ \times \frac{1}{\sqrt{2 \pi \sigma^2 s}}
e^{-\frac{(y-x-(c-q-\frac{1}{2}\sigma^2) s )^2}{2\sigma^2 s}}
g(y) dy\nonumber\\ \label{semigroup}\end{aligned}$$ for a continuous function $g$ with polynomial growth rate which satisfies $g(x)=0$ on $\partial D$.
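The kernel in (\[semigroup\]) is the classical transition density of drifted Brownian motion absorbed at $\log H$ (a Gaussian density times a reflection-principle correction), so ${\bar P}^{D}_{T}\bar{f}(x)$ must reproduce the closed-form barrier price. A minimal numerical check, using trapezoidal quadrature over $[\log K, \log H]$ (outside of which the integrand vanishes for the call payoff) and the Case 1 parameters of the numerical section:

```python
import numpy as np

def killed_kernel(x, y, s, sigma, c, q, log_H):
    """Density of the log-price at time s, absorbed at log H, as in (semigroup)."""
    mu = (c - q - 0.5 * sigma**2) * s
    gauss = (np.exp(-(y - x - mu)**2 / (2.0 * sigma**2 * s))
             / np.sqrt(2.0 * np.pi * sigma**2 * s))
    barrier = 1.0 - np.exp(-2.0 * (log_H - x) * (log_H - y) / (sigma**2 * s))
    return barrier * gauss

def semigroup_call_price(S=100.0, K=100.0, H=120.0, sigma=0.2,
                         c=0.0, q=0.0, T=1.0, n=20001):
    """bar P^D_T applied to the call payoff, via trapezoidal quadrature."""
    x, log_H = np.log(S), np.log(H)
    y = np.linspace(np.log(K), log_H, n)
    g = killed_kernel(x, y, T, sigma, c, q, log_H) * (np.exp(y) - K)
    dy = y[1] - y[0]
    return np.exp(-c * T) * dy * (g.sum() - 0.5 * (g[0] + g[-1]))
```

To quadrature accuracy, the result agrees with the closed-form Black–Scholes barrier price of the previous display (for example, $\approx 1.105$ for $H=120$, $K=100$, matching the tables of the numerical section).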
The main result of Kato, Takahashi and Yamada [@1] suggests the following approximation formula.\
\
[**\[Asymptotic expansion formula\]**]{} $$\begin{aligned}
&&u^{\varepsilon}(t,x)= C_{\mathrm{Barrier}}^{BS}\\
&&\ \ \ \ +\varepsilon e^{-c(T-t)}\int_{0}^{T-t}{\bar P}^{D}_{s} \tilde{\mathscr {L}}^0_{1} {\bar P}^{D}_{T-t-s} {\bar f}(x)ds+O(\varepsilon^2), \end{aligned}$$ where $$\begin{aligned}
\tilde{\mathscr {L}}^0_1 =
\frac{\partial }{\partial \varepsilon }\mathscr {L}^{\varepsilon}|_{\varepsilon=0} =
\rho \nu \sigma^2\frac{\partial^2}{\partial x \partial \sigma}+\lambda (\theta-\sigma)\frac{\partial}{\partial \sigma}. \label{g_expansion}\end{aligned}$$\
Using (\[semigroup\]), the term $\int_{0}^{T-t}{\bar P}^{D}_{s} \tilde{\mathscr {L}}^0_{1} {\bar P}^{D}_{T-t-s} {\bar f}(x)ds$ is expressed as follows: $$\begin{aligned}
&&\int_{0}^{T-t}{\bar P}^{D}_{s} \tilde{\mathscr {L}}^0_{1} {\bar P}^{D}_{T-t-s} {\bar f}(x)ds\nonumber\\
&=&\int_{0}^{T-t} \int_{-\infty}^{\log H} e^{-c s} (1-e^{-\frac{2(\log H-x)(\log H-y)}{\sigma^2s}})\nonumber\\
&&\ \ \times \frac{1}{\sqrt{2 \pi \sigma^2 s}}
e^{-\frac{(y-x-(c-q-\frac{1}{2}\sigma^2) s )^2}{2\sigma^2 s}}
\tilde{\mathscr {L}}^0_{1} {\bar P}_{T-t-s}^{D}\bar{f}(y)dyds.\nonumber\\
\label{Approx_term}\end{aligned}$$
We are able to compute the integrand of the right hand side of the above formula (\[Approx\_term\]) as $$\begin{aligned}
&&\tilde{\mathscr {L}}^0_{1} {\bar P}_{T-t}^{D}\bar{f}(x)\\
&=&e^{c(T-t)} \Biggl\{ \rho \nu \sigma^2 \frac{\partial^2}{\partial x \partial \sigma} C_{\mathrm{Barrier}}^{BS}(T-t,e^x,\sigma)\\
&&+\lambda (\theta-\sigma) \frac{\partial}{\partial \sigma} C_{\mathrm{Barrier}}^{BS}(T-t,e^x,\sigma) \Biggr\} .\end{aligned}$$ Here, $\frac{\partial}{\partial \sigma}C_{\mathrm{Barrier}}^{BS}(T,e^x)$ and $\frac{\partial^2}{\partial x \partial \sigma}C_{\mathrm{Barrier}}^{BS}(T,e^x)$ are expressed explicitly as follows: $$\begin{aligned}
&&\frac{\partial}{\partial \sigma}C_{\mathrm{Barrier}}^{BS}(T,e^x)\\
&=&e^{-qT} e^x n(d_1) \sqrt{T}\\
&&-e^{-qT} e^x n(x_1) \sqrt{T}-(H-K)e^{-cT} n(x_2) \frac{-x_1}{\sigma}\\
&&+e^x e^{-qT} \left( \frac{H}{e^x} \right)^{2\lambda}\\
&&\times \Biggl\{ (\log H - x) \frac{-4(c-q)}{\sigma^3}[ N(-y)-N(-y_1) ]\\
&& + [n(y)\frac{y'}{\sigma} -n(y_1)\frac{y'_1}{\sigma}] \Biggr\}\\
&&-Ke^{-cT} \left( \frac{H}{e^x} \right)^{2\lambda-2} \\
&&\times \Biggl\{ (\log H - x) \frac{-4(c-q)}{\sigma^3}[ N(-y')-N(-y'_1) ]\\
&&+ [n(y')\frac{y}{\sigma}-n(y'_1)\frac{y_1}{\sigma}] \Biggr\}, \end{aligned}$$ $$\begin{aligned}
&&\frac{\partial^2}{\partial x \partial \sigma}C_{\mathrm{Barrier}}^{BS}(T,e^x)
=e^{-qT} e^{x}n(d_1)(-d_2)\frac{1}{\sigma}\\
&&-e^{-qT} e^{x}n(x_1)(-x_2)\frac{1}{\sigma}\\
&&-(H-K)e^{-cT} \frac{n(x_2)}{\sigma^2 \sqrt{T}} \{{x_1 x_2}-1 \}\\
&&+\frac{4(c-q)}{\sigma^3} \{ (-1+2\lambda )(\log H - x)+1 \} \\
&&\times e^x e^{-qT} \left( \frac{H}{e^x} \right)^{2\lambda} [ N(-y)-N(-y_1) ]
\\
&&+e^x e^{-qT} \left( \frac{H}{e^x} \right)^{2\lambda} [n(y)\frac{y'}{\sigma}-n(y_1)\frac{y'_1}{\sigma}]\\
&&\times \left(1 -2 \lambda \left( \frac{H}{e^x} \right)^{2\lambda} \right)\\
&&-e^x e^{-qT} \left( \frac{H}{e^x} \right)^{2\lambda} (\log H - x)\\
&&\times \frac{4(c-q)}{\sigma^3} \left( n(y)\left(\frac{1}{\sigma \sqrt{T}}\right)-n(y_1)\left(\frac{1}{\sigma \sqrt{T}}\right) \right)\\
&&+e^x e^{-qT} \left( \frac{H}{e^x} \right)^{2\lambda}\\
&&\times \left( n(y)\frac{1}{\sigma^2 \sqrt{T}}(yy'-1)-n(y_1)\frac{1}{\sigma^2 \sqrt{T}}(y_1y'_1-1) \right)\\
&&-Ke^{-cT}[ N(y')-N(y'_1) ]\\
&&\times \left( \left( \frac{H}{e^x} \right)^{2\lambda-2}\frac{4(c-q)}{\sigma^3} \{ (2 \lambda-2) (\log H - x)+1\} \right)\\
&&+Ke^{-cT} \left( \frac{H}{e^x} \right)^{2\lambda-2} (\log H - x) \frac{4(c-q)}{\sigma^3}\\
&&\times \left( n(y') \frac{1}{\sigma \sqrt{T}}-n(y'_1) \frac{1}{\sigma \sqrt{T}} \right)\\
&&+ Ke^{-cT} (2 \lambda-2) \left( \frac{H}{e^x} \right)^{2\lambda-2}\\
&&\times \left( n(y') \frac{y}{\sigma}-n(y'_1) \frac{y_1}{\sigma} \right)\\
&&-Ke^{-cT} \left( \frac{H}{e^x} \right)^{2\lambda-2}\\
&&\times \left(n(y')\frac{1}{\sigma^2 \sqrt{T}}(y'y-1)-n(y'_1)\frac{1}{\sigma^2 \sqrt{T}}(y'_1 y_1-1) \right) , \end{aligned}$$ where $$\begin{aligned}
&&y'=\frac{2 \log H-x-\log K+(c-q)T-\frac{1}{2} \sigma^2 {T} }{\sigma \sqrt{T}},\\
&&y'_1=\frac{\log H- x +(c-q)T-\frac{1}{2} \sigma^2 {T} }{\sigma \sqrt{T}}.\end{aligned}$$
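The expressions above are lengthy and easy to mistranscribe; a practical cross-check (or a drop-in substitute in a prototype) is to differentiate the closed-form price numerically. The sketch below uses central finite differences; `bs_up_and_out` restates the closed form given earlier, with the standard $d_1, d_2$ assumed since the text does not restate them.

```python
from math import erf, exp, log, sqrt

def N(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_vanilla(S, K, sigma, c, q, T):
    d1 = (log(S / K) + (c - q) * T + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    return S * exp(-q * T) * N(d1) - K * exp(-c * T) * N(d1 - sigma * sqrt(T))

def bs_up_and_out(S, K, H, sigma, c, q, T):
    # C_Barrier^BS = C_Vanilla^BS - C (see text), valid for K < H and S < H
    v = sigma * sqrt(T)
    lam = (c - q) / sigma**2 + 0.5
    x1 = (log(S / H) + (c - q) * T + 0.5 * sigma**2 * T) / v
    y = (log(H * H / (S * K)) + (c - q) * T + 0.5 * sigma**2 * T) / v
    y1 = (log(H / S) + (c - q) * T + 0.5 * sigma**2 * T) / v
    C = (S * exp(-q * T) * N(x1) - K * exp(-c * T) * N(x1 - v)
         - S * exp(-q * T) * (H / S)**(2 * lam) * (N(-y) - N(-y1))
         + K * exp(-c * T) * (H / S)**(2 * lam - 2) * (N(-y + v) - N(-y1 + v)))
    return bs_vanilla(S, K, sigma, c, q, T) - C

def fd_vega(f, sigma, k=1e-4):
    """Central difference for the first derivative of f in sigma."""
    return (f(sigma + k) - f(sigma - k)) / (2.0 * k)

def fd_vanna(f, x, sigma, h=1e-3, k=1e-4):
    """Central cross difference for d^2 f / (dx dsigma), f = f(log-spot, sigma)."""
    return (f(x + h, sigma + k) - f(x + h, sigma - k)
            - f(x - h, sigma + k) + f(x - h, sigma - k)) / (4.0 * h * k)
```

For the Case 1 parameters of the next section, the finite-difference vanilla vega recovers the analytic value $e^x e^{-qT} n(d_1)\sqrt{T} \approx 39.70$, while the up-and-out vega is negative: raising $\sigma$ increases the knock-out probability faster than it adds optionality.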
Numerical Examples
==================
In this section, we present numerical examples for pricing European up-and-out barrier call options under the SABR-type stochastic volatility model ($\lambda=0$) for illustrative purposes. By the asymptotic expansion formula in the previous section, we see $$\begin{aligned}
C_\mathrm {Barrier}^{SV,\varepsilon}(T,S) &\simeq& C_\mathrm {Barrier}^{BS}(T,S)\\
&&+\varepsilon e^{-c T}\int_{0}^{T}{\bar P}^{D}_{s} \tilde{\mathscr {L}}^0_{1} {\bar P}^{D}_{T-s} {f}(S)ds.\end{aligned}$$ Let us define [**AE first**]{} and [**AE zeroth**]{} as $$\begin{aligned}
\mbox{{\bf AE first}}&=& C_\mathrm {Barrier}^{BS}(T,S)\\
&&+\varepsilon e^{-c T}\int_{0}^{T}{\bar P}^{D}_{s} \tilde{\mathscr {L}}^0_{1} {\bar P}^{D}_{T-s} {f}(S)ds,\\
\mbox{{\bf AE zeroth}}&=&C_\mathrm {Barrier}^{BS}(T,S).\end{aligned}$$
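A direct, if crude, numerical evaluation of [**AE first**]{} is possible without the analytic Greeks: compute $\frac{\partial^2}{\partial x \partial \sigma} C_{\mathrm{Barrier}}^{BS}$ by finite differences and integrate it against the absorbed-Gaussian kernel of (\[semigroup\]). The sketch below restates the closed form (with the standard $d_1, d_2$ assumed), takes $\lambda=0$ (the SABR case of this section) and $c=q=0$, and uses deliberately coarse quadrature grids; it illustrates the structure of the correction term and is not a production implementation.

```python
import numpy as np
from math import erf, exp, log, sqrt, pi

def N(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def uo_call(x, K, H, sigma, tau):
    # closed-form up-and-out call on log-spot x, c = q = 0; zero once knocked out
    log_H = log(H)
    if x >= log_H:
        return 0.0
    S, v = exp(x), sigma * sqrt(tau)
    d1 = (x - log(K) + 0.5 * sigma**2 * tau) / v
    vanilla = S * N(d1) - K * N(d1 - v)
    lam = 0.5                                   # (c - q)/sigma^2 + 1/2 with c = q
    x1 = (x - log_H + 0.5 * sigma**2 * tau) / v
    y = (2 * log_H - x - log(K) + 0.5 * sigma**2 * tau) / v
    y1 = (log_H - x + 0.5 * sigma**2 * tau) / v
    C = (S * N(x1) - K * N(x1 - v)
         - S * (H / S)**(2 * lam) * (N(-y) - N(-y1))
         + K * (H / S)**(2 * lam - 2) * (N(-y + v) - N(-y1 + v)))
    return vanilla - C

def fd_vanna(x, K, H, sigma, tau, h=1e-3, k=1e-3):
    # central cross difference for d^2 C / (dx dsigma)
    return (uo_call(x + h, K, H, sigma + k, tau) - uo_call(x + h, K, H, sigma - k, tau)
            - uo_call(x - h, K, H, sigma + k, tau) + uo_call(x - h, K, H, sigma - k, tau)
            ) / (4 * h * k)

def first_order_term(S=100.0, K=100.0, H=120.0, sigma=0.2, rho=-0.5,
                     eps_nu=0.1, T=1.0, n_s=24, n_y=300):
    """eps * int_0^T bar P^D_s [rho nu sigma^2 vanna(T-s, .)] ds, lambda = 0."""
    x, log_H = log(S), log(H)
    total, ds = 0.0, T / n_s
    for i in range(n_s):
        s = (i + 0.5) * ds                      # midpoint rule in s
        ys = np.linspace(x - 8 * sigma * sqrt(s), log_H, n_y)
        kern = ((1.0 - np.exp(-2 * (log_H - x) * (log_H - ys) / (sigma**2 * s)))
                * np.exp(-(ys - x + 0.5 * sigma**2 * s)**2 / (2 * sigma**2 * s))
                / sqrt(2 * pi * sigma**2 * s))
        vals = np.array([fd_vanna(yy, K, H, sigma, T - s) for yy in ys])
        g = kern * vals
        dy = ys[1] - ys[0]
        total += ds * dy * (g.sum() - 0.5 * (g[0] + g[-1]))
    return eps_nu * rho * sigma**2 * total
```

Adding this term to [**AE zeroth**]{} moves the price toward the [**MC**]{} benchmark, mirroring the [**AE first**]{} column of the tables below.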
Below we list the numerical examples, \[Case 1\] – \[Case 6\], where the numbers in parentheses show the error rates (%) relative to the benchmark prices of $C_\mathrm {Barrier}^{SV,\varepsilon}(T,S)$, which are computed by Monte Carlo simulations with $100,000$ time steps and $1,000,000$ trials (denoted by [**MC**]{}). We check the accuracy of our approximation formula by changing the model parameters.
Evidently, our approximation formula [**AE first**]{} improves the accuracy of the approximation to $C_\mathrm {Barrier}^{SV,\varepsilon}(T,S)$: the approximation term $\varepsilon e^{-c T}\int_{0}^{T}{\bar P}^{D}_{s} \tilde{\mathscr {L}}^0_{1} {\bar P}^{D}_{T-s} {f}(S)ds$ accurately compensates for the difference between $C_\mathrm {Barrier}^{SV,\varepsilon}(T,S)$ and $C_\mathrm {Barrier}^{BS}(T, S)$, which confirms the validity of our method.\
\
For all cases, we set $S=100$, $\sigma=0.2$, $c=0.0$, $q=0.0$, $\rho=-0.5$, $\varepsilon \lambda=0.0$, $\theta=0.0$ and $T=1.0$. In Cases 1, 2 and 3, given $\varepsilon \nu=0.1$, the upper barrier is set at $H=120, 130, 140$, respectively, while in Cases 4, 5 and 6, given $\varepsilon \nu=0.2$, $H$ is set at $120, 130, 140$, respectively. In particular, for $\varepsilon \nu=0.2$ (the higher volatility-of-volatility cases, Cases 4, 5 and 6), the errors of the approximation become slightly larger. However, as the comparison between [**AE first**]{} and [**AE zeroth**]{} shows, a higher-order expansion should improve the approximation further; this will be investigated in our next research.\
$[{\bf Case 1}]$\
$$\begin{aligned}
&&
S=100,\ \sigma=0.2,\ c=0.0, \ q=0.0,\ \varepsilon \nu=0.1, \\
&&
\rho=-0.5,\ \varepsilon \lambda=0.0,\ \theta=0.0,\ H=120,\ T=1.0.
$$
  [Strike: $K$]     MC       AE first         AE zeroth
----------------- ------- ---------------- ----------------
100 1.204 1.188 (-1.35%) 1.105 (-8.25%)
102 0.882 0.869 (-1.44%) 0.804 (-8.78%)
105 0.512 0.504 (-1.62%) 0.463 (-9.59%)
: up-and-out barrier option prices and the relative errors (Case 1)[]{data-label="fig3"}
\
\
\
\
$[{\bf Case 2}]$\
$$\begin{aligned}
&&
S=100,\ \sigma=0.2,\ c=0.0, \ q=0.0,\ \varepsilon \nu=0.1, \\
&&
\rho=-0.5,\ \varepsilon \lambda=0.0,\ \theta=0.0,\ H=130,\ T=1.0.
$$
  [Strike: $K$]     MC       AE first         AE zeroth
----------------- ------- ---------------- ----------------
100 3.216 3.200 (-0.49%) 2.966 (-7.78%)
102 2.621 2.607 (-0.55%) 2.406 (-8.22%)
105 1.869 1.857 (-0.69%) 1.702 (-8.93%)
: up-and-out barrier option prices and the relative errors (Case 2)[]{data-label="fig3"}
\
\
\
$[{\bf Case 3}]$\
$$\begin{aligned}
&&
S=100,\ \sigma=0.2,\ c=0.0, \ q=0.0, \varepsilon \nu=0.1,\\
&&
\rho=-0.5, \
\varepsilon \lambda=0.0,\ \theta=0.0,\
H=140,\ T=1.0.$$
  [Strike: $K$]     MC       AE first         AE zeroth
----------------- ------- --------------- ----------------
100 5.184 5.186 (0.05%) 4.847 (-6.49%)
102 4.420 4.423 (0.06%) 4.121 (-6.77%)
105 3.420 3.422 (0.06%) 3.174 (-7.19%)
: up-and-out barrier option prices and the relative errors (Case 3)[]{data-label="fig5"}
\
\
\
$[{\bf Case 4}]$\
$$\begin{aligned}
&&
S=100,\ \sigma=0.2,\ c=0.0, \ q=0.0, \varepsilon \nu=0.2,\\
&&
\rho=-0.5, \
\varepsilon \lambda=0.0,\ \theta=0.0,\
H=120,\ T=1.0.$$
  [Strike: $K$]     MC       AE first         AE zeroth
----------------- ------- ---------------- -----------------
100 1.317 1.271 (-3.51%) 1.105 (-16.12%)
102 0.971 0.934 (-3.83%) 0.804 (-17.15%)
105 0.569 0.545 (-4.30%) 0.463 (-18.65%)
: up-and-out barrier option prices and the relative errors (Case 4)[]{data-label="fig5"}
\
\
\
$[{\bf Case 5}]$\
$$\begin{aligned}
&&
S=100,\ \sigma=0.2,\ c=0.0, \ q=0.0, \varepsilon \nu=0.2,\\
&&
\rho=-0.5, \
\varepsilon \lambda=0.0,\ \theta=0.0,\
H=130,\ T=1.0.$$
  [Strike: $K$]     MC       AE first         AE zeroth
----------------- ------- ---------------- -----------------
100 3.475 3.435 (-1.15%) 2.966 (-14.66%)
102 2.844 2.808 (-1.27%) 2.406 (-15.42%)
105 2.041 2.011 (-1.48%) 1.702 (-16.58%)
: up-and-out barrier option prices and the relative errors (Case 5)[]{data-label="fig5"}
\
\
\
$[{\bf Case 6}]$\
$$\begin{aligned}
&&
S=100,\ \sigma=0.2,\ c=0.0, \ q=0.0, \varepsilon \nu=0.2,\\
&&
\rho=-0.5, \
\varepsilon \lambda=0.0,\ \theta=0.0,\
H=140,\ T=1.0.$$
  [Strike: $K$]     MC       AE first         AE zeroth
----------------- ------- --------------- -----------------
100 5.483 5.526 (0.78%) 4.847 (-11.59%)
102 4.683 4.725 (0.85%) 4.121 (-12.03%)
105 3.635 3.670 (0.97%) 3.174 (-12.68%)
: up-and-out barrier option prices and the relative errors (Case 6)[]{data-label="fig5"}
[99]{}

T. Kato, A. Takahashi and T. Yamada, An asymptotic expansion for solutions of the Cauchy–Dirichlet problem for second order parabolic PDEs and its application to pricing barrier options, arXiv preprint, 2012.

J. C. Hull, Options, Futures, and Other Derivatives, 6th Edition, Prentice Hall, 2005.

A. Takahashi and T. Yamada, An asymptotic expansion with push-down of Malliavin weights, SIAM Journal on Financial Mathematics, [**3**]{} (2012), 95–136.
[^1]: Osaka University,
[^2]: The University of Tokyo,
[^3]: The University of Tokyo & MTEC
---
abstract: 'The Sloan Extension for Galactic Understanding and Exploration (SEGUE) survey obtained $\approx$ 240,000 moderate resolution ($R\sim 1800$) spectra from 3900Å to 9000Å of fainter Milky Way stars ($14.0 < g < 20.3$) of a wide variety of spectral types, both main-sequence and evolved objects, with the goal of studying the kinematics and populations of our Galaxy and its halo. The spectra are clustered in 212 regions spaced over three-quarters of the sky. Radial velocity accuracies for stars are $\sigma(\rm RV) \sim 4 \>\rm km~s^{-1}$ at $g < 18$, degrading to $\sigma(\rm RV) \sim 15\rm \>km~s^{-1}$ at $g\sim 20$. For stars with signal-to-noise ratio $> 10$ per resolution element, stellar atmospheric parameters are estimated, including metallicity, surface gravity, and effective temperature. SEGUE obtained $3500 \rm deg^2$ of additional $ugriz$ imaging (primarily at low Galactic latitudes) providing precise multicolor photometry ($\sigma(g,r,i) \sim 2$%), ($\sigma(u,z) \sim 3$%) and astrometry ($\approx 0.1''''$) for spectroscopic target selection. The stellar spectra, imaging data, and derived parameter catalogs for this survey are publicly available as part of Sloan Digital Sky Survey Data Release 7.'
author:
- 'Brian Yanny, Constance Rockosi, Heidi Jo Newberg, Gillian R. Knapp, Jennifer K. Adelman-McCarthy, Bonnie Alcorn, Sahar Allam, Carlos Allende Prieto, Deokkeun An, Kurt S. J. Anderson, Scott Anderson, Coryn A.L. Bailer-Jones, Steve Bastian, Timothy C. Beers, Eric Bell, Vasily Belokurov, Dmitry Bizyaev, Norm Blythe, John J. Bochanski, William N. Boroski, Jarle Brinchmann, J. Brinkmann, Howard Brewington, Larry Carey, Kyle M. Cudworth, Michael Evans, N. W. Evans, Evalyn Gates, B. T. Gänsicke, Bruce Gillespie, Gerald Gilmore, Ada Nebot Gomez-Moran, Eva K. Grebel, Jim Greenwell, James E. Gunn, Cathy Jordan, Wendell Jordan, Paul Harding, Hugh Harris, John S. Hendry, Diana Holder, Inese I. Ivans, [Z]{}eljko Ivezić, Sebastian Jester, Jennifer A. Johnson, Stephen M. Kent, Scot Kleinman, Alexei Kniazev, Jurek Krzesinski, Richard Kron, Nikolay Kuropatkin, Svetlana Lebedeva, Young Sun Lee, R. French Leger, Sébastien Lépine, Steve Levine, Huan Lin, Daniel C. Long, Craig Loomis, Robert Lupton, Olena Malanushenko, Viktor Malanushenko, Bruce Margon, David Martinez-Delgado, Peregrine McGehee, Dave Monet, Heather L. Morrison, Jeffrey A. Munn, Eric H. Neilsen, Jr., Atsuko Nitta, John E. Norris, Dan Oravetz, Russell Owen, Nikhil Padmanabhan, Kaike Pan, R. S. Peterson, Jeffrey R. Pier, Jared Platson, Paola Re Fiorentin, Gordon T. Richards, Hans-Walter Rix, David J. Schlegel, Donald P. Schneider, Matthias R. Schreiber, Axel Schwope, Valena Sibley, Audrey Simmons, Stephanie A. Snedden, J. Allyn Smith, Larry Stark, Fritz Stauffer, M. Steinmetz, C. Stoughton, Mark SubbaRao, Alex Szalay, Paula Szkody, Aniruddha R. Thakar, Sivarani Thirupathi, Douglas Tucker, Alan Uomoto, Dan Vanden Berk, Simon Vidrih, Yogesh Wadadekar, Shannon Watters, Ron Wilhelm, Rosemary F. G. Wyse, Jean Yarger, Dan Zucker'
title: 'SEGUE: A Spectroscopic Survey of 240,000 stars with $g=$14–20'
---
Introduction
============
Stellar Spectroscopic Surveys
-----------------------------
A large-scale study of the Milky Way is important to our general understanding of galaxy structure and formation. It is only in our own ‘backyard’ that great numbers of stars, the building block of all galaxies, may be observed individually, with their collective properties serving as constraints on theories of galaxy formation and evolution. Spectroscopic data of individual stars can provide a much richer variety of information on both stellar kinematics and stellar atmospheric parameters than is possible with photometric measurements alone.
The first large area spectroscopic surveys used objective prism plates, including the fundamental surveys for spectroscopic classification [@cannon18; @mkk43; @h78] and more specialized surveys for unusual stars [@cameron56; @nassau65; @sb71; @bm73]. Later objective prism surveys focused on extremely metal-poor stars [@bps85; @norbert] and on halo giants [@kavan; @fm90]. Objective prism surveys had the advantage of rapidly recording a large number of stellar spectra over significant solid angles, but had disadvantages such as a bright limiting magnitude and an inability to accurately calibrate the spectra.
“Aperture" spectroscopic surveys using modern spectrographs have been rarer, in part because of the huge investment of telescope time required to assemble substantial-sized samples. They include the “Spaghetti" survey for halo giants [@metal00], the SIM Grid Giant Star Survey [@simgrid], the detailed chemical study of @eetal93 and its various sequels, and the monumental survey of @netal04 who studied nearby, bright F and G stars and obtained metallicity, temperature and age information from Strömgren photometry and accurate radial velocities (RVs) from CORAVEL and other spectrographs.
Multiobject spectroscopic surveys (first implemented with plug boards, or slit masks, then with automated positioners), which provide a large gain in efficiency, have also been done with specific scientific goals in mind. A number of programs [@kg89; @ig95; @igi94; @gwj95; @gwn02] have searched for coherent structures in the halo. The RAVE Survey [@setal06; @zetal08], which focuses on bright stars of all colors ($9 < I < 13$), produces accurate velocities and estimates of stellar parameters from a small spectral region including the Ca II infrared (IR) triplet.
Orthogonal to the volume-limited or spectral-type specific surveys have been the compilations of homogeneous spectroscopic atlases of a few hundred objects [@gs83; @jhc84; @p85] obtained with electronic scanners and imaging tubes. The members of these catalogs were selected to sample objects of all spectral types with at least one example of each temperature and luminosity class. These catalogs do not give relative numbers of stars in the different spectral categories, however, and they may miss some rare categories, especially the low-metallicity stars.
Looking to the future, the Gaia space-based mission [@petal01] plans to obtain proper motions (and precise positions) of approximately one billion stars to $g \sim 20$, with RVs for all of the brighter objects with $g < 17$ [@katz04; @wketal05]. Gaia, when underway, will represent a leap forward of several orders of magnitude in our knowledge of the kinematics, structure and evolution of our Galaxy.
Stars and the SDSS
------------------
The Sloan Digital Sky Survey (SDSS; York et al. 2000) is primarily an extragalactic survey that has obtained multicolor imaging, photometric to 2%, of nearly 8000 $\rm deg^2$ of filled contiguous sky toward the Northern Galactic Cap, and 700 $\rm deg^2$ in three stripes in the South Galactic Cap near the celestial equator. Spectra have been acquired for one million galaxies and one hundred thousand quasars. The major science program consists of constructing a large three-dimensional map of the universe and constraining cosmological models; see, e.g. @betal01 [@bletal03; @fetal03; @thketal04; @tetal04; @eetal05; @retal06].
A significant product of the SDSS was a large number of Milky Way stellar spectra combined with deep, accurate multicolor photometry. This led to several (initially) serendipitous Galactic structure, Galactic halo, and M31 halo science results; see @ietal00 [@yetal00; @netal02; @retal02; @wetal05; @yetal03; @netal03; @zetal04a; @zetal04b; @betal06a; @betal06b; @apetal06; @betal07; @betal07a; @betal07b; @ketal07; @xdh07; @xetal08; @jetal08].
Near the conclusion of the original SDSS program in 2004, partially as a result of the productive Galactic science enabled by the SDSS, a set of three individual surveys (under the umbrella designation of SDSS-II) were designed: 1) Legacy: a survey following the same goals of the original SDSS, to complete the SDSS imaging and spectroscopic footprint; 2) SN Ia [@fetal08]: a well-calibrated, systematic survey for 200 intermediate redshift ($0.1 < z < 0.4$) type Ia supernovae, filling an important gap in redshift coverage, and anchoring the calibrations of higher redshift supernova surveys; and 3) Sloan Extension for Galactic Understanding and Exploration (SEGUE), an imaging and spectroscopic survey of the Milky Way and its surrounding halo. SDSS-II operated from 2005 August to 2008 July at Apache Point Observatory, building on SDSS, which operated from 2000 August until 2005 July.
The SEGUE Survey is the subject of this paper. The processed, searchable data archive from SEGUE was made publicly available in the Fall of 2008 as part of SDSS-II Data Release 7 (DR7). With few exceptions, all stellar spectral types are represented in the SEGUE Survey. Notable missing categories include luminous Population I early types, such as O, B, and Wolf-Rayet stars, and some Population I giants, which are not targeted because they are generally too bright for SEGUE observations if they are in the solar neighborhood, and are too rare or too obscured by dust toward the Galactic center to be seen at greater distances. Samples of spectrophotometrically calibrated stars of a wide variety of spectral types are presented in §3. A defining part of SEGUE is the creation and release of its public database, which can be mined to enable a whole range of astrophysics projects not conceived of when the survey was carried out.
Survey Goals and Footprint
==========================
SEGUE Goals
-----------
The original, five year SDSS program demonstrated the existence of significant spatial substructure in the stellar halo of the Milky Way discovered from photometric data, from which stellar distance estimates were obtained primarily for bluer (A and F) stars. These substructures cast doubt upon previous measurements of a presumed axially symmetric spheroid stellar component of the Galaxy with a smoothly varying power law density structure. Discovery of the substructure also created a tremendous need for follow-up spectroscopy of each structure, so that stellar population and orbital information for the debris could be determined.
SEGUE was designed to sample the stellar spheroid at a variety of distances, from a few kpc to a hundred kpc, in 200 “pencil beams," spaced around the sky so that they would intersect the largest structures. At the time SEGUE was designed, we knew about the Sagittarius Dwarf spheroidal tidal debris stream and the controversial Monoceros stream in the Galactic plane. Both structures were 6–10 kpc across and were believed to extend all of the way around the Milky Way. We expected that additional substructure would be discovered, and that it was most important to identify and characterize the largest structures; without knowledge of the spatial variation of the spheroid at 10 kpc scales, it was difficult to positively identify smaller or lower surface brightness structures.
SEGUE augmented the photometric data from the SDSS/Legacy Surveys to sample the sky approximately every 15$^\circ$, in all parts of the sky accessible from the telescope’s latitude. This included adding photometry at low latitudes ($|b| < 35^\circ$) and additional photometry in the South Galactic Cap. Two hundred pencil beams were selected for spectroscopy because they could be arranged to sample the sky at intervals of 10$^\circ$ to 15$^\circ$, and two observations of each pencil beam could be made in the three-year duration of SDSS-II.
Spectroscopic target selection was designed to maximize the science from SEGUE stellar spectroscopy; in particular we wished to study the Milky Way’s chemical and dynamic formation history and to constrain the Galaxy’s gravitational potential.
The target selection strategy that was used to achieve these general goals devoted most of the fibers on each pencil beam line of sight to sampling the stellar populations of the Galaxy on large (tens of kpc) scales, including spheroid substructure and global properties of the thin and thick disk components. In addition, a small subset of the fibers was devoted to unusual stars, including those thought to have low metallicity (\[M/H\] $< -2$) or to be otherwise unusual based on their colors and velocities (as determined by photometry and proper motions). In addition, we specifically targeted star clusters with a variety of ages and metallicities so that this important spectral database could be well calibrated.
To meet these goals, SEGUE has produced 1) an imaging survey of 3500 $\rm deg^2$ of $ugriz$ imaging with the SDSS telescope and camera [@getal98; @getal06], and 2) a spectroscopic catalog that spans the stellar populations observable over the magnitude and Galactic latitude range of the data at a resolution of $R \sim 1800$. The spectroscopic catalog includes estimates of the observational parameters (position, RV, multicolor photometric and spectrophotometric magnitudes), as well as the derived, modeled parameters (including $[\rm M/H]$, surface gravity, and $T_{\rm eff}$) for all observed stars in a systematic and well-calibrated fashion.
SEGUE Imaging
-------------
The original SDSS imaged most of the North Galactic Cap plus three stripes of data in the South Galactic Cap; regions of low Galactic latitude ($|b| < 35^\circ$) were largely excluded by design. The SEGUE imaging footprint was designed to allow the selection of spectroscopic targets in as broad a range of sky directions as possible, to enable study of the important transition zones between our Milky Way’s disks and stellar halo, to include a large and varied sample of Galactic star clusters that could be used for calibration, and to ensure that photometric calibration would be feasible (i.e. avoid zones of extreme and variable extinction).
The low-latitude SEGUE imaging area includes 15 2.5$^\circ$-wide stripes of data along constant Galactic longitude, spaced by approximately $20^\circ$ around the sky. These stripes probe the Galaxy at a wide variety of longitudes, sampling the changing relative densities of the global Galactic components (thin disk, thick disk, halo). The SEGUE stripes are not precisely evenly spaced in longitude; the $l$ of several stripes were shifted by up to $8^\circ$ so that several known open clusters near the Galactic plane could be optically imaged. We added two stripes of data in the South Galactic Cap. Because spectra of stars toward cardinal Galactic directions ($l$ near $90^\circ, 180^\circ, 270^\circ, 360^\circ$) are important for generating a simplified kinematic analysis of a very complex dynamic Galaxy, the two SEGUE stripes (at $l=94^\circ$ and $l=178^\circ$) that nearly coincide with cardinal pointings were extended to give nearly complete pole-to-pole imaging and more complete spectroscopic plate coverage than at other longitudes. Where possible, the SEGUE stripes were designed to cross other SDSS imaging data at multiple locations to facilitate photometric calibration.
Figure 1 shows the constant-longitude and two Southern imaging stripes chosen to augment the original SDSS footprint in Equatorial (top) and Galactic coordinates (bottom). We sample the sky in all directions that are accessible to the Apache Point Observatory; since the observatory is at a Northern latitude of $32^\circ$, essentially no SEGUE data are obtained at declinations $\delta < -20^\circ$. The Galactic anticenter ($\delta = 29^\circ$) is well sampled, but the Galactic center ($\delta = -29^\circ$) is not. The stellar population of the bulge is largely inaccessible and obscured by dust in this optical survey.
The SEGUE imaging scans (i.e., data not associated with the Legacy SDSS Survey) are tabulated in Table 1. All SEGUE imaging was obtained between 2004 August and 2008 January. Note that each SEGUE stripe is $2.5^\circ$ wide. Stripes 72 and 79 follow standard numbering conventions of the original SDSS imaging survey. Stripes with four digit numbers run along constant $l$, running for variable extents in $b$. The formula for converting the SDSS Survey coordinates $(\mu,\nu)$ for a particular node and inclination to the Equatorial coordinates J2000 $(\alpha, \delta)$ may be found in @setal02.
Except in regions of high stellar density, the processing and calibration of the SEGUE imaging data are the same as that of the SDSS imaging data [@betal03b; @fetal96; @hetal01; @petal03; @smetal02; @setal02; @tetal06]. A modified version of the SDSS PHOTO processing pipeline software (R. H. Lupton et al., 2010 in preparation) was combined with the Pan-STARRS [@m06] object detection code, and was run on all lower latitude SEGUE imaging scans. The key modifications were: 1) the PHOTO code was optimized for primarily stellar objects by truncating fits to the wings of extended sources at essentially the point-spread function (PSF) radius, 2) the PHOTO code’s object deblender was allowed to deblend groups of closely spaced objects into more ‘child’ objects than in standard high latitude (less crowded) SDSS fields, and 3) Pan-STARRS threshold object detection code was used. This code generates more complete object lists in regions of high stellar density, and was used to supplement PHOTO’s object detector. Imaging scans which participated in this low-latitude processing are tagged in the data archive with a rerun (reprocessing version) number of 648. This is in contrast to standard SDSS and SEGUE PHOTO reprocessing numbers $40 \le \rm rerun \le 44$. Except in regions of high stellar density, the magnitudes from this version of PHOTO are interchangeable (within the errors) with the version used to process the rest of the SDSS and SEGUE imaging data. At high stellar density the choice depends on the needs of the particular investigation. Both the PHOTO PSF magnitudes and the Pan-STARRS aperture magnitudes are available for comparison in the DR7 data archive for low latitude regions of sky. We refer the reader to the DR7 paper [@strauss09] for more discussion of this reprocessing of the imaging data, and to the documentation on the DR7 Web site. 
At the density extreme, several globular clusters present in the SDSS and SEGUE footprints are analyzed by @aetal08 using independent software (see below).
As in SDSS DR6 [@dr6], the photometric zeropoint calibration of the SEGUE imaging has been enhanced by the procedure described in @petal08. This procedure finds a simultaneous global fit for the individual imaging scans' photometric zeropoints, extinction coefficients, and flat-field characteristics (of all 30 SDSS camera CCDs), relying on the overlap between SEGUE and SDSS Legacy scans to improve the absolute zeropoint accuracies in the $gri$ filters to $<1\%$ over most of the sky.
SEGUE Spectroscopy
------------------
SEGUE leveraged the unique features of the SDSS telescope and spectrographs (namely the ability to go deep and wide, with broad spectral coverage and high spectrophotometric accuracy) to acquire spectra of $\sim$ 240,000 stars of a wide variety of spectral types (from white dwarfs (WDs) on the blue end to M and L subdwarfs on the red end), probing a wide range of distances (from $<10$ pc to $>100$ kpc). The SDSS spectrographs used for SEGUE are a pair of highly efficient, fiber-fed, dual-CCD spectrographs with wavelength coverage from 3900Å to 9000Å at resolving power $R \sim 1800$. The twin spectrographs can simultaneously record data from 640 fibers in a 7 $\rm deg^2$ field of view; 7%–12% of the fibers are reserved for sky signal and calibration targets (such as spectrophotometric standard stars, generally chosen from color-selected F subdwarfs with $16 < g < 18.5$).
SEGUE took spectra of stars in the magnitude range $14 < g < 20.3$. At $g\sim 14$ the SDSS spectrographs saturate in a 300 second exposure. Objects down to $r = 18.5$ can routinely be observed with signal-to-noise ratio (S/N) $>$ 30, sufficient for RVs good to 4 $\rm km\>s^{-1}$ and metallicity $\rm [M/H]$ measurements accurate to 0.2 dex for a wide variety of spectral types (A-K). At $g \sim 20.3$ we were able to obtain $\rm S/N \sim 3$ in 2 hours of integration time under photometric conditions with seeing of $2''$ or better (all S/N quotations are per $\rm 150 \>km~s^{-1}$ resolution element).
Spectroscopic plate pointings sparsely sample all areas of the sky with available imaging (Figure 1), probing all the major known Galactic structures (thin and thick disk, halo, and streams) with the exception of the bulge, which is below our Southern declination limit. To study the detailed structure of these Galactic components, the density of targets is made high enough that the velocity distribution of one homogeneous subset of stars (the G dwarfs) in one distance bin (an interval of one apparent magnitude) may be determined to be either consistent or distinct from a Gaussian. This requires at least 40 targets of one spectral type (G dwarfs) per magnitude interval (14th through 20th) per plate pointing. It is this scientific goal which drives the assignment of over 300 fibers per plate pair to G star candidates toward each SEGUE line of sight.
The required RV accuracy is driven by the scientific goal of separating stellar streams with dispersion $\sigma \sim 10\rm \>km~s^{-1}$ from field disk and halo stars with dispersions of $\sigma\sim$ 30 and 100 $\rm \>km~s^{-1}$, respectively. Figure 2 shows the actual relative RV accuracy obtained from SEGUE spectra using quality assurance (QA) stars. These are simply stars at $r\sim 18$ that are observed twice, once on each of the two plates that make up a complete observation of a SEGUE pencil beam. There are $\sim 20$ such pairs for every pencil beam. We restrict our QA sample to those with $S/N > 10$. RVs are measured by cross-correlating each spectrum against a set of $\sim 900$ selected templates taken from the ELODIE high resolution spectroscopic survey [@moultaka04]. The ELODIE templates span a wide range of spectral types and metallicities; very early and very late types, however, are nearly absent. We note that the correlation is done by shifting each spectrum repeatedly, stepping through wavelength space (rather than via fast Fourier transform techniques); while time-consuming, this appears to result in somewhat higher accuracy. The top panel shows the histogram of measured RV differences between all QA stars and a second observation of the same stars on a different plate. Individual errors are $\rm \sigma \sim 4.4 \> km~s^{-1}$. The lower panel shows how the RV accuracy degrades with color and S/N: in general, bluer objects have worse RV errors than redder ones, with blue horizontal branch stars (BHBs) and their broad Balmer lines being the hardest to measure accurately. We expand on the content of Figure 2 in Table 2, where we order the set of spectra with multiple independent observations of the same object by S/N and divide the data for each of the six color ranges into four quartiles.
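As an illustration of the direct (non-FFT) approach just described, the following sketch steps a template through trial velocity shifts on a uniform log-wavelength grid and keeps the best-matching shift. The grid spacing, continuum handling, and dot-product scoring here are simplified assumptions for illustration, not the actual SDSS pipeline code.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def rv_direct_xcorr(loglam, flux, template, v_grid):
    """Direct (non-FFT) cross-correlation sketch: shift the template
    through trial velocities and keep the best match.

    loglam   : uniform log10(wavelength) grid (SDSS-style)
    flux     : continuum-subtracted object spectrum on that grid
    template : continuum-subtracted template on the same grid
    v_grid   : trial velocities in km/s
    """
    dloglam = loglam[1] - loglam[0]
    best_v, best_score = v_grid[0], -np.inf
    for v in v_grid:
        # A velocity v shifts the spectrum by log10(1 + v/c) in
        # log-wavelength, i.e., an integer pixel offset on this grid.
        pix = int(round(np.log10(1.0 + v / C_KMS) / dloglam))
        score = np.dot(flux, np.roll(template, pix))
        if score > best_score:
            best_v, best_score = v, score
    return best_v
```

The velocity resolution of this brute-force scan is set by the grid spacing (one SDSS pixel of $10^{-4}$ in $\log_{10}\lambda$ corresponds to $\sim 69 \rm \>km~s^{-1}$); finer accuracy would come from interpolating the correlation peak.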
For each color range from blue to red, we tabulate N and list by quartile the average (dereddened) magnitude $\bar g_0$, average S/N and the $1\sigma$ velocity error (divided by $\sqrt{2}$ to compensate for the fact that we have two independent measurements). The errors are well behaved as S/N decreases from $>50$ to $<10$. We do not characterize velocity errors for extreme spectral types (WDs on the blue end and late M and L type stars on the red end), as these samples have very large systematic errors, due to very broad spectral features and a lack of standard templates with which to cross-correlate. Follow-up reductions of the SEGUE spectra will include synthetic templates for deriving velocities for stars with unusual Carbon enhancement (S. Thirupathi, 2008, private communication).
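The $\sqrt{2}$ correction applied to the repeat-pair scatter follows from simple error propagation: if each of two independent measurements carries error $\sigma$, their difference has standard deviation $\sigma\sqrt{2}$. A minimal sketch of this estimate (illustrative only, not the survey's actual QA code):

```python
import numpy as np

def per_measurement_rv_error(rv1, rv2):
    """Estimate the 1-sigma per-measurement RV error from pairs of
    independent observations of the same stars: the scatter of the
    differences is sigma * sqrt(2), so divide it back out."""
    diff = np.asarray(rv1) - np.asarray(rv2)
    return np.std(diff) / np.sqrt(2.0)
```

Applied separately per color range and S/N quartile, this is the kind of estimate tabulated in Table 2.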
One component of the total RV error is the systematic error in the zeropoint of each plate. The scatter in that offset between plates, measured from the mean offset of the QA stars for each pencil beam, has zero mean and standard deviation 1.8 $\rm km~s^{-1}$. This scatter of 1.8 $\rm km~s^{-1}$ contributes to the RV uncertainty of all SEGUE stars, and sets the lower bound of our RV errors. To check the overall zeropoint of the RV calibration we used a set of 100 bright field stars observed both with SEGUE and at higher resolution on the Hobby-Eberly telescope [@apetal08]. SEGUE spectra of stars in the outskirts of globular clusters with known RV also provided checks on the RV zeropoint. This analysis resulted in an offset of 7.3 $\rm km~s^{-1}$ with very small scatter. This offset was applied to SEGUE RVs (but not to the $z$ redshift measurements taken directly from the spectroscopic pipelines) in the final DR7 catalog. The origin of these systematics is not completely understood, but they are thought to be associated with telescope flexure or night-sky line fitting errors when computing wavelength solutions for each merged SEGUE plate, which consists of a number of exposures taken over several hours, often spanning multiple nights (see below).
Our scientific goals require that for a large fraction of the stars with $g < 18.5$, sufficient S/N be obtained so it is possible to reliably estimate the metallicities $\rm [M/H]$ and luminosity classes (dwarf vs. subgiant vs. giant) for stars of spectral types A-M well enough to separate stream stars from field disk populations from halo populations. An S/N $> 10$ is required to measure these stellar atmospheric parameters, with more accurate measurements at higher S/N. This drove the integration time for the SDSS spectrographs to about 2 hours for a $g=18.5$ object and 1 hour for a $g=17$ object.
The actual SEGUE spectroscopic survey pointings are shown as blue circles in Figure 1. There are 212 SEGUE pointings on the sky, all listed in Table 3. These pointings are divided into the following categories: 1) ‘Survey’, 172 approximately evenly spaced pointings around the SDSS and SEGUE imaging area, separated by no more than $20^\circ$ from the next nearest pointing, sampling all directions without regard for known structures or streams; 2) ‘LLSurvey’, 12 pointings at low latitude, where a separate target selection algorithm is used that functions even in highly reddened lines of sight; 3) ‘Strm’, 16 pointings toward five previously discovered stream-like structures around the halo of the Milky Way, such as the Sagittarius or Orphan streams; 4) ‘Cluster’, 12 pointings toward globular or open clusters of known $\rm [M/H]$, for purposes of calibrating the metallicity and luminosity pipelines; and 5) ‘Test’, five early SEGUE pointings to test the target selection algorithms at a variety of latitudes and to test the RV accuracy of the survey. Six pointings are duplicated. Note that a significant fraction of SEGUE spectroscopy relies upon the SDSS Legacy Survey imaging in the North Galactic Cap.
A SEGUE (and SDSS) spectroscopic plate is a circular disk of machined aluminum, with a diameter of $0.75$ m, corresponding to an angular on-the-sky radius of $1.49^\circ$. A small hole, holding one fiber, is drilled at the position of each object of interest. Each plate may have up to 640 object holes, which are fed to twin 320-fiber spectrographs. Target holes are restricted to being no closer together than $55''$ on the sky [@setal02]. The total area of each plate is approximately 7 $\rm deg^2$ on the sky.
Because there are many more than 640 stars per 7 $\rm deg^2$ down to $g = 20.3$, and because the SEGUE magnitude limits $14 < g < 20.3$ span more than a factor of 100 in apparent brightness, we observe each SEGUE pointing with two SDSS-style plates, each with a maximum of 640 fibers. Recall that for SDSS extragalactic observations, only one plate at each position was designed, to match the sampling of approximately 100 galaxies per square degree for objects with $r_{\rm extended} < 17.77$.
One plate of the pair is called the SEGUE bright (or SEGUE regular) plate, and consists of holes targeting stars with $14.0 < r < 17.8$, exposed for typically 1 hr. The bright magnitude limit is set by the saturation of the spectrographs for a 300 s exposure, as well as cross-talk considerations between the brightest and faintest objects in adjacent fibers on a given plate. The second plate of 640 fibers, designated the SEGUE faint plate, primarily consists of stars with $17.8 < r < 20.1$, exposed for a total of typically 2 hr.
About 20 stars per pointing with $r \sim 18$ are targeted twice, on both the bright and faint plates. These objects are called QA objects, and, as mentioned above, are used to determine the systematic reproducibility of RVs and other derived parameters from plate to plate.
For the special case of ‘Cluster’ plates, where it is desired to obtain spectra of bright nearby globular cluster giant branch stars for calibration, the SDSS spectrograph saturation limit is extended by taking short (1–2 minute) exposures, allowing one to sample stars as bright as $g\sim 11$. These short exposures, however, have little sky signal, and it is thus difficult to do an accurate wavelength calibration, because the final step in the calibration process depends on the positions of fixed, known night-sky lines (such as Hg, Na, and \[O I\]). The bright plates have 32 fibers reserved for blank sky and 16 reserved for spectrophotometric standards; the faint plates have 64 fibers reserved for blank sky and 16 fibers reserved for spectrophotometric standards. This approach leaves approximately 1152 fibers available for science targets in each 7 $\rm deg^2$ pointing. The number of sky fibers was determined by the need to maximize target S/N in fiber-fed multiobject spectrographs, as discussed in @wg92. The spectrophotometric standards (primarily halo F subdwarfs) from SEGUE and the SDSS Survey constitute a valuable sample in themselves for numerous Galactic structure studies [@apetal06; @cetal07].
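The fiber accounting above can be verified directly (all numbers taken from the text):

```python
# Fiber budget for one SEGUE plate pair (bright + faint plates).
FIBERS_PER_PLATE = 640

bright_reserved = 32 + 16  # blank sky + spectrophotometric standards
faint_reserved = 64 + 16   # blank sky + spectrophotometric standards

science_fibers = 2 * FIBERS_PER_PLATE - bright_reserved - faint_reserved
print(science_fibers)  # 1152 science fibers per 7 deg^2 pointing
```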
Altogether, there are 416 plates in the SEGUE database; all but 17 of the 212 SEGUE pointings have a bright and faint plate of $\sim 576$ targets each. Individual 10–30 minute exposures were obtained, sometimes on successive nights, until the desired S/N for each SEGUE plate was reached. All common exposures for a given plate and plugging (fibers are plugged into the metal plates by hand and the plugged plates can be moved into and out of the focal plane as many times as necessary to reach a desired S/N value) were combined and uniquely identified by four digit plate number and by the Modified Julian Date (MJD) of the last night on which a given plate-plugging was observed. The plate names and identifying observation dates (MJDs) are all in Table 3. The total number of unique SEGUE spectra is approximately 240,000.
SEGUE spectra are processed with the same basic pipelines used to process the SDSS data [@setal02]. The pipelines have been modified slightly to enhance the radial velocity accuracy of stellar spectra. It should be noted that a handful of SEGUE plates were obtained under moon illumination fractions of greater than 0.85, and a careful analysis indicates that the wavelength solutions of these 10 plates are systematically off by as much as $10 \rm \>km~s^{-1}$. These plates are marked in Table 3 with asterisks.
In addition to the standard extraction and RV reduction pipelines of SDSS, the SEGUE plates have been processed through an additional ‘SEGUE Stellar Parameters Pipeline’ (SSPP) that estimates the metallicity $\rm [M/H]$, surface gravity ($\rm log~g$) and effective temperature ($T_{\rm eff}$), along with associated errors, for each star with sufficient S/N. Details of the design, operation and error analysis of the SSPP as run on SEGUE spectra are described in @letal08a [@letal08b] and @apetal08. The uncertainties in the SSPP parameters were determined by analysis of star clusters with known metallicities, reddenings, and distances, and by comparison of SSPP-derived parameters with parameters derived from higher resolution spectra of the same sample of field stars. For spectra with $\rm S/N > 30$, which are usually obtained for stars with $g < 18.5$, the errors are $\sigma(\rm [M/H]) \sim 0.2$ dex, $\sigma(\rm log~g) \sim 0.3$ dex, and $\sigma(T_{\rm eff}) \sim 200$ K. These uncertainties are valid for stars with $T_{\rm eff}$ between 4500 K and 8500 K. For stars outside this temperature range, and for stars of lower $\rm S/N$, atmospheric parameters are still computed, but with appropriately higher error estimates. Table 4 contains a list of SSPP quality flags which are set for every spectrum. These flags, when set, indicate something unusual about a given spectrum, e.g., one with unusually strong Balmer, Mg or Na lines, or one with a mismatch between the photometric $(g-r)_0$ color and the spectroscopic type of the star. Within the context of DR7, a practical quality cut used to select only stars (and avoid galaxies, quasars, and low-S/N spectra of the sky) is to insist that a given spectrum have a strictly positive error, $elodierverr > 0$, on its ELODIE template cross-correlation RV.
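As a sketch, the DR7 quality cut described above amounts to a one-line predicate. The dict-style record below is illustrative; only the field name $elodierverr$ is taken from the text, not the actual CAS schema layout.

```python
def is_good_stellar_spectrum(record):
    """Keep a spectrum only if its ELODIE cross-correlation RV error
    is strictly positive, per the practical DR7 quality cut; this
    rejects galaxies, quasars, and low-S/N sky spectra.
    `record` is an illustrative dict-like row, not the real CAS schema."""
    err = record.get("elodierverr")
    return err is not None and err > 0
```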
TARGET SELECTION BY CATEGORY
============================
General SEGUE Target Selection Principles
-----------------------------------------
SEGUE targets are selected primarily from photometry in the $ugriz$ SDSS filter system [@fetal96]. For a few categories (cool WD, K giant, and M subdwarf candidates), the presence (or absence) and amplitude of a proper motion measurement from a match to an astrometric catalog [@metal04; @l08] is also used.
The broad science goal of characterizing large-scale stellar structures in the Galaxy, combined with the specific goal of studying halo streams, informed the target selection algorithms’ design. The additional goals of finding rare but scientifically interesting samples of objects of unusually low metallicity, odd spectral type, or extreme kinematics were also factors.
The variety of science goals dictated a target selection algorithm that sampled stars at a variety of distances, favoring those at large distances. The SEGUE Survey targets objects at a variety of colors and apparent magnitudes to probe distances from 10 pc (with WDs and M and K dwarfs) to the outer halo at $d \sim 100 $ kpc (with BHB and red K giant stars). At intermediate distances, a target selection category denoted ‘G star’ (which contains some G IV subgiants) is sampled over the entire SEGUE magnitude range $14.5 < r_0 < 20.2$, and effectively probes the thick disk to inner halo transition region around the Galaxy with a large, unbiased sample.
SEGUE targets were divided into 15 different target categories, which spanned the range from the bluest (WD) to reddest (spectral type L brown dwarfs). Table 5 lists the SDSS/SEGUE Primary target bit in hexadecimal in Column 3. A search of the database may be done for objects with Primtarget matching these bits in the SpecObjAll table of the CAS (see below). The fourth column of Table 5 lists the magnitude, color, and proper motion cuts for each target type. When more stars than fibers in a given category are available for targeting, weighting mechanisms are used to subselect from candidates within a given category. These weighting mechanisms generally were designed to randomly subselect from all possible stars in a given category in a given 7 $\rm deg^2$ field, with the probability of selection weighted by either magnitude (favoring brighter objects over fainter) or by color (favoring blue objects over red, except in the case of K giants). The colors used for target selection were in some cases generalized linear combinations of the SDSS $ugriz$ generated colors. These generalized colors were designed to run parallel to or perpendicular to the color-color space stellar locus of (dereddened) Galactic stars [@lenz; @helmi03]. There is a maximum number of targets accepted in any given pointing in each category (see Column 5 of Table 5 which lists the maximum number of targets allowed per pointing, the approximate total number targeted by category during SEGUE, and a rough estimate of the fraction of the spectra in each category that turn out to be of the type that was targeted).
The SEGUE target selection algorithms were not perfected immediately, and for some of the categories, several versions of the algorithm exist, as indicated in Column 2 of Table 5. The final version of SEGUE target selection is designated v4.6, with earlier revisions having lower version numbers. The significant changes from earlier versions of the target selection cuts for each target type are tabulated, also in Table 5. Table 6 indicates the range of SEGUE plate numbers which correspond to each version of SEGUE target selection.
In general, colors of all objects are dereddened by their full @sfd98 extinction value before applying the various target selection cuts described in Table 5 and below. Exceptions to this are the white dwarf/main-sequence (WD/MS) binary, the esdM, and the legacy brown dwarf and legacy WD categories, where applying the full dust correction, assuming the star lies behind the dust screen, is not correct, and thus uncorrected colors are preferred.
For each pointing, candidate lists of all objects which match the color and reduced proper motion cuts for each of the 15 categories are generated. Each of the 15 candidate lists is then sorted, usually randomly, but by magnitude in some cases (red K giants, low-metallicity categories). After guide stars, blank sky patches and spectrophotometric and reddening standards are assigned hole positions on each plate, science target assignment begins in a round-robin fashion: the lists of possible objects in each of the 15 categories are examined in turn and the first object in each list is assigned a fiber (assuming no $55''$ collision with prior targets). After selecting a target from the fifteenth category, the algorithm returns to the first category and the process repeats until all fibers are assigned. Categories are eliminated from consideration for target assignment when they reach their maximum fiber allocation or when there are no more candidates on a given list. This allows the categories with only a few targets per plate pair (cool WD, sdM, brown dwarf) to always have their candidates targeted, while several large categories (BHB, F, low-metal candidates, G, K giant) take up the bulk of the fibers on each plate pair.
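The round-robin assignment described above can be sketched as follows. The data structures are hypothetical and the $55''$ collision check is omitted for brevity; this illustrates the category-cycling logic, not the actual plate-design software.

```python
def assign_fibers(candidate_lists, max_per_category, n_fibers):
    """Round-robin fiber assignment sketch for one plate pair.

    candidate_lists  : dict {category: pre-sorted list of target ids}
    max_per_category : dict {category: fiber cap for that category}
    n_fibers         : science fibers left after guide stars, sky
                       patches, and standards have been assigned
    """
    assigned = {cat: [] for cat in candidate_lists}
    queues = {cat: list(t) for cat, t in candidate_lists.items()}
    active = [cat for cat in candidate_lists if queues[cat]]
    total = 0
    while active and total < n_fibers:
        # Visit each still-active category in turn, taking one target.
        for cat in list(active):
            if total >= n_fibers:
                break
            assigned[cat].append(queues[cat].pop(0))
            total += 1
            # Retire a category when it hits its cap or runs dry.
            if not queues[cat] or len(assigned[cat]) >= max_per_category[cat]:
                active.remove(cat)
    return assigned
```

Because exhausted or capped categories drop out of the cycle, small categories (e.g., cool WDs) always get all their candidates targeted while the large categories absorb the remaining fibers, as the text describes.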
The overall picture of where target categories are located in color and proper motion space is shown in Figure 3. The $(g-r)_0$ optical color can be used as proxy for effective temperature, and it provides a reasonable estimate of spectral type. For a star with proper motion $\mu$ in $\rm arcsec ~yr^{-1}$, we define a reduced proper motion in the $g$ filter: $H_g = g + 5{\rm log} ~\mu + 5.$
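A direct transcription of this definition (with $\mu$ in $\rm arcsec~yr^{-1}$, as in the text):

```python
import math

def reduced_proper_motion_g(g_mag, mu_arcsec_per_yr):
    """Reduced proper motion H_g = g + 5 log10(mu) + 5, with the
    proper motion mu in arcsec/yr, as defined in the text."""
    return g_mag + 5.0 * math.log10(mu_arcsec_per_yr) + 5.0
```

For example, a star with $g = 18$ and $\mu = 0.1 \rm ~arcsec~yr^{-1}$ has $H_g = 18 + 5(-1) + 5 = 18$.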
Targets are selected as PRIMARY detections (duplicates and objects from overlapping scans removed) of objects in the DR7 database with stellar (not extended) PSFs (the “STAR" table in the database). Candidates are required to pass the following SDSS flag cuts on the quality of their photometric imaging data: objects must not be saturated (flag bit: SATURATED), must not be close to the EDGE, must not have an interpolated PSF (INTERP\_PSF), and must not have an inconsistent flux count (BADCOUNTS). In addition, if the center is interpolated (INTERP\_CENTER), there should not be a cosmic ray (CR) hit indicated. These flags are set by the PHOTO pipeline for every object, and details on all possible flags and their meanings can be found on the SDSS Web site: http://www.sdss.org. A few categories below, such as the L brown dwarf category, require stricter flag cuts.
We now give details of each target selection algorithm by category, as summarized in Table 5, and present representative spectra of the various target types. The sample spectra are plotted with flux in units of $\rm 10^{-17}~ergs~cm^{-2}~s^{-1}~\AA ^{-1}$ on the y-axis versus wavelength in Å on the x-axis. If more than one spectrum appears in a plot, then additional spectra are offset by an arbitrary amount for readability. Spectra are smoothed from 1.5 to three $150 ~\rm km~s^{-1}$ resolution elements, depending on the S/N. Common spectral features are indicated by line name (and the occasional night-sky feature by ‘NS’). Spectra are labeled with their unique SDSS/SEGUE three part id (plate-mjd-fiberId), as well as with relevant magnitudes, colors or atmospheric quantities from the SSPP analysis. All example spectra, along with their measured parameters, can be found in the DR7 data release by using their three-part id to look them up.
White Dwarfs, sdB, sdO
----------------------
WDs are important for absolute calibration of the astrophysical temperature and flux scales, including calibration of filter systems that span ultraviolet (UV), optical and IR wavelengths [@hb06; @b07]. The SDSS WD catalogs of @ketal04 and @eetal06 contain an extensive list of WDs discovered in the SDSS imaging and spectroscopy.
A goal of SEGUE is to continue this survey of hot WDs by obtaining spectra of most WDs with $\rm T_{\rm eff} > 14000~K $ and most hot subdwarf stars, while excluding most QSOs. SEGUE obtained spectra of 4069 hot WD candidates, of which about 62% appear to be DA type WDs while roughly 15% are other types of WDs. Other hot stars, designated sdB and sdO types [@gsl86], most of which are extreme horizontal branch stars, are also selected by this SEGUE WD color box. Roughly 10% of the objects selected as hot WD targets are QSO or emission-line galaxy contaminants. Figure 4 shows sample SEGUE DA WD, sdB and sdO spectra.
Cool White Dwarfs
-----------------
Cool WD stars, the fossil remains of an ancient stellar population, offer a window into the early stages of the Galaxy and its formation. They can be used to place lower limits on the ages of various Galactic components, extend our knowledge of stellar evolution, and provide hints of star formation processes during the Galaxy’s earliest epochs. Very cool (ultracool) WDs with hydrogen in their atmospheres exhibit a unique spectral signature due to collision-induced absorption (CIA) by molecular hydrogen. In pure H-atmosphere WDs, CIA is mediated by H2–H2 collisions that produce a flux suppression in the IR at temperatures below about 5000 K, resulting in objects of very unusual color.
SDSS has proven to be an excellent database in which to search for ultracool WDs. To date, 14 new ultracool WDs have been discovered in SDSS spectral data (Harris et al. 2001; Gates et al. 2004; Harris et al. 2008; Hall et al. 2008), constituting the majority of known ultracool WDs. Additional cool WD candidates have been identified in SDSS photometric data (Kilic et al. 2006; Vidrih et al. 2007). Several extremely faint, high proper motion cool WDs have recently been confirmed in the SDSS repeat-scan (stripe 82) data [@scholz08].
SEGUE presents a unique opportunity to find more of these rare objects. Recent SDSS studies of WDs (Kilic et al. 2006 and Harris et al. 2006) have demonstrated the usefulness of a reduced proper motion cut for selecting candidates from the photometric data, and a similar procedure is used to identify targets in the SEGUE imaging data. Objects which satisfy the Table 5 selection cuts in color and reduced proper motion are targeted as cool WDs and assigned spectral fibers. All selected targets are required to have a good proper motion match (as defined in Kilic et al. 2006) in order to obtain a reliable reduced proper motion $H_g$.
For target selection versions prior to v3.3, cool WDs were allotted a maximum of 10 fibers per plate pair; however, analysis of preliminary SEGUE data observed in 2004 revealed that selection cuts frequently yielded fewer than this, with occasional fields containing 11 to 15 target objects while the overall average remained less than 10. As a result the final target selection algorithm targets all objects which satisfy the selection criteria (increasing the number of ultracool WD fibers to more than 10 if necessary for a specific plate pair), ensuring that we obtain spectra for all candidates. This reduced proper motion selection algorithm also allows us to target all low luminosity WDs and most high-velocity WDs and identify cool WD candidates that exhibit milder CIA suppression.
SEGUE targeted about 1187 cool WD candidates. While an analysis of the full SEGUE cool WD set is currently underway, preliminary results show that the selection by reduced proper motion is yielding a high return of cool and high-velocity WDs. Out of 16 plate-pairs studied, 60% of the targets are cool DA type WDs, 18% are DQ or DZ type, and 12% are DC type, while 10% are contaminants (nondegenerate stars or QSOs). Of the rare ultracool WDs that are a principal goal of this selection category, two of the 15 published SDSS ultracool WDs (SDSS J0310-01 and SDSS J2239+00) were targeted in this category and recovered in early SEGUE spectra. Figure 5 shows a sample cool WD spectrum from SEGUE.
BHB, A main sequence, Blue Stragglers
-------------------------------------
BHB stars in the halo are important distant standard candles for mapping structure in the outer halo [@yetal00]. They may be seen to distances of 80 kpc or more and, along with red K giants, are our most distant stellar halo probe of kinematics [@netal03]. The recent work of @xetal08 shows how this sample probes large enough distances in the halo to constrain the mass of the Milky Way at $d \sim 50 $ kpc. A linear combination of the SDSS colors, $v = -0.283(u-g) -0.354(g-r) +0.455(r-i) + 0.766(i-z)$ [@lenz] is somewhat helpful in separating BHBs from higher surface gravity blue stragglers (BS), and it is used to weight selection of targets toward BHBs. BS are surprisingly common in the halo, and are thought to be the result of binary evolution. They are an interesting population in their own right, though halo members can be difficult to separate from disk populations at brighter magnitudes [@mom07]. SEGUE placed fibers on 24,688 BHB/BS candidates, and about 66% of the resulting spectra are auto-classified as BHB/BS types. Figure 6 shows examples of BHB and BS spectra.
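The $v$ color defined above is a fixed linear combination of SDSS colors and is trivial to compute; a direct transcription (magnitudes assumed dereddened):

```python
def v_color(u, g, r, i, z):
    """Surface-gravity-sensitive color combination used to weight BHB
    versus blue-straggler target selection (coefficients from the text):
    v = -0.283(u-g) - 0.354(g-r) + 0.455(r-i) + 0.766(i-z)."""
    return (-0.283 * (u - g) - 0.354 * (g - r)
            + 0.455 * (r - i) + 0.766 * (i - z))
```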
F turnoff, Plate Spectrophotometric Standards
--------------------------------------------
F turnoff stars are an extremely numerous, relatively luminous category, and their spectra are very clean and amenable to accurate spectroscopic analysis [@rzetal06]. The number of F turnoff stars available toward a given SEGUE pointing far exceeds the number of fibers available; thus, in targeting this category, SEGUE used linear combinations of the $(u-g)_0$ and $(g-r)_0$ colors to favor the selection of lower metallicity, halo objects. This is done by using the ‘s’ (perpendicular to the stellar locus) and P1(s) (parallel to the stellar locus) colors as described in @helmi03. These colors essentially “straighten out" the stellar locus in the vicinity of the turnoff and allow a simple cut on P1(s) to favor halo subdwarfs, even as the relative density of thick disk versus halo turnoff stars changes rapidly as a function of magnitude. For stars with $g < 19$, the $S/N$ is generally high enough for the SSPP to derive atmospheric parameters with relatively small uncertainties. In the faintest magnitude bin ($19 < g < 20.5$), only the RVs are still accurate, but it is possible to use this large number of F dwarfs to probe halo substructure at distances of $10 < d < 18$ kpc from the Sun [@netal02; @netal07]. Because they are numerous and relatively luminous, SEGUE targeted 37,900 F subdwarfs, plus about 6500 spectrophotometric and reddening standards. About 70% of these candidates yielded spectra classified as type F, with some indication of lower metallicity (\[M/H\] $< -0.7$).
The kinematics of SEGUE F turnoff stars have recently allowed @wetal09 to tightly constrain the orbit of a halo stream [@gd06].
Low-Metallicity Candidates
--------------------------
The $u-g$ color of F, G, and K stars with $0.4<(g-r)_0<0.8$ can be used as a metallicity proxy, in the sense that bluer stars tend to have lower metallicity [@lenz]. We can employ this color cut to restrict the number of spectroscopic targets. For $(g-r)_0>0.8$, $(u-g)_0$ fails as an effective discriminant.
Another consideration when searching for very low metallicity stars is the volume sampled. While low metallicity K and M dwarf stars live much longer on the main sequence than F and G stars, their intrinsic faintness relative to the other spectral types means that our magnitude-limited sample of these stars is dominated by disk stars near the Sun. Relatively few spheroid stars are expected to be observed in the small volume of the Galaxy that we probe with late type dwarfs. Since F “turnoff" stars are significantly brighter (they can be 1–2 mag brighter than their zero-age main-sequence luminosity), and therefore can be seen to greater distances, many of the lowest metallicity (\[Fe/H\] $< -3.0$) candidate objects identified to date have been found to have colors of F turnoff stars. See @apetal08 and @cetal07 for extensive studies of halo F turnoff stars.
SEGUE targeted 30,998 candidates in the low metallicity category. About 12% (4600) of them show an SSPP metallicity of $\rm [M/H] < -2$. About 0.1% (32) indicate metallicity $\rm [M/H] < -3$. The very lowest metallicity candidates will need to be followed up on larger telescopes at higher resolution. We show in Figure 7 a set of turnoff stars, all with similar effective temperatures, that have metallicities ranging from $\rm [M/H] < -3$ to super-solar (the higher metallicity stars in this sequence were not selected from the low-metallicity targeting category).
It should be noted that the present version of the SSPP, described in detail by @letal08a [@letal08b], Allende Prieto et al. (2008), and Re Fiorentin et al. (2007) produces conservative estimates of \[Fe/H\] for stars with metallicities below roughly \[Fe/H\] = -2.7. A recent preliminary analysis of over 80 SDSS/SEGUE stars with SSPP metallicity determinations \[Fe/H\] $<$ -2.7, based on high-resolution spectroscopy obtained with the Subaru/HDS (W. Aoki, 2008, private communication), indicates that the actual metallicity can be 0.3 dex lower than the level determined by the SSPP. This analysis shows that the lowest metallicity stars from this category have \[Fe/H\] = -3.7. With this new recalibration, T. C. Beers et al. (2009, in preparation) assemble some 15,000 stars with \[Fe/H\] $< -2.0$, and several hundred with \[Fe/H\] $< -3.0$. These totals include low-metallicity stars from all of the various target categories used in SDSS/SEGUE.
F/G stars
---------
The F/G target category represents an unbiased random subsampling of the range of stars with colors $0.2 < (g-r)_0 < 0.48$. This distinguishes it from the F subdwarf category (above), which is biased toward objects of lower metallicity. This category was only used in target selection versions v3.3 and later. SEGUE targets 6939 of these, of which 90% are classified as type F or G. Figure 8 shows an example spectrum.
G dwarfs and Sgr subgiants
--------------------------
The G dwarf sample represents SEGUE’s largest single homogeneous stellar spectral category. The target selection is very simple, just a color cut in $(g-r)_0$, and thus is very close to unbiased. With the SEGUE unbiased G dwarf sample, researchers will be able to address the metallicity distribution function (MDF) as well as the kinematic distribution of G dwarfs in a much larger volume of the Galaxy than has been previously attempted [@wg95; @w86].
This sample will also be extremely useful for probing the structure of the Galaxy’s major components, especially using the brighter stars with S/N $> 20$, for which surface gravities can be determined by the SSPP. Subgiant stars (spectral type G IV) in the Sagittarius dwarf tidal debris stream can also be isolated from stars in this selection category.
SEGUE targeted 62,784 G star candidates based on the simple color selection. At least 96% of these yield G star spectra. A significant population (roughly 7%) of evolved (subgiant or giant) spectra are indicated by the SSPP $\rm log~g< 3.75$ indicator. Figure 9 shows a sample G dwarf star spectrum (lower) and a G giant spectrum (upper).
K Giants
--------
K giants are the most luminous tracer available for old stellar populations. They can (albeit rarely) be found with absolute magnitudes $M_g$ as high as $-2.0$. Such stars at $g=18$ are then located at a distance of 100 kpc.
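The 100 kpc figure follows directly from the standard distance modulus, $m - M = 5\log_{10}(d/10\,{\rm pc})$. A minimal sketch in Python (the function name is ours):

```python
def distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A K giant with M_g = -2.0 observed at g = 18:
d_kpc = distance_pc(18.0, -2.0) / 1e3
# -> 100.0 kpc, the distance quoted in the text
```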
It is not possible to use a simple selection criterion such as that used by SEGUE for F/G stars or G dwarfs to select K giants, because in SEGUE’s apparent magnitude range, giants are swamped in number by foreground dwarfs with the same $(g-r)$ color. We can use the G dwarf category, which overlaps the blue edge of the K giant color range, to demonstrate this. Only 3% of the targeted G dwarf stars with $g=15-17$ in the SEGUE data are giants. By contrast, 50% of the stars with similar magnitudes that were targeted with the selection criteria described below are giants.
In past surveys, K giants have been identified using the pressure-sensitive feature near 5200 Å, produced by a blend of the Mg I$b$ triplet and the MgH band, which is strong in dwarfs and weak in giants, but unfortunately also has some sensitivity to stellar metallicity and temperature [@bonner]. This feature is a good luminosity criterion for late G and K stars redward of $(g-r)_0 = 0.5$ ($B-V \sim 0.75$). @kavan and @fm90 used objective prism spectra to isolate stars with weak Mg$b$/H features in the G/K color range. As it is quite a broad feature, it is also possible to measure its strength using intermediate-band photometry [@metal00].
Since the SDSS $ugriz$ filters do not have a narrow filter in the Mg$b$/H region, we use a more indirect method of finding giants. In SEGUE’s magnitude range, many of the giants that we observe are halo stars, so we use a photometric metallicity indicator to remove foreground disk dwarfs. Briefly, we select stars with a UV-excess in the $ugriz$ system, and use the power of our fiber spectroscopy to eliminate the remaining foreground dwarfs.
In more detail, in the $(g-r)_0,(u-g)_0$ diagram, metal-poor stars appear bluer in $(u-g)_0$ for a given $(g-r)_0$ because they have less line-blanketing in the $u$ filter. Two different colors have been defined to measure the deviations from the mean stellar locus in this diagram: the $s$-color of @helmi03 and the $l$-color of @lenz. Large values of the $l$-color correspond to metal-poor stars, and we have chosen to target stars with $l>0.07$. Our selection strategy is complicated by the fact that the Mg$b$/MgH feature is within the SDSS $g$ passband (it is in the blue wing of the $V$ filter). Giants (with weaker Mg$b$/MgH) will be bluer in $g-r$ than dwarfs.
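A sketch of this photometric pre-selection in Python. The $l > 0.07$ cut and the K giant color range $0.5 < (g-r)_0 < 1.2$ come from the text; the linear $l$-color coefficients shown are the commonly quoted Lenz et al. (1998) values, included here for illustration only and should be checked against that paper before use:

```python
def l_color(u, g, r, i, coeffs=(-0.436, 1.129, -0.119, -0.574, 0.1984)):
    """Linear combination of dereddened ugri magnitudes measuring the
    offset from the mean stellar locus (large l = UV-excess, metal-poor).
    Coefficients are illustrative; verify against Lenz et al. (1998)."""
    cu, cg, cr, ci, c0 = coeffs
    return cu * u + cg * g + cr * r + ci * i + c0

def is_k_giant_candidate(u, g, r, i):
    """Photometric pre-selection only: metal-poor (l > 0.07) stars in the
    K giant color range 0.5 < (g-r)_0 < 1.2.  The full SEGUE selection
    also uses proper motion and magnitude weighting."""
    return l_color(u, g, r, i) > 0.07 and 0.5 < (g - r) < 1.2
```

Spectroscopy is then needed to eliminate the remaining foreground dwarfs, as the text describes.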
Figure 10 shows how the competing effects of metallicity and gravity play out. Our K giant selection region extends from $(g-r)_0 = $ 0.5 to 1.2. The main locus, from the accurate averaged photometry from Stripe 82 [@zeljko] is shown in gray: stars with $l$-color greater than 0.07 are shown in light gray. Spectroscopically confirmed dwarfs are shown with large crosses, and giants with large filled circles. Giants range in metallicity from near solar to \[Fe/H\] less than -2.0. It can be seen that for $(g-r)_0$ between 0.5 and 0.8, the metallicity sensitivity of the $(u-g)_0$ color is the dominant effect, and metal-weak giants are clustered toward the blue side of the locus. For redder stars, the luminosity sensitivity of $(g-r)_0$ dominates, and giants appear above the main locus.
The details of our target selection for this category have changed as we have learned more about the behavior of giants and dwarfs in $ugriz$ colors, particularly the rare red giants. The efficiency of our discovery of K giants depends on magnitude and color. Using an $l$-color cut of 0.10 rather than 0.07, success rates are as follows: For $g=17-18$, $0.5 < (g-r)_0 < 0.6$, our success rate is 45%. For the same magnitude range and $0.6 < (g-r)_0 < 0.8$, the success rate is 28%.
A modification to the K giant algorithm occurred for SEGUE target-selection versions v4.2 and later to extend the selection to very red ($0.8 < (g-r)_0 < 1.2$) K giants, which are referred to as red K giants in the SEGUE target selection tables (there are few bona fide M giant spectra, with TiO bands and $(g-r)_0 > 1.3$, in SEGUE). Although only the last quarter or so of SEGUE plates targeted very red K giants, 5948 fibers were placed on these very red stars, of which 466 (8%) yielded a low gravity spectral indicator from SSPP. The selection algorithm included $(g-r)_0$ color, lack of proper motion, and a weighting toward brighter magnitudes. For effective random sampling with only a proper motion cut for $(g-r)_0$ between 0.8 and 1.2, the success rate is 2.5%.
This important category received 22,814 fibers, of which about 30% yielded a low surface gravity K giant, red K giant or subgiant spectrum. Figure 11 shows a classic K giant (top panel) and a very red K giant (lower panel).
AGB
---
An early SEGUE target selection category, designated “asymptotic giant branch” (AGB) and intended to select the very reddest giants in Figure 10, targeted 1343 objects. Only about 8%, however, yielded actual red giant (and possible AGB) spectra. In hindsight, the choice of $s$-color cuts and $(g-r)_0$ limits was not optimal, and the low-latitude AGB category (LL AGB, below) and the red K giant category above eventually superseded this AGB category for SEGUE target selection of this type of object.
K dwarfs, early M dwarfs
------------------------
The K dwarf and early M dwarf category generated a significant unbiased sample of relatively nearby Galactic objects with complete six-dimensional phase space information: proper motions, RVs, positions, and photometric parallax distance estimates are all available for this data set. SEGUE targeted 18,358 stars in this category, of which 80% yield useful spectra. Figure 12 shows sample K and M dwarfs.
M subdwarfs, high proper motion M stars
---------------------------------------
The aim of this category is to obtain a sample of low-metallicity M (sub)dwarfs, which are strongly associated with the halo population. Originally, 1012 sdMs were targeted, based on a version of the @wetal08 color cuts. However, for a variety of reasons, only a handful of these candidates were actual sdMs. A second window onto this category was opened by using the American Museum of Natural History (AMNH) high proper motion catalog and careful color selection criteria [@l08; @ls08]. Approximately 40 fibers per plate pair were allocated to this search for sdM, esdM, and usdM objects. This category obtained a very high ($\sim 20$%) success rate on an allocation of 9420 candidate fibers. Figure 13 shows a low-metallicity M dwarf and a high proper motion selected extreme subdwarf M star (esdM2.5).
WD/MS binaries
--------------
This category, designated WD/MS binary, has been used successfully to improve our understanding of close compact binary evolution. If the SEGUE WD/MS Survey is combined with follow-up observations identifying the short orbital period systems among the WD/MS binary stars, important processes such as the common envelope phase (see Webbink 2007, for a recent review) or angular momentum loss by magnetic wind braking can be constrained (see Politano & Weiler 2006, Schreiber et al. 2007). Already, SDSS-I efficiently identified new WD/MS binaries. @setal04 identified a new stellar locus, the WD/MS binary bridge. @siletal06 [@siletal07] published lists of more than 1400 spectroscopically identified WD/MS binaries. These samples, however, mainly consisted of young systems containing hot WDs.
The selection criteria used here have been designed to identify a large sample of old WD/MS binaries containing cold WDs that, according to Schreiber & Gänsicke (2003), should represent the dominant population of WD/MS binaries. On 240 spectroscopic plates in DR7, the WD/MS color selection algorithm chose 9531 candidates, of which 431 have been observed spectroscopically. Among these we confirm 244 WD/MS objects (with 25 other possible candidates), resulting in a success rate of $\sim 56\%$. A first analysis shows that indeed for the first time a large sample of old systems with WD temperatures below $\sim\,12000$ K could be identified (for more details see M. R. Schreiber et al. 2008, in preparation). Follow-up observations to further constrain compact binary evolution are well underway (Rebassa-Mansergas et al. 2007, 2008, Schreiber et al. 2008, A. Schwope et al. 2009, in preparation). The total SEGUE allocation to this WD/MS category is about 500 fibers, with a 56% success rate expected.
Brown Dwarfs, L and T dwarfs
----------------------------
Very red objects are photometrically detected in the $z$ image, but not in the bluer $ugri$ SDSS images. These are rare and interesting objects, and the only difficulty in targeting them was to ensure that the single detection was solid and not a spurious cosmic-ray hit on the $z$-band CCD. Additional flag checking was performed for candidates in this category, similar to that done by @fetal99 for selecting very high redshift quasars that are detected in $i,~z$ or $z$ only. SEGUE devoted 1277 fibers to this category, with about a 7% initial estimated yield of objects with spectral type later than M8. Figure 15 shows a SEGUE L dwarf spectrum.
The Low-Latitude (LL) Algorithms
--------------------------------
For 12 pointings at $|b| < 20$ (all toward regions of high reddening, $E(B-V) > 0.3$), the previously mentioned SEGUE target selection algorithms were not effective. The long lines of sight through the Galactic disk, at low latitudes, often rendered invalid the implicit assumption that all of the dust lies [*in front*]{} of the target stars. For these pointings, marked as ‘LLSurvey’ in Table 3, we use an alternative target selection scheme that targeted three categories: 1) bluest object candidates (mostly BHB and F stars): selecting the bluest ($g-r$) objects in any given pointing without regard for absolute colors; 2) K giant candidates: using the absence of proper motion as the primary selection criterion; and 3) AGB type objects: singling out objects in a fashion similar to that used for selecting red K giants, but with a brighter $g$ magnitude limit. This latter category was assigned only a small fraction of fibers.
Because of dust and crowding, recovering the selection function at low latitudes is problematic, and optical spectrophotometry of these low-latitude stars with SEGUE should be regarded as experimental, with an eye toward future surveys.
About 12,241 fibers were devoted to low-latitude algorithm plates. Each of the three primary LL categories, K giant (3220 candidates), AGB (499 candidates), and Blue-tip (8522 candidates), yielded about a 30% success rate based on SSPP analysis using methods that fit normalized spectra (flattened to remove the continuum slope, and thus much less dependent on reddening). Figure 16 shows three spectra from one LL algorithm plate, one from each category.
The SDSS Legacy Stellar Target Selection algorithm
--------------------------------------------------
In addition to SEGUE, the SDSS and SDSS-II Legacy Surveys obtained spectra for nearly 200,000 (additional) stars, which were allocated fibers on the main SDSS Legacy Survey plates that observed primarily galaxies. The target selection categories were briefly described in @setal02. We list in Table 7 the color and magnitude cuts used to select stars by category in the SDSS Legacy Survey. It should be noted that, unlike SEGUE, Legacy’s stellar targeting algorithms may assign multiple target type bits to the same object; that is, an object may be targeted, based on its colors, as both a SERENDIPITY\_RED object and a ROSAT\_E object. These bits are tabulated in the PrimTarget field of the CAS database. The final column of Table 7 lists the approximate number of candidates that received a fiber in each target category.
Cluster Plates
--------------
In order to calibrate the SSPP’s metallicity, luminosity and effective temperature scales for all stars, a significant number of known globular clusters and a few open clusters were targeted with one or more SEGUE plates. These 12 pointings are indicated with the cluster name followed by the word ‘Cluster’ in Table 3. Because many of these clusters are relatively nearby, they have giant branches with stars brighter than the SEGUE spectroscopic saturation limit of $r\sim 14$. Additionally, due to the extreme density of the globular cluster fields, even the Pan-STARRS assisted PHOTO processing of SDSS and SEGUE imaging scans failed to resolve individual stars in the centers of the globulars. For these reasons, the following procedures were followed for most cluster plate observations: 1) The target list for each cluster was generated individually. Proper motion membership criteria were used for many clusters, allowing the targeting of stars in the dense cores of clusters which were saturated in the regular SDSS imaging. K. Cudworth (2006, private communication) provided membership lists for a significant fraction of the clusters targeted. Some of these targets do not, therefore, have standard SDSS run-rerun-camcol-field-id numbers, and are identified only by their R.A., decl. positions. 2) Shorter exposures were obtained for many of the clusters. One- or two-minute exposures allowed us to obtain nonsaturated spectra of stars as bright as $r \sim 11$.
These cluster spectra were used to calibrate the SSPP for all types of stars [@letal08a; @letal08b]. The spectra of cluster stars are available on an as-is basis in the DR7 database. Users should be aware that the spectrophotometry and in some cases the RV solutions of these plates may be subject to large systematic errors due to the extremely short exposure times, and that occasional bright targets may be saturated.
A major effort to process SDSS and SEGUE imaging data for clusters has been undertaken by @aetal08, using the DAOPHOT/ALLFRAME suite of programs. They reduced imaging data for crowded cluster fields where PHOTO did not run, and presented accurate photometry for 20 globular and open clusters.[^1] This effort has led to fiducial sequences for clusters over a wide range of metallicities in the SDSS $ugriz$ filter system.
Special plates centered on halo substructure
--------------------------------------------
Studying halo streams is an important goal of SEGUE, and SEGUE is well placed to obtain new kinematic information on the stars in these streams. Five previously known streams were specially targeted at various positions along their extent with SEGUE plates, for a total of 16 pointings. The five streams are Sagittarius [@yetal00], Monoceros [@netal02], Orphan [@betal07b], Virgo [@vetal01; @jetal08] and GD-1 [@gd06]. Plates with these pointings are marked with the stream name and ‘Strm’ in Table 3. The regular target selection algorithms described above were used on these streams, i.e. no special targets within the stream were given specific fibers. Some BHB, BS, F turnoff, K giant, and G subgiant spectra are clearly confirmed stream members on these ‘Strm’ plates. Identification of a particular star as a stream member versus a field star is of course, often difficult, and left to the interested researcher.
The Data Archive and an Example Query
======================================
All of the spectra and associated imaging from SEGUE were made public as a part of DR7 of the SDSS-II Survey on 2008 October 31. The calibrated magnitudes and positions of objects as determined by the SDSS PHOTO pipeline software are available in the ‘photoobjall’ and ‘star’ tables of the CAS database, which is available through interfaces at http://cas.sdss.org and described in detail at http://www.sdss.org. We note the SEGUE and SDSS Legacy imaging and spectra are all in a single large database, so it is possible to obtain SDSS Legacy photometry and SEGUE spectroscopic information for stars in the SDSS and SDSS-II footprints as part of a single query, or closely related set of queries. The SSPP and SPECTRO outputs (RV, $\rm [M/H]$, $\rm log~g$, $\rm T_{\rm eff}$), are available in tabular form in the ‘sppParams’ and ‘specObjAll’ tables of the CAS database. FITS format files containing all of the $\sim $240,000 extracted, co-added, sky-subtracted, flux-calibrated spectra are available at http://das.sdss.org for interested researchers. Similar files for all 200,000 stellar objects detected by SDSS and SDSS-II Legacy Surveys are also available at the same location.
To highlight the usefulness of the data archive, we present an example SQL query from the CAS database to help construct plots showing the extent and scientific usefulness of SEGUE data.
We design a query to select SEGUE-targeted G stars from the database, and return each object’s photometric and tabulated spectroscopic information.
The following SQL query is presented to the CAS DR7 database (http://cas.sdss.org/CasJobs):
select
dbo.frun(targetid) as run,
dbo.frerun(targetid) as rerun,
dbo.fcamcol(targetid) as camcol,
dbo.ffield(targetid) as field,
dbo.fobj(targetid) as obj,
g0,umg0,gmr0,spp.ra,spp.dec,spp.l,spp.b,
spp.plate,spp.mjd,spp.fiberid,elodierv,elodierverr,
feha,fehaerr,fehan, logga,loggaerr,
elodierv+10.1*cos(b*3.14159/180)*cos(l*3.14159/180)+
224.0*cos(b*3.14159/180)*sin(l*3.14159/180)+
6.7*sin(b*3.14159/180) as vgsr
from sppParams spp,specobjall sp, platex p
where spp.specobjid = sp.specobjid and sp.plateid = p.plateid and
p.programname like 'SEGUE%' and
gmr0 between 0.48 and 0.55 and elodierverr > 0
This query is written in ‘SQL’, a standard database query language. Some details of the query are as follows.
The first six lines select photometric quantities for the desired objects, including the unique five-part SDSS imaging ID (run, rerun, camcol, field, obj), dereddened photometry ($g_0,(u-g)_0,(g-r)_0$), (R.A., decl.) J2000, and Galactic $(l,b)$.
The next lines select spectroscopic outputs for each object: the unique three-part SDSS/SEGUE spectroscopic ID (plate, mjd, fiberid), the RV and its error (obtained by cross-correlation with ELODIE RV templates), elodierv and elodierverr in $\rm km\>s^{-1}$, and SSPP estimates of \[M/H\] (feha) with error (fehaerr) and an indication of how many estimators went into the \[M/H\] estimate (fehan). Similarly, the stellar surface gravity and its error are retrieved (logga, loggaerr).
A significant fraction of the spectra will have feha or logga set to -9.999, which indicates that the SSPP did not have sufficient S/N to estimate this value confidently. These values are mostly for fainter $g > 19$ spectra.
The next lines take each object’s heliocentric RV and Galactic $(l,b)$ and use the database to compute the Galactocentric $v_{\rm gsr}$ velocity using standard values for the solar motion.
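The same conversion can be sketched in Python, using the solar-motion constants that appear in the SQL query (10.1, 224.0, and 6.7 $\rm km\>s^{-1}$); the function name here is illustrative:

```python
import math

def v_gsr(rv_helio, l_deg, b_deg):
    """Convert a heliocentric radial velocity (km/s) at Galactic
    coordinates (l, b) to the Galactic standard of rest, using the
    same solar-motion constants as the SQL query."""
    l = math.radians(l_deg)
    b = math.radians(b_deg)
    return (rv_helio
            + 10.1 * math.cos(b) * math.cos(l)
            + 224.0 * math.cos(b) * math.sin(l)
            + 6.7 * math.sin(b))
```

For example, a star at rest relative to the Sun toward $l = 90^\circ$, $b = 0^\circ$ (the direction of Galactic rotation) picks up the full 224.0 $\rm km\>s^{-1}$ rotation term.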
The selection is done with an SQL ‘join’ between the SSPP table ‘sppParams’, and the spectroscopic id table ‘specobjall’ and the plate list table ‘platex’, requiring that the type of spectroscopic program matches the key phrase ‘SEGUE’ (to exclude LEGACY SDSS galaxy plates) and that the dereddened color of the objects fall in the G dwarf color range ($0.48 < (g-r)_0 < 0.55$). Additionally the RV error must be greater than zero, indicating a minimal level of quality of the spectrum. This also excludes galaxies and quasars, leaving only objects with stellar kinematics.
This query yields a table (in comma separated value format, which may be downloaded to a user’s local computer for further manipulation) of 61,343 objects.
Individual images of spectra in this data set may be examined by fetching them from the DAS with a link like:
[wget http://das.sdss.org/spectro/1d\_26/1880/gif/spPlot-53262-1880-014.gif]{}
where the object may be verified to be a G star.
A FITS data file of the calibrated 1D spectrum is available from
[wget http://das.sdss.org/spectro/1d\_26/1880/1d/spSpec-53262-1880-014.fit]{}
for detailed further manipulation.
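Assuming the MJD-plate-fiber naming pattern of these two URLs generalizes to other spectra, the links can be built programmatically; this is a sketch, with the `1d_26` rerun directory hard-coded as in the examples above:

```python
def das_urls(mjd, plate, fiber):
    """Build DAS download URLs for one spectrum, following the
    spPlot/spSpec naming pattern: MJD, zero-padded 4-digit plate,
    zero-padded 3-digit fiber.  The '1d_26' rerun directory is
    assumed fixed here for simplicity."""
    base = "http://das.sdss.org/spectro/1d_26/{0}".format(plate)
    tag = "{0}-{1:04d}-{2:03d}".format(mjd, plate, fiber)
    return (base + "/gif/spPlot-" + tag + ".gif",   # quick-look plot
            base + "/1d/spSpec-" + tag + ".fit")    # calibrated FITS spectrum

gif_url, fits_url = das_urls(53262, 1880, 14)
```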
Figure 17 shows steps in isolating an interesting subpopulation of these G stars. The topmost panel plots Galactic $v_{gsr}$ versus $l$ for all G stars selected with the CAS database query above. A sinusoidal curve with amplitude 120 $\rm km~s^{-1}$ shows the average path of stars rotating with the Sun about the Galactic center. Several knots of G stars stick out from the general disk population(s). We focus on one set of stars with $-175 < v_{gsr} < -111 \rm \> km\>s^{-1}$ and $168^\circ < l < 182^\circ$. Galactic ($l,b$) for this subset is plotted in the second panel. A histogram of the apparent magnitudes of the further subset of stars with $-52^\circ < b < -37^\circ$ is plotted in the third panel. If these stars are dwarfs at $M_g \sim 5.5$, their implied distance from the Sun is 5 kpc. However, they may be giants or subgiants at much further distances. We plot the histogram of SSPP surface gravities in panel 4. There is a clear peak at subgiant and giant $\rm log~g < 3.75$ (all G dwarfs have $\rm log~g > 4.25$). Therefore this population consists of subgiants with $M_g \sim 1.5$, placing the stars at $d = 30 $ kpc from the Sun. A histogram of the quantity feha (adopted average \[Fe/H\] from the SSPP pipeline) for the subgiants is shown in the lowest panel. The estimated metallicity of these objects is $\rm [M/H] = -1.4\pm 0.5$. The location of these objects on the sky and their implied distances and velocities are consistent with that of Sagittarius Southern tidal stream stars, such as RR Lyraes and BHBs seen in @ietal00 [@yetal00].
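The two luminosity hypotheses in this argument can be checked with the distance modulus. Here $g \approx 19$ is our assumed apparent-magnitude histogram peak, inferred from the distances quoted in the text rather than read from Figure 17:

```python
def implied_distance_kpc(g, M_g):
    """Photometric-parallax distance in kpc for apparent magnitude g
    under an assumed absolute magnitude M_g."""
    return 10 ** ((g - M_g + 5) / 5) / 1e3

g = 19.0  # assumed histogram peak (illustrative)
d_dwarf = implied_distance_kpc(g, 5.5)     # ~5 kpc if these are G dwarfs
d_subgiant = implied_distance_kpc(g, 1.5)  # ~30 kpc if they are subgiants
```

The SSPP surface gravities break the degeneracy: the $\rm log~g < 3.75$ peak favors the subgiant hypothesis and hence the larger distance.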
This example just scratches the surface of the interesting science that can be obtained with the SEGUE G star sample.
Summary
=======
The SEGUE Survey provides a large sample of more than 240,000 spectra of stars with $14 < g < 20.3$, covering 212 sightlines out to distances of 100 kpc. It supplements the SDSS Legacy imaging survey with an additional 3500 $\rm deg^2$ of $ugriz$ imaging at $|b| < 35^\circ$. Each 7 $\rm deg^2$ sightline yields about 1150 well-calibrated spectra at resolution $R = 1800$, analyzed by a uniform set of software pipelines to generate tables of RVs for all stars and estimates of \[Fe/H\], $\rm log~g$ and $\rm T_{\rm eff}$ for stars with S/N $> 10$. The selected targets in each pointing cover all major spectroscopic types from WDs to L brown dwarfs in numbers sufficient to sample the kinematic structure of all major stellar components of the Galaxy (except the bulge) at distances from the solar neighborhood (probed with M, L, and T dwarfs) to 100 kpc from the Sun (probed with BHB and K/M giant stars). The SEGUE sample is useful for isolating stellar substructures, particularly in the stellar halo.
The unbiased sample of over 60,000 G dwarf spectra presents a unique way to study the Galaxy’s structure in detail. Selected populations of rare targets ranging from cool WDs to high proper motion subdwarf M stars to stars with metallicity $\rm [M/H] < -3$ allow theories of formation and evolution of the Milky Way to be newly constrained.
A follow-up to the SEGUE Survey, entitled SEGUE-2, is now underway with the same instrument and telescope configuration and aims to obtain targeted spectra of a sample of similar size and quality to SEGUE.
All SEGUE data are calibrated and publicly accessible, now enabling a segue into many productive science explorations beyond the dreams of its designers.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, Cambridge University, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
C. Allende Prieto acknowledges support from NASA grants NAG 5-13057 and NAG 5-13147. T.C. Beers, Y.S. Lee, and S. Thirupathi acknowledge partial funding of this work from grant PHY 02-16783: Physics Frontiers Center / Joint Institute for Nuclear Astrophysics (JINA), awarded by the U.S. National Science Foundation. P. Re Fiorentin acknowledges support through the Marie Curie Research Training Network ELSA MRTN-CT-2006-033481. We acknowledge useful discussions with Steve Majewski on the G dwarf target selection design. We acknowledge several useful suggestions from the referee.
Abazajian et al. 2009, ApJS in press June (arXiv:0812.0649) Adelman-McCarthy, J., et al. 2008, ApJS, 175, 297 Allende Prieto, C. et al. 2006, AJ, 636,804 Allende Prieto, C. et al. 2008, AJ, 136, 2070 An, D., Johnson, J. et al. 2008, ApJS 179, 326, arXiv:0808.0001 Becker, R. H. et al. 2001 AJ 122, 2850 Beers, T. C., Preston, G. W., & Shectman, S. A. 1985, AJ 90, 2089 Bell, E. F. et al. 2008 ApJ 680, 295 Belokurov, V. et al. 2006a, ApJ 642, L137 Belokurov, V. et al. 2006b, ApJ 647, L111 Belokurov, V. et al. 2007a, ApJ 654,897 Belokurov, V. et al. 2007a, ApJ 658,337 Bergeron, P, Wesemael, F. & Beauchamp, A. 1995 PASP 107, 1047 Bidelman, W. P., & MacConnell, D. J. 1973, , 78, 687 Blanton, M.R., Lin, H., Lupton, R.H., Maley, F.M., Young, N., Zehavi, I., and Loveday, J. 2003a, AJ, 125, 2276 Blanton, M. et al. 2003b ApJ 592, 819 Bohlin, R. C. 2007 in ASP Conf. Ser. 364, "The Future of Photometric, Spectrophotometric and Polarimetric Standardization, ed. C. Sterken (San Francisco, CA: ASP), 315 (arXiv:0801.0645) Cameron, D., & Nassau, J. J. 1956, , 124, 346 Cannon, A. J., & Pickering, E. C. 1918, Ann. Harv. Coll. Obs., 91, 1 Carollo, D. et al. 2007 Nature 450, 1020 Christlieb, N., Wisotzki, L., Reimers, D., Gehren, T., Reetz, J., & Beers, T. C. 1999, in ASP Conf. Ser. 165, The Third Stromlo Symposium: The Galactic Halo, ed. B. K. Gibson et al., (San Francisco, CA: ASP), 259 Edvardsson B., Andersen, J., Gustafsson, B., Lambert, D. L., Nissen, P. E., & Tomkin, J. 1993, A&A 275, 101 Eisenstein, D. J. et al. 2005 ApJ 633, 560 Eisenstein, D. J. et al. 2006 AJ 132, 676 Fan, X. et al. 1999 AJ 118,1 Fan, X. et al. 2003 AJ 125,1649 Flynn, C., & Morrison, H. L. 1990, , 100, 1181 Frieman, J. et al. 2008 AJ 135, 338 Fukugita, M., Ichikawa, T., Gunn, J.E., Doi, M., Shimasaku, K., & Schneider, D.P. 1996, AJ, 111, 1748 Gates, E. et al. 2004 ApJ 612, L129 Gilmore, G., Wyse, R. F. G. and Jones J. B. 1995 AJ 109, 1095 Gilmore, G., Wyse, R. F. G., & Norris, J. E. 2002 ApJ 574, L39 Green, R. 
[rrrrrrrrr]{} 72 & — & 95.000 & $-25.000$ & 311.0 & 419.0 & $-14.9$ & $-27.1$ & 270.0\
79 & — & 95.000 & $-7.500$ & 311.0 & 419.0 & $-22.4$ & $-35.3$ & 270.0\
1020 & 10 & 60.004 & 34.950 & 242.3 & 277.3 & 35.0 & 0.0 & 87.5\
1062 & 31 & 98.629 & 27.192 & 247.4 & 312.4 & 35.0 & $-30.0$ & 162.5\
1100 & 50 & 136.813 & 31.702 & 252.0 & 332.0 & 35.0 & $-45.0$ & 200.0\
1140 & 70 & 161.744 & 44.754 & 257.1 & 337.1 & 35.0 & $-45.0$ & 200.0\
1188 & 94 & 178.713 & 64.498 & 249.1 & 384.1 & 50.0 & $-85.0$ & 337.5\
1220 & 110 & 186.882 & 78.511 & 269.6 & 349.6 & 35.0 & $-45.0$ & 200.0\
1260 & 130 & 196.095 & 276.287 & 33.8 & 128.8 & $-45.0$ & 50.0 & 237.5\
1300 & 150 & 205.976 & 293.890 & 41.1 & 121.1 & $-45.0$ & 35.0 & 200.0\
1356 & 178 & 225.998 & 316.856 & 14.2 & 129.2 & $-80.0$ & 35.0 & 287.5\
1374 & 187 & 236.019 & 323.166 & 66.5 & 131.5 & $-30.0$ & 35.0 & 162.5\
1406 & 203 & 261.852 & 331.241 & 70.5 & 135.5 & $-30.0$ & 35.0 & 162.5\
1458 & 229 & 315.139 & 328.784 & 94.8 & 146.8 & $-12.0$ & 40.0 & 130\
1540 & 270 & 176.405 & 61.064 & 152.8 & 187.8 & 35.0 & 70.0 & 87.5\
1600 & 300 & 191.522 & 87.391 & 171.7 & 198.7 & 43.0 & 70.0 & 67.5\
1660 & 330 & 25.976 & 66.110 & 195.0 & 230.0 & 38.0 & 70.0 & 80.0\
[rrrrrrrrrrrrr]{} 202&2803&2824&54368&54452&1220&0.64&28.14&110.00&-33.50&0.05&v4.3&Survey\
149&2624&2630&54380&54327&1188&1.02&-4.82&94.00&-65.00&0.03&v4.2&Survey\
200&2801&2822&54331&54389&72&1.25&24.95&109.77&-36.73&0.07&v4.3&Survey\
017&1912&1913&53293&53321&86&6.02&-10.00&100.99&-71.69&0.05&v2.0&Survey\
216&2848&—-&54453&—–&1188&7.78&-18.29&94.00&-80.00&0.02&v4.4&Survey\
075&2312&2327&53709&53710&79&9.03&7.48&116.28&-55.19&0.04&v3.3&Survey\
031&2038&2058&53327&53349&72&10.51&24.90&120.23&-37.92&0.03&v3.0&Survey\
013&1904&1905&53682&53706&82&11.00&0.00&118.86&-62.81&0.02&v2.0&Survey\
009&1896&1897&53242&53242&76&11.21&14.92&120.55&-47.93&0.08&v2.0&Survey\
076&2313&2328&53726&53728&82&17.00&0.00&131.95&-62.58&0.03&v3.3&Survey\
203&2804&2825&54368&54439&1260&17.86&15.60&130.00&-47.00&0.07&v4.3&Survey\
217&2849&2864&54454&54467&86&18.70&-9.72&141.60&-71.74&0.04&v4.4&Survey\
028&2040&2060&53384&53706&1260&19.14&25.74&130.00&-36.79&0.12&v3.0&Survey\
029&2041&2061&53387&53711&1260&20.00&31.69&130.00&-30.79&0.07&v3.0&Survey\
077&2314&2329&53713&53725&79&21.13&7.21&137.25&-54.74&0.04&v3.3&Survey\
023&2042&2062&53378&53381&1260&21.15&38.63&130.00&-23.79&0.05&v3.0&Survey\
030&2043&2063&53351&53359&1260&21.33&39.62&130.00&-22.79&0.07&v3.0&Survey\
085&2336&—-&53712&—–&1260&21.33&39.62&130.00&-22.79&0.07&vt.t&Survey\
018&1914&1915&53729&53612&86&24.27&-9.45&156.44&-69.30&0.04&v2.0&Survey\
032&2044&2064&53327&53341&72&24.72&23.70&136.73&-37.90&0.12&v3.0&Survey\
215&2816&—-&54397&—–&86&25.28&-9.39&158.75&-68.73&0.03&v4.3&Survey\
218&2850&2865&54461&54497&86&25.28&-9.39&158.75&-68.73&0.03&v4.4&Sgr Strm\
014&1906&1907&53293&53315&82&26.00&0.00&150.04&-60.08&0.03&v2.0&Survey\
010&1898&1899&53260&53262&76&26.67&13.98&142.70&-46.76&0.06&v2.0&Survey\
024&2045&2065&53350&53678&82&30.00&0.00&157.01&-58.26&0.03&v3.0&Survey\
219&2851&2866&54485&54478&82&30.00&0.00&157.01&-58.26&0.03&v4.4&Survey\
033&2046&2066&53327&53349&72&32.23&22.52&145.47&-36.94&0.11&v3.0&Survey\
069&2306&2321&53726&53711&79&33.20&6.62&156.16&-50.93&0.07&v3.3&Survey\
026&2047&2067&53732&53738&86&37.40&-8.47&178.72&-60.22&0.03&v3.0&Survey\
089&2379&2399&53762&53764&1300&38.17&25.50&150.00&-32.00&0.11&v3.4&Survey\
109&2442&2444&54065&54082&1300&39.70&28.17&150.00&-29.00&0.14&v4.0&Mon Strm\
999&1664&1664&52965&52973&82&42.00&0.00&173.65&-51.02&0.04&vt.t&RV Test\
088&2378&2398&53759&53768&1300&43.58&34.33&150.00&-22.00&0.12&v3.4&Survey\
070&2307&2322&53710&53727&79&45.24&5.74&171.39&-44.61&0.10&v3.3&Survey\
108&2441&2443&54065&54082&1300&46.07&37.79&150.00&-18.00&0.12&v4.0&Survey\
025&2048&2068&53378&53386&82&47.00&0.00&179.01&-47.44&0.07&v3.0&Survey\
084&2335&2340&53730&53733&79&48.25&5.48&174.65&-42.75&0.20&v3.3&Survey\
107&2397&2417&53763&53766&1300&48.79&41.20&150.00&-14.00&0.13&v3.4&Survey\
999&1665&1666&52976&52991&-999&49.96&41.53&150.57&-13.24&0.16&vt.t&Test\
083&2334&2339&53730&53729&79&51.24&5.20&177.71&-40.80&0.13&v3.3&Survey\
035&2049&2069&53350&53376&82&53.00&0.00&184.53&-42.87&0.09&v3.0&Survey\
027&2050&2070&53401&53729&86&55.43&-6.41&193.71&-44.60&0.05&v3.0&Survey\
178&2679&2697&54368&54389&1356&57.20&10.31&178.00&-33.00&0.21&v4.2&Survey\
034&2051&2071&53738&53741&86&59.42&-5.86&195.91&-40.94&0.11&v3.0&Survey\
162&2680&2698&54141&54140&1356&63.35&15.61&178.00&-25.00&0.44&v4.2&Survey\
204&2805&2826&54380&54389&1374&64.85&6.56&187.00&-29.50&0.23&v4.3&Survey\
154&2669&2673&54086&54096&1374&71.00&10.96&187.00&-22.00&0.38&v4.2&Survey\
256&—-&2942&—–&54521&1406&71.40&-5.68&203.00&-30.48&0.07&v4.6&Survey\
163&2681&2699&54397&54414&1356&71.50&21.98&178.00&-15.00&0.50&v4.2&LLSurvey\
153&2668&2672&54084&54085&1374&79.49&16.61&187.00&-12.00&0.46&v4.2&LLSurvey\
065&2300&2302&53682&53709&1300&82.64&62.07&150.00&15.00&0.25&v3.2&Survey\
040&2052&2072&53401&53430&82&83.20&0.00&203.68&-17.42&0.26&v3.0&Survey\
231&2887&2912&54521&54499&1374&91.30&23.40&187.00&1.00&0.77&v4.6&NGC2158/M35 Cls\
133&2540&2548&54110&54152&1260&91.83&83.51&130.00&25.71&0.06&v4.0&Survey\
064&2299&2301&53711&53712&1300&92.63&64.25&150.00&20.00&0.11&v3.2&Survey\
179&2678&2696&54173&54167&1374&98.14&26.67&187.00&8.00&0.23&v4.2&LLSurvey\
166&2682&2700&54401&54417&1356&101.28&37.61&178.00&15.00&0.14&v4.2&Survey\
157&2676&2694&54179&54199&1374&102.21&28.39&187.00&12.00&0.09&v4.2&Survey\
180&2712&2727&54409&54414&1406&105.56&12.44&203.00&8.00&0.09&v4.2&LLSurvey\
086&2337&—-&53740&—–&1300&106.24&65.80&150.00&25.94&0.04&vt.t&Survey\
257&2938&2943&54526&54502&1356&107.24&39.41&178.00&20.00&0.07&v4.6&Survey\
158&2677&2695&54180&54409&1374&110.74&31.44&187.00&20.00&0.08&v4.2&Survey\
260&2941&2946&54507&54506&38&111.29&37.62&180.89&22.44&0.05&v4.6&Mon Strm\
036&2053&2073&53446&53728&37&112.51&35.99&182.90&22.87&0.06&v3.0&Survey\
181&2713&2728&54397&54416&1406&113.02&15.86&203.00&16.00&0.05&v4.2&Survey\
167&2683&2701&54153&54154&1356&113.49&40.89&178.00&25.00&0.05&v4.2&Survey\
043&2078&2079&53378&53379&30&114.60&21.57&198.11&19.64&0.04&v3.2&NGC2420 Cluster\
038&2054&2074&53431&53437&27&116.00&18.18&201.97&19.54&0.03&v3.0&Survey\
234&2890&2915&54495&54497&27&116.00&18.30&201.85&19.59&0.03&v4.6&Mon Strm\
258&2939&2944&54515&54523&1300&116.19&66.11&150.00&30.00&0.04&v4.6&Survey\
037&2055&2075&53729&53737&32&116.88&28.02&192.41&23.86&0.04&v3.0&Survey\
235&2891&2916&54507&54507&29&118.00&23.20&197.73&23.16&0.07&v4.6&Mon Strm\
182&2714&2729&54208&54419&1406&118.29&18.06&203.00&21.50&0.04&v4.2&Survey\
259&2940&2945&54508&54505&19&119.00&9.50&211.61&18.62&0.02&v4.6&Mon Strm\
041&2056&2076&53463&53442&16&121.20&6.75&215.24&19.38&0.03&v3.0&Survey\
205&2806&2827&54425&54422&1458&122.77&-7.39&229.00&14.00&0.10&v4.3&Survey\
039&2057&2077&53816&53846&10&124.00&0.00&222.93&18.72&0.05&v3.0&Survey\
155&2670&2674&54115&54097&33&124.50&38.00&183.37&32.64&0.04&v4.2&Survey\
134&2541&2549&54481&54523&1260&127.07&83.27&130.00&29.71&0.03&v4.0&Survey\
078&2315&2330&53741&53738&26&127.73&24.40&199.78&31.96&0.03&v3.3&Survey\
206&2807&2828&54433&54438&1458&127.96&-4.33&229.00&20.00&0.04&v4.3&Survey\
079&2316&2331&53757&53742&37&129.62&53.91&164.26&37.20&0.03&v3.3&Survey\
080&2317&2332&54152&54149&14&132.57&6.14&221.47&29.17&0.06&v3.3&Survey\
152&2667&2671&54142&54141&17&132.82&10.94&216.61&31.53&0.04&v4.2&M67 Cluster\
232&2888&2913&54529&54526&12&134.00&3.20&225.20&29.01&0.04&v4.6&Mon Strm\
090&2380&2400&53759&53765&30&134.44&37.13&185.88&40.31&0.04&v3.4&Survey\
091&2381&2401&53762&53768&26&139.44&30.43&195.57&43.49&0.02&v3.4&Survey\
067&2304&2319&53762&53763&22&139.89&22.17&206.64&41.95&0.04&v3.3&Survey\
092&2382&2402&54169&54176&14&141.56&7.30&225.30&37.58&0.05&v3.4&Survey\
233&2889&2914&54530&54533&25&144.00&30.05&197.01&47.32&0.02&v4.6&Survey\
094&2384&2404&53763&53764&34&144.67&52.86&163.48&46.20&0.01&v3.4&Survey\
093&2383&2403&53800&53795&37&146.38&62.07&150.92&43.62&0.03&v3.4&Survey\
220&2852&2867&54468&54479&10&150.00&0.00&239.10&40.72&0.04&v4.4&Survey\
096&2386&2406&54064&54084&22&152.38&25.93&205.39&53.92&0.03&v3.4&Orph Strm\
097&2387&2407&53770&53771&26&152.52&35.29&189.36&54.80&0.01&v3.4&Survey\
221&2853&2868&54440&54451&18&156.53&17.74&220.87&55.27&0.03&v4.4&Orph Strm\
222&2854&2869&54480&54454&14&156.63&8.82&234.18&51.20&0.03&v4.4&Survey\
142&2557&2567&54178&54179&29&158.57&44.34&171.74&57.63&0.02&v4.2&GD Strm\
099&2389&2409&54213&54210&10&162.00&0.00&250.28&49.82&0.04&v3.4&Orph Strm\
144&2559&2569&54208&54234&10&162.00&0.00&250.28&49.82&0.04&v4.2&Orph Strm\
100&2390&2410&54094&54087&30&163.80&48.01&162.38&59.24&0.02&v3.4&GD Strm\
223&2855&2870&54466&54534&22&165.56&28.56&203.12&65.87&0.03&v4.4&Survey\
224&2856&2871&54463&54536&26&166.97&38.59&178.45&65.54&0.02&v4.4&Survey\
174&2690&2708&54211&54561&1540&167.16&-16.21&270.00&40.00&0.07&v4.2&Survey\
103&2393&2413&54156&54169&14&168.77&9.61&245.98&61.30&0.02&v3.4&Survey\
225&2857&2872&54453&54533&18&169.09&19.29&227.63&66.83&0.02&v4.4&Survey\
104&2394\*&2414&54551&54526&34&169.30&59.05&143.49&54.16&0.01&v3.4&Survey\
227&2859&2874&54570&54561&1540&169.73&-11.87&270.00&45.00&0.07&v4.4&Survey\
226&2858&2873&54498&54505&37&172.12&66.98&134.92&48.17&0.01&v4.4&Survey\
229&2861&2876&54583&54581&1540&172.22&-7.52&270.00&50.00&0.04&v4.4&Survey\
230&2862&2877&54471&54523&10&174.00&0.00&266.09&57.37&0.02&v4.4&Survey\
261&2963&2965&54589&54594&14&180.94&9.98&267.43&69.50&0.02&v4.6&Survey\
236&2892&2917&54552&54556&10&181.00&0.00&278.20&60.57&0.02&v4.6&Survey\
237&2893&2918&54552&54554&18&181.81&19.97&245.85&77.61&0.03&v4.6&Survey\
238&2894&2919&54539&54537&30&181.89&49.96&140.22&65.67&0.03&v4.6&Survey\
117&2452&2467&54178&54176&26&182.39&39.97&154.34&74.50&0.03&v4.0&Survey\
143&2558&2568&54140&54153&10&186.00&0.00&288.15&62.08&0.03&v4.2&Vir Strm\
239&2895&2920&54567&54562&10&189.00&0.00&294.52&62.62&0.02&v4.6&Survey\
241&2897&2922&54585&54612&9&191.00&-2.50&299.18&60.32&0.03&v4.6&Survey\
173&2689&2707&54149&54144&1600&191.16&-7.83&300.00&55.00&0.03&v4.2&Survey\
122&2457&2472&54180&54175&22&191.46&29.84&147.00&87.02&0.02&v4.0&Survey\
242&2898&2923&54567&54563&30&192.75&49.74&123.11&67.39&0.01&v4.6&Survey\
111&2446&2461&54571&54570&34&192.96&59.76&122.84&57.37&0.01&v4.0&Survey\
243&2899&2924&54568&54582&18&194.57&19.74&315.26&82.45&0.02&v4.6&Survey\
244&2900&2925&54569&54584&26&197.96&39.27&104.92&77.13&0.02&v4.6&Survey\
245&2901&2926&54652&54625&10&198.00&0.00&314.09&62.43&0.04&v4.6&Survey\
126&2476&—-&53826&—–&17&199.03&17.01&333.60&78.38&0.03&v4.0&NGC5053 Cluster\
110&2445\*&2460&54573&54616&37&202.79&66.49&116.77&50.16&0.02&v4.0&Survey\
247&2903&2928&54581&54614&14&205.28&9.39&338.75&68.73&0.03&v4.6&Survey\
125&2475&—-&53845&—–&22&205.55&28.40&42.31&78.70&0.01&v4.0&M3 Cluster\
248&2904\*&2929&54574&54616&22&206.68&28.21&41.20&77.72&0.01&v4.6&Survey\
249&2905\*&2930&54580&54589&18&207.22&18.62&3.16&74.29&0.02&v4.6&Survey\
184&2716&—-&54628&—–&1660&210.10&-9.21&330.00&50.00&0.04&v4.2&Survey\
250&2906&2931&54577&54590&26&212.81&36.58&67.14&70.65&0.01&v4.6&Survey\
112&2447&2462&54498&54561&34&214.83&56.35&100.68&56.81&0.01&v4.0&Survey\
251&2907\*&2932&54580&54595&30&217.15&45.26&82.47&63.49&0.01&v4.6&Survey\
252&2908&2933&54611&54617&14&217.40&8.47&358.72&60.22&0.02&v4.6&Survey\
132&2539&2547&53918&53917&35&217.73&58.24&100.60&54.36&0.01&v4.0&GD Strm\
253&2909&2934&54653&54626&10&222.00&0.00&353.65&51.02&0.04&v4.6&Survey\
254&2910&2935&54630&54652&26&226.36&32.20&51.02&60.57&0.02&v4.6&Survey\
192&2724&2739&54616&54618&14&229.44&7.18&9.84&50.00&0.04&v4.2&Survey\
246&2902&2927&54629&54621&14&229.44&7.18&9.84&50.00&0.04&v4.6&Survey\
255&2911&2936&54631&54626&30&231.38&39.43&63.98&55.84&0.02&v4.6&Survey\
114&2449&2464&54271&54272&34&231.76&49.88&81.08&52.66&0.02&v4.0&Survey\
262&2964\*&—-&54632&—–&18&231.78&14.00&20.99&51.46&0.04&v4.6&Survey\
124&2459\*&2474&54544&54564&26&238.51&26.52&42.88&49.49&0.04&v4.0&Survey\
045&2175&2186&54612&54327&14&241.41&5.57&16.92&39.10&0.06&v3.2&Survey\
048&2178&2189&54629&54624&13&242.33&4.07&15.88&37.53&0.07&v3.2&Survey\
046&2176&2187&54243&54270&37&242.51&52.37&81.35&45.48&0.02&v3.2&Survey\
047&2177&2188&54557&54595&22&243.77&16.67&31.37&41.94&0.05&v3.2&Survey\
135&2550&2560&54206&54205&1188&247.15&62.85&94.00&40.00&0.03&v4.2&Survey\
044&2174&2185&53521&53532&34&250.01&36.20&58.61&41.22&0.02&v3.2&M13 Cluster\
063&2255&—-&53565&—–&34&250.01&36.20&58.61&41.22&0.02&v3.2&M13 Cluster\
195&2796&2817&54629&54627&1062&253.10&12.49&31.00&32.00&0.05&v4.3&Survey\
050&2180&2191&54613&54621&30&253.12&23.95&43.95&36.06&0.06&v3.2&Survey\
051&2181&2192&53524&54232&37&254.92&39.65&63.60&37.73&0.02&v3.2&Survey\
207&2808&2829&54524&54623&1100&255.75&28.39&50.00&35.00&0.07&v4.3&Survey\
055&2247&2256&54169&53859&39&258.78&42.03&66.95&35.10&0.02&v3.2&M92 Cluster\
128&2535\*&—-&54632&—–&1020&259.41&-13.07&10.00&14.00&0.45&v4.0&LLSurvey\
052&2182&2193&53905&53888&1100&261.18&27.01&50.00&30.00&0.04&v3.2&Survey\
196&2797&2818&54616&54616&1062&262.52&8.11&31.00&21.75&0.11&v4.3&Survey\
136&2551&2561&54552&54597&1188&262.57&64.37&94.00&33.00&0.03&v4.2&Survey\
061&2253\*&2262&54551&54623&39&263.11&33.16&57.36&30.08&0.04&v3.2&Survey\
198&2799&2820&54368&54599&1140&263.44&44.15&70.00&32.00&0.02&v4.3&Survey\
053&2183&2194&53536&53904&1100&266.47&25.43&50.00&25.00&0.09&v3.2&Survey\
054&2184&2195&53534&54234&1100&271.61&23.67&50.00&20.00&0.12&v3.2&Survey\
127&2534&2542&53917&53919&1100&277.60&21.33&50.00&14.00&0.16&v4.0&LLSurvey\
201&2802&—-&54326&—–&1220&278.04&78.51&110.00&27.50&0.07&v4.3&Survey\
137&2552&—-&54632&—–&1188&278.72&64.15&94.00&26.00&0.04&v4.2&Survey\
197&2798&2819&54397&54617&1140&279.34&41.30&70.00&20.00&0.07&v4.3&Survey\
211&2812\*&2833&54633&54650&1100&283.38&18.79&50.00&8.00&0.39&v4.3&Survey\
129&2536&2544&53883&53884&1140&286.66&39.11&70.00&14.00&0.18&v4.0&LLSurvey\
999&1857&1858&53182&53271&-999&290.34&78.20&110.00&25.00&0.05&vt.t&Test\
199&2800&2821&54326&54393&1140&290.57&37.68&70.00&10.62&0.15&v4.3&NGC6791 Cluster\
138&2553&2563&54631&54653&1188&291.69&62.61&94.00&20.00&0.05&v4.2&Survey\
212&2813&—-&54650&—–&1100&298.01&11.26&50.00&-8.00&0.31&v4.3&Survey\
081&—-&2338&—–&53679&-999&298.44&19.08&57.00&-4.41&0.45&v3.3&M71 Cluster\
066&2303&2318&54629&54628&1062&301.86&-11.46&31.00&-22.00&0.11&v3.3&Survey\
139&2554&2564&54263&54275&1188&302.97&60.01&94.00&14.00&0.22&v4.2&LLSurvey\
999&1660&1661&53230&53240&-999&303.62&77.18&110.0&22.00&0.19&vt.t&Test\
056&2248&2257&53558&53612&72&306.44&13.67&56.45&-13.78&0.09&v3.2&Survey\
057&2249&2258&53566&54328&72&309.33&14.73&58.97&-15.53&0.09&v3.2&Survey\
015&1908&1909&53239&53261&82&311.00&0.00&46.64&-24.82&0.07&v2.0&Survey\
049&2179&2190&53555&54386&1220&311.16&76.18&110.00&20.00&0.28&v3.2&Survey\
019&1916&1917&53269&53557&86&311.58&-6.00&41.08&-28.23&0.05&v2.0&Survey\
058&2250&2259&53566&53565&72&312.25&15.76&61.53&-17.25&0.08&v3.2&Survey\
140&2555&2565&54265&54329&1188&312.39&56.59&94.00&8.00&0.93&v4.2&LLSurvey\
214&2815&—-&54414&—–&1100&312.72&2.52&50.00&-25.00&0.11&v4.3&Survey\
020&1918&1919&53240&53240&82&317.00&0.00&50.11&-29.97&0.11&v2.0&Survey\
006&1890&1891&53237&53238&76&319.01&10.54&61.22&-25.64&0.06&v2.0&Survey\
068&2305&2320&54414&54653&86&320.56&-7.18&44.84&-36.65&0.23&v3.3&Survey\
021&1960&1962&53289&53321&-999&322.67&11.33&64.41&-27.98&0.10&v2.1&M15 Cluster\
131&2538&2546&54271&54625&1220&323.07&73.64&110.00&16.00&0.53&v4.0&LLSurvey\
022&1961&1963&53299&54331&-999&323.36&-0.20&54.00&-35.43&0.04&v2.1&M2 Cluster\
141&2556&2566&54000&54333&1188&330.15&45.06&94.00&-8.00&0.30&v4.2&LLSurvey\
059&2251&2260&53557&53638&72&332.50&21.47&80.06&-27.66&0.07&v3.2&Survey\
072&2309&—-&54441&—–&86&332.60&-8.47&51.37&-47.64&0.06&v3.3&Survey\
071&2308&2323&54379&54380&79&332.78&6.36&67.76&-38.78&0.10&v3.3&Survey\
130&2537&2545&53917&53915&1220&334.17&69.39&110.00&10.50&0.48&v4.0&LLSurvey\
145&2620&—-&54397&—–&1188&335.67&39.37&94.00&-15.00&0.13&v4.2&Survey\
007&1892&1893&53238&53239&76&340.25&13.68&81.04&-38.36&0.05&v2.0&Survey\
060&2252&2261&53613&53612&72&340.97&23.07&88.35&-31.10&0.06&v3.2&Survey\
011&1900&1901&53262&53261&82&341.00&0.00&69.20&-49.10&0.06&v2.0&Survey\
146&2621&2627&54380&54379&1188&342.14&30.88&94.00&-25.00&0.05&v4.2&Survey\
016&1910&1911&53321&53295&86&344.72&-9.39&61.32&-58.07&0.04&v2.0&Survey\
073&2310&2325&53710&54082&79&344.84&7.05&80.43&-46.37&0.07&v3.3&Survey\
148&2623&2629&54328&54087&1188&347.53&22.12&94.00&-35.00&0.17&v4.2&Survey\
999&1662&1663&52970&52973&-999&351.41&52.73&110.00&-7.98&0.31&vt.t&Test\
005&1888&1889&53239&53240&1220&353.41&48.91&110.00&-12.00&0.21&v2.0&Survey\
147&2622&2628&54095&54326&1188&354.52&8.71&94.00&-50.00&0.13&v4.2&Survey\
004&1886&1887&53237&53239&1220&354.71&46.04&110.00&-15.00&0.13&v2.0&Survey\
008&1894&1895&53240&53242&76&355.69&14.81&99.18&-44.87&0.03&v2.0&Survey\
003&1884&1885&53228&53230&1220&355.89&43.16&110.00&-18.00&0.12&v2.0&Survey\
012&1902&1903&53271&53357&82&356.00&0.00&89.32&-58.40&0.03&v2.0&Survey\
074&2311&—-&54331&—–&86&356.88&-9.90&78.72&-67.11&0.03&v3.3&Survey\
002&1882&1883&53262&53271&1220&357.30&39.30&110.00&-22.00&0.13&v2.0&Mon Strm\
001&1880&1881&53262&53261&1220&358.26&36.40&110.00&-25.00&0.11&v2.0&Survey\
087&2377&—-&53991&—–&100&359.35&56.71&115.53&-5.39&0.41&v3.4&NGC7789 Cluster\
[rr]{} n & All is normal for this flag position\
d & Possible WD\
D & Apparent WD\
E & Emission, possible QSO\
h & Helium line detected\
H & Apparent Teff too Hot for parameter est.\
l & Sodium line, possibly late type\
N & Noise spectrum\
S & Sky fiber – no signal\
V & RV mismatch\
C & Color mismatch (spectroscopic color Teff far from implied $(g-r)_0$ color Teff)\
B & Balmer flag: unusual Balmer eq. widths for star\
g & g band: unusual G-band eq. width for star – Carbon star?\
G & Carbon star?\
[rrrrr]{} WD/sdO/sdB & v4.6 & 0x80080000 & $g < 20.3, -1 < g-r < -0.2, -1 < u-g < 0.7, u-g + 2(g-r) < -0.1 $ & 25,4069,0.62\
& v3.0 & & $-1 < u-g < 0.7, u-g + 2(g-r) < -0.1$ &\
& v2.0 & & $g < 20.3, -1 < g-r < 0.2, -1 < u-g < 0.5$ &\
CWD & v4.6 & 0x80020000 & $ r < 20.5, -2 < g-i < 1.7 \tablenotemark{b}, H_g > 16.05+2.9(g-i)$ & 10,1187,0.005\
& v3.1 & & allowed number of targets/pointing to exceed 10 on occasion &\
& v3.0 & & $ r < 20.5, -2 < g-i < 1.7, H_g > 16.05+2.9(g-i)$ &\
& v2.0 & & $15 < r < 20, -0.1 < g-r < 1.1, g-r > 2.4(r-i) + 0.5, i-z < 0$ &\
BHB/BS/A & v4.6 & 0x80002000 & $g < 20.5, 0.8 < u-g < 1.5, -0.5 < g-r < 0.2, v\tablenotemark{c} \rm weight$ & 150,24688,0.66\
& v3.3 & & Added v ’Luminosity color’ weighting &\
& v3.2 & & Sort by color, favor bluest BHBs &\
& v3.0 & & $ 0.6 < u-g < 1.4, -0.5 < g-r < 0.2$ &\
& v2.0 & & $g < 20.5, 0.5 < u-g < 1.4, -0.8 < g-r < 0.2, s < -0.065$ &\
F & v4.6 & 0x80100000 & $g < 20.3, -0.7 < P1(s)\tablenotemark{d} < -0.25, 0.4 < u-g < 1.4, 0.2 < g-r < 0.7$ & 200,37900,0.68\
& v3.0 & & $g < 20.3, -0.7 < P1(s) < -0.25, 0.4 < u-g < 1.7, 0.2 < g-r < 3 $ &\
& v2.0 & & $g < 20.3, -0.7 < P1(s) < -0.3, 0.4 < u-g < 1.7, -0.3 < g-r < 3 $ &\
Low Metal & v4.6 & 0x80010000 & $r < 19, -0.5 < g-r < 0.75, 0.6 < u-g < 3.0, l\tablenotemark{e} > 0.135$ & 150,29788,0.12\
& v3.4 & & changed l-color cut to $l > 0.135$ &\
& v3.3 & & changed l-color cut to $l > 0.15$ &\
& v3.1 & & weighted by l-color and magnitude (bright targets favored) &\
& v3.0 & & $r<19.5, -0.5 < g-r < 0.75, l > 0.12$ &\
& v2.0 & & $r<20.2, -0.5 < g-r < 0.9, 0.3 < u-g < 3, l > 0.15$ &\
F/G & v4.6 & 0x80000200 & $g < 20.2, 0.2 < g-r < 0.48 $ & 50,6939,0.9\
& v3.3 & & First appearance of F/G category &\
G & v4.6 & 0x80040000 & $r < 20.2, 0.48 < g-r < 0.55$ & 375,62784,0.96\
& v3.0 & & $0.48 < g-r < 0.55$ &\
& v2.0 & & $ r < 20.2$ , $0.50 < g-r < 0.55$ &\
K giant & v4.6 & 0x80004000 & $ r < 18.5, 0.5 < g-r < 0.8 $ & 70,16866,0.3\
& & & $l\tablenotemark{e} > 0.07 $ $\rm pm < 11 mas~yr^{-1}$ &\
& v4.2 & & $g < 19.5, 0.5 < g-r < 0.9$ &\
& v3.0 & & $ 0.35 < g-r < 0.8, l > 0.07$, pm $<$ 11 mas/yr &\
& v2.0 & & $ r < 20.2 , 0.7 < u-g < 4, 0.4 < g-r < 1.2, 0.15 < r-i < 0.6, l > 0.1$ &\
& vt.t & & $ r < 20.2 , 0.5 < g-r < 1.3$ &\
Red K giant & v4.6 & 0x80004000 & $ r < 18.5, 0.8 < g-r < 1.2, \rm pm < 5 mas~yr^{-1}$ & 30,5948,0.08\
& v4.4 & & $g < 18.5$, weight brighter, redder objects higher &\
& v4.3 & & $g < 19.5, 0.9 < g-r < 1.2$ &\
AGB & v4.6 & 0x80800000 & $ r<19.0, 2.5 < u-g < 3.5, 0.9 < g-r <1.3, \rm s\tablenotemark{f} < -0.06 $ & 10,1343,0.08\
dK , dM& v4.6 & 0x80008000 & $ r < 19, 0.55 < g-r < 0.75$ &50,18358,0.80\
& v3.0 & & $ r < 19.0 , 0.55 < g-r < 0.75, 0.3 < r-i < 0.8 $ &\
& v2.0 & & $ r < 19.5 , g-r > 0.7, 0.3 < r-i < 0.8 $ &\
sdM& v4.6 & 0x80400000 & $ r < 20$ , $g-r > 1.6, 0.9 < r-i < 1.3 $ & 25,1012,0.003\
esdM& v4.6 & 0x81000000 & $ 0.787(g-r)-0.356>(r-i) , r-i <0.9, H_r >17, 2.4 >g-i>1.8 $ & 40,9420,0.20\
WD/MS& v4.6 & 0x80001000 & $ g < 20, (u-g) < 2.25, -0.2 < g-r < 1.2, 0.5 < r-i < 2.0$ &5,431,0.56\
& & & $g-r > -19.78(r-i)+11.13, g-r < 0.95(r-i)+0.5$ &\
& & & $i-z > 0.5 \rm ~for~r-i > 1$ &\
& & & $i-z > 0.68(r-i)-0.18~ \rm for~ r-i <= 1$ &\
&v3.4 & & algorithm uses non-dereddened colors &\
&v3.3 & & First appearance MSWD category &\
L& v4.6 & 0x80200000 & $ z < 19.5$ , $i-z > 1.7$ & 5,1277,0.07\
& & & $u > 21$ &\
& v3.0 & & hi-z QSO style flag checks added &\
LL Blue& v4.6 & 0x80000800 &$g-r<0.25$ &800,8522,0.3\
& v4.0 & & First appearance all low-latitude algs &\
LL AGB& v4.6 & 0x88000000 & $s\tablenotemark{f}>-0.13,3.5>u-g>2.6,0.8 <g-r <1.3$ &50,499,0.3\
LL KIII& v4.6 & 0x80000400 & $0.55<g-r<0.9,g<19,\rm pm<11 mas~yr^{-1}$ &300,3220,0.3\
[rr]{} vt.t&1660-1858\
v2.0&1880-1919\
v2.1&1960-1963\
v3.0&2038-2077\
v3.1&2147-2163\
v3.2&2174-2302\
v3.3&2303-2338\
v3.4&2377-2476\
v4.0&2534-2549\
v4.2&2550-2741\
v4.3&2796-2837\
v4.4&2848-2877\
v4.6&2887-2965\
[rrrr]{} SPECTROPHOTO STD &0x20 & $0.1<g-r<0.4,16<g<17.1$ &20320\
REDDEN STD& 0x2 & $0.1<g-r<0.4,17.1<g<18.5$ &22337\
HOT STD &0x200 &$g<19,u-g<0,g-r<0$ &3370\
ROSAT C & 0x800 & $r<16.5$ ROSAT X-ray source within $60''$& 8000\
ROSAT C & 0x800 &$u-g <1.1$ ROSAT within $60''$\
ROSAT E & 0x8000000 & ROSAT within $60''$ & 8000\
STAR BHB & 0x2000 & $-0.4 < g-r < 0, 0.8 < u-g < 1.8$ & 19887\
STAR CARBON & 0x4000 & $g-r > 0.85, r-i > 0.05, i-z > 0, r-i < -0.4+0.65(g-r) $ & 4453\
& & $ (g-r) > 1.75 $ &\
STAR BROWN DWARF & 0x8000 & $z<19, \sigma(i) < 0.3, r-i > 1.8, i-z > 1.8$ & 667\
STAR SUB DWARF & 0x10000 & $g-r > 1.6, 0.95 < r-i < 1.6, \sigma(g) < 0.1$ & 1482\
STAR CATY VAR & 0x20000 & $g<19.5,u-g < 0.45,g-r < 0.7,r-i>0.3,i-z>0.4 $ & 8959\
& 0x20000 & $(u-g)-1.314(g-r) < 0.61 , r-i > 0.91, i-z > 0.49$\
STAR RED DWARF & 0x40000 & $i < 19.5, \sigma(i) < 0.2, r-i > 1.0, r-i > 1.8$ & 14649\
STAR WHITE DWARF & 0x80000 & $g-r < -0.15, u-g+2(g-r) < 0,r-i < 0$ & 6059\
& & $H_g\tablenotemark{c} > 19 , H_g > 16.136 + 2.727(g-i)$\
& & $ g-i > 0, H_g > 16.136 + 2.727(g-i)$\
STAR PN & 0x10000000 & $g-r > 0.4, r-i < -0.6, i-z > -0.2, 16 < r_0 < 20.5$ & 20\
SERENDIP BLUE & 0x100000 &$u-g <0.8, g-r < -0.05 $& 81937\
SERENDIP FIRST & 0x200000 & FIRST radio source within $1.5''$ & 14689\
SERENDIP RED & 0x400000 & $r-i > 2.7, i-z >1.6$ & 4179\
SERENDIP DISTANT & 0x800000 & $g-r>2, r-i <0$ & 11820\
![ Upper panel: the SEGUE Survey footprint in equatorial coordinates, from $360^\circ$ to $0^\circ$ (left to right) and $-26^\circ$ to $90^\circ$ (bottom to top). Selected stripes are labeled with their stripe number. The SDSS North Galactic Cap stripes are numbered from 9 to 44. Southern SDSS stripes are numbered 76, 82 and 86. SEGUE fills in Southern stripes 72 and 79. SEGUE’s constant Galactic longitude stripes are numbered with $stripe = 1000 + 2l$, where $l$ is the Galactic longitude. Each SEGUE plate pointing (usually representing a pair of 640 hole plates) is indicated with a blue circle. Lower panel: the SEGUE Survey footprint in $(l,b)$ in Aitoff projection, centered on the Galactic anticenter. The line marking the Southern limit of the telescope observing site, $\delta = -20^\circ$, is indicated in magenta. Red and green filled areas represent Southern and Northern SDSS and SEGUE stripes, respectively. \[footprintadlb\]](fig1.ps)
![ Target selection categories in SEGUE. Top panel: $(g-r)_0,(u-g)_0$ color-color diagram showing the different SEGUE target categories with different colors/symbols. Note that the ‘Low Metal’ category hugs the blue side (in $(u-g)_0$) of the stellar locus, and that a substantial fraction of F stars with redder $(u-g)_0$ and $0.2 < (g-r)_0 < 0.48$ are not targeted, except by the F/G category. Middle panel: the same as above, except that categories which use the redder $(i-z)_0,(r-i)_0$ colors are highlighted. Note the L dwarf candidate at $(i-z,r-i)_0 = (1.72,1.9)$. The proper motion selected extreme M subdwarf candidates are shown as open blue circles. Lower panel: a $(g-i)_0, H_g$ selection diagram for categories which use the USNO-B proper motion in their selection. $H_g = g + 5\rm log_{10}(\mu/1000)+5$, where $\mu$ is the proper motion in $\rm mas\>yr^{-1}$. Note the cool WD candidates (high proper motion), shown as magenta circles, and the K giant candidates (consistent with zero proper motion). \[figts\] ](fig3.ps)
![ A set of SEGUE F stars, selected to show the range of metallicities sampled by the F subdwarf, F/G, spectrophotometric standard and reddening standard categories. All 13 stars have similar effective temperatures, near 6500 K, but the strength of the Ca K line at $\lambda 3933$ indicates metallicities ranging from less than 0.001 to 1.5 times Solar. \[metalseq\]](fig7.ps)
![ The five panels show a step-by-step analysis of the SEGUE G star spectroscopic sample to isolate an interesting subpopulation. Top: $(l,v_{gsr})$ plot of 61,343 measured SEGUE G star parameters from the CAS database. A sinusoidal line, representing a rotating thick disk population, is indicated with a black curve. Subsets of stars which stand out from the black curve are candidate halo dwarf galaxy or stream structures. A feature of interest for further study is highlighted with a black mark pointing to $(l,v_{gsr}) = (170^\circ,-160\rm \>km~s^{-1})$. There are several other interesting features which will not be explored further here. Second from top: the set of stars selected in $(l,v_{gsr})$ is plotted in $(l,b)$ to localize the interesting structure in position on the sky. Middle: the magnitude distribution of these stars is quite broad, centered near $g\sim 19$ but covering in excess of a 2 mag range. They can all be at approximately the same distance only if we are sampling a steep subgiant branch population. Second from bottom: a histogram of the SSPP measured surface gravities of this sample indicates that nearly all these objects ($\rm log~g < 3.75$) are in fact subgiants rather than dwarfs. Bottom: the metallicity distribution of these objects indicates a metallicity of $\rm [M/H] = -1.4\pm 0.5$, with individual metallicity errors of about 0.3. The implied distance to these objects, if they form one population, is about 30 kpc. Their spatial location is consistent with previous studies of the Southern trailing tail of the Sagittarius tidal stream. ](fig17.ps)
[^1]: Available at [http://www.sdss.org/dr7/products/value\_added/anjohnson08\_clusterphotometry.htm]{}
---
abstract: 'We report the possibility of completely destructive interference of three indistinguishable photons on a three port device providing a generalisation of the well known Hong-Ou-Mandel interference of two indistinguishable photons on a two port device. Our analysis is based on the underlying mathematical framework of SU(3) transformations rather than SU(2) transformations. We show the completely destructive three photon interference for a large range of parameters of the three port device and point out the physical origin of such interference in terms of the contributions from different quantum paths. As each output port can deliver zero to three photons the device generates higher dimensional entanglement. In particular, different forms of entangled states of qudits can be generated depending on the device parameters. Our system is different from a symmetric three port beam splitter which does not exhibit a three photon Hong-Ou-Mandel interference.'
author:
- 'S. Mährlein'
- 'J. von Zanthier'
- 'G. S. Agarwal'
title: 'Complete three photon Hong-Ou-Mandel interference at a three port device'
---
Introduction
============
The Hong-Ou-Mandel (HOM) effect [@Hong1987; @Shih1988], i.e., the completely destructive interference of two independent but indistinguishable photons, brought a paradigm shift to the field of quantum optics. Until the demonstration of the HOM effect, the interference of independent photons was considered impossible. The effect manifests itself in the study of photon correlations rather than in intensity measurements. More specifically, if two single photons enter the two input ports of a 50/50 beam splitter, the number of coincidence events at the two output ports vanishes. This follows from the fact that if the two photons are indistinguishable with respect to wavelength and polarisation, and their wave packets overlap in time, the two quantum paths leading to a coincidence interfere destructively, so that the two photons never leave the beam splitter at different ports. If one of these parameters is changed, the photons become distinguishable and the dip in the observed coincidence rate starts to disappear.
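As a concrete illustration of the dip just described, the following sketch evaluates the coincidence probability behind a 50/50 beam splitter for two photons with Gaussian spectral amplitudes of rms bandwidth $\sigma$ arriving with relative delay $\tau$. The function `hom_coincidence` and the normalisation convention $P(\tau) = \tfrac{1}{2}\left(1 - e^{-\sigma^2\tau^2}\right)$ are one common textbook parametrisation of the HOM dip, used here only for illustration and not taken from this paper.

```python
import numpy as np

def hom_coincidence(tau, sigma):
    """Coincidence probability behind a 50/50 beam splitter for two
    photons with Gaussian spectral amplitude of rms bandwidth sigma
    and relative arrival delay tau (one common convention for the
    HOM dip: P = (1 - exp(-sigma^2 tau^2)) / 2)."""
    return 0.5 * (1.0 - np.exp(-(sigma * tau) ** 2))

# Perfect temporal overlap: the coincidence rate vanishes (the HOM dip).
print(hom_coincidence(0.0, sigma=1.0))   # -> 0.0
# Large delay: the photons are distinguishable and coincidences occur
# in half of all events, as expected for independent particles.
print(hom_coincidence(10.0, sigma=1.0))  # -> 0.5 (to numerical precision)
```

Scanning `tau` through zero reproduces the characteristic dip observed when one photon's arrival time is swept in the experiment.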
The effect is quite versatile and has been observed in a wide class of systems. Besides setups with discrete optical elements like beam splitters, it has been studied in integrated devices such as evanescently coupled waveguides [@Politi2008; @Rai2008; @Bromberg2009] and in coupled plasmonic systems [@Tame2013; @Gupta2014; @Martino2014; @Fakonas2014]. It has also been studied in the radiation from two trapped ions [@Maunz2007], atoms [@Gillet2010; @Wiegner2010; @Hofmann2012], quantum dots [@Patel2010; @Gold2014] and from two different kinds of sources [@Li2008; @Laiho2009; @Wiegner2010a].
Since the original work of HOM, the interference that takes place when the two single photons are replaced by, say, two photons at each port, or by one photon at one port and two at the other, has also been examined. These studies revealed very interesting quantum interference effects that depend on the beam splitter reflectivity [@Ou1999; @Wang2005a; @Liu2007]. Another interesting possibility occurs if $n$ photons arrive at each port of a 50/50 beam splitter - in this case the output ports never carry odd numbers of photons [@Agarwal2012].
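The odd-photon suppression for the $|n,n\rangle$ input noted above can be checked by elementary operator algebra: for a 50/50 beam splitter, $a_1^{\dagger}\to(b_1^{\dagger}+b_2^{\dagger})/\sqrt{2}$ and $a_2^{\dagger}\to(b_1^{\dagger}-b_2^{\dagger})/\sqrt{2}$, so the output state is a polynomial in $b_1^{\dagger},b_2^{\dagger}$. The sketch below expands this polynomial with standard-library Python; the bookkeeping code is ours, only the input-output relation is standard.

```python
from collections import defaultdict
from itertools import product
from math import comb

def output_occupations(n):
    """Expand ((b1 + b2)/sqrt(2))^n * ((b1 - b2)/sqrt(2))^n and return the
    output occupations (m1, m2) with nonvanishing amplitude; the overall
    1/2^n prefactor is irrelevant for which terms survive."""
    coeffs = defaultdict(int)
    for j, k in product(range(n + 1), repeat=2):
        # binomial term b1^(j+k) b2^(2n-j-k), with sign (-1)^(n-k) from (b1 - b2)^n
        coeffs[(j + k, 2 * n - j - k)] += comb(n, j) * comb(n, k) * (-1) ** (n - k)
    return sorted(m for m, c in coeffs.items() if c != 0)

print(output_occupations(2))  # -> [(0, 4), (2, 2), (4, 0)]: only even counts
```

For $n = 2$ the odd-occupation terms $(1,3)$ and $(3,1)$ cancel exactly, and the same cancellation occurs for every $n$.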
In this letter we report a three photon interference effect in the original spirit of the HOM effect: we examine the completely destructive interference of three indistinguishable photons in a three port device. We thus shift the focus from a two port to a three port device. This brings a key change to the underlying mathematical framework, as we work with SU(3) rather than SU(2) transformations. We specifically examine a three port integrated device consisting of a small array of three single mode evanescently coupled waveguides, as these are relatively easy to fabricate [@Suzuki2006; @Tanzilli2012]. Although we tailor our discussion to coupled waveguide systems, the results apply to a wide class of bosonic systems described by the Hamiltonian (\[eq:Hint\]). For the three port device we have found an analytical expression for the completely destructive three photon interference. Thereby we produce a variety of two and three photon entangled states at the output ports.
Our three port network differs from the symmetric multiport beam splitter that has been extensively studied for HOM-like interferences [@Lim2005a; @Tichy2010]; for the three port case such a splitter does not exhibit a perfect three photon HOM interference. On the other hand, Campos [@Campos2000], using the idea of Reck et al. [@Reck1994], constructed an SU(3) transformation involving beam splitters and phase shifters which leads to three photon HOM interference. Further, Tan et al. [@Tan2013] showed how an SU(3) transformation involving beam splitters and phase shifters can lead to perfect photon interference for specific values of the parameters of the SU(2) transformations. While the papers of Campos [@Campos2000] and Tan et al. [@Tan2013] concentrate on configurations using beam splitters, we investigate integrated devices. Such integrated devices are now used by many experimental groups. Studies of multi-photon interference in integrated devices have been reported, e.g., in [@Weihs1996; @Politi2008; @Bromberg2009; @Peruzzo2011; @Meany2012; @Broome2013; @Spring2013; @Tillmann2013; @Crespi2013; @Chaboyer2014]. Note that the examination of multiport systems is also important in the context of boson sampling [@Broome2013; @Spring2013; @Tillmann2013; @Crespi2013; @Tamma2014], where the output distribution of Fock input states is sampled, which is exponentially hard to predict with classical computations.
While our setup is similar to the experiment studied in [@Spagnolo2013], which relies on a 3D geometry in order to couple all three modes to each other, we use a simpler 2D structure, where the outer modes are coupled to the inner mode but not to each other. With the setup of [@Spagnolo2013] it is possible to suppress [@Tichy2010] states of the form ${\left| 2,1,0 \right\rangle}$, which contain two, one and zero photons in the different output modes, but the output state will still contain a ${\left| 1,1,1 \right\rangle}$ term corresponding to the coincidence detection of all three photons in the three different output modes. We will show that for a whole range of parameters of the 2D waveguide structure our system can suppress this coincidence event, which corresponds to the original Hong-Ou-Mandel effect [@Hong1987; @Shih1988] extended to three interfering photons.
Model and time evolution
========================
![Scheme of a $3 \times 3$ waveguide array with continuous coupling between the inner mode and the outer modes[]{data-label="fig:waveguide"}](waveguide-scheme.pdf)
The investigated system consists of a 2D $3 \times 3$ waveguide array (three input modes and three output modes) with continuous evanescent coupling between the inner mode and the outer modes (see Fig. \[fig:waveguide\]). We assume identical single mode waveguides with uniform coupling throughout the waveguide array. We also assume that the waveguide mode’s frequency is matched to the frequency of the input field. The coupling strength is given by the coupling coefficients $g_1$ (between the first and second mode) and $g_2$ (between the second and third mode) and is essentially determined by the distance between the guides. The calculation of the coupling parameters is textbook material and can be found, e.g., in [@Saleh2007]. Note that the coupling between the two outer modes is negligible for our geometry.
The interaction Hamiltonian for this system reads $${\hat{H}}_\text{int} = \hbar \left( g_1 {\hat{a}}_1 {\hat{a}^\dagger}_2 + g_2 {\hat{a}}_2 {\hat{a}^\dagger}_3 + g_1^* {\hat{a}}_2 {\hat{a}^\dagger}_1 + g_2^* {\hat{a}}_3 {\hat{a}^\dagger}_2 \right).
\label{eq:Hint}$$
Each term of Eq. (\[eq:Hint\]) describes the annihilation of a photon in one mode and the creation of a photon in a neighbouring mode, weighted by the corresponding coupling strength $g_{1/2}$. Although $g_{1}$ and $g_{2}$ are real for waveguide systems, we keep these parameters complex, since SU(3) Hamiltonians occur for many physical systems, for example $N$ identical three-level atoms interacting with external fields of different phases. We further add that the Hamiltonian we use adequately describes the investigated waveguide system, although it is not the most general SU(3) Hamiltonian. The calculations can be done for the most general case; however, we will see that the Hamiltonian in Eq. (\[eq:Hint\]) already leads to a number of very interesting interference effects while keeping the analysis simpler.
In order to analyse the time dependent evolution of the system we switch to the Heisenberg picture. To simplify the calculation we define a vector $\vec{a} = \left( {\hat{a}^\dagger}_1, \, {\hat{a}^\dagger}_2, \, {\hat{a}^\dagger}_3 \right)^T$ so that the time evolution is governed by $$\frac{\text{d}}{\text{d}t} \vec{a}(t) = \operatorname{\mathrm{i}}\underbrace{\left(\begin{matrix} 0 & g_1 & 0 \\ g^*_1 & 0 & g_2 \\ 0 & g^*_2 & 0 \end{matrix} \right)}_{M} \vec{a}(t),
\label{eq:Heisenberg2}$$ where the interaction time $t$ is determined by the length of the waveguide. The equation can easily be solved using the exponential ansatz $V = e^{-\operatorname{\mathrm{i}}M t}$, which allows us to rewrite the creation operators ${\hat{a}^\dagger}_j(0)$ at time $t=0$ in terms of the creation operators ${\hat{a}^\dagger}_j(t)$ at time $t$ via $\vec{a}(0) = V\cdot \vec{a}(t)$. The solution yields the explicit form of the matrix $V$:
$$V = \left(\begin{matrix} \cos^2 (\theta) \cos (G) + \sin^2 (\theta) & -\operatorname{\mathrm{i}}\cos (\theta) \sin (G) \, e^{\operatorname{\mathrm{i}}\psi} & \cos (\theta) \sin (\theta) e^{\operatorname{\mathrm{i}}(\varphi + \psi)} \left( \cos (G) - 1 \right) \\ -\operatorname{\mathrm{i}}\cos (\theta) \sin (G) e^{-\operatorname{\mathrm{i}}\psi} & \cos (G) & -\operatorname{\mathrm{i}}\sin (\theta) \sin (G) e^{\operatorname{\mathrm{i}}\varphi} \\ \cos (\theta) \sin (\theta) e^{-\operatorname{\mathrm{i}}(\varphi + \psi)} \left( \cos (G) - 1 \right) & -\operatorname{\mathrm{i}}\sin (\theta) \sin (G) e^{-\operatorname{\mathrm{i}}\varphi} & \sin^2 (\theta) \cos (G) + \cos^2 (\theta) \end{matrix} \right).
\label{eq:Uexpl}$$
Here $g_1 \cdot t = G \, \cos (\theta) \, e^{\operatorname{\mathrm{i}}\psi}$ and $g_2 \cdot t = G \, \sin (\theta) \, e^{\operatorname{\mathrm{i}}\varphi}$. From this expression it is easy to see that the evolution given by $V$ is periodic in $G$ and $\theta$. The periodicity in $\theta$ arises from the change to polar coordinates, while the periodicity in $G$ is linked to the interpretation of $G$ as an overall scaling factor, corresponding, for example, to variations of the interaction length of the waveguide array. Specifically, for $G=\sqrt{\left|g_1\right|^2+\left|g_2\right|^2} \cdot t=2\pi \cdot m$, with $m \in \mathbb{N}$, we get $\sin(G)=0$ and $\cos(G)=1$, so that $V$ reduces to the identity matrix. This means that the system returns to its initial state, which is also known as self-imaging or as a revival of multi mode interferences [@Heaton1999; @Poem2011]. Note that, as expected, the transformation $V$ is unitary, since we are considering a lossless device.
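These properties of $V$ can be checked numerically. The following Python sketch (not part of the original analysis; it assumes the NumPy library, and all function names are ours) builds the coupling matrix $M$ of Eq. (\[eq:Heisenberg2\]), computes $V = e^{-\operatorname{\mathrm{i}}M t}$ via the eigendecomposition of the Hermitian matrix $M$, and verifies both the unitarity of $V$ and the self-imaging condition $G = 2\pi$:

```python
import numpy as np

def coupling_matrix(g1, g2):
    # Coupling matrix M of Eq. (eq:Heisenberg2); g1, g2 may be complex
    return np.array([[0, g1, 0],
                     [np.conj(g1), 0, g2],
                     [0, np.conj(g2), 0]])

def transfer_matrix(g1, g2, t):
    # V = exp(-i M t), via the eigendecomposition of the Hermitian matrix M
    w, U = np.linalg.eigh(coupling_matrix(g1, g2))
    return U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T

# Unitarity of V (lossless device)
V = transfer_matrix(1.0, 0.5, 1.0)
assert np.allclose(V @ V.conj().T, np.eye(3))

# Self-imaging: G = sqrt(|g1|^2 + |g2|^2) * t = 2*pi gives V = identity
g1, g2 = 1.0, 0.5
t = 2 * np.pi / np.sqrt(abs(g1) ** 2 + abs(g2) ** 2)
assert np.allclose(transfer_matrix(g1, g2, t), np.eye(3))
```

Since $M$ is Hermitian, the eigendecomposition route avoids any dedicated matrix-exponential routine.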
![Three examples for different indistinguishable quantum paths leading to the same output state ${\left| 300 \right\rangle}$ in (a), ${\left| 021 \right\rangle}$ in (b) and ${\left| 111 \right\rangle}$ in (c)[]{data-label="fig:qpaths"}](quantumpaths.pdf){width="45.00000%"}
Three single photon input
=========================
Next we focus on an input state where at each input port a single photon is coupled into the waveguide at $t=0$. The wave function of this state is given by ${\left| \psi_\text{in} \right\rangle} = {\hat{a}^\dagger}_1 (0) {\hat{a}^\dagger}_2 (0) {\hat{a}^\dagger}_3 (0) {\left| 0 \right\rangle}$. Via the transformation matrix $V$ we can easily calculate the general form of the output state $${\left| \psi_\text{out} \right\rangle} =\sum_{l=1}^{3} V_{1l} {\hat{a}^\dagger}_l(t) \cdot \sum_{m=1}^{3} V_{2m} {\hat{a}^\dagger}_m(t) \cdot \sum_{n=1}^{3} V_{3n} {\hat{a}^\dagger}_n(t) {\left| 0 \right\rangle},
\label{eq:Ouput}$$ where $V_{mn}$ is the matrix element in the $m$th row and the $n$th column of $V$ in Eq. (\[eq:Uexpl\]). As the transformation matrix $V$ in Eq. (\[eq:Ouput\]) acts only on the creation operators of the three different modes, the vacuum state is not transformed by the time evolution governed by the Hamiltonian of Eq. (\[eq:Hint\]). The general output state is a superposition of all possible distributions of the three photons among the three output modes, with the respective coefficients depending on the explicit form of $V$. The general form of the output state reads $$\begin{split}
{\left| \psi_\text{out} \right\rangle} = &c_{300} {\left| 3,0,0 \right\rangle} + c_{030} {\left| 0,3,0 \right\rangle} +\ldots \\ + &c_{210} {\left| 2,1,0 \right\rangle} + \ldots + c_{111} {\left| 1,1,1 \right\rangle},
\label{eq:GenOuput}
\end{split}$$ where the corresponding coefficients can be calculated by explicitly expanding Eq. (\[eq:Ouput\]) or, alternatively, by using a formalism for linear optical networks involving permanents [@Scheel2004].
The permanent of a matrix is defined like the determinant, but without the signs of the permutations taken into account. For an $N\times N$ matrix $A$ it is given by $$\operatorname{Perm}A = \sum_{\sigma} \prod_{j=1}^N A_{j\sigma(j)},
\label{eq:permanent}$$ where the sum runs over all possible permutations $\sigma$ of the set $\{1,2,\ldots,N \}$.
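As a minimal illustration (our own, not part of the original text), Eq. (\[eq:permanent\]) can be evaluated by brute force, summing over all $N!$ permutations; for large $N$ one would use Ryser's formula instead. The Python sketch below (NumPy assumed) contrasts the permanent with the determinant for the all-ones matrix:

```python
from itertools import permutations
import numpy as np

def permanent(A):
    # Sum over all permutations of products of matrix elements,
    # i.e. the determinant formula without the permutation signs
    A = np.asarray(A)
    n = A.shape[0]
    return sum(np.prod([A[j, s[j]] for j in range(n)])
               for s in permutations(range(n)))

J = np.ones((3, 3))
assert permanent(J) == 6                # 3! identical terms, each equal to 1
assert np.isclose(np.linalg.det(J), 0)  # the determinant vanishes instead
```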
The coefficients $c_{klm}$ of Eq. (\[eq:GenOuput\]) can be expressed by permanents of matrices $V^{\{k,l,m\}}$, where $k$, $l$ and $m$ are the number of photons in the three output modes so that $k+l+m=3$. Hereby $V^{\{k,l,m\}}$ is a $3 \times 3$ matrix and is constructed via the transformation matrix of Eq. (\[eq:Uexpl\]). It consists of $k$ copies of the first column of $V$, $l$ copies of the second column of $V$ and $m$ copies of the third column of $V$ [@Scheel2004]. Dividing the permanent of $V^{\{k,l,m\}}$ by a normalisation factor yields the final expression for the coefficients $$c_{klm}= \frac{\operatorname{Perm}V^{\{k,l,m\}}}{\sqrt{k!l!m!}}.
\label{eq:c_klm}$$ One can show that the absolute value of these coefficients depends only on $G$ and $\theta$ but not on the phases $\psi$ and $\varphi$ of the coupling coefficients $g_{1/2}$ which will only have an impact on the phases of the coefficients $c_{klm}$.
Note that Eq. (\[eq:c\_klm\]) only holds true for the input state ${\left| 1,1,1 \right\rangle}$. However, as shown in [@Scheel2004], the formalism involving permanents can be expanded to arbitrary initial states by additional consideration of the rows of the transformation matrix corresponding to the input state.
The permanents (up to the normalisation factor) can be understood as the coherent superposition of indistinguishable quantum paths leading to the same output state. To illustrate this we consider three examples in the following, starting with the coefficient $c_{300}$. To calculate this coefficient, associated with the state ${\left| 300 \right\rangle}$, one first has to construct the matrix $V^{\{300\}}$ containing three copies of the first column of the transformation matrix $V$. It therefore reads $$V^{\{3,0,0\}} = \left(\begin{matrix} V_{11} & V_{11} & V_{11} \\ V_{21} & V_{21} & V_{21} \\ V_{31} & V_{31} & V_{31} \end{matrix} \right),
\label{eq:V300}$$ where $V_{nm}$ are the corresponding matrix elements of $V$ in Eq. (\[eq:Uexpl\]) determined by the transition amplitude for a photon initially in mode $n$ to exit in mode $m$. From Eq. (\[eq:V300\]) the permanent of $V^{\{300\}}$ can be calculated yielding $$\operatorname{Perm}V^{\{3,0,0\}} = 6 V_{11} V_{21} V_{31}.
\label{eq:perm300}$$ This expression corresponds to the only possible quantum path, in which a photon in the first mode stays in the first mode while the photons initially in the second and third mode also exit the waveguide array in the first mode, as illustrated in Fig. \[fig:qpaths\](a). As only a single quantum path contributes to this coefficient, no interference occurs in this case.
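Both the single-path result of Eq. (\[eq:perm300\]) and the normalisation of the output state can be verified numerically. The following Python sketch (NumPy assumed; all function names are ours, and NumPy indices are 0-based) implements the column-copy construction of Eq. (\[eq:c\_klm\]) with a brute-force permanent:

```python
from itertools import permutations
from math import factorial
import numpy as np

def permanent(A):
    # Brute-force permanent: determinant formula without permutation signs
    n = len(A)
    return sum(np.prod([A[j, s[j]] for j in range(n)])
               for s in permutations(range(n)))

def transfer_matrix(g1, g2, t=1.0):
    # V = exp(-i M t) for the coupling matrix M of Eq. (eq:Heisenberg2)
    M = np.array([[0, g1, 0], [np.conj(g1), 0, g2], [0, np.conj(g2), 0]])
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T

V = transfer_matrix(0.9, 0.4)

# Single quantum path: Perm V^{3,0,0} = 6 V_11 V_21 V_31
V300 = V[:, [0, 0, 0]]          # three copies of the first column of V
assert np.isclose(permanent(V300), 6 * V[0, 0] * V[1, 0] * V[2, 0])

# Unit total probability: the |c_klm|^2 over all k + l + m = 3 sum to one
total = 0.0
for k in range(4):
    for l in range(4 - k):
        m = 3 - k - l
        cols = [0] * k + [1] * l + [2] * m
        c = permanent(V[:, cols]) / np.sqrt(factorial(k) * factorial(l) * factorial(m))
        total += abs(c) ** 2
assert np.isclose(total, 1.0)
```

The final check confirms that Eq. (\[eq:c\_klm\]) yields a correctly normalised output state for the ${\left| 1,1,1 \right\rangle}$ input.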
The calculation of any other coefficient like, e.g., $c_{021}$ for the state ${\left| 0,2,1 \right\rangle}$, follows the same structure: First one constructs the matrix $V^{\{0,2,1\}}$ by taking two copies of the second column and one copy of the third column of $V$, $$V^{\{0,2,1\}} = \left(\begin{matrix} V_{12} & V_{12} & V_{13} \\ V_{22} & V_{22} & V_{23} \\ V_{32} & V_{32} & V_{33} \end{matrix} \right)
\label{eq:V021}$$ and then calculates its permanent, yielding in this case $$\begin{split}
\operatorname{Perm}V^{\{0,2,1\}} = 2 \big( &V_{12} V_{22} V_{33} + V_{12} V_{23} V_{32} \\ + &V_{13} V_{22} V_{32} \big).
\label{eq:perm021}
\end{split}$$ Now three different indistinguishable quantum paths appear which have to be added coherently, as shown in Fig. \[fig:qpaths\](b), which leads to interference effects. The three quantum paths correspond to the three possibilities for how three photons entering the waveguide array in three different modes can exit the array in the output state ${\left| 021 \right\rangle}$.
![Plot of the three photon coincidence probability ${\left\lvert {c_{111}} \right\rvert}^2$ and the HOM contour, on which the condition ${\left\lvert {c_{111}} \right\rvert}^2=0$ is fulfilled.[]{data-label="fig:HOMcontour"}](c111.pdf)
As a last example we calculate the coefficient $c_{111}$ corresponding to a three mode coincidence event. As the matrix $V^{\{111\}}$ consists of one copy of each column, it is equal to $V$, and its permanent is calculated to be $$\begin{split}
\operatorname{Perm}V^{\{1,1,1\}} = &V_{11}V_{22}V_{33}+V_{12}V_{23}V_{31}+V_{13}V_{21}V_{32} \\ + &V_{11}V_{23}V_{32}+V_{12}V_{21}V_{33}+V_{13}V_{22}V_{31} \, .
\end{split}
\label{eq:perm111}$$ It consists of six different terms, each corresponding to a different indistinguishable quantum path leading to the same final state ${\left| 1,1,1 \right\rangle}$. For example, the first term corresponds to the case where all three photons exit the waveguide in the same mode they entered, while the second term corresponds to the case where the photon in the first/second/third mode switches to the second/third/first mode, etc., as depicted in Fig. \[fig:qpaths\](c). As in the second example, the multiple indistinguishable quantum paths can interfere with each other. In particular, they can interfere in a completely destructive way. This configuration is analysed in the next section.
Three photon Hong-Ou-Mandel interference
========================================
In the following we investigate the particular situation displaying a three photon Hong-Ou-Mandel (HOM) interference. In analogy to the original two photon Hong-Ou-Mandel experiment [@Hong1987; @Shih1988] where two photons are never detected simultaneously at the two different output modes of a 50/50 beam splitter, the probability for all three photons leaving the waveguide at the three output ports vanishes if $$c_{111} \overset{!}{=} 0.
\label{eq:c111HOM}$$ To analyse the conditions for the three photon HOM interference, we have to calculate $c_{111}$ explicitly. From Eqs. (\[eq:c\_klm\]) and (\[eq:perm111\]) we find $$\begin{split}
c_{111} = &V_{11}V_{22}V_{33}+V_{12}V_{23}V_{31}+V_{13}V_{21}V_{32} \\ + &V_{11}V_{23}V_{32}+V_{12}V_{21}V_{33}+V_{13}V_{22}V_{31}.
\end{split}
\label{eq:c111}$$ By inserting the expression for the various matrix elements $V_{mn}$ of Eq. (\[eq:Uexpl\]) and solving Eq. (\[eq:c111HOM\]) we can find an analytical expression for the HOM contour in the variable space ($G$, $\theta$), where all states ${\left| \psi_\text{out}(G,\theta) \right\rangle}$ have a vanishing $c_{111}$ coefficient:
$${\textstyle \theta (G) = n\pi \pm \operatorname{arcsec}\left[ 4\bigg/\sqrt{8 \pm \frac{\sqrt{2} \csc ^4\left(\frac{G}{2}\right) \sqrt{\sin ^4\left(\frac{G}{2}\right) (20 \cos (G)+3 (8 \cos (2 G)+4 \cos (3 G)+3 \cos (4 G)+5))}}{3 \cos (G)+2}}\right]}
\label{eq:thetaG}$$
As can be seen from Eq. (\[eq:thetaG\]), completely destructive three photon HOM interference can take place for a large range of the parameters $g_1$ and $g_2$. Note that for some values of $G$ Eq. (\[eq:thetaG\]) would result in a complex-valued $\theta$; such values are therefore not solutions. Fig. \[fig:HOMcontour\] shows a plot of the HOM contour, displaying in particular the periodicity in $G$ and $\theta$ discussed in Sect. 2.
Interesting states on the Hong-Ou-Mandel contour
================================================
Finally we investigate the states determined by the HOM contour. In the original HOM experiment a maximally entangled state of the form $\propto {\left| 2,0 \right\rangle} - {\left| 0,2 \right\rangle}$ is produced at the output [@Hong1987; @Shih1988]. Similar states can be found in the case of a three photon interference. In addition to the condition $c_{111}=0$, we find that at certain points some further coefficients $c_{klm}$ of Eq. (\[eq:GenOuput\]) vanish as well, so that additional terms are suppressed. Other coefficients have the same absolute value, so that the states can be written in a compact form. We found three different kinds of states fulfilling this condition, which display entanglement between two and possibly three output modes.
In Fig. \[fig:sysent\](a), where the HOM contour is shown, some coordinates are marked by points where one can find maximally bipartite entangled states. A closer investigation yields that for all these coordinates we find states of the form $${\left| \psi_\text{out} \right\rangle} = \frac{1}{\sqrt{2}} \Bigl( {\left| 2_j,0_k \right\rangle} + {\left| 0_j,2_k \right\rangle} \Bigr) {\left| 1_l \right\rangle}
\label{eq:ent2}$$ with $j=1$, $k=3$ and $l=2$ at the red crosses, with $j=1$, $k=2$ and $l=3$ at the green dots and with $j=2$, $k=3$ and $l=1$ at the blue diamonds. Note that we neglected the global and relative phase factors of each state, as we just want to focus on the structure of the states (see the appendix for the exact analytical expressions for each coefficient and the corresponding coordinates). All three states have a similar form in which one mode, containing one photon, is separable, while the remaining two modes are in a maximally entangled state. Depending on the phases $\psi$ and $\varphi$ of the coupling coefficients $g_{1/2}$, the relative phase between the two components of the entangled state can be varied. A closer look at the transformation matrix $V$ of Eq. (\[eq:Uexpl\]) reveals that in these cases the absolute values of the matrix elements are given by $\left|V_{ll}\right|=1$, $\left|V_{jk}\right|=\left|V_{kj}\right|=\left|V_{jj}\right|=\left|V_{kk}\right|=\frac{1}{\sqrt{2}}$, while all remaining matrix elements vanish. This means that the mode $l$ is decoupled from the system, as the photon initially entering the waveguide array in this mode will always exit the waveguide array in the same mode. The modes $k$ and $j$, however, are mixed with a 50/50 ratio corresponding to the original two photon HOM interference effect. This can also be seen from Eq. (\[eq:c111\]), where under the same conditions only two of the six quantum paths survive and interfere completely destructively. For $l = 2$, the corresponding quantum paths are the two paths at the extreme left and the extreme right of Fig. \[fig:qpaths\](c). At the coordinates of the green dots (blue diamonds) it is evident that the outer mode $l=3$ ($l=1$) is physically decoupled, since the coupling parameter $g_2$ ($g_1$) vanishes.
At the red crosses, however, both coupling constants $g_1$ and $g_2$ are physically present, yet the inner mode $l=2$ is effectively decoupled from the system and merely acts as a mediator between the outer modes. It is therefore more appropriate to speak of a two photon interference at these coordinates, although three photons are present. The physical decoupling of modes has also been discussed by Campos [@Campos2000] in the context of beam splitter devices. Note that all the discussed states are created in a deterministic way, so that no post-selection is required.
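The structure of the state of Eq. (\[eq:ent2\]) at the red crosses can likewise be confirmed numerically. The following Python sketch (our own illustration; NumPy assumed, phases $\psi=\varphi=0$, $t=1$) computes all coefficients $c_{klm}$ at $G=\pi$, $\theta=\pi/8$ and verifies that only ${\left| 2,1,0 \right\rangle}$ and ${\left| 0,1,2 \right\rangle}$ survive, each with probability $1/2$:

```python
from itertools import permutations
from math import factorial
import numpy as np

def permanent(A):
    n = len(A)
    return sum(np.prod([A[j, s[j]] for j in range(n)])
               for s in permutations(range(n)))

def output_amplitudes(G, theta):
    # All coefficients c_klm of Eq. (eq:GenOuput) for the |1,1,1> input
    # (phases psi = phi = 0, interaction time t = 1)
    g1, g2 = G * np.cos(theta), G * np.sin(theta)
    M = np.array([[0, g1, 0], [g1, 0, g2], [0, g2, 0]])
    w, U = np.linalg.eigh(M)
    V = U @ np.diag(np.exp(-1j * w)) @ U.conj().T
    amps = {}
    for k in range(4):
        for l in range(4 - k):
            m = 3 - k - l
            cols = [0] * k + [1] * l + [2] * m
            amps[(k, l, m)] = (permanent(V[:, cols])
                               / np.sqrt(factorial(k) * factorial(l) * factorial(m)))
    return amps

# Red crosses: G = pi, theta = pi/8; only |2,1,0> and |0,1,2> survive
amps = output_amplitudes(np.pi, np.pi / 8)
assert np.isclose(abs(amps[(2, 1, 0)]), 1 / np.sqrt(2))
assert np.isclose(abs(amps[(0, 1, 2)]), 1 / np.sqrt(2))
others = sum(abs(a) for key, a in amps.items()
             if key not in {(2, 1, 0), (0, 1, 2)})
assert others < 1e-12
```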
In Figs. \[fig:sysent\](b) and \[fig:sysent\](c) the coordinates of possible tripartite entangled states along the HOM contour are displayed. Note that we have three modes and each mode has four possible states corresponding to the occupation of 0, 1, 2 and 3 photons. We are thus dealing with higher dimensional entanglement of three qudits ($d=4$). A complete classification of the classes of entangled states for qudits ($d=4$) does not exist. However the structure of tripartite states generated for the case of Figs. \[fig:sysent\](b) and \[fig:sysent\](c) suggests three qudit ($d=4$) entanglement. The form of the states of Fig. \[fig:sysent\](b) and their coordinates read $$\begin{split}
{\left| \psi_\text{out} \right\rangle}= \frac{\sqrt{3}}{4} &\Bigl({\left| 3_j,0_k \right\rangle} + {\left| 0_j,3_k \right\rangle} \Bigr) {\left| 0_l \right\rangle} \\
+ \frac{1}{4} &\Bigl( {\left| 2_j,1_k \right\rangle} + {\left| 1_j,2_k \right\rangle} \Bigr) {\left| 0_l \right\rangle} \\
+ \frac{1}{2} &\Bigl( {\left| 1_j,0_k \right\rangle} + {\left| 0_j,1_k \right\rangle} \Bigr) {\left| 2_l \right\rangle}
\end{split}
\label{eq:ent31}$$ with $j=2$, $k=3$ and $l=1$ at the red crosses, with $j=1$, $k=3$ and $l=2$ at the green dots and with $j=1$, $k=2$ and $l=3$ at the blue diamonds. The form of the wave function of the states of Fig. \[fig:sysent\](c) and their coordinates are given by $$\begin{split}
{\left| \psi_\text{out} \right\rangle} = \frac{1}{3\sqrt{2}} &\Bigl( 2 {\left| 3_j,0_k,0_l \right\rangle} + {\left| 0_j,3_k,0_l \right\rangle} + {\left| 0_j,0_k,3_l \right\rangle} \Bigr) \\
+ \frac{1}{\sqrt{6}} &\Bigl( {\left| 1_j,0_l \right\rangle} + {\left| 0_j,1_l \right\rangle} \Bigr) {\left| 2_k \right\rangle} \\
+ \frac{1}{\sqrt{6}} &\Bigl({\left| 1_j,0_k \right\rangle} + {\left| 0_j,1_k \right\rangle} \Bigr) {\left| 2_l \right\rangle}
\end{split}
\label{eq:ent32}$$ with $j=1$, $k=2$ and $l=3$ at the red crosses, with $j=2$, $k=1$ and $l=3$ at the green dots and with $j=3$, $k=1$ and $l=2$ at the blue diamonds. As before we neglected the global and relative phase factors of each state for simplicity (see the appendix for the exact analytical expressions for each coefficient and the corresponding coordinates). We have written the states in such a way that a certain mode is always factored out in each term, so that the entanglement between the two remaining modes is clearly visible; this suggests that the states are not separable and are good candidates for tripartite entanglement.
Conclusion
==========
In conclusion, we investigated the dynamics of a 2D $3\times 3$ waveguide array in which the outer modes are evanescently coupled to the inner mode but not to each other. Starting with three indistinguishable single photons at the three input ports, we showed that for a wide range of waveguide parameters this leads to completely destructive three photon interference, i.e., for these parameters the photons will never leave the waveguide in three separate ports. This is a generalisation of the well-known Hong-Ou-Mandel effect from two to three photons. Additionally, the produced output states, consisting of three qudits ($d=4$), exhibit highly interesting structures displaying bipartite or possibly tripartite entanglement. Clearly the entanglement of qudits is a topic for future studies.
[99]{}
C. K. Hong, Z. Y. Ou, and L. Mandel, “Measurement of subpicosecond time intervals between two photons by interference,” , 2044 (1987).
Y. H. Shih and C. O. Alley, “New type of Einstein-Podolsky-Rosen-Bohm experiment using pairs of light quanta produced by optical parametric down conversion,” , 2921 (1988).
A. Politi, M. J. Cryan, J. G. Rarity, S. Yu, and J. L. O’Brien, “Silica-on-silicon waveguide quantum circuits,” Science [**320**]{}, 646 (2008).
A. Rai, G. S. Agarwal, and J. H. H. Perk, “Transport and quantum walk of nonclassical light in coupled waveguides,” , 042304 (2008).
Y. Bromberg, Y. Lahini, R. Morandotti, and Y. Silberberg, “Quantum and classical correlations in waveguide lattices,” , 253904 (2009).
M. S. Tame, K. R. McEnery, S. K. Ozdemir, J. Lee, S. A. Maier, and M. S. Kim, “Quantum plasmonics,” Nature Physics [**9**]{}, 329 (2013).
S. D. Gupta and G. S. Agarwal, “Two-photon quantum interference in plasmonics: theory and applications,” , 390 (2014).
G. Di Martino, Y. Sonnefraud, M. S. Tame, S. Kéna-Cohen, F. Dieleman, S. K. Özdemir, M. S. Kim, and S. A. Maier, “Observation of quantum interference in the plasmonic Hong-Ou-Mandel effect,” Phys. Rev. Applied [**1**]{}, 034004 (2014).
J. S. Fakonas, H. Lee, Y. A. Kelaita, and H. A. Atwater, “Two-plasmon quantum interference,” Nature Photonics [**8**]{}, 317 (2014).
P. Maunz, D. L. Moehring, S. Olmschenk, K. C. Younge, D. N. Matsukevich, and C. Monroe, “Quantum interference of photon pairs from two remote trapped atomic ions,” Nature Physics [**3**]{}, 538 (2007).
J. Gillet, G. S. Agarwal, and T. Bastin, “Tunable entanglement, antibunching, and saturation effects in dipole blockade,” , 013837 (2010).
R. Wiegner, C. Thiel, J. von Zanthier, and G. S. Agarwal, “Creating path entanglement and violating Bell inequalities by independent photon sources,” , 3405 (2010).
J. Hofmann, M. Krug, N. Ortegel, L. Gerard, M. Weber, W. Rosenfeld, and H. Weinfurter, “Heralded entanglement between widely separated atoms,” Science [**337**]{}, 72 (2012).
R. B. Patel, A. J. Bennett, I. Farrer, C. A. Nicoll, D. A. Ritchie, and A. J. Shields, “Two-photon interference of the emission from electrically tunable remote quantum dots,” Nature Photonics [**4**]{}, 632 (2010).
P. Gold, A. Thoma, S. Maier, S. Reitzenstein, C. Schneider, S. Höfling, and M. Kamp, “Two-photon interference from remote quantum dots with inhomogeneously broadened linewidths,” , 035313 (2014).
X. Li, L. Yang, L. Cui, Z. Y. Ou, and D. Yu, “Observation of quantum interference between a single-photon state and a thermal state generated in optical fibers,” (17), 12505 (2008).
K. Laiho, K. N. Cassemiro, and Ch. Silberhorn, “Producing high fidelity single photons with optimal brightness via waveguided parametric down-conversion,” (25), 22823 (2009).
R. Wiegner, J. von Zanthier, and G. S. Agarwal, “Quantum interference and non-locality of independent photons from disparate sources,” J. Phys. B [**44**]{}, 055501 (2011).
Z. Y. Ou, J.-K. Rhee, and L. J. Wang, “Observation of four-photon interference with a beam splitter by pulsed parametric down-conversion,” , 959 (1999).
H. Wang and T. Kobayashi, “Phase measurement at the Heisenberg limit with three photons,” , 021802 (2005).
B. H. Liu, F. W. Sun, Y. X. Gong, Y. F. Huang, G. C. Guo, and Z. Y. Ou, “Four-photon interference with asymmetric beam splitters,” (10), 1320 (2007).
G. S. Agarwal, [*Quantum Optics*]{} (Cambridge University Press, 2012), Sec. (5.7).
K. Suzuki, V. Sharma, J. G. Fujimoto, E. P. Ippen, and Y. Nasu, “Characterization of symmetric \[3x3\] directional couplers fabricated by direct writing with a femtosecond laser oscillator,” (6), 2335 (2006).
S. Tanzilli, A. Martin, F. Kaiser, M. P. De Micheli, O. Alibart, and D. B. Ostrowsky, “On the genesis and evolution of integrated quantum optics,” Laser & Photon. Rev. [**6**]{}, 115 (2012).
Y. L. Lim and A. Beige, “Generalized Hong-Ou-Mandel experiments with bosons and fermions,” New J. Phys. [**7**]{}, 155 (2005).
M. C. Tichy, M. Tiersch, F. De Melo, F. Mintert, and A. Buchleitner, “Zero-transmission law for multiport beam splitters,” , 220405 (2010).
R. Campos, “Three-photon Hong-Ou-Mandel interference at a multiport mixer,” , 013809 (2000).
M. Reck, A. Zeilinger, H. J. Bernstein, and P. Bertani, “Experimental realization of any discrete unitary operator,” , 58 (1994).
S.-H. Tan, Y. Y. Gao, H. De Guise, and B. C. Sanders, “SU(3) quantum interferometry with single-photon input pulses,” , 113603 (2013).
G. Weihs, M. Reck, H. Weinfurter, and A. Zeilinger, “Two-photon interference in optical fiber multiports,” , 893 (1996).
A. Peruzzo, A. Laing, A. Politi, T. Rudolph, and J. L. O’Brien, “Multimode quantum interference of photons in multiport integrated devices,” Nature Communications [**2**]{}, 224 (2011).
T. Meany, M. Delanty, S. Gross, G. D. Marshall, M. J. Steel, and M. J. Withford, “Non-classical interference in integrated 3D multiports,” (24), 26895 (2012).
M. A. Broome, A. Fedrizzi, S. Rahimi-Keshari, J. Dove, S. Aaronson, T. C. Ralph, and A. G. White, “Photonic boson sampling in a tunable circuit,” Science [**339**]{}, 794 (2013).
J. B. Spring, B. J. Metcalf, P. C. Humphreys, W. S. Kolthammer, X.-M. Jin, M. Barbieri, A. Datta, N. Thomas-Peter, N. K. Langford, D. Kundys, J. C. Gates, B. J. Smith, P. G. R. Smith, and I. A. Walmsley, “Boson sampling on a photonic chip,” Science [**339**]{}, 798 (2013).
M. Tillmann, B. Dakić, R. Heilmann, S. Nolte, A. Szameit, and P. Walther, “Experimental boson sampling,” Nature Photonics [**7**]{}, 540 (2013).
A. Crespi, R. Osellame, R. Ramponi, J. B. Daniel, E. F. Galvão, N. Spagnolo, C. Vitelli, E. Maiorino, P. Mataloni, and F. Sciarrino, “Integrated multimode interferometers with arbitrary designs for photonic boson sampling,” Nature Photonics [**7**]{}, 545 (2013).
Z. Chaboyer, T. Meany, L. G. Helt, M. J. Withford, and M. J. Steel, “Tuneable quantum interference in a 3D integrated circuit,” arXiv:1409.4908 \[quant-ph\] (2014).
V. Tamma and S. Laibacher, “Multiboson correlation interferometry with arbitrary single-photon pure states,” arXiv:1410.8121 \[quant-ph\] (2014).
N. Spagnolo, C. Vitelli, L. Aparo, P. Mataloni, F. Sciarrino, A. Crespi, R. Ramponi, and R. Osellame, “Three-photon bosonic coalescence in an integrated tritter,” Nature Communications [**4**]{}, 1606 (2013).
B. E. A. Saleh and M. C. Teich, [*Fundamentals of Photonics*]{}, 2nd edition (Wiley, 2007), Sec. (8.5).
J. M. Heaton and R. M. Jenkins, “General matrix theory of self-imaging in multimode interference (MMI) couplers,” IEEE Photonics Technol. Lett. [**11**]{}, 212 (1999).
E. Poem and Y. Silberberg, “Photon correlations in multimode waveguides,” , 041805(R) (2011).
S. Scheel, “Permanents in linear optical networks,” arXiv:quant-ph/0406127 (2004).
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors gratefully acknowledge funding by the Erlangen Graduate School in Advanced Optical Technologies (SAOT) by the German Research Foundation (Deutsche Forschungsgemeinschaft DFG) in the framework of the German excellence initiative. We acknowledge support by the German Research Foundation (Deutsche Forschungsgemeinschaft DFG) and by the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) within the funding programme Open Access Publishing. We thank H. de Guise, T. Meany, V. Tamma, M. Tichy for bringing their works to our attention. S. M. gratefully acknowledges the hospitality at the Oklahoma State University. G. S. A. thanks I. Walmsley and P. Mataloni for discussions on HOM interference in integrated devices.
Appendix {#appendix .unnumbered}
========
For certain waveguide parameters $G$ and $\theta$ we found three interesting sets of states on the HOM contour for which the coincidence event is suppressed and therefore $c_{111}$ vanishes. However, only the general form of these states was discussed, not the explicit values of the coefficients. Here we present their analytical values: Tables \[tab:ent2-1\] to \[tab:ent32-3\] contain the exact analytical expressions for the coefficients of Eqs. (\[eq:ent2\]) to (\[eq:ent32\]). Note that in Eqs. (\[eq:ent31\]) and (\[eq:ent32\]) some signs depend on the value of $n \bmod 4$. Therefore, in the second part of Table \[tab:ent31\] and in Table \[tab:ent32-2\], $n$ is replaced by $4\tilde{n}+0$, $4\tilde{n}+1$, $4\tilde{n}+2$, or $4\tilde{n}+3$, so that all possible cases are covered.
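As a plausibility check (ours, not part of the original text), the squared moduli of the coefficients in each table column should sum to one, since every column lists a normalized state; all phase factors $e^{\operatorname{\mathrm{i}}(\dots)}$ and signs drop out of $|c|^2$. A minimal sketch in Python, with the moduli read off from representative columns (the column labels are our own):

```python
# Hypothetical sanity check: the squared moduli of the coefficients in each
# table column must sum to one for a normalized state. Moduli are read off
# from representative columns of the tables; labels are ours.

from math import sqrt

columns = {
    # Two-term state: |c_210| = |c_012| = 1/sqrt(2)
    "tab:ent2, col 1": [1 / sqrt(2), 1 / sqrt(2)],
    # Six-term state: sqrt(3)/4, sqrt(3)/4, 1/2, 1/2, 1/4, 1/4
    "tab:ent31-like, col 1": [sqrt(3) / 4, sqrt(3) / 4, 1 / 2, 1 / 2, 1 / 4, 1 / 4],
    # Seven-term state: sqrt(2)/3, two entries 1/(3 sqrt(2)), four entries 1/sqrt(6)
    "tab:ent32-like, col 1": [sqrt(2) / 3, 1 / (3 * sqrt(2)), 1 / (3 * sqrt(2))]
    + 4 * [1 / sqrt(6)],
}

for label, moduli in columns.items():
    norm = sum(c * c for c in moduli)
    print(f"{label}: sum |c|^2 = {norm:.12f}")
    assert abs(norm - 1.0) < 1e-12
```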
------------------- ------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------ --
$G$ $\pi \left( 2m+1 \right)$ $\frac{\pi}{4} \left( 2m+1 \right)$ $\frac{\pi}{4} \left( 2m+1 \right)$
\[3pt\] $\theta$ $\frac{\pi}{8} \left( 2n+1 \right)$ $\pi n$ $\frac{\pi}{2} \left( 2n+1 \right)$
\[3pt\] $c_{210}$ $\frac{(-1)^{n+1}}{\sqrt{2}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ 0 0
\[3pt\] $c_{012}$ $\frac{(-1)^{n}}{\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ 0 0
\[3pt\] $c_{201}$ 0 $\frac{\operatorname{\mathrm{i}}(-1)^{n+m}}{\sqrt{2}} e^{-\operatorname{\mathrm{i}}\psi}$ 0
\[3pt\] $c_{021}$ 0 $\frac{\operatorname{\mathrm{i}}(-1)^{n+m}}{\sqrt{2}} e^{\operatorname{\mathrm{i}}\psi}$ 0
\[3pt\] $c_{120}$ 0 0 $\frac{\operatorname{\mathrm{i}}(-1)^{n+m+1}}{\sqrt{2}} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{102}$ 0 0 $\frac{\operatorname{\mathrm{i}}(-1)^{n+m+1}}{\sqrt{2}} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\]
------------------- ------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------ --
------------------- --------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------
$G$ $\pi \left( 2m+1-\frac{1}{3}\right)$ $\pi \left( 2m+1+\frac{1}{3}\right)$ $\pi \left( 2m+1-\frac{1}{3}\right)$ $\pi \left( 2m+1+\frac{1}{3}\right)$
\[3pt\] $\theta$
\[3pt\] $c_{030}$   $\frac{\sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$   $\frac{\sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$   $-\frac{\sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$   $-\frac{\sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[2pt\] $c_{003}$ $\operatorname{\mathrm{i}}\frac{(-1)^n \sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$ $\operatorname{\mathrm{i}}\frac{(-1)^n \sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n \sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{210}$ $\frac{1}{2} e^{-\operatorname{\mathrm{i}}(\psi+\varphi)}$ $\frac{1}{2} e^{-\operatorname{\mathrm{i}}(\psi+\varphi)}$ $-\frac{1}{2} e^{-\operatorname{\mathrm{i}}(\psi+\varphi)}$ $-\frac{1}{2} e^{-\operatorname{\mathrm{i}}(\psi+\varphi)}$
\[3pt\] $c_{201}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{2} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{2} e^{-\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{2} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{2} e^{-\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{021}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{4} e^{\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{4} e^{\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{4} e^{\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{4} e^{\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{012}$ $\frac{(-1)^n}{4} e^{\operatorname{\mathrm{i}}(\psi+\varphi)}$ $\frac{(-1)^n}{4} e^{\operatorname{\mathrm{i}}(\psi+\varphi)}$ $-\frac{(-1)^n}{4} e^{\operatorname{\mathrm{i}}(\psi+\varphi)}$ $-\frac{(-1)^n}{4} e^{\operatorname{\mathrm{i}}(\psi+\varphi)}$
\[3pt\] $G$
\[3pt\] $\theta$ $\frac{\pi}{4} \left( 2(4\tilde{n}+0)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+1)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+2)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+3)+1 \right)$
\[3pt\] $c_{300}$ $\operatorname{\mathrm{i}}\frac{(-1)^m\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{(-1)^m\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{003}$ $\operatorname{\mathrm{i}}\frac{(-1)^m\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(\psi+2\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(\psi+2\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(\psi+2\varphi)}$ $\operatorname{\mathrm{i}}\frac{(-1)^m\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{201}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m}{4} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^m}{4} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^m}{4} e^{-\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m}{4} e^{-\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{120}$ $\operatorname{\mathrm{i}}\frac{(-1)^m}{2} e^{-\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^m}{2} e^{-\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m}{2} e^{-\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m}{2} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{021}$ $\operatorname{\mathrm{i}}\frac{(-1)^m}{2} e^{\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m}{2} e^{\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m}{2} e^{\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^m}{2} e^{\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{102}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m}{4} e^{\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^m}{4} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^m}{4} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^m}{4} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $G$ $\pi \left( 2m+1-\frac{1}{3}\right)$ $\pi \left( 2m+1+\frac{1}{3}\right)$ $\pi \left( 2m+1-\frac{1}{3}\right)$ $\pi \left( 2m+1+\frac{1}{3}\right)$
\[3pt\] $\theta$
\[3pt\] $c_{300}$ $\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{3}}{4} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $\frac{\sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{\sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{\sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{\sqrt{3}}{4} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{210}$ $\frac{1}{4} e^{-\operatorname{\mathrm{i}}(\psi+\varphi)}$ $\frac{1}{4} e^{-\operatorname{\mathrm{i}}(\psi+\varphi)}$ $-\frac{1}{4} e^{-\operatorname{\mathrm{i}}(\psi+\varphi)}$ $-\frac{1}{4} e^{-\operatorname{\mathrm{i}}(\psi+\varphi)}$
\[3pt\] $c_{120}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{4} e^{-\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{4} e^{-\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{4} e^{-\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{4} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{102}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{2} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{2} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{2} e^{\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{2} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{012}$   $\frac{1}{2} e^{\operatorname{\mathrm{i}}(\psi+\varphi)}$   $\frac{1}{2} e^{\operatorname{\mathrm{i}}(\psi+\varphi)}$   $-\frac{1}{2} e^{\operatorname{\mathrm{i}}(\psi+\varphi)}$   $-\frac{1}{2} e^{\operatorname{\mathrm{i}}(\psi+\varphi)}$
\[3pt\]
------------------- --------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------
------------------- ---------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------
$G$
\[3pt\] $\theta$ $\pi n + \arctan\left( \frac{1}{2} (1 - \sqrt{3})\right)$ $\pi n - \arctan\left( \frac{1}{2} (1 - \sqrt{3})\right)$
\[3pt\] $c_{300}$ $\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $- \frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{120}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{021}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{102}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{012}$ $-\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $G$
\[3pt\] $\theta$ $\pi n + \arctan\left( \frac{1}{2} (1 - \sqrt{3})\right)$ $\pi n - \arctan\left( \frac{1}{2} (1 - \sqrt{3})\right)$
\[3pt\] $c_{300}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $- \frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $-\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $-\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{120}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{021}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{102}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{012}$ $-\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $G$
\[3pt\] $\theta$ $\pi n + \arctan\left( \frac{1}{2} (1 + \sqrt{3})\right)$ $\pi n - \arctan\left( \frac{1}{2} (1 + \sqrt{3})\right)$
\[3pt\] $c_{300}$ $\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{120}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{021}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{102}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{012}$ $\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $G$
\[3pt\] $\theta$ $\pi n + \arctan\left( \frac{1}{2} (1 + \sqrt{3})\right)$ $\pi n - \arctan\left( \frac{1}{2} (1 + \sqrt{3})\right)$
\[3pt\] $c_{300}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $-\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $-\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{120}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{021}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{102}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{012}$ $\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\]
------------------- ---------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------
------------------- ----------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------
$G$
\[3pt\] $\theta$ $\frac{\pi}{4} \left( 2(4\tilde{n}+0)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+1)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+2)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+3)+1 \right)$
\[3pt\] $c_{300}$ $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{210}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $c_{201}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{102}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{012}$ $\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $G$
\[3pt\] $\theta$ $\frac{\pi}{4} \left( 2(4\tilde{n}+0)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+1)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+2)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+3)+1 \right)$
\[3pt\] $c_{300}$ $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{210}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $c_{201}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{102}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{012}$ $\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $G$
\[3pt\] $\theta$ $\frac{\pi}{4} \left( 2(4\tilde{n}+0)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+1)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+2)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+3)+1 \right)$
\[3pt\] $c_{300}$ $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $-\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{210}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $c_{201}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{102}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{012}$ $-\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $G$
\[3pt\] $\theta$ $\frac{\pi}{4} \left( 2(4\tilde{n}+0)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+1)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+2)+1 \right)$ $\frac{\pi}{4} \left( 2(4\tilde{n}+3)+1 \right)$
\[3pt\] $c_{300}$ $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $-\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $-\operatorname{\mathrm{i}}\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{210}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $c_{201}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{102}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{012}$ $-\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\]
------------------- ----------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------
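The period-four sign patterns in the columns above, which arise for $\theta = \frac{\pi}{4}(2n+1)$, can be traced to factors such as $(-1)^n$, $\cos\theta$, and $\sin\theta$, whose signs depend only on $n \bmod 4$. A small illustrative sketch (our own, not taken from the paper):

```python
# Illustration: for theta = (pi/4)(2n+1), the signs of (-1)^n, cos(theta),
# and sin(theta) each repeat with period 4 in n, which is why the tables
# replace n by 4*n~+0, ..., 4*n~+3 to cover all cases.

from math import cos, sin, pi

def sign(x):
    return "+" if x > 0 else "-"

for n in range(8):
    theta = (pi / 4) * (2 * n + 1)
    print(n, sign((-1) ** n), sign(cos(theta)), sign(sin(theta)))

# Over n = 0..3 the three factors give the sign sequences
# (-1)^n:      +, -, +, -
# cos(theta):  +, -, -, +
# sin(theta):  +, +, -, -
# and each sequence repeats for n = 4..7.
```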
------------------- ---------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------
$G$
\[3pt\] $\theta$ $\pi n + \arctan\left( 1 + \sqrt{3}\right)$ $\pi n - \arctan\left( 1 + \sqrt{3}\right)$
\[3pt\] $c_{300}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{210}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $c_{201}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{120}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{021}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$
\[3pt\] $G$
\[3pt\] $\theta$ $\pi n + \arctan\left( 1 + \sqrt{3}\right)$ $\pi n - \arctan\left( 1 + \sqrt{3}\right)$
\[3pt\] $c_{300}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $-\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{210}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $c_{201}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{120}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{021}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$
\[3pt\] $G$
\[3pt\] $\theta$ $\pi n + \arctan\left( 1 - \sqrt{3}\right)$ $\pi n - \arctan\left( 1 - \sqrt{3}\right)$
\[3pt\] $c_{300}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $-\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{210}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $c_{201}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{120}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{021}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$
\[3pt\] $G$
\[3pt\] $\theta$ $\pi n + \arctan\left( 1 - \sqrt{3}\right)$ $\pi n - \arctan\left( 1 - \sqrt{3}\right)$
\[3pt\] $c_{300}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{3\sqrt{2}} e^{-\operatorname{\mathrm{i}}(2\psi+\varphi)}$
\[3pt\] $c_{030}$ $-\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$ $\frac{1}{3\sqrt{2}} e^{\operatorname{\mathrm{i}}(\psi-\varphi)}$
\[3pt\] $c_{003}$   $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$   $-\operatorname{\mathrm{i}}\frac{(-1)^n\sqrt{2}}{3} e^{\operatorname{\mathrm{i}}(\psi+2\varphi)}$
\[3pt\] $c_{210}$ $-\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$ $\frac{1}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}(\psi + \varphi)}$
\[3pt\] $c_{201}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\psi}$
\[3pt\] $c_{120}$ $\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{-\operatorname{\mathrm{i}}\varphi}$
\[3pt\] $c_{021}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$ $-\operatorname{\mathrm{i}}\frac{(-1)^n}{\sqrt{6}} e^{\operatorname{\mathrm{i}}\psi}$
\[3pt\]
------------------- ---------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------
|
---
abstract: 'The subject of this work is the adsorption transition of a long flexible self-avoiding polymer chain onto a rigid thin rod. The rod is represented by a cylinder of radius $R$ with a short-ranged attractive surface potential for the chain monomers. General scaling results are obtained by using renormalization group arguments in conjunction with available results for quantum field theories with curved boundaries \[McAvity and Osborn 1993 [*Nucl. Phys. B*]{} [**394**]{} 728\]. Relevant critical exponents are identified and estimated using geometric arguments.'
address: 'Department of Physics, University of Texas at Brownsville, Brownsville, TX 78520, USA'
author:
- Andreas Hanke
title: 'Adsorption transition of a self-avoiding polymer chain onto a rigid rod'
---
It is a pleasure to dedicate this work to L. Schäfer on the occasion of his 60th birthday.
Introduction
============
Polymers and polymer solutions are among the most intensively studied objects in condensed matter physics [@dG79; @CJ90; @LS99]. The adsorption of polymers on surfaces and interfaces is of special importance [@Fleer]. Adsorption of free polymers in solution on the container wall or other boundaries occurs in the presence of attractive interactions between the surface and the chain monomers. Examples of such interactions include Coulomb and van der Waals forces, and more specific molecular interactions. Coulomb forces are screened by counter-ions in the solution and can be tuned to some extent by adding salt to the solvent. Thus, on changing the properties of the solvent, an individual polymer chain can undergo a reversible transition from a freely floating, desorbed state to an adsorbed state in which the chain monomers are close to the surface on average. The adsorption of polymers on flat surfaces has been studied theoretically and experimentally, and is by now well understood [@dG79; @Fleer; @EKB82; @PNS]. Owing to their importance for colloidal dispersions, the interactions of polymers with spherical and rodlike particles have been studied as well [@Napper; @Buit; @PVL95; @EHD; @Now]. The adsorption of flexible polymers on rodlike particles is relevant, for example, in gels [@PVL95], and for the binding of flexible polymers to relatively stiff biomolecules such as DNA [@Alberts]. Another class of polymer adsorption transitions involves two flexible self-avoiding but mutually attracting polymers which can form a bound, double-stranded, so-called “zipped” state. A prominent example of this kind of transition is the denaturation transition of double-stranded DNA [@PS; @WB], which recently attracted considerable attention regarding its statistical-mechanical properties and thermodynamic order [@CCG00; @KMP2000; @GMO01; @COS02; @RG04; @Schaefer] (for a recent review on the biophysics related to DNA topology, see [@MH05]).
The DNA denaturation transition is usually modelled in such a way that monomer $i$ of one strand can only interact with monomer $i$ of the other strand, reflecting the key-lock principle of natural, inhomogeneous DNA with its specific sequence of base pairs. Two-chain systems in which any monomer of one chain can interact with any monomer of the other chain include diblock copolymers, which consist of linear chains of $N$ monomers of type $A$ followed by $N'$ monomers of type $B$, with different $AA$, $BB$, and $AB$ interactions; systems of this kind have been intensively studied as well [@JBL84; @SK85; @SK90; @SLK91; @FH97; @V98]. Recently it was found that self-avoiding mutually attracting diblock copolymers can adopt a zipped state in which the two components form a bound, double-stranded structure which however remains swollen and does not assume compact configurations. The zipped state is located between a swollen, unbound high-temperature state and a compact low-temperature state, separated by second-order and first-order phase transitions, respectively [@OSS2000; @BCOS01; @KGB04].
In this work we focus on the adsorption transition of a long flexible self-avoiding polymer chain onto a rigid thin elongated rod, as shown in figure \[fig\_rod\]. We assume that the rod is endowed with a short-ranged surface potential, or adsorption energy, ${\cal E}$, for the chain monomers; the more positive ${\cal E}$, the more attractive the interaction. Thus, on increasing ${\cal E}$ from a low value, at some threshold value ${\cal E}^*$ the chain is expected to undergo a transition from an unbound, free state to a bound state in which the overall gain in binding energy compensates the loss of configurational entropy. An interesting feature of this transition is that it represents a true phase transition in the thermodynamic sense, with the adsorbed state forming an elongated, aligned structure; see figure \[fig\_rod\]. In contrast, on a particle of finite size, a self-avoiding polymer chain cannot undergo a true adsorption phase transition due to steric constraints. The focus of the present work is on the former case of an infinitely extended rod. Since the polymer adsorption transition is characterized by large fluctuations we expect scaling and universal behaviour. We thus use the renormalization group to obtain general scaling results for the chain partition function. We also obtain estimates for the relevant critical exponents by geometric arguments. Before we introduce and study our model in section \[sec\_rod\] we recall some general ideas and concepts for later reference. In section \[subsec\_general\] we discuss typical scaling arguments related to the polymer adsorption transition. Since our work strongly relies on field-theoretical methods, in sections \[subsec\_map\] and \[sec\_app\] we sketch the mapping of the polymer system onto the Ginzburg-Landau model.
General scaling behaviour {#subsec_general}
-------------------------
Consider the adsorption of a long flexible polymer chain onto an object ${\cal S}$. For the time being, this object can be a surface, a thin rod, another flexible polymer chain, or any other extended manifold that allows for a thermodynamic adsorption phase transition. The quantity of interest is the partition function $Z$ of the chain with one end fixed close to the object ${\cal S}$ and the other end moving freely. Upon adjusting the system’s thermodynamic variables close to the adsorption transition point, only the number of chain monomers $N \gg 1$ and the adsorption energy ${\cal E} \approx {\cal E}^*$ remain as relevant parameters, where ${\cal E}^*$ is the adsorption energy at the transition point. The partition function $Z$ is expected to obey the scaling $$\label{scale}
Z(N, {\cal E}) \sim p^N N^{\gamma' - 1}
f[({\cal E}- {\cal E}^*) N^{\Phi}]$$ where $p$ is the lattice-dependent connectivity constant and $\gamma'$, $\Phi$ are critical exponents. The scaling function $f(x)$ is regular at $x = 0$ since $Z(N, {\cal E})$ has no singularity for finite $N$ and ${\cal E} \approx {\cal E}^*$. The exponent $\gamma'$ thus characterizes the scaling of $Z$ right at the transition point: $Z(N,{\cal E}^*) \sim p^N N^{\gamma' - 1}$. Note that $\gamma'$ is not necessarily equal to the critical exponent $\gamma$ for an unbounded, free chain, for which $Z_{free}(N) \sim p^N N^{\gamma - 1}$ (compare equation (\[scalbehz\]) in section \[subsec\_ren\], with $\gamma_1$ introduced in equation (\[scalbeh\]) and $L \sim N$). The exponent $\Phi$ is referred to as the crossover-exponent. Since $- {\cal E}$ acts as a chemical potential for monomers close to ${\cal S}$, the number $N_S$ of adsorbed monomers scales as $$\label{ns}
N_S \sim \frac{d}{d {\cal E}} \ln Z(N, {\cal E}) \, \, .$$ Equation (\[scale\]) implies three distinct scaling regimes for $N_S$.
\(i) ${\cal E} = {\cal E}^*$. Equations (\[scale\]) and (\[ns\]) yield $$\label{nscale}
N_S \sim N^{\Phi} \, \, , \qquad
{\cal E} = {\cal E}^* \, \, , \, \, N \to \infty \, \, .$$ For $0< \Phi < 1$ this implies that $N_S$ grows with $N$ but the [*fraction*]{} of adsorbed monomers, $N_S / N$, vanishes for $N \to \infty$. For $\Phi = 1$, the behaviour $N_S \sim N$ at ${\cal E} = {\cal E}^*$ indicates that the adsorption transition is of first order.
\(ii) ${\cal E} < {\cal E}^*$. Equation (\[scale\]) implies that the scaling behaviour of $Z$ for $N \to \infty$ is governed by the behaviour of $f(x)$ for $x \to - \infty$, regardless of the precise value of ${\cal E}$. In this case ${\cal S}$ is effectively repulsive for the chain monomers and $N_S$ stays finite even for $N \to \infty$.
\(iii) ${\cal E} > {\cal E}^*$. The chain adopts an adsorbed state and stays close to ${\cal S}$ on average. Thus, $N_S \sim N$, which implies a [*finite*]{} fraction of adsorbed monomers for $N \to \infty$: $$\label{fraction}
F({\cal E}) \equiv \lim\limits_{N \to \infty} \frac{N_S(N,{\cal E})}{N}
> 0 \, \, \, , \qquad {\cal E} > {\cal E}^* \, \, \, .$$ To analyze the behaviour of $F({\cal E})$ it is useful to consider the grand canonical ensemble. The partition function in the grand canonical ensemble, ${\cal X}(\mu, {\cal E})$, is related to $Z(N, {\cal E})$ by a Laplace transform: $$\label{grand}
{\cal X}(\mu, {\cal E}) = \int_0^{\infty} d N
e^{- \mu N} Z(N, {\cal E}) \, \, ,$$ where $\mu$ is the chemical potential conjugate to $N$. Equation (\[grand\]) is valid for $\mu > \mu_c$ with $\mu_c = \ln p$. One may set $p = 1$ for simplicity, so that $\mu_c = 0$. Equation (\[scale\]) then implies the scaling behaviour $$\label{scalex}
{\cal X}(\mu, {\cal E}) \sim \mu^{-\gamma'}
g[({\cal E}- {\cal E}^*) \mu^{- \Phi}] \, \, \, ,
\qquad \mu > 0 \, \, .$$ By reasoning similar to that below equation (\[scale\]) one finds that the scaling function $g(y)$ is regular at $y = 0$ and $\gamma'$ characterizes the scaling of ${\cal X}$ right at the transition point: ${\cal X}(\mu, {\cal E}^*) \sim \mu^{-\gamma'}$. On the other hand, we know that for ${\cal E} > {\cal E}^*$ the chain takes an adsorbed state. For the grand canonical ensemble this implies, for given ${\cal E} > {\cal E}^*$, that $N_S \to \infty$ for $\mu \searrow \mu_S({\cal E})$ with some $\mu_S({\cal E}) > 0$. In this limit we thus expect the scaling behaviour ${\cal X}(\mu, {\cal E}) \sim (\mu - \mu_S)^{-\gamma_S}$ where $\gamma_S$ is characteristic for the [*adsorbed*]{} state and [*different*]{} from $\gamma'$. For example, if ${\cal S}$ is another flexible polymer chain, the adsorbed state forms a double-stranded structure which, as a whole, behaves like an unbounded, free, self-avoiding chain, which implies $\gamma_S = \gamma$ in this case [@CCG00]. Using equation (\[scalex\]) it follows that the scaling function $g(y)$ must have a singularity at some $y_S > 0$ of the form $$\label{sing}
g(y \nearrow y_S) \sim (y_S - y)^{-\gamma_S} \, \, .$$ The relation $({\cal E}- {\cal E}^*) \mu_S^{- \Phi} = y_S$ determines the shape of the line $\mu_S({\cal E})$ as $$\label{s}
\mu_S({\cal E}) \sim ({\cal E} - {\cal E}^*) ^{1/\Phi}
\, \, \, , \quad {\cal E} > {\cal E}^* \, \, \, .$$ Figure \[fig\_pd\] shows typical phase diagrams for polymer adsorption in the grand canonical ensemble (fixed $\mu$) and canonical ensemble (fixed $N$).
According to the above, for ${\cal E} > {\cal E}^*$, ${\cal X}(\mu, {\cal E})$ in equation (\[grand\]) has a singularity for $\mu \searrow \mu_S({\cal E}) > 0$, and $N$ $(\ge N_S)$ diverges in this limit. This, in turn, implies $$\label{divz}
Z(N, {\cal E}) \sim \exp[\mu_S({\cal E}) N] \, \, , \qquad
{\cal E} > {\cal E}^* \, \, , \, \, N \to \infty \, \, .$$ Using equation (\[ns\]) one finds $$\label{resns}
N_S \sim N \frac{d}{d {\cal E}} \, \mu_S({\cal E}) \, \, \, ,$$ and thus, using equations (\[fraction\]) and (\[s\]), $$\label{resfracintro}
F({\cal E}) = \lim\limits_{N \to \infty} \frac{N_S}{N}
\sim \frac{d}{d {\cal E}} \, \mu_S({\cal E})
\sim ({\cal E} - {\cal E}^*)^{\kappa} \, \, \, ,
\quad {\cal E} > {\cal E}^* \, \, \, ,$$ where the exponent $\kappa$ is related to the crossover-exponent $\Phi$ in equation (\[nscale\]) by $$\label{relation}
\kappa = \frac{1 - \Phi}{\Phi} \, \, .$$ In particular, for $\Phi = 1$ the fraction $\lim\limits_{N \to \infty} N_S / N$ jumps from zero for ${\cal E} < {\cal E}^*$ to a finite value for ${\cal E} > {\cal E}^*$, which is then independent of ${\cal E}$; this corresponds to a first-order transition (compare case (i) above).
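The exponent relation (\[relation\]) follows directly by differentiating $\mu_S({\cal E}) \sim ({\cal E} - {\cal E}^*)^{1/\Phi}$ from equation (\[s\]). As a purely illustrative symbolic check (not part of the original analysis; the symbol names are ours), one can verify with `sympy` that $F \sim d\mu_S/d{\cal E}$ indeed scales as $({\cal E} - {\cal E}^*)^{\kappa}$ with $\kappa = (1-\Phi)/\Phi$:

```python
# Symbolic check of eqs. (s), (resfracintro) and (relation):
# F ~ d mu_S / dE ~ (E - E*)^kappa with kappa = (1 - Phi)/Phi.
import sympy as sp

E, Es, Phi = sp.symbols('E E_star Phi', positive=True)

mu_S = (E - Es)**(1 / Phi)   # singularity line mu_S(E), eq. (s)
F = sp.diff(mu_S, E)         # fraction of adsorbed monomers, eq. (resfracintro)
kappa = (1 - Phi) / Phi      # predicted exponent, eq. (relation)

# the ratio F / (E - E*)^kappa is independent of E, i.e. F ~ (E - E*)^kappa
print(sp.simplify(F / (E - Es)**kappa))
```

The ratio reduces to a constant ($1/\Phi$), independent of ${\cal E}$, confirming the stated power law.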
The scaling behaviours (\[nscale\]) and (\[resfracintro\]), (\[relation\]) have been proven rigorously for the polymer adsorption transition on a [*flat*]{} surface [@EKB82; @PNS]. This system is closely related to the semi-infinite Ginzburg-Landau model, see [@Binder; @Diehl; @Diehl2] for reviews; the mapping of the polymer system on the Ginzburg-Landau model is discussed below. The scaling behaviours (\[nscale\]) and (\[resfracintro\]), (\[relation\]) also hold reasonably well in a recent numerical study of the DNA denaturation transition [@CCG00].
Mapping of the polymer system on the Ginzburg-Landau model {#subsec_map}
----------------------------------------------------------
According to Edwards’ continuous chain model we represent the configuration of a linear chain of length $L$ by a curve ${\bf R}(s)$, parameterized by its arc length $s$, in $D$-dimensional space. The chain length $L$ is proportional to the number of chain monomers $N$. In the presence of an external potential $V({\bf r})$ the partition function of the chain is given by $$\begin{aligned}
\label{pf}
Z^{(2)}({\bf r}, {\bf r}'; L) & = &
\int_{\bf r}^{{\bf r}'} {\cal D} {\bf R}
\, \exp \left\{ - \frac{1}{4}
\int_0^L ds \left(\frac{d {\bf R}}{ds}\right)^2 \right\} \\[2mm]
& & \times \, \exp \left\{-
\int d^Dr \left[V({\bf r}) \rho({\bf r})
+ \frac{u}{6} \, \rho^2({\bf r}) \right] \right\}
\nonumber \end{aligned}$$ with the monomer density $$\rho({\bf r}) = \int_0^L ds \, \delta^D( {\bf r} - {\bf R}(s)) \, \, .$$ The symbol $\int_{\bf r}^{{\bf r}'} {\cal D} {\bf R}$ denotes functional integration over all chain configurations with the chain ends fixed at ${\bf r}$ and ${\bf r}'$. The superscript “$(2)$” on $Z$ indicates that both ends of the chain are held fixed. The coupling constant $u$ of the $\rho^2({\bf r})$ interaction characterizes the strength of the contact interaction between chain monomers: $u = 0$ describes a Gaussian random walk whereas $u > 0$ describes a self-avoiding chain. The case $u < 0$ is related to the polymer collapse transition to a compact state in a poor solvent [@dG75; @Dup82]; in this work we do not consider this collapse transition, and therefore exclude the case $u < 0$.
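The $u = 0$ limit can be illustrated numerically: for a discretized Gaussian random walk the mean squared end-to-end distance grows linearly in the number of steps, $\langle {\bf R}^2 \rangle \sim N$, i.e. $|{\bf R}| \sim L^{1/2}$. The following sketch (a toy discretization; unit step variance, $D = 3$ and the sample sizes are arbitrary illustrative choices, not taken from the text):

```python
# Toy check of the Gaussian (u = 0) limit: <R^2> grows linearly in N,
# so <R^2> / (N * D) is close to 1 for unit-variance steps in D dimensions.
import numpy as np

rng = np.random.default_rng(0)
D = 3                                       # spatial dimension (illustrative)
for N in (100, 400):
    steps = rng.normal(size=(2000, N, D))   # 2000 independent N-step chains
    R2 = (steps.sum(axis=1) ** 2).sum(axis=1)
    print(N, R2.mean() / (N * D))           # close to 1 for every N
```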
As first noticed by de Gennes [@dG72], the polymer system can be mapped onto the Ginzburg-Landau model of an $n$-component order parameter field $\vec{\Phi} = (\Phi_1, \ldots, \Phi_n)$ in the limit $n \to 0$. It is worth noting that this mapping holds not only in perturbation theory but already at the level of the Hamiltonian of the Ginzburg-Landau model. The derivation, using a Gaussian transformation to linearize the $\rho^2({\bf r})$ interaction in equation (\[pf\]) [@C75], is deferred to \[sec\_app\]. The result is $$\label{res}
Z^{(2)}({\bf r}, {\bf r}'; L) =
{\cal L}_{t \to L} \, \lim\limits_{n \to 0} \,
\langle \Phi_1({\bf r}) \, \Phi_1({\bf r}') \rangle$$ where $$\label{tpf}
\langle \Phi_1({\bf r}) \, \Phi_1({\bf r}') \rangle
= \int {\cal D} \vec{\Phi} \,
\Phi_1({\bf r}) \, \Phi_1({\bf r}') \, e^{- {\cal H}\{\vec{\Phi}\}}$$ is the two-point correlation function in the Ginzburg-Landau model with the standard Hamiltonian [@Amit; @ZJ02] $$\label{action}
{\cal H}\{\vec{\Phi}\} = \int d^Dr \left[
\frac{1}{2} \, (\nabla \vec{\Phi})^2 +
\frac{t}{2} \, \vec{\Phi}^2 +
\frac{1}{2} \, V({\bf r}) \, \vec{\Phi}^2 +
\frac{u}{24} \, (\vec{\Phi}^2)^2 \right] \, \, .$$ The operation $$\label{laplace}
{\cal L}_{t \to L} = \frac{1}{2 \pi i} \int_{\cal C} dt \,
e^{t L}$$ acting on the correlation function in equation (\[res\]) is an inverse Laplace transform in which the integration path ${\cal C}$ in the complex $t$-plane runs parallel to the imaginary axis, to the right of all singularities. Equations (\[res\]) - (\[laplace\]) describe the statistics of the polymer chain in terms of properties of the near-critical ferromagnetic $n$-vector model in the limit $n \to 0$. In the context of the $n$-vector model, the parameter $t \sim T - T_c$ describes the deviation of the temperature $T$ from the critical temperature $T_c$. The form of the interaction involving the potential $V({\bf r})$ in equation (\[action\]) shows that the $O(n)$-invariant scalar $\vec{\Phi}^2$ is related to the monomer density $\rho({\bf r})$ of the polymer chain in equation (\[pf\]). Thus, translated to the polymer system, the term $(\vec{\Phi}^2)^2$ is related to the contact interaction between chain monomers and $t$ plays the role of a chemical potential for chain monomers in the bulk.
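The action of ${\cal L}_{t \to L}$ can be made concrete in the free Gaussian case ($V = 0$, $u = 0$): the momentum-space propagator of the Hamiltonian (\[action\]) is the Ornstein-Zernike form $1/(t + k^2)$, and its inverse Laplace transform $e^{-k^2 L}$ is the Fourier transform of the Gaussian chain's end-to-end distribution. A symbolic check of this transform pair (illustrative only; it verifies the forward transform with `sympy`):

```python
# Check the Laplace pair behind eq. (laplace) in the free Gaussian case:
# the transform over the chain length L of exp(-k^2 L) is 1/(t + k^2),
# so the operation L_{t->L} applied to the propagator recovers exp(-k^2 L).
import sympy as sp

t, L, k = sp.symbols('t L k', positive=True)
F = sp.laplace_transform(sp.exp(-k**2 * L), L, t, noconds=True)
print(sp.simplify(F - 1 / (t + k**2)))  # 0
```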
Polymer adsorption transition onto a rigid rod {#sec_rod}
==============================================
The objective of this work is the study of the adsorption of a long flexible polymer chain onto a rigid thin rod. In practice we model the rod by an infinitely elongated cylinder with a small but finite radius $R$. The introduction of a finite cylinder radius $R$ is necessary since the limit of a rod with zero radius turns out to be too singular for the present field-theoretical treatment; see figure \[fig\_curved\] and the discussion below equation (\[fracd2\]). On the other hand, the fact that the adsorption transition now takes place on a [*surface*]{}, albeit a curved one, allows us to take advantage of available results for field theories with curved boundaries [@AO93]; see section \[subsec\_ren\] below. The chain of total length $L_0 \sim N$, where $N$ is the number of chain monomers, is fixed with one end at the point ${\bf r}_S$ on the cylinder surface $S$ while the other end moves freely. The cylinder surface is endowed with a short-ranged surface potential $c_0$ acting on the chain monomers, and it is understood that the chain monomers are excluded from the interior of the cylinder. The potential $V({\bf r})$ in equation (\[pf\]) is thus given by
$$\label{vs}
V({\bf r}) = c_0 \int_S dS' \, \delta^D({\bf r} - {\bf r}'_S)$$
and $V({\bf r}) = \infty$ if ${\bf r}$ is located in the interior of the cylinder. By virtue of equation (\[res\]), the chain partition function is given by $$\label{intz}
Z(L_0) = \int_V d^D r' \, Z^{(2)}({\bf r}_S, {\bf r}'; L_0)
= {\cal L}_{t_0 \to L_0} \, \lim\limits_{n \to 0} \,
\chi(t_0) \, \, .$$ The integration volume $V$ is the outer space of the cylinder bounded by the cylinder surface $S$. On the right hand side, $\chi(t_0) \equiv \chi({\bf r}_S; t_0)$, where the [*susceptibility*]{} $\chi({\bf r}; t_0)$ in the Ginzburg-Landau model is obtained by integrating the two-point correlation function, i.e., $$\label{intc}
\chi({\bf r}; t_0) =
\int_V d^D r' \, \langle \Phi_1({\bf r}) \,
\Phi_1({\bf r}') \rangle \, \, ,$$ in a Ginzburg-Landau type field theory with Hamiltonian [@Binder; @Diehl; @Diehl2] $$\label{ham}
{\cal H}\{\Phi\} = \int\limits_V d^Dr \left[
\frac{1}{2} \, (\nabla \vec{\Phi})^2 +
\frac{t_0}{2} \, \vec{\Phi}^2 +
\frac{u_0}{24} \, (\vec{\Phi}^2)^2 \right]
+ \int\limits_S dS \,
\frac{c_0}{2} \, \vec{\Phi}^2$$ of an $n$-component order parameter field $\vec{\Phi} = (\Phi_1, \ldots, \Phi_n)$. In the following we will understand $\chi({\bf r}; t_0)$ as the chain partition function in the grand canonical ensemble, where $t_0$ is conjugate to $L_0$ (compare section \[subsec\_general\] with $t_0 \sim \mu$, $L_0 \sim N$, and $c_0 \sim - {\cal E}$). The subscript “0” on $c_0$, $t_0$, $L_0$, $u_0$ is used to distinguish these parameters from their renormalized counterparts that will be introduced below. If ${\bf r} = {\bf r}_S$ in $\chi({\bf r}; t_0)$ we suppress the argument ${\bf r}_S$ since $\chi$ does not depend on the choice of ${\bf r}_S$ by symmetry; compare equation (\[intz\]).
Before we proceed with the renormalization of the model defined by equation (\[ham\]) we review some cases in which results are available. To this end it is useful to consider not only a cylinder in $D = 3$ dimensions but bodies of more general shape. These “generalized cylinders” have an infinitely extended axis of dimension $D - d$ and a curved surface with constant curvature radius $R$ in the subspace of co-dimension $d$. [^1] The axis can be the axis of an ordinary cylinder in three dimensions, for which $(d,D) = (2,3)$, the midplane of a plate $(d = 1)$, or the centre of a sphere $(d = D)$. The explicit form of a “generalized cylinder” is given by the set $\left\{ {\bf r} = ({\bf r}_{\perp},{\bf r}_{\parallel})
\in {\mathbb R}^{d} \times
{\mathbb R}^{D-d} ; |{\bf r}_{\perp}| \le R \right\}$. Figure \[fig\_cyl\] shows some examples in the $(d,D)$-plane; compare also references [@EHD; @HD99; @H2000].
Flat surface
------------
First, consider the line $d = 1$ with $D$ arbitrary in figure \[fig\_cyl\]. Then, of course, equation (\[ham\]) corresponds to the semi-infinite $n$-vector model bounded by a [*flat*]{} surface for which many results are available [@Binder; @Diehl; @Diehl2]. In particular, for the polymer case $n = 0$, the considerations of section \[subsec\_general\] can be made explicit, proving equations (\[nscale\]) and (\[resfracintro\]), (\[relation\]) [@EKB82; @PNS].
Gaussian model {#subsec_gauss}
--------------
Next, consider the line $D = 4$ with $d$ arbitrary in figure \[fig\_cyl\]. Since $D = 4$ is the upper critical dimension of the model in equation (\[ham\]), this case corresponds to the Gaussian field theory and to Gaussian random walks, respectively. Both cases are described by $u_0 = 0$ in equation (\[ham\]) and can be solved exactly by standard methods [@EHD]. Thus, the susceptibility defined in equation (\[intc\]), corresponding to the chain partition function in the grand canonical ensemble, is given by [@EHD] $$\label{suscept}
\chi({\bf r}; t_0) =
\frac{1}{t_0} \left[1 - \frac{\zeta_0 \, \rho^{- \alpha}
K_{\alpha}\left(\rho \sqrt{\tau_0}\,\right)}
{\sqrt{\tau_0} \, K_{\alpha+1}\left(\sqrt{\tau_0} \, \right)
+ \zeta_{0\,} K_{\alpha}\left(\sqrt{\tau_0} \, \right) } \right]$$ where $\rho = |{\bf r}_{\perp}| / R$ (so that $\rho= 1$ for ${\bf r} = {\bf r}_S$), $\zeta_0 = R c_0$ and $\tau_0 = R^2 t_0$. The functions $K_{\alpha}$ and $K_{\alpha+1}$ are modified Bessel functions [@AS72] with $\alpha = (d-2)/2$. From equation (\[suscept\]) one obtains the asymptotic behaviour right at the transition point: $$\label{nsgauss}
N_S \sim N^{\Phi} \, \, , \qquad
\zeta_0 = \zeta_0^* \, \, , \, \, N \to \infty \, \, ,$$ with $$\label{zetastar}
\zeta_0^* = \left\{ \begin{array}{c@{\quad,\quad}r}
0 & d \le 2 \\
- (d-2) & d > 2
\end{array} \right. \qquad .$$ The crossover-exponent $\Phi$ in equation (\[nsgauss\]) is given by [@EHD] $$\label{co}
\Phi = \frac{|d-2|}{2} \, \, , \qquad 1 \le d < 4 \, \, ,
\quad d \neq 2 \, \, ,$$ $N_S \sim \ln N$ for $d = 2$, $N_S \sim N / \ln N$ for $d = 4$, and for $d > 4$ one finds $\Phi = 1$ corresponding to a first-order transition. For $\zeta_0 < \zeta_0^*$ the finite fraction of adsorbed monomers scales like $$\label{resfrac}
\lim\limits_{N \to \infty} \frac{N_S}{N}
\sim (\zeta_0^* - \zeta_0)^{\kappa} \, \, \, ,
\quad \zeta_0 < \zeta_0^* \, \, \, ,$$ with the exponent [@EHD] $$\label{ka}
\kappa = \frac{2 - |d-2|}{|d-2|} \, \, , \qquad 1 \le d < 4 \, \, ,
\quad d \neq 2 \, \, \, .$$ For $d = 2$ one finds $$\label{fracd2}
\lim\limits_{N \to \infty} \frac{N_S}{N}
\sim e^{- 2 / |\zeta_0|} \, \, , \qquad \zeta_0 < 0 \, \, ,
\quad d = 2 \, \, ,$$ which formally corresponds to $\kappa = \infty$. For $d = 4$ one finds $\lim\limits_{N \to \infty} N_S / N
\sim - 1 / \ln (\zeta_0^* - \zeta_0)$ while for $d > 4$ the fraction tends to a finite value, which reflects the fact that in this case the transition is of first order. Equations (\[nsgauss\]) - (\[fracd2\]) are to be compared with equations (\[nscale\]) - (\[relation\]) in section \[subsec\_general\], where the variable $\zeta_0^* - \zeta_0$ here corresponds to ${\cal E} - {\cal E}^*$ there. In particular, for given $\zeta_0 < \zeta_0^*$ the chain partition function $\chi(t_0)$ exhibits a singularity for $t_0 \searrow t_S(\zeta_0)$, where the function $t_S(\zeta_0)$ is determined by locating the zero of the denominator in equation (\[suscept\]) [@EHD]. The values of the exponents $\Phi$ and $\kappa$ in equations (\[co\]) and (\[ka\]) obey the scaling relation $\kappa = (1 - \Phi)/\Phi$ from equation (\[relation\]). Note that for $d > 2$ the limit $R \to 0$ yields merely the trivial bulk result $\chi({\bf r}; t_0) = 1/t_0$, and hence no phase transition. Thus, in the present treatment it is necessary to keep the cylinder radius $R$ finite even though the results for $\Phi$ and $\kappa$ do not depend on $R$.
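The zero of the denominator of equation (\[suscept\]) can also be located numerically. For the illustrative choice $d = 3$, i.e. $\alpha = 1/2$, the Bessel functions have closed forms and the denominator reduces to $\sqrt{\pi/(2x)}\, e^{-x} (x + 1 + \zeta_0)$ with $x = \sqrt{\tau_0}$; a zero at positive $\tau_0$ thus exists precisely for $\zeta_0 < \zeta_0^* = -(d-2) = -1$, at $\sqrt{\tau_S} = -(1 + \zeta_0)$, consistent with $t_S \sim (\zeta_0^* - \zeta_0)^{1/\Phi}$ and $\Phi = 1/2$ from equation (\[co\]). A numerical sketch (illustrative parameter values; uses `scipy`):

```python
# Locate the bound-state singularity tau_S of the Gaussian susceptibility
# (suscept) for d = 3 (alpha = 1/2): the denominator has a positive zero
# only below the threshold zeta_0^* = -1.
import numpy as np
from scipy.optimize import brentq
from scipy.special import kv

def denom(tau, zeta0, alpha=0.5):
    """Denominator of eq. (suscept) as a function of tau_0 = R^2 t_0."""
    x = np.sqrt(tau)
    return x * kv(alpha + 1, x) + zeta0 * kv(alpha, x)

zeta0 = -1.5                                  # below the threshold zeta_0^* = -1
tau_S = brentq(lambda tau: denom(tau, zeta0), 1e-6, 10.0)
print(np.sqrt(tau_S))                         # close to -(1 + zeta0) = 0.5
print(denom(0.01, -0.5) > 0)                  # above threshold: no zero, desorbed
```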
Finally, we note that the adsorption of a Gaussian chain onto a rigid rod is equivalent to the denaturation transition of two [*flexible*]{} Gaussian chains $A$ and $B$ if monomer $s$ of chain $A$ can only interact with monomer $s$ of chain $B$. This corresponds to an interaction of the form $\sim \int_0^{L_0} ds \, \delta[{\bf R}_A(s) - {\bf R}_B(s)]$ in the partition function (\[pf\]). It is easy to see that the system of two flexible Gaussian chains with the above interaction can be mapped on the system of one flexible Gaussian chain interacting with a rigid rod, using the transformation [@CCG00; @GMO01] $$\label{coord}
{\bf R}(s) = {\bf R}_A(s) - {\bf R}_B(s)
\, \, \, \, , \, \quad
{\bf R}_{\rm CM}(s) = \frac{1}{2}
\left[ {\bf R}_A(s) + {\bf R}_B(s) \right] \, \, .$$ For Gaussian chains (and only for Gaussian chains) the degrees of freedom described by the centre of mass (CM) coordinates ${\bf R}_{\rm CM}(s)$ and the relative coordinates ${\bf R}(s)$ decouple from each other. The relative coordinates ${\bf R}(s)$ describe a Gaussian chain which interacts with the origin at ${\bf R} = 0$ in the $d$-dimensional subspace, while the degrees of freedom of the remaining $D-d$ dimensions are unbounded and independent of the degrees of freedom of the $d$-dimensional subspace. By virtue of this mapping, the above results (\[nsgauss\]) - (\[fracd2\]) have also been obtained in [@CCG00].
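The decoupling in equation (\[coord\]) can be seen in a minimal numerical illustration: for identically distributed Gaussian increments of the two chains, the covariance between the increments of ${\bf R}$ and ${\bf R}_{\rm CM}$ is $({\rm Var}\,A - {\rm Var}\,B)/2$ and thus vanishes. A sketch (one-dimensional increments and the sample size are illustrative choices):

```python
# For Gaussian increments a, b the combinations a - b and (a + b)/2 are
# uncorrelated (hence, being jointly Gaussian, independent).
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=10**6)        # increments of chain A
b = rng.normal(size=10**6)        # increments of chain B
R = a - b                         # relative-coordinate increments
CM = (a + b) / 2                  # centre-of-mass increments
print(np.corrcoef(R, CM)[0, 1])   # consistent with zero
```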
Renormalization of the field theory bounded by the curved cylinder surface {#subsec_ren}
--------------------------------------------------------------------------
We now turn to the renormalization of the $n$-vector model defined by equation (\[ham\]). The objective is to determine the scaling behaviour of the renormalized chain partition function $\chi_{ren}(R,t,c)$ in the grand canonical ensemble in terms of the cylinder radius $R$ and renormalized parameters $t$ (conjugate to the renormalized chain length $L$) and $c$. The $n$-vector model in equation (\[ham\]) can be dimensionally regularized and renormalized by minimal subtraction of poles in $\varepsilon = 4 - D$. The renormalizations of the bulk field $\vec{\Phi}({\bf r})$, ${\bf r} \in V$, and the bulk parameters $t_0$, $u_0$ have the same form as in the unbounded case, and are given by [@Amit; @ZJ02; @Diehl] (we follow the convention of [@Diehl]) $$\begin{aligned}
\label{rb}
\vec{\Phi} & = & Z_{\Phi}^{\,1/2} \, \vec{\Phi}_{ren} \\[2mm]
t_0 & = & \mu^2 \, Z_t \, t \, + \, t_b \\[2mm]
u_0 & = & \mu^{\varepsilon} \, 2^D \, \pi^{D/2} \, Z_u \, u
\, \, \, \, .\end{aligned}$$ The parameters $\vec{\Phi}_{ren}$, $t$, $u$ are renormalized counterparts of the bare parameters $\vec{\Phi}$, $t_0$, $u_0$ in equation (\[ham\]). In a regularization scheme using a large momentum-cutoff $\Lambda$, the bulk renormalization factors $Z_{\Phi}$, $Z_t$, $Z_u$ absorb divergencies logarithmic in $\Lambda$, corresponding to poles in $\varepsilon$ in dimensional regularization. The parameter $t_b$ absorbs divergencies quadratic in $\Lambda$ and describes the shift of the critical temperature $T_c$ of the $n$-vector model due to fluctuations. In dimensional regularization, $t_b = 0$, and [@Amit; @ZJ02; @Diehl] $$\begin{aligned}
\label{zb}
Z_{\Phi} & = & 1 - \frac{n+2}{36 \, \varepsilon} \, u^2 \, + \,
\Or(u^3) \\[2mm]
Z_t \, Z_{\Phi} & = & 1 + \frac{n+2}{3 \, \varepsilon} \, u \, + \,
\left[
\frac{(n+2)(n+5)}{9 \, \varepsilon^2} - \frac{n+2}{6 \, \varepsilon}
\right] u^2 \, + \, \Or(u^3) \\[2mm]
Z_u & = & 1 + \frac{n+8}{3 \, \varepsilon} \, u \, + \,
\left[
\frac{(n+8)^2}{9 \, \varepsilon^2} - \frac{3n+14}{6 \, \varepsilon}
\right] u^2 \, + \, \Or(u^3) \, \, \, .\end{aligned}$$ The presence of the cylinder surface $S$ requires, in addition, renormalization of the surface field $\vec{\Phi}|_S = \vec{\Phi}({\bf r})$, ${\bf r} \in S$, and of the surface parameter $c_0$ in equation (\[ham\]). For a [*flat*]{} surface these additional renormalizations are given by [@DD81; @DD83; @Diehl] $$\begin{aligned}
\label{rbsurface}
\vec{\Phi}|_S & = & (Z_{\Phi} Z_1)^{1/2} \, (\vec{\Phi}|_S)_{ren}
= Z_1^{\,1/2} \, \vec{\Phi}_{ren}|_S \\[2mm]
c_0 & = & \mu \, Z_c \, c \, + \, c_{sp} \label{rbsurface2}\end{aligned}$$ defining the renormalized surface field $(\vec{\Phi}|_S)_{ren}$ and surface parameter $c$. The new renormalization factors $Z_1$ and $Z_c$ absorb divergencies logarithmic in $\Lambda$ which occur at a flat surface, corresponding to poles in $\varepsilon$ in dimensional regularization. The parameter $c_{sp}$ absorbs divergencies linear in $\Lambda$ and describes the shift of the multicritical point due to fluctuations (compare figure \[fig\_pd\], $- c \sim {\cal E}$). In dimensional regularization, $c_{sp} = 0$, and $Z_1$ and $Z_c$ are given by [@DD81; @DD83; @Diehl] $$\begin{aligned}
\label{zs1}
Z_1 & = & 1 + \frac{n+2}{3 \, \varepsilon} \, u \, + \,
\left[
\frac{(n+2)(n+5)}{9 \, \varepsilon^2} - \frac{n+2}{3 \, \varepsilon}
\right] u^2 \, + \, \Or(u^3) \\[2mm]
Z_c & = & 1 + \frac{n+2}{3 \, \varepsilon} \, u \, + \,
\left[
\frac{(n+2)(n+5)}{9 \, \varepsilon^2} + \frac{n+2}{36 \, \varepsilon}
\, (1 - 4 \pi^2) \right] u^2 \, + \, \Or(u^3) \, \, \, . \label{zs2}\end{aligned}$$ Equations (\[rbsurface\]) and (\[rbsurface2\]) hold for a flat surface. As shown by McAvity and Osborn [@AO93], the required renormalizations become modified if the surface $S$ is curved, as in the present case. While the renormalization of the surface field $(\vec{\Phi}|_S)_{ren}$ remains unchanged, the renormalization of the surface parameter $c_0$ requires an additional, additive term that depends on the mean curvature [@AO93]: $$\label{rs1}
c_0 = \mu \, Z_c \, c \, + \, \frac{d-1}{R} \,
{\cal C}(u,\varepsilon)$$ or, with $\zeta_0 = R c_0$ and $\zeta = \mu R c$, $$\label{rs1short}
\zeta_0 = Z_c \, \zeta \, + \, (d-1) \,
{\cal C}(u,\varepsilon) \, \, .$$ $Z_c$ is the same renormalization factor as in equation (\[rbsurface2\]) for a flat surface and ${\cal C}(u,\varepsilon)$ to second order in $u$ can be deduced from reference [@AO93]: $$\label{c}
{\cal C}(u,\varepsilon) = \frac{n+2}{9}
\left\{
\frac{u}{\varepsilon} +
\left[\frac{n+5}{3 \, \varepsilon^2} +
\frac{n+1 - 4 \pi^2}{12 \, \varepsilon} \right] u^2
\right\} \, + \, \Or(u^3) \, \, \, .$$
To proceed, we consider the two-point correlation function $$\label{intcfrage}
\langle \Phi_1({\bf r}_S) \, \Phi_1({\bf r}') \rangle
= G({\bf r}_S, {\bf r}', R; t_0, \zeta_0, u_0)
\, \, ,$$ where ${\bf r}_S$ is located on the cylinder surface $S$ and ${\bf r}' \in V$. From $G$, the chain partition function $\chi(R,t_0,\zeta_0,u_0)$ follows by an integration over ${\bf r}'$, see equation (\[intc\]); by symmetry, $\chi$ does not depend on ${\bf r}_S$. The renormalization group (RG) equation for the renormalized counterpart $G_{ren}$ of $G$ follows in the standard way, using the relation $$\label{gg}
G_{ren}({\bf r}_S, {\bf r}', R; t, \zeta, u; \mu) =
Z_{\Phi}^{\,-1} Z_1^{\,-1/2} \,
G({\bf r}_S, {\bf r}', R; t_0, \zeta_0, u_0)$$ and the fact that $G$ does not depend on $\mu$: $\mu \partial_{\mu} G|_{\mu=0} = 0$. This leads to the RG equation $$\label{rg}
\left[
{\cal D}_{\mu} + \eta_{\Phi} + \frac{1}{2} \, \eta_1 +
\mu \frac{\partial \zeta}{\partial \mu}\,\Big|_{\mu=0} \, \partial_{\zeta}
\right] G_{ren} \, = \, 0$$ where we have used the abbreviation $$\label{d}
{\cal D}_{\mu} = \mu \partial_{\mu}|_{\mu=0} +
\beta(u) \partial_u - (2 + \eta_t) \, t \, \partial_t$$ with $$\label{beta}
\beta(u) = \mu \partial_{\mu} u|_{\mu=0}$$ and the exponent functions $\eta_i(u) = \mu \partial_{\mu} \ln Z_i|_{\mu=0}$ with $i = \Phi, 1, t$. The new feature of the RG equation (\[rg\]) generated by the surface curvature is the function $$\label{zeta}
\mu \frac{\partial \zeta}{\partial \mu}\,\Big|_{\mu=0} =
- \, \eta_c \left[
\zeta + \frac{d-1}{3} + \frac{n (d-1)}{18} \, u \, + \, \Or(u^2)
\right]$$ where $$\begin{aligned}
\label{etac}
\eta_c(u) & = & \mu \partial_{\mu} \ln Z_{c\,}|_{\mu=0} \\[1mm]
& = & - \, \frac{n+2}{3} \, u \, + \,
\frac{n+2}{18} \, (4 \pi^2 - 1) u^2 \, + \, \Or(u^3) \nonumber\end{aligned}$$ is the exponent function for $c$ corresponding to a [*flat*]{} surface, with $Z_c$ from equation (\[zs2\]) [@Diehl]. A necessary condition for the two-point correlation function $G$ to be scale invariant (SI) is that the right hand side of equation (\[zeta\]) vanishes, which is either the case for $u = 0$ (Gaussian model) or if $\zeta$ takes the value for which the square bracket in equation (\[zeta\]) vanishes: $$\label{van}
\zeta = \zeta_{\rm SI} \equiv - \, \frac{d-1}{3} \, + \, \Or(u) \, \, .$$ Note that the value of $\zeta_{\rm SI}$ to leading order in $u$, $\zeta_{\rm SI} = - (d-1)/3$, is [*different*]{} from the value $\zeta_0^*$ in equation (\[zetastar\]) corresponding to the onset of the adsorption transition in the Gaussian model. It is also interesting to note that for a sphere, corresponding to $d = D = 4 - \varepsilon$, the value of $\zeta_{\rm SI}$ to leading order in $u$, $\zeta_{\rm SI} = - 1$, coincides with the value $\zeta_{\rm CI} = - (d-2)/2$ for which the Gaussian two-point correlation function at $t_0 = 0$ is conformal invariant (CI) [@ER95; @rem]. However, within the $\Phi^4$-model the special value $\zeta_{\rm SI}$ is already fixed by scale invariance, whereas in the Gaussian model the value $\zeta_{\rm CI}$ is only fixed if one requires conformal invariance.
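The coincidence noted above can be checked by elementary arithmetic. The following sketch (an illustration added here, not part of the original derivation) evaluates the leading-order values $\zeta_{\rm SI} = -(d-1)/3$ and $\zeta_{\rm CI} = -(d-2)/2$ at $d = D = 4$, i.e. for the sphere at leading order in $\varepsilon$:

```python
from fractions import Fraction

def zeta_si(d):
    # Leading order in u of the scale-invariance value, equation (van)
    return -Fraction(d - 1, 3)

def zeta_ci(d):
    # Value for which the Gaussian two-point function is conformally invariant
    return -Fraction(d - 2, 2)

# For a sphere, d = D = 4 - eps; at leading order (eps = 0) the two values coincide:
assert zeta_si(4) == zeta_ci(4) == -1

# For the cylinder in D = 3 (co-dimension d = 2) the scale-invariance value is -1/3:
assert zeta_si(2) == Fraction(-1, 3)
```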
By solving the RG equation (\[rg\]) for the two-point function in the standard way using characteristics and integrating over ${\bf r}'$, one derives the general scaling behaviour of the chain partition function in the grand canonical ensemble: $$\label{scalbeh}
\chi_{ren}(R, t, \zeta, u; \mu)
\sim t^{-\gamma_1} \,
\Theta\left(
\alpha \mu R t^{\nu}, \beta \Delta \zeta t^{\nu - \Phi_{\rm flat}} \right)$$ where $$\label{variable}
\Delta \zeta = \zeta - \zeta_{\rm SI}$$ with $\zeta_{\rm SI}$ from equation (\[van\]). The exponent $\nu = (2 + \eta_t^*)^{-1}$ is a bulk critical exponent, while $\Phi_{\rm flat} = \nu (1 + \eta_c^*)$ (the crossover exponent) and $\gamma_1 = \nu (2 - \eta_{\Phi}^* - \eta_1^* / 2)$ are critical exponents associated with a [*flat*]{} surface [@Diehl]. The exponent functions $\eta_i(u)$ for $i = t, c, \Phi, 1$ are defined below equation (\[beta\]) and in equation (\[etac\]). The values $\eta_i^*$ are the values of the exponent functions at the fixed point $u^*$. The constants $\alpha$ and $\beta$ in equation (\[scalbeh\]) are nonuniversal prefactors, while the function $\Theta(x,y)$ is a universal scaling function.
Equation (\[scalbeh\]) is in line with the Gaussian model (where $\nu = \Phi_{\rm flat} = 1/2$ and $\gamma_1 = 1$): $$\label{scalbehgauss}
\chi(R, t_0, \zeta_0)
\sim t_0^{-1} \,
\Theta_0(R t_0^{1/2}, \zeta_0) \, \, \, , \qquad \mbox{Gaussian model} \, \, ,$$ compare equation (\[suscept\]). Finally, equation (\[scalbeh\]) can be compared with the corresponding behaviour for a flat surface: $$\label{scalbehplanar}
\chi_{ren}(t, c, u; \mu)
\sim t^{-\gamma_1} \,
\Theta_{\rm flat}(\beta' c \, t^{- \Phi_{\rm flat}})
\, \, \, , \qquad \mbox{flat surface} \, \, .$$ Note that this limit can be obtained from equation (\[scalbeh\]) by rewriting the second scaling argument as $\Delta \zeta t^{\nu - \Phi_{\rm flat}} =
R \, t^{\nu} \cdot \Delta c \, t^{- \Phi_{\rm flat}}$. Equation (\[scalbehplanar\]) then follows as the limit $R \to \infty$ of the scaling form $\chi_{ren} \sim t^{-\gamma_1} \,
\widetilde{\Theta}(\alpha' \mu R \, t^{\nu},
\beta' \Delta c \, t^{- \Phi_{\rm flat}})$, where $\Delta c = c + \Or(1/R)$.
Let us come back to equation (\[scalbeh\]). According to recent estimates one has $\Phi_{\rm flat} = 0.518$ [@DS94] and $\nu = 0.588$ [@ZJ02] for $n=0$ in $D=3$, so that the exponent $\nu - \Phi_{\rm flat}$ in equation (\[scalbeh\]) is small but positive. From a naive point of view this would imply that the scaling variable $\Delta \zeta$ in the second scaling argument of $\Theta$ is [*irrelevant*]{} and could be omitted from the outset; however, one should keep in mind that the radius $R$ in the first scaling argument is also irrelevant in principle. Now, the relevant question is whether the scaling function $\Theta(x,y)$ exhibits a singularity on a certain subset of $(x,y)$, corresponding to the polymer adsorption transition; compare the related discussion below equation (\[scalex\]) and below equation (\[fracd2\]). In fact, we expect this singularity to occur for $y = y_S(x) < 0$, where now $y_S(x)$ is a function of the first scaling variable $x = \alpha \mu R t^{\nu}$. Note that the scaling function $\Theta_0(x,y)$ in equation (\[scalbehgauss\]), corresponding to the Gaussian model, indeed exhibits this kind of singularity; compare section \[subsec\_gauss\]. Thus, in the present description, the adsorption transition is characterized by a balance of the two irrelevant variables $\Delta \zeta$ and $R$. In this sense the scaling variable $\Delta \zeta$ can be considered a dangerously irrelevant variable.
Finally we note that equation (\[scalbeh\]) implies a corresponding scaling form of the chain partition function with fixed chain length $L$: $$\label{scalbehz}
Z_{ren}(R, L, \zeta, u; \mu)
\sim L^{\gamma_1 - 1} \,
\Psi\left(
\widetilde{\alpha} \mu R L^{-\nu},
\widetilde{\beta} \Delta \zeta L^{-(\nu - \Phi_{\rm flat})} \right) \, \, ,$$ with (different) nonuniversal prefactors $\widetilde{\alpha}$ and $\widetilde{\beta}$ and a universal scaling function $\Psi$. From equations (\[scalbeh\]) and (\[scalbehz\]) the number of adsorbed monomers $N_S$ for $\Delta \zeta = 0$ and the finite fraction of adsorbed monomers $N_S / N$ for $\Delta \zeta < 0$ can in principle be derived as outlined in section \[subsec\_general\]. However, the thin rod limit $R \to 0$ corresponds to singular limits of the scaling functions $\Theta$ and $\Psi$, which are rather difficult to obtain. It is at least easy to see that the exponents $\Phi$ and $\kappa$ defined in equations (\[nscale\]) and (\[resfracintro\]) are [*universal*]{}, using the fact that $\Theta$ and $\Psi$ are universal scaling functions. To proceed, in the next section we use a different method to obtain estimates for the exponents $\Phi$ and $\kappa$ for a cylinder in $D = 3$.
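The grand-canonical and fixed-length scaling forms (\[scalbeh\]) and (\[scalbehz\]) are related by a Laplace transform in the chain length $L$. As a numerical consistency check of the power-law prefactors (a sketch added here, not the paper's derivation), $\int_0^{\infty} L^{\gamma_1 - 1} e^{-tL}\, dL = \Gamma(\gamma_1)\, t^{-\gamma_1}$, so $Z \sim L^{\gamma_1 - 1}$ indeed maps to $\chi \sim t^{-\gamma_1}$:

```python
import math

def laplace_power_law(gamma1, t, dL=1e-4, Lmax=60.0):
    """Midpoint-rule evaluation of  integral_0^inf  L^(gamma1-1) e^(-t L) dL."""
    total, L = 0.0, dL / 2
    while L < Lmax:
        total += L ** (gamma1 - 1.0) * math.exp(-t * L) * dL
        L += dL
    return total

gamma1, t = 1.3, 2.0          # arbitrary illustrative values
numeric = laplace_power_law(gamma1, t)
exact = math.gamma(gamma1) * t ** (-gamma1)
assert abs(numeric - exact) / exact < 1e-3
```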
Estimates for the exponents $\Phi$ and $\kappa$ by using the additivity of co-dimensions
----------------------------------------------------------------------------------------
In this section we obtain estimates for the exponents $\Phi$ and $\kappa$ for a cylinder in $D=3$ introduced in equations (\[nscale\]) and (\[resfracintro\]) by means of an interpolation procedure between two known cases. Firstly, in the Gaussian model one has $\Phi = 1/2$ for $d = 3$, see equation (\[co\]), corresponding to the point $(d,D) = (3,4)$ in figure \[fig\_cyl\]. Likewise, $\kappa = 1$ from the scaling relation $\kappa = (1 - \Phi)/\Phi$, or from equation (\[ka\]), for $(d,D) = (3,4)$. Secondly, one has $\Phi = 0$ on the whole line $\nu^{-1}(D) - d = 0$ in figure \[fig\_cyl\]. This result can be obtained by using the [*co-dimension additivity theorem*]{}, stating that the co-dimension of the intersection points of two objects of dimensions $D_1$ and $D_2$ is given by the sum of their co-dimensions: $D - D_{int} = (D - D_1) + (D - D_2)$, i.e., $$\label{codim}
D_{int} = D_1 + D_2 - D \, \, \, .$$ For example, two-dimensional surfaces generically intersect along curves in $D = 3$ ($D_{int} = 2 + 2 - 3 = 1$) and only at isolated points in $D = 4$ ($D_{int} = 0$). Equation (\[codim\]) is also expected to hold if one or both objects are fractal. In the present case, one object is a self-avoiding random walk with fractal (Hausdorff) dimension $\nu^{-1}$ and the other one is a “generalized cylinder” with co-dimension $d$ (see figure \[fig\_cyl\]); the dimension of intersection points of these two objects is thus given by $$\label{codimen}
D_{int} = \nu^{-1}(D) - d \, \, \, .$$ In figure \[fig\_cyl\], the line $D_{int} = 0$ as a function of $d$ and $D$ is shown as the blue dashed line. An unbounded, free, self-avoiding random walk does not intersect with “generalized cylinders” located above the dashed line, apart from exceptional cases. In this sense, “generalized cylinders” above the dashed line are irrelevant perturbations for a free, self-avoiding random walk. Now, “generalized cylinders” located right on the line $D_{int} = 0$ correspond to marginal cases: an unbounded, free, self-avoiding random walk [*does*]{} intersect with “generalized cylinders” located on the dashed line, but only at isolated points. We thus expect that the number of intersecting monomers $N_S$ grows with $N$ for $N \to \infty$, but only logarithmically, i.e., $N_S \sim \ln N$, which implies $\Phi = 0$; compare the case $d = 2$ for the Gaussian model discussed in section \[subsec\_gauss\], and compare the case ${\cal E} = {\cal E}^*$ with $\Phi = 0$ in section \[subsec\_general\]. It should be noted that this argument only applies to [*unperturbed*]{} random walks, and makes no statement about walks that interact with the body.
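The counting rule (\[codim\]) and its fractal extension (\[codimen\]) can be spot-checked directly. The snippet below (an added illustration) reproduces the examples given above, using the exact value $\nu = 3/4$ for $n = 0$ in $D = 2$:

```python
from fractions import Fraction

def d_intersection(D1, D2, D):
    # Co-dimension additivity: D - D_int = (D - D1) + (D - D2)
    return D1 + D2 - D

# Two-dimensional surfaces generically intersect along curves in D = 3
# and only at isolated points in D = 4:
assert d_intersection(2, 2, 3) == 1
assert d_intersection(2, 2, 4) == 0

# Fractal case: a self-avoiding walk has Hausdorff dimension 1/nu, so a
# "generalized cylinder" of co-dimension d gives D_int = 1/nu(D) - d.
# With the exact value nu = 3/4 in D = 2, a point defect (d = 2) yields a
# negative intersection dimension, i.e. no generic intersection:
nu = Fraction(3, 4)
assert 1 / nu - 2 == Fraction(-2, 3)
```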
Thus, the values of the exponent $\Phi$ at the end points of the green line in figure \[fig\_cyl\] are available. This can be used to obtain an estimate for $\Phi$ for an ordinary cylinder in $D = 3$ as follows. The shape of the dashed line in figure \[fig\_cyl\] is known quite accurately by means of the $\varepsilon$-expansion of $\nu(D)$ in conjunction with the exact value $\nu = 3/4$ for $n = 0$ in $D = 2$ [@Amit; @ZJ02]. One may therefore estimate $\Phi$ for a cylinder in $D=3$, located at the point $(d,D) = (2,3)$ in figure \[fig\_cyl\], by means of a linear interpolation between the known values of $\Phi$ at the end points of the green line (compare references [@HD99; @H2000]). In this way, using equation (\[relation\]), we find for a cylinder in $D = 3$ the estimates $$\label{exp}
\Phi \simeq \frac{1}{6} \, \, \, , \qquad
\kappa = \frac{1 - \Phi}{\Phi} \simeq 5 \, \, \, .$$ Since these exponents are universal and do not depend on the cylinder radius $R$, they are also expected to hold for a rigid rod with vanishing radius, or for a line of lattice sites in a numerical simulation of this system.
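The estimate $\Phi \simeq 1/6$ can be reproduced by the following reconstruction of the interpolation (a sketch under stated assumptions, not necessarily the exact procedure behind figure \[fig\_cyl\]): approximate $\nu^{-1}(D)$ linearly between the exact values $\nu^{-1}(4) = 2$ (mean field) and $\nu^{-1}(2) = 4/3$, intersect the resulting dashed line with the cylinder family $d = D - 1$, and then interpolate $\Phi$ linearly between $\Phi = 0$ at that crossing and the Gaussian value $\Phi = 1/2$ at $(d,D) = (3,4)$:

```python
from fractions import Fraction as F

def inv_nu(D):
    # Linear approximation of 1/nu(D) between 1/nu(2) = 4/3 and 1/nu(4) = 2
    return F(4, 3) + (D - 2) * F(1, 3)

# A cylinder in D dimensions has co-dimension d = D - 1; the cylinder family
# crosses the marginal line D_int = 1/nu(D) - d = 0 at D = 5/2:
D_star = F(5, 2)
assert inv_nu(D_star) == D_star - 1

def phi(D):
    # Linear interpolation of Phi along the cylinder line between
    # Phi = 0 at D = 5/2 and the Gaussian value Phi = 1/2 at D = 4
    return F(1, 2) * (D - D_star) / (4 - D_star)

assert phi(3) == F(1, 6)                 # the estimate of equation (exp)
assert (1 - phi(3)) / phi(3) == 5        # kappa = (1 - Phi)/Phi = 5
```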
Conclusion
==========
We have investigated the adsorption transition of a long flexible self-avoiding polymer chain onto a rigid thin rod by field-theoretical methods. The rod is endowed with a short-ranged adsorption energy ${\cal E}$ for the chain monomers so that, on increasing ${\cal E}$, at some threshold value ${\cal E}^*$ the chain undergoes a transition from an unbound state to a bound state, as shown in figure \[fig\_rod\]. The main results and remaining questions are summarized below.
- By means of general scaling arguments we obtained the scaling relation (\[relation\]) for the exponents $\Phi$ and $\kappa$ defined in equations (\[nscale\]) and (\[resfracintro\]), and the phase diagrams shown in figure \[fig\_pd\] in terms of the number of chain monomers $N$, the chemical potential $\mu$ conjugate to $N$, and the adsorption energy ${\cal E}$.
- By representing the rod by a cylinder of finite radius $R$ we could use available results for field theories with curved boundaries [@AO93]; see figure \[fig\_curved\]. By using renormalization group arguments, we derived the scaling behaviour of the chain partition function in the grand canonical ensemble, equation (\[scalbeh\]), and in the canonical ensemble, equation (\[scalbehz\]), where $L \sim N$ and $t \sim \mu$. Notable features of the scaling results are the distinct form of the scaling variable $\zeta \propto R c$, where the parameter $c$ is related to the surface potential for chain monomers, and the curvature-induced shift of $\zeta$ in equation (\[variable\]) with $\zeta_{\rm SI}$ from equation (\[van\]). It also follows that the exponents $\Phi$ and $\kappa$ introduced in equations (\[nscale\]) and (\[resfracintro\]) are universal.
- Because the cylinder radius $R$ enters the scaling functions $\Theta$ and $\Psi$ in equations (\[scalbeh\]) and (\[scalbehz\]) explicitly, it is difficult to obtain the universal exponents $\Phi$ and $\kappa$ directly from them. Therefore we used the co-dimension additivity theorem in conjunction with an interpolation procedure, as shown in figure \[fig\_cyl\], to obtain the estimates for $\Phi$ and $\kappa$ in equation (\[exp\]). Checking these exponents and the scaling relation (\[relation\]) is a possible starting point for numerical simulations of this system.
- It would be interesting to introduce new methods to derive the exponents $\Phi$ and $\kappa$, possibly avoiding the introduction of a finite cylinder radius $R$ from the outset.
- It would also be interesting to explain the relation between $\zeta_0^*$ and $\zeta_{\rm SI}$ discussed below equation (\[van\]).
I would like to thank C v Ferber for useful correspondence.
Mapping of the polymer system onto the $n$-vector model
=====================================================
\[sec\_app\]
The $\rho^2({\bf r})$ interaction in equation (\[pf\]) can be linearized by means of a Gaussian transformation [@C75; @CJ90]. This procedure makes use of the Gaussian integral $$\label{gauss}
\int {\cal D}X \exp\left[- \frac{1}{2} \, X^T A X + b^T X \right] =
\left( \det \frac{A}{2 \pi} \right)^{-1/2} \,
\exp\left[\frac{1}{2} \, b^T A^{-1} \, b \right]$$ where $X$ is a vector with discrete or continuous indices and the symmetric matrix $A$ must have a positive definite real part. Using $X({\bf r}) \propto i \sigma({\bf r})$ with purely imaginary $\sigma({\bf r})$, the matrix $\displaystyle{A({\bf r},{\bf r}') =
\frac{3}{u} \, \delta({\bf r}-{\bf r}')}$, and $b({\bf r}) = i \rho({\bf r})$, one finds $$\begin{aligned}
\label{aux}
& & \exp \left\{ - \frac{u}{6}
\int d^Dr \, \rho^2({\bf r}) \right\} \\[2mm]
& & = \int {\cal D} \sigma \,
\exp\left[ \frac{3}{2 u} \int d^D r \, \sigma^2({\bf r}) \, -
\int d^Dr \, \sigma({\bf r}) \rho({\bf r}) \right] \, \, . \nonumber\end{aligned}$$ Note that $A$ is positive definite due to our assumption $u > 0$. Inserting (\[aux\]) in (\[pf\]) yields $$\begin{aligned}
\label{pf2}
& & Z^{(2)}({\bf r}, {\bf r}'; L) =
\int {\cal D} \sigma \,
\exp\left[ \frac{3}{2 u}
\int d^D r \, \sigma^2({\bf r}) \right] \\[2mm]
& & \, \, \times \int_{\bf r}^{{\bf r}'} {\cal D} {\bf R}
\, \exp \left\{ - \frac{1}{4}
\int_{0}^{L} ds \left(\frac{d {\bf R}}{ds}\right)^2
- \int d^Dr \,
\left[ V({\bf r}) + \sigma({\bf r}) \right]
\rho({\bf r}) \right\} \nonumber\end{aligned}$$ The $\rho^2({\bf r})$ interaction in equation (\[pf\]) has been replaced by the interaction of $\rho({\bf r})$ with an external, fluctuating potential $\sigma({\bf r})$. The second line of equation (\[pf2\]) can be interpreted as the path integral representation of the evolution operator $\langle {\bf r}' \, | \, e^{- L \hat{H} } | \, {\bf r} \, \rangle$ in imaginary time $s$ of a quantum-mechanical particle with Hamiltonian $\hat{H} = - \Delta + V({\bf r}) + \sigma({\bf r})$. The Laplace transform of this evolution operator with respect to $L$ yields the resolvent $$\label{lapop}
\int_0^{\infty} dL \, e^{- t L} \,
\langle {\bf r}' \, \bigg| \, e^{- L \hat{H} } \bigg| \, {\bf r} \, \rangle
= \left\langle {\bf r}' \,
\Big| \frac{1}{- \Delta + t + V({\bf r}) + \sigma({\bf r})} \Big|
\, {\bf r} \right\rangle \, \, .$$ The resolvent can be represented in the standard way by the two-point correlation function of an $n$-component field $\vec{\Phi} = (\Phi_1, \ldots, \Phi_n)$ in the limit $n \to 0$. The result is $$\begin{aligned}
\label{ev}
Z^{(2)}({\bf r}, {\bf r}'; L) & = &
\int {\cal D} \sigma
\exp\left[ \frac{3}{2 u}
\int d^D r \, \sigma^2({\bf r}) \right] \\[2mm]
& \times & {\cal L}_{t \to L} \, \lim\limits_{n \to 0} \,
\int {\cal D} \vec{\Phi} \,
\Phi_1({\bf r}) \, \Phi_1({\bf r}') \, e^{- S\{\vec{\Phi}\}} \nonumber\end{aligned}$$ with the action $$\label{actionev}
S\{\vec{\Phi}\} = \int d^Dr \left[
\frac{1}{2} (\nabla \vec{\Phi})^2 +
\frac{t}{2} \, \vec{\Phi}^2 +
\frac{1}{2} \, \left[ \sigma({\bf r}) + V({\bf r}) \right]
\vec{\Phi}^2 \right] \, \, \, .$$ The Gaussian integration in equation (\[ev\]) can be carried out using equation (\[gauss\]) with the same $X({\bf r}) \propto i \sigma({\bf r})$ and matrix $\displaystyle{A({\bf r},{\bf r}') =
\frac{3}{u} \, \delta({\bf r}-{\bf r}')}$ as before, and now $\displaystyle{b({\bf r}) = \frac{i}{2} \, \vec{\Phi}^2({\bf r})}$. This leads to equations (\[res\]) - (\[action\]).
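Both ingredients of this appendix can be verified numerically in the scalar (one-variable) case. The sketch below (an added illustration, not part of the derivation) checks the Gaussian integral (\[gauss\]) for a real scalar, $\int e^{-ax^2/2 + bx}\, dx = \sqrt{2\pi/a}\, e^{b^2/(2a)}$, and the Laplace transform (\[lapop\]) with a scalar "Hamiltonian" $H$:

```python
import math

def riemann(f, lo, hi, n):
    """Simple midpoint-rule quadrature of f over [lo, hi] with n steps."""
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

# Scalar version of the Gaussian transformation, equation (gauss):
a, b = 3.0, 0.7
lhs = riemann(lambda x: math.exp(-0.5 * a * x * x + b * x), -20.0, 20.0, 200000)
rhs = math.sqrt(2 * math.pi / a) * math.exp(b * b / (2 * a))
assert abs(lhs - rhs) / rhs < 1e-6

# Scalar version of the resolvent, equation (lapop):
#   integral_0^inf  e^(-t L) e^(-L H) dL  =  1 / (t + H)
t, H = 0.8, 1.5
lap = riemann(lambda L: math.exp(-(t + H) * L), 0.0, 40.0, 200000)
assert abs(lap - 1.0 / (t + H)) < 1e-6
```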
[99]{}
de Gennes P G 1979 [*Scaling Concepts in Polymer Physics*]{} (Ithaca: Cornell University)
des Cloizeaux J and Jannink G 1990 [*Polymers in Solution: Their Modelling and Structure*]{} (Oxford: Clarendon)
Schäfer L 1999 [*Excluded Volume Effects in Polymer Solutions, as Explained by the Renormalization Group*]{} (Berlin: Springer)
Fleer G, Cohen-Stuart M, Scheutjens J, Cosgrove T and Vincent B 1993 [*Polymers at Interfaces*]{} (London: Chapman and Hall)
Eisenriegler E, Kremer K and Binder K 1982 [*J. Chem. Phys.*]{} [**77**]{} 6296
Eisenriegler E 1993 [*Polymers near Surfaces*]{} (Singapore: World Scientific)
Napper D H 1989 [*Polymeric Stabilization of Colloidal Dispersions*]{} (London: Academic)
Buitenhuis J, Donselaar L N, Buining P A, Stroobants A and Lekkerkerker H N 1995 [*J. Colloid Interface Sci.*]{} [**175**]{} 46
Piculell L, Viebke C and Linse P 1995 [*J. Phys. Chem.*]{} [**99**]{} 17423
Eisenriegler E, Hanke A and Dietrich S 1996 [*Phys. Rev. E*]{} [**54**]{} 1134
Nowicki W 2002 [*Macromolecules*]{} [**35**]{} 1424
Alberts B, Johnson A, Lewis J, Raff M, Roberts K and Walter P 2002 [*Molecular Biology of the Cell*]{} fourth edition (New York: Garland)
Poland D and Scheraga H A 1970 [*Theory of Helix-Coil Transitions in Biopolymers*]{} (New York: Academic)
Wartell R M and Benight A S 1985 [*Phys. Rep.*]{} [**126**]{} 67
Causo M S, Coluzzi B and Grassberger P 2000 [*Phys. Rev. E*]{} [**62**]{} 3958
Kafri Y, Mukamel D and Peliti L 2000 [*Phys. Rev. Lett.*]{} [**85**]{} 4988
Garel T, Monthus C and Orland H 2001 [*Europhys. Lett.*]{} [**55**]{} 132
Carlon E, Orlandini E and Stella A L 2002 [*Phys. Rev. Lett.*]{} [**88**]{} 198101
Richard C and Guttmann A J 2004 [*J. Stat. Phys.*]{} [**115**]{} 925
Schäfer L 2005 [*Can Finite Size Effects in the Poland-Scheraga Model Explain Simulations of a Simple Model for DNA Denaturation?*]{} e-print cond-mat/0502668
Metzler R and Hanke A 2005 [*Knots, Bubbles, Unwinding, and Breathing: Probing the Topology of DNA and other Biomolecules*]{} in [*Handbook of Theoretical and Computational Nanotechnology*]{} Rieth M and Schommers W editors (in press)
Joanny J F, Leibler L and Ball R 1984 [*J. Chem. Phys.*]{} [**81**]{} 4640
Schäfer L and Kappeler C 1985 [*J. Physique*]{} [**46**]{} 1853
Schäfer L and Kappeler C 1990 [*Colloid Polym. Sci.*]{} [**268**]{} 995
Schäfer L, Lehr U and Kappeler C 1991 [*J. Phys. I*]{} [**1**]{} 211
von Ferber C and Holovatch Y 1997 [*Phys. Rev. E*]{} [**56**]{} 6370
Vanderzande C 1998 [*Lattice Models of Polymers*]{} (Cambridge: Cambridge University)
Orlandini E, Seno F and Stella A L 2000 [*Phys. Rev. Lett.*]{} [**84**]{} 294
Baiesi M, Carlon E, Orlandini E and Stella A L 2001 [*Phys. Rev. E*]{} [**63**]{} 041801
Kumar S, Giri D and Bhattacharjee S M 2004 [*Force induced triple point for interacting polymers*]{} e-print cond-mat/0407261
Binder K 1983 in [*Phase Transitions and Critical Phenomena*]{} Domb C and Lebowitz J L editors (London: Academic) volume 8 page 1
Diehl H W 1986 in [*Phase Transitions and Critical Phenomena*]{} Domb C and Lebowitz J L editors (London: Academic) volume 10 page 75
Diehl H W 1997 [*Int. J. Mod. Phys. B*]{} [**11**]{} 3503
de Gennes P G 1975 [*J. Phys. (France) Lett.*]{} [**36**]{} L55
Duplantier B 1982 [*J. Phys. (France)*]{} [**43**]{} 991
de Gennes P G 1972 [*Phys. Lett.*]{} [**38A**]{} 339
des Cloizeaux J 1975 [*J. Physique*]{} [**36**]{} 281
Amit D J 1984 [*Field Theory, the Renormalization Group, and Critical Phenomena*]{} second edition (Singapore: World Scientific)
Zinn-Justin J 2002 [*Quantum Field Theory and Critical Phenomena*]{} fourth edition (Oxford: Clarendon)
McAvity D M and Osborn H 1993 [*Nucl. Phys. B*]{} [**394**]{} 728
Hanke A and Dietrich S 1999 [*Phys. Rev. E*]{} [**59**]{} 5081
Hanke A 2000 [*Phys. Rev. Lett.*]{} [**84**]{} 2180
Abramowitz M and Stegun I A 1972 [*Handbook of Mathematical Functions*]{} (New York: Dover)
Diehl H W and Dietrich S 1981 [*Phys. Rev. B*]{} [**24**]{} 2878
Diehl H W and Dietrich S 1983 [*Z. Phys. B*]{} [**50**]{} 117
Eisenriegler E and Ritschel U 1995 [*Phys. Rev. B*]{} [**51**]{} 13717
Eisenriegler E [*private communication*]{}
Diehl H W and Shpot M 1994 [*Phys. Rev. Lett.*]{} [**73**]{} 3431
[^1]: The co-dimension of an object of dimension $D_{obj}$ in a space of dimension $D$ is given by $D - D_{obj}$.
---
abstract: 'We present a novel approach to the detection and 3D pose estimation of objects in color images. Its main contribution is that it requires neither training phases nor training data for new objects, while state-of-the-art methods typically require hours of training time and hundreds of registered training images. Instead, our method relies only on the objects’ geometries. Our method focuses on objects with prominent corners, which covers a large number of industrial objects. We first learn to detect object corners of various shapes in images and also to predict their 3D poses, by using training images of a small set of objects. To detect a new object in a given image, we first identify its corners from its CAD model; we also detect the corners visible in the image and predict their 3D poses. We then introduce a RANSAC-like algorithm that robustly and efficiently detects and estimates the object’s 3D pose by matching its corners on the CAD model with their detected counterparts in the image. Because we also estimate the 3D poses of the corners in the image, detecting only 1 or 2 corners is sufficient to estimate the pose of the object, which makes the approach robust to occlusions. Finally, we rely on a check that exploits the full 3D geometry of the objects, in case multiple objects have the same spatial arrangement of corners. The advantages of our approach make it particularly attractive for industrial contexts, and we demonstrate our approach on the challenging T-LESS dataset.'
author:
- |
Giorgia Pitteri$^{1}$ $\quad
\quad$ Slobodan Ilic$^{2,3}$ $\quad \quad$ Vincent Lepetit$^1$\
$^1$Laboratoire Bordelais de Recherche Informatique, Université de Bordeaux, Bordeaux, France\
$^2$ Technische Universität München, Germany $\quad\quad$ $^3$ Siemens AG, München, Germany\
[ $^1$$\texttt{\small
\{first.lastname\}@u-bordeaux.fr} \quad\quad$ $^2$$\texttt{\small Slobodan.Ilic@in.tum.de}$]{}
bibliography:
- 'string.bib'
- 'vision.bib'
- 'biblio.bib'
title: |
CorNet: Generic 3D Corners for 6D Pose Estimation of\
New Objects without Retraining
---
Introduction
============
Related Work
============
Approach
========
Evaluation
==========
Conclusion
==========
---
abstract: 'In \[[*Séries Gevrey de type arithmétique $I$. Théorèmes de pureté et de dualité*]{}, Annals of Math. [**151**]{} (2000), 705–740\], André has introduced $E$-operators, a class of differential operators intimately related to $E$-functions, and constructed local bases of solutions for these operators. In this paper we investigate the arithmetical nature of connexion constants of $E$-operators at finite distance, and of Stokes constants at infinity. We prove that they involve values at algebraic points of $E$-functions in the former case, and in the latter one, values of $G$-functions and of derivatives of the Gamma function at rational points in a very precise way. As an application, we define and study a class of numbers having certain algebraic approximations defined in terms of $E$-functions. These types of approximations are motivated by the convergents to the number $e$, as well as by recent constructions of approximations to Euler’s constant and values of the Gamma function. Our results and methods are completely different from those in our paper \[[*On the values of $G$-functions*]{}, Commentarii Math. Helv., to appear\], where we have studied similar questions for $G$-functions.'
author:
- 'S. Fischler and T. Rivoal'
title: 'Arithmetic theory of $E$-operators'
---
Introduction
============
In a seminal paper [@YA1], André has introduced $E$-operators, a class of differential operators intimately related to $E$-functions, and constructed local bases of solutions for these operators. In this paper we investigate the arithmetical nature of connexion constants of $E$-operators, and prove that they involve values at algebraic points of $E$-functions or $G$-functions, and values at rational points of derivatives of the Gamma function. As an application, we will focus on algebraic approximations to such numbers, in connection with Aptekarev’s famous construction for Euler’s constant $\gamma$.
To begin with, let us recall the following definition.
\[def:gfunc\] An $E$-function $E$ is a formal power series $E(z)=\sum_{n=0}^{\infty} \frac{a_n}{n!} z^n$ such that the coefficients $a_n$ are algebraic numbers and there exists $C>0$ such that:
1. the maximum of the moduli of the conjugates of $a_n$ is $\leq C^{n+1}$ for any $n$.
2. there exists a sequence of rational integers $d_n$, with $\vert d_n \vert \leq C^{n+1}$, such that $d_na_m$ is an algebraic integer for all $m\le n$.
3. $E(z)$ satisfies a homogeneous linear differential equation with coefficients in $\Qb(z)$.
A $G$-function is defined similarly, as $\sum_{n=0}^{\infty} a_n z^n$ with the same assumptions $(i)$, $(ii)$, $(iii)$; throughout the paper we fix a complex embedding of ${\overline{\mathbb Q}}$.
We refer to [@YA1] for an overview of the main properties of $E$ and $G$-functions. For the sake of precision, we mention that the class of $E$-functions was first defined by Siegel in a more general way, with bounds of the shape $n!^{{\varepsilon}}$ for any ${\varepsilon}>0$ and any $n\gg _{{\varepsilon}} 1$, instead of $C^{n+1}$ for all $n\in {\mathbb{N}}= \{0,1,2,\ldots\}$. The functions covered by Definition \[def:gfunc\] are called $E^*$-functions by Shidlovskii [@shid], and are the ones used in the recent literature under the denomination $E$-functions (see [@YA1; @beukers3; @lagarias]); it is believed that both classes coincide.
Examples of $E$-functions include $e^{\alpha z}$ with $\alpha\in{\overline{\mathbb Q}}$, hypergeometric series $_p F_p$ with rational parameters, and Bessel functions. Very precise transcendence (and even algebraic independence) results are known on values of $E$-functions, such as the Siegel-Shidlovskii theorem [@shid]. Beukers’ refinement of this result enables one to deduce the following statement (see §\[subsecrappelsE\]), whose analogue is false for $G$-functions (see [@beukers2] for interesting non-trivial examples):
\[thA\] An $E$-function with coefficients in a number field $\mathbb K$ takes at an algebraic point $\alpha$ either a transcendental value or a value in $\mathbb K(\alpha)$.
In this paper we consider the following set ${\mathbf{E}}$, which is analogous to the ring ${{\bf G}}$ of values at algebraic points of analytic continuations of $G$-functions studied in [@gvalues]; we recall that ${{\bf G}}$ might be equal to $\mathcal P[1/\pi]$, where $\mathcal P$ is the ring of periods (in the sense of Kontsevich-Zagier [@KZ]: see §2.2 of [@gvalues]).
\[defi:1\] The set $\mathbf E$ is defined as the set of all values taken by any $E$-function at any algebraic point.
Since $E$-functions are entire and $E(\alpha z)$ is an $E$-function for any $E$-function $E(z)$ and any $\alpha\in{\overline{\mathbb Q}}$, we may restrict to values at $z=1$. Moreover $E$-functions form a ring, so that ${\mathbf{E}}$ is a subring of ${\mathbb{C}}$. Its group of units contains ${{\overline{\mathbb Q}}{^*}}$ and $\exp({\overline{\mathbb Q}})$ because algebraic numbers, $\exp(z)$ and $\exp(-z)$ are $E$-functions. Other elements of ${\mathbf{E}}$ include values at algebraic points of Bessel functions, and also of any arithmetic Gevrey series of negative order (see [@YA1], Corollaire 1.3.2), for instance Airy’s oscillating integral. It seems unlikely that ${\mathbf{E}}$ is a field and we don’t know if we have a full description of its units.
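The ring property used above can be made concrete at the level of Taylor coefficients: if $E_1 = \sum_n a_n z^n/n!$ and $E_2 = \sum_n b_n z^n/n!$, their product is $\sum_n c_n z^n/n!$ with $c_n = \sum_k \binom{n}{k} a_k b_{n-k}$, a binomial convolution that visibly preserves the growth and denominator conditions of Definition \[def:gfunc\]. A small sketch (an added illustration) checks this on $e^z \cdot e^z = e^{2z}$:

```python
from math import comb

def product_coeffs(a, b):
    """If E1 = sum a_n z^n/n! and E2 = sum b_n z^n/n!, their product is
    sum c_n z^n/n! with c_n = sum_k C(n,k) a_k b_{n-k} (binomial convolution)."""
    N = min(len(a), len(b))
    return [sum(comb(n, k) * a[k] * b[n - k] for k in range(n + 1))
            for n in range(N)]

# e^z has a_n = 1 for all n; the product e^z * e^z = e^{2z} must have c_n = 2^n:
ones = [1] * 10
assert product_coeffs(ones, ones) == [2 ** n for n in range(10)]
```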
A large part of our results is devoted to the arithmetic description of [*connexion constants*]{} or [*Stokes constants*]{}. Any $E$-function $E(z)$ satisfies a differential equation $Ly=0$, where $L$ is an $E$-operator (see [@YA1]); it is not necessarily minimal and its only possible singularities are 0 and $\infty$. André has proved [@YA1] that a basis of solutions of $L$ at $z=0$ is of the form $
(E_1(z), \ldots, E_\mu(z))\cdot z^M
$ where $M$ is an upper triangular $\mu \times \mu$ matrix with coefficients in $\mathbb Q$ and the $E_j(z)$ are $E$-functions. This implies that any local solution $F(z)$ of $L$ at $z=0$ is of the form $$\label{eq:base0}
F(z)=\sum_{j=1}^\mu \Big(\sum_{s \in S_j}
\sum_{k\in K_j} {\phi_{j,s,k}}z^{s}\log(z)^k\Big) E_j(z)$$ where $S_j \subset \mathbb Q, K_j \subset \mathbb N$ are finite sets and ${\phi_{j,s,k}}\in \mathbb C$. Our purpose is to study the connexion constants of $F(z)$, assuming all coefficients ${\phi_{j,s,k}}$ to be algebraic (with particular focus on the case where $F(z)$ itself is an $E$-function).
Any point $\alpha\in{\overline{\mathbb Q}}\setminus\{0\}$ is a regular point of $L$ and there exists a basis of local holomorphic solutions $G_1(z), \ldots, G_{\mu}(z)\in{\overline{\mathbb Q}}[[z-\alpha]]$ such that, around $z=\alpha$, $$\label{eq:EG2}
F(z)=\omega_1G_1(z)+ \cdots + \omega_\mu G_{\mu}(z)$$ for some complex numbers $\omega_1, \ldots, \omega_\mu$, called the connexion constants (at finite distance).
\[theo:connexfini\] If all coefficients ${\phi_{j,s,k}}$ in (\[eq:base0\]) are algebraic then the connexion constants $\omega_1,\ldots, \omega_\mu$ in (\[eq:EG2\]) belong to ${\mathbf{E}}[\log \alpha]$, and even to ${\mathbf{E}}$ if $F(z)$ is an $E$-function.
The situation is much more complicated around $\infty$, which is in general an irregular singularity of $L$; this part is therefore much more involved than the corresponding one for $G$-functions [@gvalues] (since $\infty$ is a regular singularity of $G$-operators, the connexion constants of $G$-functions at any $\zeta\in{\mathbb{C}}\cup\{\infty\}$ always belong to ${{\bf G}}$). The local solutions at $\infty$ involve divergent series, which give rise to Stokes phenomenon: the expression of an $E$-function $E(z)$ on a given basis is valid on certain angular sectors, and the connexion constants may change from one sector to another when crossing certain rays called anti-Stokes directions. For this reason, we speak of Stokes constants rather than connexion constants. More precisely, let $\theta\in{\mathbb{R}}$ and assume that $\theta$ does not belong to some explicit finite set (modulo $2\pi$) which contains the anti-Stokes directions. Then we compute explicitly the asymptotic expansion $$\label{eqdevintro}
E(z) {\approx}\sum_{\rho\in\Sigma} e^{\rho z}
\sum_{\al\in S } \sum_{i\in T } \sum_{n=0}^{\infty} c_{\rho, \al,i,n}z^{-n-\al}\log(1/z)^i$$ as $|z| \to \infty$ in a large sector $\theta-\frac{\pi}2-{\varepsilon}\leq \arg(z) \leq \theta+\frac{\pi}2+{\varepsilon}$ for some ${\varepsilon}>0$; in precise terms, $E(z)$ can be obtained by 1-summation from this expansion (see §\[subsecasyexp\]). Here $\Sigma\subset{\mathbb{C}}$, $S\subset{\mathbb{Q}}$ and $T\subset{\mathbb{N}}$ are finite subsets, and the coefficients $ c_{\rho, \al,i,n}$ are complex numbers; all of them are constructed explicitly in terms of the Laplace transform $g(z)$ of $E(z)$, which is annihilated by a $G$-operator. In applying or studying we shall always assume that the sets $\Sigma$, $S$ and $T$ have the least possible cardinality (so that $\alpha-\alpha'\not\in{\mathbb{Z}}$ for any distinct $\alpha,\alpha'\in S$) and that for any $\alpha$ there exist $\rho$ and $i$ with $ c_{\rho, \al,i,0}\neq 0$. Then the asymptotic expansion is uniquely determined by $E(z)$ (see §\[subsecasyexp\]).
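A familiar special case may help fix ideas (this example is ours, not taken from the text). The Bessel function $J_0(z)=\sum_{n\geq 0}(-z^2/4)^n/n!^2$ is an $E$-function, and its classical expansion $J_0(x)\sim\sqrt{2/(\pi x)}\,\cos(x-\pi/4)$ on the positive real axis has exactly the shape of the expansion above, with $\Sigma=\{i,-i\}$, $S=\{1/2\}$, $T=\{0\}$, and $\Gamma(1/2)=\sqrt{\pi}$ entering the coefficients. The following Python sketch (illustrative only, using the mpmath library) checks the leading term numerically; the tolerance is a safe margin for the next term of the expansion, which is smaller by a factor of order $1/(8x)$.

```python
from mpmath import mp, besselj, sqrt, cos, pi, mpf

mp.dps = 30
# J_0 is an E-function; its expansion in a large sector around the positive
# real axis has Sigma = {i, -i} and S = {1/2}, with no logarithms:
#   J_0(x) ~ sqrt(2/(pi*x)) * cos(x - pi/4)   as x -> +infinity.
x = mpf(50)
leading = sqrt(2 / (pi * x)) * cos(x - pi / 4)
# the next term of the expansion is smaller by a factor of order 1/(8x)
assert abs(besselj(0, x) - leading) < sqrt(2 / (pi * x)) / (4 * x)
```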
One of our main contributions is the value of $ c_{\rho, \al,i,n}$, which is given in terms of derivatives of $1/\Gamma$ at $\al\in{\mathbb{Q}}$ and connexion constants of $g(z)$ at its finite singularities $\rho$. André has constructed ([@YA1], Théorème 4.3 $(v)$) a basis $H_1(z),\ldots,H_\mu(z)$ of formal solutions at infinity of an $E$-operator that annihilates $E(z)$; these solutions involve Gevrey divergent series of order 1, and are of the same form as the right hand side of , with algebraic coefficients $ c_{\rho, \al,i,n}$. The asymptotic expansion of $E(z)$ in a large sector bisected by $\theta$ can be written on this basis as $$\label{eqintro17}
{\omega}_{1,\theta} H_1(z) + \ldots + {\omega}_{\mu,\theta} H_\mu(z)$$ with Stokes constants ${\omega}_{i,\theta}$. To identify these constants, we first introduce another important set.
We define ${\bf S}$ as the ${{\bf G}}$-module generated by all the values of derivatives of the Gamma function at rational points. It is also the ${{\bf G}}[\gamma]$-module generated by all the values of $\Gamma$ at rational points, and it is a ring.
We show in §\[sec:structureS\] why the two modules coincide. The Rohrlich-Lang conjecture (see [@YA2] or [@MiWperiodes]) implies that the values $\Gamma(s)$, for $s\in{\mathbb{Q}}$ with $0 < s\leq 1$, are ${\overline{\mathbb Q}}$-linearly independent. We conjecture that these numbers are in fact also ${{\bf G}}[\gamma]$-linearly independent, so that ${\bf S}$ is the free ${{\bf G}}[\gamma]$-module they generate.
We then have the following result.
\[thintrocce\] Let $E(z)$ be an $E$-function, and $\theta\in{\mathbb{R}}$ be a direction which does not belong to some explicit finite set (modulo $2\pi$). Then:
1. The Stokes constants ${\omega}_{i,\theta}$ belong to ${\bf S}$.
2. All coefficients $ c_{\rho, \al,i,n}$ in belong to ${\bf S}$.
3. Let $\rho\in \Sigma$, $\alpha\in S$, and $n\geq 0$; denote by $k$ the largest $i\in T$ such that $ c_{\rho, \al,i,n}\neq 0$. If $k$ exists then for any $i \in T $ the coefficient $ c_{\rho, \al,i,n}$ is a ${{\bf G}}$-linear combination of $\Gamma(\alpha)$, $\Gamma'(\alpha)$, …, $\Gamma^{(k-i)}(\alpha)$. In particular, $ c_{\rho, \al,k,n} \in \Gamma(\alpha) \cdot {{\bf G}}$. Here $\Gamma^{(\ell)}(\alpha)$ is understood as $\Gamma^{(\ell)}(1)$ if $\alpha\in {\mathbb{Z}}_{\leq 0}$.
4. Let $F(z)$ be a local solution at $z=0$ of an $E$-operator, with algebraic coefficients ${\phi_{j,s,k}}$ in . Then assertions $(i)$ and $(ii)$ hold with $F(z)$ instead of $E(z)$.
Assertions $(i)$ and $(iv)$ of Theorem \[thintrocce\] make precise André’s remark in [@YA1 p. 722]: “[*Nous privilégierons une approche formelle, qui permettrait de travailler sur ${\overline{\mathbb Q}}(\Gamma^{(k)}(a))_{k\in \mathbb N,a\in \mathbb Q}$ plutôt que sur $\mathbb C$ si l’on voulait*]{}” (“We shall favour a formal approach, which would make it possible to work over ${\overline{\mathbb Q}}(\Gamma^{(k)}(a))_{k\in \mathbb N,a\in \mathbb Q}$ rather than over $\mathbb C$ if one wished”).
An important feature of Theorem \[thintrocce\] (assertion $(iii)$) is that $\Gamma^{(k)}(\alpha)$, for $k\geq 1$ and $\alpha\in{\mathbb{Q}}\setminus{\mathbb{Z}}_{\leq 0}$, never appears in the coefficient of a leading term of , but only combined with higher powers of $\log (1/z)$. This motivates the logarithmic factor in below, and explains an observation we had made on Euler’s constant: it always appears through $\gamma - \log(1/z)$ (see Eq. in §\[subsecnotationsinf\]). Moreover, in $(iii)$, it follows from the remarks made in §\[sec:structureS\] that, alternatively, $c_{\rho, \al,i,n}= \Gamma(\alpha) \cdot P_{\rho,\al, i,n}(\gamma)$ for some polynomial $P_{\rho,\al, i,n}(X)\in {{\bf G}}[X]$ of degree $\le k-i$.
The proof of Theorem \[thintrocce\] is based on the Laplace transform, the André-Chudnovski-Katz theorem on solutions of $G$-operators, and a specific complex integral (see [@YA1], p. 735).
As an application of Theorems \[theo:connexfini\] and \[thintrocce\], we study sequences of algebraic (or rational) approximations of special interest related to $E$-functions. In [@gvalues] we have proved that a complex number $\alpha$ belongs to the fraction field ${{\rm Frac}\,}{{\bf G}}$ of ${{\bf G}}$ if, and only if, there exist sequences $(P_n)$ and $(Q_n)$ of algebraic numbers such that $\lim_n P_n/Q_n =\alpha$ and $\sum_{n\ge 0} P_n z^n$, $\sum_{n\ge 0} Q_n z^n$ are $G$-functions. We have introduced this notion in order to give a general framework for irrationality proofs of values of $G$-functions such as zeta values. Such sequences are called $G$-approximations of $\alpha$, when $P_n$ and $Q_n$ are rational numbers. We drop this last assumption in the context of $E$-functions (see §\[subsecrappelsE\]), and consider the following definition.
\[def:Eapprox\] Sequences $(P_n)$ and $(Q_n)$ of algebraic numbers are said to be [*$E$-approximations*]{} of $\alpha\in{\mathbb{C}}$ if $$\lim_{n\to +\infty} \frac{P_n}{Q_n} =\alpha$$ and $$\sum_{n=0}^{\infty} P_n z^n= A(z)\cdot E\big(B(z)\big), \quad \sum_{n=0}^{\infty} Q_n z^n= C(z)\cdot F\big(D(z)\big)$$ where $E$ and $F$ are $E$-functions, $A, B, C, D$ are algebraic functions in ${\overline{\mathbb Q}}[[z]]$ with $B(0)=D(0)=0$.
This definition is motivated by the fact that many sequences of approximations to classical numbers are $E$-approximations, for instance diagonal Padé approximants to $e^z$ and in particular the convergents of the continued fraction expansion of $e$ (see §\[ssec:example\]). Elements in ${{\rm Frac}\,}{{\bf G}}$ also have $E$-approximations, since $G$-approximations $(P_n)$ and $(Q_n)$ of a complex number always provide $E$-approximations $P_n/n!$ and $Q_n/n!$ of the same number. In §\[ssec:example\], we construct $E$-approximations to $\Gamma(\alpha)$ for any $\alpha\in{\mathbb{Q}}\setminus{\mathbb{Z}}_{\leq 0}$, $\alpha<1$, by letting $
E_\alpha(z)=\sum_{n=0}^{\infty} \frac{z^n}{n!(n+\alpha)}
$, $Q_n(\alpha)=1$, and defining $P_n(\alpha)$ by the series expansion (for $\vert z\vert <1$) $$\frac1{(1-z)^{\alpha+1}} E_\alpha\left(-\frac{z}{1-z}\right) =\sum_{n=0}^{\infty} P_n(\alpha) z^n \in \mathbb Q[[z]];$$ then $\lim_n P_n(\alpha) = \Gamma(\alpha)$. The number $\Gamma(\alpha)$ appears in this setting as a Stokes constant. The condition $\alpha<1$ is harmless because we readily deduce $E$-approximations to $\Gamma(\alpha)$ for any $\alpha\in{\mathbb{Q}}$, $\alpha>1$, by means of the functional equation $\Gamma(s+1)=s\Gamma(s)$. Moreover, since $\frac1{(1-z)^{\alpha+1}} E_\alpha\left(-\frac{z}{1-z}\right)$ is holonomic, the sequence $(P_n(\alpha))$ satisfies a linear recurrence, of order $3$ with polynomial coefficients in $\mathbb Z[n,\alpha]$ of total degree $2$ in $n$ and $\alpha$; see §\[ssec:example\]. This construction is simpler than that in [@rivoal3] but the convergence to $\Gamma(\alpha)$ is slower.
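For a concrete look at this construction, expanding $E_\alpha(-z/(1-z))$ termwise gives the elementary closed form $P_n(\alpha)=\sum_{k=0}^n \frac{(-1)^k}{k!\,(k+\alpha)}\,\frac{(k+\alpha+1)_{n-k}}{(n-k)!}$ (our own rewriting, easily checked against the definition). The Python sketch below evaluates it in exact rational arithmetic for $\alpha=1/2$ and observes the slow drift of $P_n(1/2)$ towards $\Gamma(1/2)=\sqrt{\pi}$; the numerical thresholds are illustrative, not proved bounds.

```python
from fractions import Fraction
from math import factorial, sqrt, pi

def P(n, alpha):
    """n-th Taylor coefficient of (1-z)^(-alpha-1) * E_alpha(-z/(1-z)),
    via [z^n] z^k (1-z)^(-(k+alpha+1)) = (k+alpha+1)_(n-k) / (n-k)!."""
    total = Fraction(0)
    for k in range(n + 1):
        rising = Fraction(1)                      # (k+alpha+1)_(n-k)
        for m in range(n - k):
            rising *= k + alpha + 1 + m
        term = rising / (factorial(k) * (k + alpha) * factorial(n - k))
        total += -term if k % 2 else term
    return total

half = Fraction(1, 2)
assert P(0, half) == 2                            # E_alpha(0) = 1/alpha
err = lambda n: abs(float(P(n, half)) - sqrt(pi))
# slow convergence towards Gamma(1/2) = sqrt(pi), the Stokes constant
assert err(200) < err(1) and err(200) < 0.2
```

Exact rationals are used because the alternating terms grow before cancelling, which would ruin a floating-point evaluation for large $n$.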
Definition \[def:Eapprox\] enables us to consider an interesting class of numbers: those having $E$-approximations. Of course this is a countable subset of ${\mathbb{C}}$. We have seen that it contains all values of the Gamma function at rational points $s$, which are conjectured to be irrational if $s\not\in{\mathbb{Z}}$; very few results are known in this direction (see [@MiWperiodes]), and using suitable $E$-approximations may lead to proofs of new ones.
However we conjecture that Euler’s constant $\gamma$ does not have $E$-approximations: none of the approximations we have thought of has generating functions of the form required by Definition \[def:Eapprox\]. This conjecture is reasonable in view of Theorem \[theo:eapprox\], which we now state.
Given two subsets $X$ and $Y$ of $\mathbb C$, we set $$X\cdot Y=\big\{xy\,\big\vert\, x\in X, y\in Y\big\}, \quad
\displaystyle \frac{X}{Y}=\Big\{\frac xy \,\Big\vert \,x\in X, y\in Y\setminus\{0\}\Big\}.$$ We also set $\Gamma({\mathbb Q})=\{\Gamma(x)\vert x\in \mathbb Q\setminus \mathbb Z_{\le 0}\}$. If $X$ is a ring then we denote by ${{\rm Frac}\,}X = \frac{X}{X}$ its field of fractions. We recall [@gvalues] that $B(x,y)$ belongs to the group of units ${{\bf G}}{^*}$ of ${{\bf G}}$ for any $x,y\in{\mathbb{Q}}$, so that $\Gamma$ induces a group homomorphism ${\mathbb{Q}}\to {\mathbb{C}}{^*}/{{\bf G}}{^*}$ (by letting $\Gamma(-x) = 1$ for $x\in{\mathbb{N}}$). Therefore $\Gamma({\mathbb{Q}}) \cdot {{\bf G}}{^*}$ is a subgroup of ${\mathbb{C}}{^*}$, and so is $\Gamma({\mathbb{Q}}) \cdot \exp({\overline{\mathbb Q}}) \cdot {{\rm Frac}\,}{{\bf G}}$; for future reference we write $$\label{eq123}
\Gamma({\mathbb{Q}}) \cdot\Gamma({\mathbb{Q}}) \subset \Gamma({\mathbb{Q}})\cdot {{\bf G}}\quad \mbox{ and } \quad \frac{\Gamma({\mathbb{Q}}) }{\Gamma({\mathbb{Q}}) } \subset \Gamma({\mathbb{Q}}) \cdot {{\bf G}}.$$
\[theo:eapprox\] The set of numbers having $E$-approximations contains $$\label{eq:subset1}
\frac{ {\mathbf{E}}\cup \Gamma({\mathbb{Q}})}{ {\mathbf{E}}\cup \Gamma({\mathbb{Q}})} \cup {{\rm Frac}\,}{{\bf G}}$$ and it is contained in $$\label{eq:subset2}
\frac{ {\mathbf{E}}\cup (\Gamma({\mathbb{Q}}) \cdot {{\bf G}})}{ {\mathbf{E}}\cup (\Gamma({\mathbb{Q}}) \cdot {{\bf G}})} \cup \Big(\Gamma({\mathbb{Q}}) \cdot \exp({\overline{\mathbb Q}}) \cdot {{\rm Frac}\,}{{\bf G}}\Big).$$
The proof of is constructive; the one of is based on an explicit determination of the asymptotically dominating term of a sequence $(P_n)$ as in Definition \[def:Eapprox\]. This determination is based on analysis of singularities, the saddle point method, asymptotic expansions of $E(z)$, and Theorems \[theo:connexfini\] and \[thintrocce\]; it is of independent interest (see Theorem \[theoasypn\] in §\[sec:asympPn\]). The dominating term comes from the local behaviour of $E(z)$ at some $z_0\in{\mathbb{C}}$ (providing elements of ${\mathbf{E}}$, in connection with Theorem \[theo:connexfini\]) or at infinity (providing elements of $\Gamma({\mathbb{Q}})\cdot{{\bf G}}$; Theorem \[thintrocce\] is used in this case). This dichotomy leads to the unions in and ; it makes it unlikely for the set of numbers having $E$-approximations to be a field, or even a ring. We could have obtained a field by restricting Definition \[def:Eapprox\] to the case where $B(z) = D(z) = z$ and $A(z)$, $C(z)$ are not polynomials, since in this case the behaviour of $E(z)$ at $\infty$ would not come into play; this field would be simply ${{\rm Frac}\,}{\mathbf{E}}$.
It seems likely that there exist numbers having $E$-approximations but no $G$-approximations, because conjecturally ${{\rm Frac}\,}{\mathbf{E}}\cap {{\rm Frac}\,}{{\bf G}}={\overline{\mathbb Q}}$ and $\Gamma({\mathbb{Q}})\cap{{\rm Frac}\,}{{\bf G}}= {\mathbb{Q}}$. It is also an open question to prove that the numbers $\Gamma^{(n)}(s)$, for $n\geq 1$ and $s\in {\mathbb{Q}}\setminus \mathbb Z_{\le 0}$, do not have $E$-approximations. To obtain approximations to these numbers, one can consider the following generalization of Definition \[def:Eapprox\]: we replace $A(z)\cdot E(B(z))$ (and also $C(z)\cdot F(D(z))$) with a finite sum $$\label{eq:eapproxgen}
\sum_{i,j,k,\ell} \alpha_{i,j,k,\ell} \log(1-A_{i}(z))^j \cdot B_{k}(z) \cdot E_\ell\big(C(z)\big)$$ where $\alpha_{i,j,k,\ell}\in {\overline{\mathbb Q}}$, $A_i(z), B_k(z), C (z)$ are algebraic functions in ${\overline{\mathbb Q}}[[z]]$, $A_i(0)=C (0)=0$, and $E_\ell(z)$ are $E$-functions. For instance, let us consider the $E$-function $
E(z)=\sum_{n=1}^{\infty} \frac{z^n}{n!n}
$ and define $P_n$ by the series expansion (for $\vert z\vert <1$) $$\label{eq44}
\frac{\log(1-z)}{1-z}-\frac{1}{1-z} E\Big(-\frac{z}{1-z}\Big) = \sum_{n=0}^{\infty} P_n z^n\in \mathbb Q[[z]].$$ Then we prove in §\[ssec:extended\] that $\lim_n P_n=\gamma$, so that letting $Q_n=1$ we obtain $E$-approximations of Euler’s constant in this extended sense. Since $\frac{\log(1-z)}{1-z}-\frac1{1-z} E\left(-\frac{z}{1-z}\right)$ is holonomic, the sequence $(P_n)$ satisfies a linear recurrence, of order $3$ with polynomial coefficients in $\mathbb Z[n]$ of degree $2$; see §\[ssec:extended\]. Again, this construction is much simpler than those in [@Aptekarev; @HP2; @rivoal1] but the convergence to $\gamma$ is slower. A construction similar to , based on an immediate generalization of the final equation for $\Gamma^{(n)}(1)$ in [@Michigan], shows that the numbers $\Gamma^{(n)}(s)$ have $E$-approximations in the extended sense of for any integer $n\ge 0$ and any rational number $s\in \mathbb Q\setminus \mathbb Z_{\le 0}$.
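The limit $\lim_n P_n=\gamma$ can be observed directly: extracting Taylor coefficients on both sides of the last display gives $P_n=-H_n+\sum_{k=1}^n(-1)^{k-1}\binom{n}{k}/(k!\,k)$, where $H_n$ is the $n$-th harmonic number (again an elementary rewriting of ours, not a formula from the text). A short Python sketch in exact rational arithmetic, with illustrative thresholds:

```python
from fractions import Fraction
from math import comb, factorial

GAMMA = 0.5772156649015329  # Euler's constant, for comparison only

def P(n):
    """n-th Taylor coefficient of log(1-z)/(1-z) - (1/(1-z)) E(-z/(1-z)),
    with E(w) = sum_{k>=1} w^k/(k! k); here [z^n] log(1-z)/(1-z) = -H_n."""
    harmonic = sum(Fraction(1, m) for m in range(1, n + 1))
    s = sum(Fraction((-1) ** (k - 1) * comb(n, k), factorial(k) * k)
            for k in range(1, n + 1))
    return -harmonic + s

assert P(1) == 0 and P(2) == Fraction(1, 4)
err = lambda n: abs(float(P(n)) - GAMMA)
assert err(200) < err(6) and err(200) < 0.1   # slow convergence to gamma
```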
The set of numbers having such approximations is still countable, and we prove in §\[ssec:extended\] that it is contained in $$\label{eq:subset3}
\frac{( {\mathbf{E}}\cdot \log({{\overline{\mathbb Q}}{^*}})) \cup {\bf S} }{( {\mathbf{E}}\cdot \log({{\overline{\mathbb Q}}{^*}})) \cup {\bf S} } \cup \Big( \exp({\overline{\mathbb Q}}) \cdot {{\rm Frac}\,}{\bf S} \Big)$$ where $\log({{\overline{\mathbb Q}}{^*}}) = \exp^{-1}({{\overline{\mathbb Q}}{^*}})$.
The generalization does not cover all interesting constructions of approximations to derivatives of Gamma values in the literature. For instance, it does not seem that Aptekarev’s or the second author’s approximations to $\gamma$ (in [@Aptekarev] and [@rivoal1] respectively) can be described by . Nor does this seem to be the case for the Hessami-Pilehrood approximations to $\Gamma^{(n)}(1)$ in [@HP; @HP2], although in certain cases their generating functions involve sums of products of $E$-functions at various algebraic functions, rather than linear forms in $E$-functions at one algebraic function as in . Another possible generalization of is to let $\alpha_{i,j,k,\ell}\in {\mathbf{E}}$; we describe such an example in §\[ssec:extended\], related to the continued fraction $[0;1,2,3,4,\ldots]$ whose partial quotients are the consecutive positive integers.
The structure of this paper is as follows. In §\[sec:structureS\], we discuss the properties of ${\bf S}$. In §\[sec2\] we prove our results at finite distance, namely Theorems \[thA\] and \[theo:connexfini\]. Then we discuss in §\[subsecasyexp\] the definition and basic properties of asymptotic expansions. This allows us to prove Theorem \[thintrocce\] in §\[sec3\], and to determine in §\[sec:asympPn\] the asymptotic behavior of sequences $(P_n)$ as in Definition \[def:Eapprox\]. Finally, we gather in §\[sec:eapprox\] all results related to $E$-approximations.
Structure of ${\bf S}$ {#sec:structureS}
======================
In this short section, we discuss the structural properties of the ${{\bf G}}$-module ${\bf S}$ generated by the numbers $\Gamma^{(n)}(s)$, for $n\ge0$, $s\in \mathbb Q\setminus \mathbb Z_{\le 0}$. These properties are not used in the proofs of our theorems.
The Digamma function $\Psi$ is defined as the logarithmic derivative of the Gamma function. We have $$\Psi(x)=-\gamma+\sum_{k=0}^{\infty}\Big(\frac{1}{k+1}-\frac{1}{k+x}\Big) \quad \textup{and} \quad \Psi^{(n)}(x)
=\sum_{k=0}^{\infty} \frac{(-1)^{n+1} n!}{(k+x)^{n+1}}\quad (n\ge 1).$$ From the relation $\Gamma'(x)=\Psi(x)\Gamma(x)$, we can prove by induction on the integer $n\ge 0$ that $$\Gamma^{(n)}(x)= \Gamma(x)\cdot P_n\big(\Psi(x),\Psi^{(1)}(x),\ldots, \Psi^{(n-1)}(x)\big)$$ where $P_n(X_1, X_2, \ldots, X_n)$ is a polynomial with integer coefficients. Moreover, the term of maximal degree in $X_1$ is $X_1^n$.
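For instance $P_1=X_1$, $P_2=X_1^2+X_2$ and $P_3=X_1^3+3X_1X_2+X_3$, i.e. $\Gamma'=\Gamma\Psi$, $\Gamma''=\Gamma(\Psi^2+\Psi')$ and $\Gamma'''=\Gamma(\Psi^3+3\Psi\Psi'+\Psi'')$. A quick numerical sanity check of these three identities in Python (with the mpmath library; illustrative only, not part of the text):

```python
from mpmath import mp, gamma, psi, diff

mp.dps = 30
x = mp.mpf(1) / 3
g, p0, p1, p2 = gamma(x), psi(0, x), psi(1, x), psi(2, x)
tol = mp.mpf(10) ** (-15)
assert abs(diff(gamma, x, 1) - g * p0) < tol                      # P_1 = X1
assert abs(diff(gamma, x, 2) - g * (p0**2 + p1)) < tol            # P_2 = X1^2 + X2
assert abs(diff(gamma, x, 3) - g * (p0**3 + 3*p0*p1 + p2)) < tol  # P_3 = X1^3 + 3 X1 X2 + X3
```

Note that the leading monomial in $X_1$ is indeed $X_1^n$ in each case.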
It is well-known that $\Psi(s)\in -\gamma + {{\bf G}}$ (Gauss’ formula, [@Andrews p. 13, Theorem 1.2.7]) and that $\Psi^{(n)}(s)\in {{\bf G}}$ for any $n\ge 1$ and any $s\in{\mathbb{Q}}\setminus \mathbb{Z}_{\le 0}$. It follows that $$\label{eq:deriGamma}
\Gamma^{(n)}(s)=\Gamma(s)\cdot P_n\big(\Psi(s),\Psi^{(1)}(s),\ldots, \Psi^{(n-1)}(s)\big)=
\Gamma(s)\cdot Q_{n,s}(\gamma)$$ where $Q_{n,s}(X)$ is a polynomial with coefficients in ${{\bf G}}$, of degree $n$ and leading coefficient equal to $(-1)^n$.
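As an example, take $s=1/2$ and $n=1$ in the formula above: Gauss’ formula gives $\Psi(1/2)=-\gamma-2\log 2$, whence $\Gamma'(1/2)=\Gamma(1/2)\Psi(1/2)=\sqrt{\pi}\,Q_{1,1/2}(\gamma)$ with $Q_{1,1/2}(X)=-X-2\log 2$, indeed of degree $1$, with leading coefficient $(-1)^1$ and constant coefficient $-2\log 2\in{{\bf G}}$. A numerical check (Python/mpmath, illustrative only):

```python
from mpmath import mp, gamma, psi, euler, log, sqrt, pi, diff

mp.dps = 30
tol = mp.mpf(10) ** (-15)
# Gauss: psi(1/2) = -gamma - 2 log 2, an element of -gamma + G
assert abs(psi(0, mp.mpf(1) / 2) + euler + 2 * log(2)) < tol
# hence Gamma'(1/2) = Gamma(1/2) psi(1/2) = sqrt(pi) * Q(gamma)
# with Q(X) = -X - 2 log 2: degree 1, leading coefficient (-1)^1
assert abs(diff(gamma, mp.mpf(1) / 2) + sqrt(pi) * (euler + 2 * log(2))) < tol
```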
We are now ready to prove that ${\bf S}$ coincides with the ${{\bf G}}[\gamma]$-module $\widehat{{\bf S}}$ generated by the numbers $\Gamma(s)$, for $s\in \mathbb Q\setminus \mathbb Z_{\le 0}$. Indeed, Eq. shows immediately that ${\bf S}\subset \widehat{{\bf S}}$. For the converse inclusion $\widehat{{\bf S}} \subset {\bf S}$, it is enough to show that $\Gamma(s)\gamma^n\in {\bf S}$ for any $n\ge0$, $s\in \mathbb Q\setminus \mathbb Z_{\le 0}$. This can be proved by induction on $n$ from because we can rewrite it as $$\Gamma(s)\gamma^n = (-1)^n \Gamma^{(n)}(s)+\Gamma(s)\cdot \widehat{Q}_{n,s}(\gamma)$$ for some polynomial $\widehat{Q}_{n,s}(X)$ with coefficients in ${{\bf G}}$ and degree $\le n-1$.
The module $\widehat{{\bf S}}$ is easily proved to be a ring. Indeed, with Euler’s Beta function $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$, for any $x,y\in{\mathbb{Q}}\setminus \mathbb Z_{\le 0}$ we have $\Gamma(x)\Gamma(y)=\Gamma(x+y)B(x,y)\in \widehat{{\bf S}}$ because $B(x,y)\in {{\bf G}}$ (see [@gvalues]). This can also be proved directly from the definition of ${\bf S}$: for any $x,y\in{\mathbb{Q}}\setminus \mathbb Z_{\le 0}$, we have $$\begin{aligned}
\Gamma^{(m)}(x) \Gamma^{(n)}(y)
&=& \frac{\partial^{m+n}}{\partial x^m \partial y^n} \Gamma(x+y) B(x,y) \\
&=& \sum_{i=0}^m \sum_{j=0}^n \binom{m}{i} \binom{n}{j} \Gamma^{(i+j)}(x+y) \frac{\partial^{m+n-i-j}}{\partial x^{m-i}\partial y^{n-j}} B(x,y) \in {\bf S}\end{aligned}$$ because $ \frac{\partial^{m+n-i-j}}{\partial x^{m-i}\partial y^{n-j}} B(x,y) \in {{\bf G}}$, arguing as in [@gvalues] for the special case $m-i = n-j = 0$.
First results on values of $E$-functions {#sec2}
========================================
Around Siegel-Shidlovskii and Beukers’ theorems {#subsecrappelsE}
-----------------------------------------------
To begin with, let us mention the following result. It is proved in [@gvalues] (and due to the referee of that paper) in the case $\mathbb K=\mathbb Q(i)$; actually the same proof, which relies on Beukers’ version [@beukers3] of the Siegel-Shidlovskii theorem, works for any $\mathbb K$.
\[propefcts\] Let $E(z)$ be an $E$-function with coefficients in some number field $\mathbb K$, and $\al,\beta \in {\overline{\mathbb Q}}$ be such that $E(\al) = \beta $ or $E(\al) = e^\beta $. Then $\beta\in\mathbb K(\al)$.
This result implies Theorem \[thA\] stated in the introduction; without further hypotheses $E(\alpha)$ may indeed belong to ${\mathbb{K}}(\alpha)$, since if $E(z)$ is an $E$-function then so is $(z-\alpha)E(z)$.
Theorem \[propefcts\] shows that if we restrict the coefficients of $E$-functions to a given number field then the set of values we obtain is a proper subset of ${\mathbf{E}}$. In this respect the situation is completely different from the one with $G$-functions, since any element of ${{\bf G}}$ can be written [@gvalues] as $f(1)$ for some $G$-function $f$ with Taylor coefficients in ${\mathbb{Q}}(i)$. This is also the reason why we did not restrict to rational numbers $P_n$, $Q_n$ in Definition \[def:Eapprox\].
Connexion constants at finite distance {#ssec:11}
--------------------------------------
Let us prove Theorem \[theo:connexfini\] stated in the introduction; the strategy is analogous to the corresponding one with $G$-functions [@gvalues], and even easier because $E$-functions are entire.
We write $$L= \frac{d^\mu}{d z^\mu} + a_{\mu-1}(z) \frac{d^{\mu-1}}{d z^{\mu-1}}
+ \cdots +a_1(z) \frac{d}{dz} + a_0(z),$$ where $a_j\in {\overline{\mathbb Q}}(z)$. Then $z=0$ is the only singularity at finite distance of $L$, and it is a regular singularity with rational exponents (see [@YA1]). Hence, any Wronskian $W(z)$ of $L$, i.e. any solution of the differential equation $y'(z)+a_{\mu-1}(z)y(z)=0$, is of the form $
W(z)= c z^\rho
$ with $c\in \mathbb C$ and $\rho \in \mathbb Q$. We denote by $W_G(z)$ the Wronskian of $L$ built on the functions $G_1(z), \ldots, G_\mu(z)$: $$W_G(z)= \left\vert
\begin{matrix}
G_{1}(z) &\cdots &G_{\mu}(z)
\\
G_{1}^{(1)}(z) &\cdots
&G_{\mu}^{(1)}(z)
\\
\vdots &\cdots &\vdots
\\
G_{1}^{(\mu-1)}(z) &\cdots &G_{\mu}^{(\mu-1)}(z)
\end{matrix}
\right\vert.$$ All functions $G_j^{(k)}(z)$ are holomorphic at $z=\alpha$ with Taylor coefficients in ${\overline{\mathbb Q}}$. Hence $W_G(\alpha) \in {\overline{\mathbb Q}}$, and it is nonzero: indeed $W_G(z)=c z^\rho$ with $c\neq 0$, because the $G_j$ form a basis of solutions of $L$, and $\alpha\neq 0$.
We now differentiate to obtain the relations $$F^{(k)}(z)=\sum_{j=1}^{\mu} \omega_j G_j^{(k)}(z), \quad k=0,\ldots, \mu-1$$ for any $z$ in some open disk $\mathcal D$ centered at $z=\alpha$. We interpret these equations as a linear system with unknowns $\omega_j$, and solve it using Cramer’s rule. We obtain this way that $$\label{eq:connectiondeter}
\omega_j=\frac1{W_G(z)}
\left\vert
\begin{matrix}
G_{1}(z) &\cdots &G_{j-1}(z)&F(z)
&G_{j+1}(z)&\cdots &G_{\mu}(z)
\\
G_{1}^{(1)}(z) &\cdots &G_{j-1}^{(1)}(z)
&F^{(1)}(z) &G_{j+1}^{(1)}(z)&\cdots
&G_{\mu}^{(1)}(z)
\\
\vdots &\cdots & \vdots & \vdots & \vdots&\cdots&\vdots
\\
G_{1}^{(\mu-1)}(z) &\cdots &G_{j-1}^{(\mu-1)}(z)
&F^{(\mu-1)}(z) &G_{j+1}^{(\mu-1)}(z)
&\cdots &G_{\mu}^{(\mu-1)}(z)
\end{matrix}
\right\vert$$ for any $z \in \mathcal D\setminus\{0\}$, since $W_G(z)\neq 0$.
We now choose $z=\alpha$. As already said, $1/W_G(\alpha), G_j^{(k)}(\alpha)\in {\overline{\mathbb Q}}\subset {\mathbf{E}}$. If $F(z)$ is an $E$-function, then so are its derivatives, so that $F^{(k)}(\alpha) \in {\mathbf{E}}$ for all $k\ge 0$, and implies that $\omega_j \in {\mathbf{E}}$. To prove the general case, we simply observe that if $F(z)$ is given by with algebraic coefficients ${\phi_{j,s,k}}$ then all derivatives of $F(z)$ at $z=\alpha$ belong to $ {\mathbf{E}}[\log(\alpha)]$.
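To illustrate the argument on a toy case (our own example, not an excerpt from the text): for $L=\frac{d^2}{dz^2}-1$ and $F(z)=e^z$, a basis of local solutions at $z=\alpha$ is $G_1(z)=\cosh(z-\alpha)$, $G_2(z)=\sinh(z-\alpha)$, with $W_G\equiv\cosh^2-\sinh^2=1$; Cramer’s rule evaluated at $z=\alpha$ gives $\omega_1=F(\alpha)$ and $\omega_2=F'(\alpha)$, both equal to $e^\alpha\in{\mathbf{E}}$. A numerical check in Python (mpmath):

```python
from mpmath import mp, mpf, exp, cosh, sinh

mp.dps = 25
# Toy case: L = d^2/dz^2 - 1, F(z) = e^z, local basis at z = alpha:
# G1 = cosh(z - alpha), G2 = sinh(z - alpha), so W_G = cosh^2 - sinh^2 = 1.
# Cramer's rule at z = alpha (where G1 = 1, G1' = 0, G2 = 0, G2' = 1):
alpha = mpf(2) / 3
w1 = exp(alpha) * cosh(0) - sinh(0) * exp(alpha)   # = F(alpha)
w2 = cosh(0) * exp(alpha) - exp(alpha) * sinh(0)   # = F'(alpha)
assert w1 == exp(alpha) and w2 == exp(alpha)       # both lie in E
# consistency: F = w1*G1 + w2*G2 away from alpha, e.g. at z = 1
z = mpf(1)
assert abs(w1 * cosh(z - alpha) + w2 * sinh(z - alpha) - exp(z)) < mpf(10) ** (-20)
```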
Stokes constants of $E$-functions {#sec3}
=================================
In this section we construct explicitly the asymptotic expansion of an $E$-function: our main result is Theorem \[theoprecis\], stated in §\[subsecnotationsinf\] and proved in §\[subsecdemtheoprecis\]. Before that we discuss in §\[subsecasyexp\] the asymptotic expansions used in this paper. Finally we show in §\[subsec34\] that Theorem \[theoprecis\] implies Theorem \[thintrocce\].
Throughout this section, we let $\Gh:=1/\Gamma$ for simplicity.
Asymptotic expansions {#subsecasyexp}
---------------------
The asymptotic expansions used throughout this paper are defined as follows.
\[defiasy\] Let $\theta\in{\mathbb{R}}$, and $\Sigma\subset{\mathbb{C}}$, $S\subset{\mathbb{Q}}$, $T\subset{\mathbb{N}}$ be finite subsets. Given complex numbers $ c_{\rho, \al,i,n}$, we write $$\label{eqasy1}
f(x) {\approx}\sum_{\rho\in\Sigma} e^{\rho x}
\sum_{\al\in S } \sum_{i\in T } \sum_{n=0}^{\infty} c_{\rho, \al,i,n}x^{-n-\al}(\log(1/x))^i$$ and say that the right hand side is the asymptotic expansion of $f(x)$ in a large sector bisected by the direction $\theta$, if there exist ${\varepsilon}, R, B, C > 0$ and, for any $\rho\in\Sigma$, a function $f_\rho(x)$ holomorphic on $$U = \Big\{x\in{\mathbb{C}}, \,\,|x|\geq R, \, \, \theta-\frac{\pi}2-{\varepsilon}\leq \arg(x) \leq \theta+\frac{\pi}2+{\varepsilon}\Big\},$$ such that $$f(x) = \sum_{\rho\in\Sigma} e^{\rho x} f_\rho(x)$$ and $$\Big| f_\rho(x) - \sum_{\al\in S } \sum_{i\in T } \sum_{n=0}^{N-1} c_{\rho, \al,i,n}x^{-n-\al}(\log(1/x))^i\Big| \leq C^N N! |x|^{B-N}$$ for any $x\in U$ and any $N\geq 1$.
This means exactly (see [@Ramis §§2.1 and 2.3]) that for any $\rho\in\Sigma$, $$\label{eqasy2}
\sum_{\al\in S } \sum_{i\in T } \sum_{n=0}^{N-1} c_{\rho, \al,i,n}x^{-n-\al}(\log(1/x))^i$$ is 1-summable in the direction $\theta$ and its sum is $f_\rho(x)$. In particular, using a result of Watson (see [@Ramis §2.3]), the sum $f_\rho(x)$ is determined by its expansion . Therefore the asymptotic expansion on the right hand side of determines the function $f(x)$ (up to analytic continuation). The converse is also true, as the following lemma shows.
\[lemasyunique\] A given function $f(x)$ can have at most one asymptotic expansion in the sense of Definition \[defiasy\].
Of course we assume implicitly in Lemma \[lemasyunique\] (and very often in this paper) that $\Sigma$, $S$ and $T$ in cannot trivially be made smaller, and that for any $\alpha$ there exist $\rho$ and $i$ with $c_{\rho, \al,i,0}\neq 0$.
We proceed by induction on the cardinality of $\Sigma$. If the result holds for proper subsets of $\Sigma$, we choose $\theta'$ very close to $\theta$ such that the complex numbers $\rho e^{i\theta'}$, $\rho\in\Sigma$, have pairwise distinct real parts and we denote by $\rho_0$ the element of $\Sigma$ for which ${\textup{Re}\,}( \rho_0 e^{i\theta'})$ is maximal. Then the asymptotic expansion of $f_{\rho_0}(x)$ is also an asymptotic expansion of $e^{-\rho_0 x} f(x)$ as $|x|\to\infty$ with $\arg(x) = \theta'$, in the usual sense (see for instance [@DiP p. 182]); accordingly it is uniquely determined by $f$, so that its 1-sum $f_{\rho_0}(x)$ is also uniquely determined by $f$. Applying the induction hypothesis to $f(x) - e^{ \rho_0 x} f_{\rho_0}(x)$ with $\Sigma\setminus\{\rho_0\}$ concludes the proof of Lemma \[lemasyunique\].
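A standard example of such a Gevrey-type bound (not taken from the text) is the Stieltjes expansion $x e^x E_1(x)\approx\sum_{n\geq 0}(-1)^n n!\,x^{-n}$, with $\Sigma=\{0\}$, $S=\{0\}$, $T=\{0\}$: since $x e^x E_1(x)=\int_0^\infty e^{-u}(1+u/x)^{-1}\,du$, truncating after $N$ terms leaves an error smaller than $N!\,x^{-N}$, exactly the kind of estimate required in Definition \[defiasy\]. A numerical check in Python (mpmath):

```python
from mpmath import mp, mpf, expint, exp, factorial

mp.dps = 30
# Stieltjes: x e^x E_1(x) ~ sum_n (-1)^n n! x^(-n), a divergent Gevrey-1 series;
# truncating at N terms leaves an error bounded by N! x^(-N), as in the definition.
x = mpf(20)
f = x * exp(x) * expint(1, x)
for N in (5, 10, 15):
    partial = sum((-1)**n * factorial(n) / x**n for n in range(N))
    assert abs(f - partial) < factorial(N) / x**N
```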
Notation and statement of Theorem \[theoprecis\] {#subsecnotationsinf}
------------------------------------------------
We consider a non-polynomial $E$-function $E(x)$ such that $E(0)= 0$, and write $$E(x)=\sum_{n=1}^\infty \frac{a_n}{n!}x^n.$$ Its associated $G$-function is $$G(z)=\sum_{n=1}^\infty a_n z^n.$$ We denote by ${{\cal {D}}}$ a $G$-operator such that ${\overline{\cal F}}{{\cal {D}}}E = 0$, where ${\overline{\cal F}}: {\mathbb{C}}[z,\frac{\dd}{\dd z}] \to {\mathbb{C}}[x,\frac{\dd}{\dd x}]$ is the Fourier transform of differential operators, i.e. the morphism of ${\mathbb{C}}$-algebras defined by ${\overline{\cal F}}(z) = \frac{\dd}{\dd x}$ and ${\overline{\cal F}}(\frac{\dd}{\dd z}) = -x$. Recall that such a ${{\cal {D}}}$ exists because $E$ is annihilated by an $E$-operator, and any $E$-operator can be written as ${\overline{\cal F}}{{\cal {D}}}$ for some $G$-operator ${{\cal {D}}}$.
We let $g(z)=\frac{1}{z}G(\frac1z)$, so that $(\frac{\dd}{\dd z })^\delta {{\cal {D}}}g = 0$ where $\delta$ is the degree of ${{\cal {D}}}$ (i.e. the order of ${\overline{\cal F}}{{\cal {D}}}$; see [@YA1], p. 716). This function is the Laplace transform of $E(x)$: for ${\textup{Re}\,}(z)>C$, where $C>0$ is such that $\vert a_n\vert \ll C^{n}$, we have $$g(z)= \int_0^\infty E(x) e^{-xz} \dd x.$$ From the definition of $g(z)$ and the assumption $E(0)=0$ we deduce that $g(z)=\mathcal{O}(1/\vert z\vert ^2)$ as $z\to \infty$.
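A minimal example of this correspondence (illustrative, not from the text): for $E(x)=e^x-1$ one has $a_n=1$ for $n\geq 1$, so $G(z)=z/(1-z)$ and $g(z)=\frac1z G(\frac1z)=\frac{1}{z(z-1)}$, which is indeed $\mathcal O(1/\vert z\vert^2)$ at infinity and equals the Laplace transform of $E$. In Python (mpmath):

```python
from mpmath import mp, mpf, quad, exp, inf

mp.dps = 25
# E(x) = e^x - 1: a_n = 1 for n >= 1, so G(z) = z/(1-z) and
# g(z) = (1/z) G(1/z) = 1/(z(z-1)); check g(z) = int_0^inf E(x) e^(-xz) dx.
z = mpf(3)
lhs = quad(lambda x: (exp(x) - 1) * exp(-x * z), [0, inf])
assert abs(lhs - 1 / (z * (z - 1))) < mpf(10) ** (-15)
```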
We denote by ${\Sigma}$ the set of all finite singularities $\rho $ of $ {{\cal {D}}}$; observe that $(\frac{\dd}{\dd z })^\delta {{\cal {D}}}$ has the same singularities as $ {{\cal {D}}}$. We also let $${{\cal S}}= {\mathbb{R}}\setminus\{\arg(\rho-\rho'), \rho,\rho'\in{\Sigma}, \rho\neq\rho'\}$$ where all the values modulo $2\pi$ of the argument of $\rho-\rho'$ are considered, so that ${{\cal S}}+\pi = {{\cal S}}$.
The directions $\theta\in{\mathbb{R}}\setminus(-{{\cal S}})$ (i.e., such that $(\rho-\rho')e^{i\theta}$ is real for some $\rho\neq\rho'$ in ${\Sigma}$) may be [*anti-Stokes*]{} (or [*singular*]{}, see for instance [@Loday p. 79]): when crossing such a direction, the renormalized sum of a formal solution at infinity of ${{\cal {D}}}$ may change. In this paper we restrict to directions $\theta\in-{{\cal S}}$.
For any $\rho\in{\Sigma}$ we denote by ${\Delta_\rho}= \rho - e^{-i\theta}{\mathbb{R}}_+$ the half-line of angle $-\theta+\pi \bmod 2\pi$ starting at $\rho$. Since $-\theta\in{{\cal S}}$, no singularity $\rho'\neq\rho$ of $ {{\cal {D}}}$ lies on ${\Delta_\rho}$: these half-lines are pairwise disjoint. We shall work in the simply connected cut plane obtained from ${\mathbb{C}}$ by removing the union of these closed half-lines. We agree that for $\rho \in {\Sigma}$ and $z$ in the cut plane, $\arg(z-\rho)$ will be chosen in the open interval $(-\theta-\pi,-\theta+\pi)$. This enables one to define $\log(z-\rho)$ and $(z-\rho)^\al$ for any $\al \in {\mathbb{Q}}$.
Now let us fix $\rho\in{\Sigma}$. Combining theorems of André, Chudnovski and Katz (see [@YA1 p. 719]), there exist (not necessarily distinct) rational numbers $t_1^\rho, \ldots, t_{J(\rho)}^\rho$, with $J(\rho)\geq 1$, and $G$-functions $g_{j,k}^\rho$, for $1\leq j \leq J(\rho) $ and $0\leq k \leq K(j,\rho)$, such that a basis of local solutions of $(\frac{\dd}{\dd z })^\delta {{\cal {D}}}$ around $\rho$ (in the above-mentioned cut plane) is given by the functions $$\label{eqdeffjk}
f_{j,k}^\rho(z-\rho) = (z-\rho)^{t_j^\rho} \sum_{k'=0}^k g_{j,k-k'}^\rho(z-\rho) \frac{(\log(z-\rho))^{k'}}{k'!}$$ for $1\leq j \leq J(\rho) $ and $0\leq k \leq K(j,\rho)$. Since $(\frac{\dd}{\dd z })^\delta {{\cal {D}}}g= 0$ we can expand $g$ in this basis: $$\label{eqdefccg}
g(z) = \sum_{j=1}^{J(\rho)}\sum_{k=0}^{K(\rho,j)}\varpi_{j,k}^\rho f_{j,k}^\rho(z-\rho)$$ with connexion constants $\varpi_{j,k}^\rho$; Theorem 2 of [@gvalues] yields $\varpi_{j,k}^\rho\in{{\bf G}}$.
We denote by $\{u\} \in[0,1)$ the fractional part of a real number $u$, and agree that all derivatives of this or related functions taken at integers will be right-derivatives. We also denote by ${\star}$ the Hadamard (coefficientwise) product of formal power series in $z$, and we let $$y_{\al,i}(z) = \sum_{n=0}^\infty \frac{1}{i!} \frac{\dd^{i}}{\dd y^{i}}\Big(\frac{\Gamma(1-\{y\})}
{\Gamma(-y-n)}\Big)_{| y=\al } z^n \in{\mathbb{Q}}[[z]]$$ for $\al\in{\mathbb{Q}}$ and $i\in{\mathbb{N}}$. To compute the coefficients of $y_{\al,i}(z) $, we may restrict to values of $y$ with the same integer part as $\al$, denoted by ${\lfloor \al \rfloor}$. Then $$\label{eqconcretun}
\frac{\Gamma(1-\{y\})}{\Gamma(-y-n)} = \frac{\Gamma( - y +{\lfloor \al \rfloor}+1)}{\Gamma(-y-n)} =
\left\{ \begin{array}{l}
(-y-n)_{n+{\lfloor \al \rfloor}+ 1} \mbox{ if } n\geq - {\lfloor \al \rfloor}\\
\\
\frac1{(-y+{\lfloor \al \rfloor}+1)_{-n-{\lfloor \al \rfloor}-1}}\mbox{ if } n \leq -1-{\lfloor \al \rfloor}\end{array}\right.$$ is a rational function of $y$ with rational coefficients, so that $y_{\al,i}(z) \in {\mathbb{Q}}[[z]]$. Even though this will not be used in the present paper, we mention that $y_{\al,i}(z)$ is an arithmetic Gevrey series of order $1$ (see [@YA1]); in particular it is divergent for any $z\neq 0$ (unless it is a polynomial, namely if $i=0$ and $\alpha\in{\mathbb{Z}}$).
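The first branch of the formula above can be checked numerically: for $\lfloor\alpha\rfloor=0$ and $y$ with integer part $0$, repeated use of the recurrence $\Gamma(x+1)=x\Gamma(x)$ gives $\Gamma(1-\{y\})/\Gamma(-y-n)=(-y-n)_{n+1}$, the quantity whose derivatives at $y=\alpha$ produce the coefficients of $y_{\alpha,i}(z)$. A Python sketch (mpmath; illustrative only):

```python
from mpmath import mp, gamma, rf, frac, mpf

mp.dps = 25
# For y with integer part 0 and n >= 0:
#   Gamma(1 - {y}) / Gamma(-y - n) = (-y - n)_{n+1}   (rising factorial),
# by repeated use of Gamma(x+1) = x Gamma(x).
for y in (mpf('0.3'), mpf('0.7')):
    for n in range(4):
        lhs = gamma(1 - frac(y)) / gamma(-y - n)
        assert abs(lhs - rf(-y - n, n + 1)) < mpf(10) ** (-18)
```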
Finally, we define $$\eta_{j,k}^\rho(1/x) = \sum_{m=0}^k (y_{t_j^\rho,m}{\star}g_{j,k-m}^\rho)(1/x) \in {\overline{\mathbb Q}}[[1/x]]$$ for any $1\leq j \leq J(\rho)$ and $0\leq k \leq K(j,\rho)$; this is also an arithmetic Gevrey series of order $1$. It is not difficult to see that $\eta_{j,k}^\rho(1/x) = 0$ if $f_{j,k}^\rho(z-\rho)$ is holomorphic at $\rho$. Indeed in this case $k=0$ and $t_j^\rho\in{\mathbb{Z}}$; if $t_j^\rho\geq 0$ then $y_{t_j^\rho,0}$ is identically zero, and if $t_j^\rho\leq -1$ then $y_{t_j^\rho,0}$ is a polynomial in $z$ of degree $-1-t_j^\rho$ whereas $g_{j,0}^\rho$ has valuation at least $-t_j^\rho$.
The main result of this section is the following asymptotic expansion, valid in the setting of Definition \[defiasy\] for $\theta\in-{{\cal S}}$. It is at the heart of Theorem \[thintrocce\]; recall that we assume here $E(0)=0$, and that we let $\Gh=1/\Gamma$.
\[theoprecis\] We have $$E(x) {\approx}\sum_{\rho\in\Sigma} e^{\rho x} \sum_{j=1}^{J(\rho)} \sum_{k=0}^{K(j,\rho)} \varpi_{j,k}^\rho x^{-t_j^\rho -1}
\sum_{i = 0}^{ k} \Big( \sum_{\ell = 0} ^{k -i} \frac{(-1)^{\ell}}{\ell!}
\Gh^{(\ell)}(1-\{t_j^\rho \}) \eta_{j, k-\ell-i}^\rho (1/x) \Big) \frac{ (\log(1/x))^{i} }{i!}.$$
We observe that the coefficients are naturally expressed in terms of $\Gh^{(\ell)}$. Let us write Theorem \[theoprecis\] in a slightly different way. For $t\in{\mathbb{Q}}$ and $s\in{\mathbb{N}}$, let $$\lambda_{t,s}(1/x) = \sum_{\nu=0}^s \frac{(-1)^{s-\nu}}{(s-\nu)!}
\Gh^{(s-\nu)}(1-\{t \}) \frac{ (\log(1/x))^\nu}{\nu!}.$$ In particular, $\lambda_{t,0}(1/x) = \Gh(1-\{t\}) $ and $\lambda_{t,1}(1/x)
= \Gh(1-\{t\}) \log(1/x) - \Gh^{(1)}(1-\{t\}) $; for $t\in{\mathbb{Z}}$ we have $\lambda_{t,1}(1/x) = \log(1/x) - \gamma$.
Then Theorem \[theoprecis\] reads (by letting $s = i+\ell$): $$\label{eqgammalog}
E(x) {\approx}\sum_{\rho\in{\Sigma}} e^{\rho x} \sum_{j=1}^{J(\rho)} \sum_{k=0}^{K(j,\rho)}
\varpi_{j,k}^\rho x^{-t_j^\rho-1}\sum_{s=0}^k \lambda_{t_j^\rho,s}(1/x) \eta_{j, k-s}^\rho(1/x) .$$
Here we see that the derivatives of $1/\Gamma$ do not appear in an arbitrary way, but always through these sums $\lambda_{t,s}(1/x) $. In particular $\gamma$ appears through $\lambda_{t,1}(1/x) = \log(1/x) - \gamma$, as mentioned in the introduction.
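The special values used here, $\Gh(1)=1$ and $\Gh^{(1)}(1)=\gamma$ (which give $\lambda_{t,1}(1/x)=\log(1/x)-\gamma$ for $t\in{\mathbb{Z}}$), are easy to confirm numerically; the following short Python sketch (purely illustrative, with an ad hoc helper `rgamma`) approximates the derivative of $1/\Gamma$ at $1$ by a central difference.

```python
import math

def rgamma(z):
    """Reciprocal Gamma function 1/Gamma(z)."""
    return 1.0 / math.gamma(z)

# For integer t, lambda_{t,1}(1/x) = rgamma(1) * log(1/x) - rgamma'(1),
# and rgamma'(1) = -psi(1) = gamma (Euler's constant).
h = 1e-5
d1 = (rgamma(1 + h) - rgamma(1 - h)) / (2 * h)
euler_gamma = 0.5772156649015329
assert abs(rgamma(1) - 1.0) < 1e-12
assert abs(d1 - euler_gamma) < 1e-8
```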
In the asymptotic expansion of Theorem \[theoprecis\], and in , the singularities $\rho\in{\Sigma}$ at which $g(z)$ is holomorphic have a zero contribution because for any $(j,k)$, either $\varpi_{j,k}^\rho=0$ or $f_{j,k}^\rho(z-\rho)$ is holomorphic at $\rho$ (and in the latter case, $k=0$ and $\eta_{j,0}^\rho(1/x) = 0$, as mentioned before the statement of Theorem \[theoprecis\]). Moreover, as the proof shows (see §\[subsecdemtheoprecis\]), it is not really necessary to assume that the functions $f_{j,k}^\rho(z-\rho)$ form a basis of local solutions of $(\frac{\dd}{\dd z })^\delta {{\cal {D}}}$ around $\rho$. Instead, it is enough to consider rational numbers $t_j^\rho$ and $G$-functions $g_{j,k}^\rho$ such that all singularities of $g_{j,k}^\rho(z-\rho)$ belong to ${\Sigma}$ and, upon defining $f_{j,k}^\rho$ by Eq. , Eq. holds with some complex numbers $\varpi_{j,k}^\rho$. In this way, to compute the asymptotic expansion of $E(x) $ it is not necessary to determine ${{\cal {D}}}$ explicitly. The finite set ${\Sigma}$ is used simply to control the singularities of the functions which appear, and prevent $\theta$ from being a possibly singular direction. This remark makes it easier to apply Theorem \[theoprecis\] to specific $E$-functions, for instance to obtain the expansions and used in §\[sec:eapprox\].
Proof of Theorem \[theoprecis\] {#subsecdemtheoprecis}
-------------------------------
We fix an oriented line ${d}$ such that the angle between ${\mathbb{R}}_+$ and ${d}$ is equal to $-\theta+\frac{\pi}2 \bmod 2\pi$, and all singularities of $ {{\cal {D}}}$ lie on the left of ${d}$. Let $R > 0$ be sufficiently large (in terms of ${d}$ and ${\Sigma}$). Then the circle ${{\cal C}(0,R)}$ centered at 0 of radius $R$ intersects ${d}$ at two distinct points $a$ and $b$, with $\arg(b-a) = -\theta+\frac{\pi}2 \bmod 2\pi$, and $$\label{eq:EG}
E(x)=\lim_{R\to\infty} \frac1{2i\pi}\int_a ^b g(z) e^{zx} \dd z$$ where the integral is taken along the line segment $ab$ contained in ${d}$.
For any $\rho \in {\Sigma}$ the circle ${{\cal C}(0,R)}$ intersects ${\Delta_\rho}$ at one point ${z_\rho}= \rho - {A_\rho}e^{ -i\theta}$, with ${A_\rho}>0$, which corresponds to two points at the border of the cut plane, namely $\rho + {A_\rho}e^{i(-\theta\pm\pi)}$ with values $-\theta\pm\pi$ of the argument. We consider the following path ${\Gamma_{\rho,R}}$: a straight line from $\rho + {A_\rho}e^{i(-\theta-\pi)}$ to $\rho$ (on one bank of the cut plane), then a circle around $\rho$ with essentially zero radius and $\arg(z-\rho)$ going up from $-\theta-\pi$ to $-\theta+\pi$, and finally a straight line from $\rho$ to $\rho + {A_\rho}e^{i(-\theta+\pi)}$ on the other bank of the cut plane. We denote by ${\Gamma_R}$ the closed loop obtained by concatenation of the line segment $ba$, the arc $a{z_{\rho_1}}$ of the circle ${{\cal C}(0,R)}$, the path ${\Gamma_{\rho_1,R}}$, the arc ${z_{\rho_1}}{z_{\rho_2}}$, the path ${\Gamma_{\rho_2,R}}$, …, and the arc ${z_{\rho_p}}b $ (where $\rho_1,\ldots,\rho_p $ are the distinct elements of ${\Sigma}$, ordered so that ${z_{\rho_1}}$, ${z_{\rho_2}}$, …, ${z_{\rho_p}}$ are met successively when going along ${{\cal C}(0,R)}$ from $a$ to $b$ in the negative direction); see Figure \[fig1\]. We refer to [@DiP pp. 183–192] for a similar computation.
We observe that $$\frac{1}{2i\pi} \int_{{\Gamma_R}} g(z) e^{zx} \dd z = 0$$ for any $x\in{\mathbb{C}}$, because ${\Gamma_R}$ is a closed simple curve inside which the integrand has no singularity.
From now on, we assume that $ \theta-\frac{\pi}2 < \arg(x) < \theta+\frac{\pi}2$. As $R\to\infty$, the integral of $ g(z) e^{zx} $ over the line segment $ba$ tends to $-E(x)$, using Eq. . Moreover, as $z$ describes ${\Gamma_{\rho,R}}$ (except maybe in a bounded neighborhood of $\rho$) we have ${\textup{Re}\,}(zx)<0$ and $g(z)=\mathcal{O}(1/\vert z^2\vert)$, so that letting $R\to\infty$ one obtains (as in [@DiP]) $$\label{eq:Esectangul}
E(x)=\sum_{\rho\in{\Sigma}} \frac{1}{2i\pi}\int_{{\Gamma_{\rho}}} g(z) e^{zx} \dd z,$$ where $ {\Gamma_{\rho}}$ is the extension of ${\Gamma_{\rho,R}}$ as $R\to \infty$.
Plugging Eq. into Eq. yields $$\label{eq54bis}
E(x) = \sum_{\rho\in{\Sigma}} \sum_{j=1}^{J(\rho)} \sum_{k=0}^{K(j,\rho)} \varpi_{j,k}^\rho \frac{1}{2i\pi}\int_{{\Gamma_{\rho}}}
f_{j,k}^\rho(z-\rho)e^{zx} \dd z .$$ To study the integrals on the right hand side we shall prove the following general claim. [*Let $\rho\in{\Sigma}$, and ${\varphi}$ be a $G$-function such that ${\varphi}(z-\rho)$ is holomorphic on the cut plane. For any $\al\in{\mathbb{Q}}$ and any $k \in {\mathbb{N}}$, let $${{\varphi}_{\al,k}}( z-\rho) = {\varphi}(z-\rho) (z-\rho)^{\al} \frac{ (\log(z-\rho))^{k }}{k!} .$$ Then $$\frac{1}{2i\pi}\int_{{\Gamma_{\rho}}} {{\varphi}_{\al,k}}( z-\rho) e^{zx} \dd z$$ admits the following asymptotic expansion in a large sector bisected by $\theta$ (with $\Gh:=1/\Gamma$): $$e^{\rho x} x^{-\al -1} \sum_{\ell= 0} ^k \frac{(-1)^{\ell}}{\ell!} \Gh^{(\ell)}(1-\{\al \})
\sum_{i = 0}^{k-\ell}\Big( y_{\al,k-\ell-i}{\star}{\varphi}\Big)(1/x)
\frac{ (\log(1/x))^{i} }{i!}.$$*]{}
(Here the derivative $\Gh^{(\ell)}$ is taken with respect to the variable of $\Gh$, the factor $(-1)^{\ell}$ coming from differentiating $\Gh(1-\{\al\})$ with respect to $\al$.)
To prove this claim, we first observe that $$\int_{{\Gamma_{\rho}}} {{\varphi}_{\al,k}}( z-\rho) e^{zx} \dd z = \frac1{k!} \frac{\partial^{k}}{\partial \al^{k}}
\Big[ \int_{{\Gamma_{\rho}}} {{\varphi}_{\al,0}}( z-\rho) e^{zx} \dd z\Big]$$ where the $k$-th derivative is taken at $\al$; this relation enables us to deduce the general case from the special case $k=0$ considered in [@DiP]. We also write $${\varphi}(z-\rho) = \sum_{n=0}^\infty c_{n} (z-\rho)^n.$$ Following [@DiP pp. 185–191], given ${\varepsilon}>0$ we obtain $R,C,\kappa>0$ such that, for any $n\geq 1$ and any $x$ with $|x| \geq R$ and $ \theta-\frac{\pi}2+{\varepsilon}< \arg(x) < \theta+\frac{\pi}2-{\varepsilon}$, we have $$\Big| \frac{x^{-\al -n-1}}{\Gamma(-\al -n)} - \frac1{2i\pi}e^{-\rho x}
\int_{{\Gamma_{\rho}}} (z-\rho )^{\al +n}e^{zx} \dd z \Big| \leq C^n n! |x|^{-\al-n-1}e^{-\kappa |x| \sin({\varepsilon})}.$$ Then following the proof of [@DiP pp. 191–192] and using the fact that $\limsup |c_n|^{1/n}<\infty$ because ${\varphi}$ is a $G$-function, for any ${\varepsilon}>0$ we obtain $R,B, C >0$ such that, for any $N\geq 1$ and any $x$ with $|x| \geq R$ and $ \theta-\frac{\pi}2+{\varepsilon}< \arg(x) < \theta+\frac{\pi}2-{\varepsilon}$, we have $$\label{eqdevasyun}
\Big| e^{-\rho x} \frac{1}{2i\pi}\int_{{\Gamma_{\rho}}} {{\varphi}_{\al,k}}(z-\rho) e^{zx} \dd z -
\sum_{n=0}^{N-1} \frac{ c_n }{k!} \frac{\partial^{k }}{\partial \al^{k }}
\Big[ \frac{x^{-\al-n-1}}{\Gamma(-\al-n)}\Big] \Big| \leq C^N N! |x|^{B-N}.$$ Now observe that ${{\cal S}}$ is a union of open intervals, so that $\theta$ can be made slightly larger or slightly smaller while remaining in the same open interval. In this process, the cut plane changes but the left-hand side of remains the same (by the residue theorem, since ${\varphi}(z-\rho)$ is holomorphic on the cut plane). The asymptotic expansion remains valid as $|x| \to \infty$ in the new sector $ \theta-\frac{\pi}2+{\varepsilon}< \arg(x) < \theta+\frac{\pi}2-{\varepsilon}$, so that finally it is valid in a large sector $ \theta-\frac{\pi}2-{\varepsilon}\leq \arg(x) \leq \theta+\frac{\pi}2+{\varepsilon}$ for some ${\varepsilon}>0$.
Now Leibniz’ formula yields the following equality between functions of $\al$: $$\begin{aligned}
\Big( \frac{x^{-\al-n-1}}{\Gamma(-\al-n)}\Big)^{(k )} &= \sum_{\ell = 0} ^k \sum_{i = 0}^{k-\ell}
\frac{k !}{ \ell! i! (k-\ell-i)!} \big(\Gh(1-\{\al\})\big)^{(\ell)}
\Big( \frac{\Gamma(1-\{\al\}) }{\Gamma(-\al-n)}\Big)^{(k-\ell-i)}
\\
& \hspace{3cm} \times (\log(1/x))^{i} x^{-\al-n-1}
\\
&= \sum_{\ell = 0} ^k \frac{k!}{\ell!} \big(\Gh(1-\{\al\})\big)^{(\ell)}
\sum_{i = 0}^{k-\ell}\Big( y_{\al,k-\ell-i}{\star}z^{n}\Big)(1/x)
x^{-\al -1} \frac{ (\log(1/x))^{i} }{i!}\end{aligned}$$ so that $$\sum_{n=0}^{\infty} \frac{ c_n }{k!} \Big( \frac{x^{-\al-n-1}}{\Gamma(-\al-n)}\Big)^{(k)} =
\sum_{\ell = 0} ^k \frac1{\ell!} \big(\Gh(1-\{\al\})\big)^{(\ell)}
\sum_{i = 0}^{k-\ell}\Big( y_{\al,k-\ell-i}{\star}{\varphi}\Big)(1/x)
x^{-\al -1} \frac{ (\log(1/x))^{i} }{i!}.$$ Using this concludes the proof of the claim.
Now we apply the claim to the $G$-functions $g_{j,k}^\rho$, since all singularities of $g_{j,k}^\rho(z-\rho)$ are singularities of $(\frac{\dd}{\dd z })^\delta {{\cal {D}}}$ and therefore belong to ${\Sigma}$. Combining this result with Eqns. and yields: $$\begin{aligned}
E(x) &=
\sum_{\rho,j,k,k'}\varpi_{j,k}^\rho \frac{1}{2i\pi}\int_{{\Gamma_{\rho}}}
g_{j,k-k'}^\rho (z-\rho)(z-\rho)^{t_j^\rho} \frac{(\log(z-\rho))^{k'}}{k'!} e^{zx} \dd z \\
&{\approx}\sum_{\rho,j,k,k'}\varpi_{j,k}^\rho e^{\rho x} x^{-t_j^\rho-1} \sum_{\ell = 0} ^{k'}
\frac{(-1)^\ell}{\ell!} \Gh^{(\ell)}(1- \{t_j^\rho \})
\sum_{i = 0}^{k'-\ell}\Big( y_{t_j^\rho, k'-\ell-i}{\star}g_{j,k-k'}^\rho \Big)(1/x)
\frac{ (\log(1/x))^{i} }{i!}
\\
&=\sum_{\rho,j,k}\varpi_{j,k}^\rho e^{\rho x} x^{-t_j^\rho-1} \sum_{\ell = 0} ^{k }
\frac{(-1)^\ell}{\ell!} \Gh^{(\ell)}(1- \{t_j^\rho \})
\sum_{i = 0}^{ k-\ell} \eta_{j, k-\ell-i}^\rho(1/x) \frac{ (\log(1/x))^{i} }{i!}.\end{aligned}$$
This concludes the proof of Theorem \[theoprecis\].
Proof of Theorem \[thintrocce\] {#subsec34}
-------------------------------
To begin with, let us prove assertions $(ii)$ and $(iii)$. Adding the constant term $E(0) \in{\overline{\mathbb Q}}\subset{{\bf G}}$ to if necessary, we may assume that $E(0)=0$. Then Theorem \[theoprecis\] applies; moreover, in the setting of §\[subsecnotationsinf\] we may assume that the rational numbers $t_j^\rho$ have different integer parts as soon as they are distinct. Then letting $S$ denote the set of all $t_j^\rho+1$, for $\rho\in{\Sigma}$ and $1\leq j \leq J(\rho)$, and denoting by $T $ the set of non-negative integers less than or equal to $\max_{j,\rho} K(j,\rho)$, the asymptotic expansion of Theorem \[theoprecis\] is exactly with coefficients $$\begin{aligned}
c_{\rho, \al,i,n} &=& \sum_{\stackrel{1\leq j \leq J(\rho)}{\mbox{{\tiny with }} \al = t_j^\rho+1}} \sum_{k=i}^{K(j,\rho)} \varpi_{j,k}^\rho
\sum_{\ell=0}^{k-i} \frac{(-1)^{\ell}}{\ell!} \Gh^{(\ell)}(1-\{\al \}) \\
&& \hspace{2cm} \sum_{m=0}^{k-\ell-i} \frac{1}{m!} \frac{\dd^{m}}{\dd y^{m}}
\Big(\frac{\Gamma(1-\{y\})}{\Gamma(-y-n)}\Big)_{| y=\al -1}
g_{j,k-\ell-i-m,n}^\rho \end{aligned}$$ where $g_{j,k-\ell-i-m}^\rho(z-\rho) = \sum_{n=0}^\infty g_{j,k-\ell-i-m,n}^\rho (z-\rho)^n$. Now the coefficients $g_{j,k-\ell-i-m,n}^\rho$ are algebraic because $g_{j,k-\ell-i-m}^\rho$ is a $G$-function, and $\frac{\dd^{m}}{\dd y^{m}}\Big(\frac{\Gamma(1-\{y\})}{\Gamma(-y-n)}\Big)_{| y=\al -1}
$ is a rational number. Since $\varpi_{j,k}^\rho \in{{\bf G}}$ and ${\overline{\mathbb Q}}\subset{{\bf G}}$, the coefficient $c_{\rho, \al,i,n} $ is a ${{\bf G}}$-linear combination of derivatives of $\Gh = 1/\Gamma$ taken at the rational point $1-\{\al\}$. By the complements formula, $\Gh(z) = \frac{\sin(\pi z)}{\pi}\Gamma(1-z)$: applying Leibniz’ formula we see that $\Gh^{(k)}(z)$ is a ${{\bf G}}$-linear combination of derivatives of $\Gamma$ at $1-z$ up to order $k$, provided $z\in{\mathbb{Q}}\setminus{\mathbb{Z}}$ (using the fact [@gvalues] that ${{\bf G}}$ contains $\pi$, $1/\pi$, and the algebraic numbers $\sin(\pi z)$ and $\cos(\pi z)$). When $z=1$, we use the identity (at $x=0$) $$\Gamma(x+1) = \exp\Big(-\gamma x + \sum_{k=2}^\infty \frac{(-1)^k \zeta(k)}{k}x^k\Big)$$ (see [@Andrews p. 3, Theorem 1.1.2]) and the properties of Bell polynomials (see for instance [@Comtet1 Chap. III, §3]). Since $\zeta(k) \in{{\bf G}}$ for any $k\geq 2$ (because polylogarithms are $G$-functions), it follows that both $\Gamma^{(k)}(1)$ and $\Gh^{(k)}(1)$ are polynomials of degree $k$ in Euler’s constant $\gamma$, with coefficients in ${{\bf G}}$; moreover the leading coefficients of these polynomials are rational numbers. This implies that $\Gh^{(k)}(1)$ is a ${{\bf G}}$-linear combination of derivatives of $\Gamma$ at 1 up to order $k$, and concludes the proof that all coefficients $c_{\rho, \al,i,n} $ in the expansion provided by Theorem \[theoprecis\] belong to ${\bf S}$.
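For instance, expanding the identity above to order $2$ gives $\Gamma'(1)=-\gamma$ and $\Gamma''(1)=\gamma^2+\zeta(2)$, in line with the claim that $\Gamma^{(k)}(1)$ is a polynomial of degree $k$ in $\gamma$ with rational leading coefficient. These two values can be confirmed by the following Python sketch (illustrative only, using finite differences).

```python
import math

g = 0.5772156649015329      # Euler's constant gamma
zeta2 = math.pi ** 2 / 6    # zeta(2)

# Finite-difference approximations of Gamma'(1) and Gamma''(1).
h = 1e-4
d1 = (math.gamma(1 + h) - math.gamma(1 - h)) / (2 * h)
d2 = (math.gamma(1 + h) - 2 * math.gamma(1.0) + math.gamma(1 - h)) / h ** 2

assert abs(d1 + g) < 1e-7                  # Gamma'(1) = -gamma
assert abs(d2 - (g ** 2 + zeta2)) < 1e-5   # Gamma''(1) = gamma^2 + zeta(2)
```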
To prove $(iii)$, we fix $\rho $ and $\alpha$ and denote by $K$ the maximal value of $K(j,\rho)$ among integers $j$ such that $\alpha = t_j^\rho+1$. Then $$c_{\rho, \al,i,n} = \sum_{\ell = 0}^{K-i} \frac{(-1)^\ell}{\ell!} \Gh^{(\ell)}(1-\{\alpha\}) g'_{\ell+i,n}$$ where $$g'_{\lambda,n} = \sum_{j} \sum_{k=\lambda}^{K(j,\rho)} \varpi_{j,k}^\rho \sum_{m=0}^{k-\lambda} \frac1{m!} \frac{\dd^{m}}{\dd y^{m}}
\Big(\frac{\Gamma(1-\{y\})}{\Gamma(-y-n)}\Big)_{| y=\al -1} g_{j,k-\lambda-m,n}^\rho \in {{\bf G}};$$ here $0 \leq \lambda \leq K$ and the first sum is on $j \in \{1,\ldots,J(\rho)\}$ such that $\alpha = t_j^\rho+1$ and $K(j,\rho) \geq \lambda$. If $n$ is fixed and $g'_{\lambda,n}\neq 0$ for some $\lambda$, then denoting by $\lambda_0$ the largest such integer $\lambda$ we have $c_{\rho, \al,\lambda_0,n}\in \Gh(1-\{\alpha\})\cdot {{\bf G}}\setminus\{0\} = \Gamma(\alpha)\cdot {{\bf G}}\setminus\{0\} $ and assertion $(iii)$ follows.
To prove $(i)$ and $(iv)$, we first observe that if $F(z)$ is given by with algebraic coefficients ${\phi_{j,s,k}}$, the asymptotic expansions of $F_j(z)$ we have just obtained can be multiplied by ${\phi_{j,s,k}}z^s \log(z)^k$ and summed up, thereby proving $(ii)$ for $F(z)$. To deduce $(i)$ from $(ii)$ for any solution $F(z)$ of an $E$-operator $L$, we recall that any formal solution $f$ of $L$ at $\infty$ can be written as with complex coefficients $c_{\rho, \al,i,n}(f) $, and denote by ${\Phi}(f)$ the family of all these coefficients. The linear map ${\Phi}$ is injective, so that there exists a finite subset ${X}$ of the set of indices $(\rho, \al,i,n)$ such that ${\Psi}: f \mapsto ( c_{\rho, \al,i,n}(f))_{(\rho, \al,i,n) \in {X}}$ is a bijective linear map. Denoting by $F_\theta$ the asymptotic expansion of $F(x)$ in a large sector bisected by $\theta$, we have $${\Psi}(F_\theta) ={\omega}_{1,\theta} {\Psi}(H_1) + \ldots + {\omega}_{\mu,\theta} {\Psi}(H_\mu)$$ with the notation of . Now $ {\Psi}(H_1) $, …, $ {\Psi}(H_\mu)$ are linearly independent elements of ${\overline{\mathbb Q}}^{{X}}$ and ${\omega}_{1,\theta} , \ldots, {\omega}_{\mu,\theta}$ can be obtained by Cramer’s rule, so that they are linear combinations of the components of ${\Psi}(F_\theta)$ with coefficients in ${\overline{\mathbb Q}}\subset{{\bf G}}$: using $(ii)$ this concludes the proof of $(i)$.
Asymptotics of the coefficients of $A(z)\cdot E\big(B(z)\big)$ {#sec:asympPn}
==============================================================
In this section we deduce from Theorem \[thintrocce\] the following result, of independent interest, which is the main step in the proof of Theorem \[theo:eapprox\] (see §\[preuvesubset2\]).
\[theoasypn\] Let $E(z)$ be an $E$-function, and $A(z), B(z) \in {\overline{\mathbb Q}}[[z]]$ be algebraic functions; assume that $P(z) = A(z)\cdot E\big(B(z)\big) = \sum_{n=0}^{\infty} P_n z^n$ is not a polynomial. Then either $$\label{eqtheoaebun}
P_n = \frac{(2\pi)^{(1-d)/(2d)}}{n!^{1/d}}q^n n^{-u-1} (\log n)^v \Big( \sum_{\te} \Gamma(-u_\te) g_\te e^{in\te} + o(1)\Big)$$ or $$\label{eqtheoaebde}
P_n = q^n e^{\sum_{\ell=1}^{d-1}{\kappa}_\ell n^{\ell/d}} n^{-u-1} (\log n)^v \Big( \sum_{\te_1,\ldots,\te_d} \omega_{\te_1,\ldots,\te_d} e^{\sum_{\ell=1}^{d} i \te_\ell n^{\ell/d}} + o(1)\Big)$$ where $q \in {\overline{\mathbb Q}}$, $u \in {\mathbb{Q}}$, $u_\te\in {\mathbb{Q}}\setminus{\mathbb{N}}$, $d,v\in{\mathbb{N}}$, $d\geq 1$, $q > 0$, $g_\te\in {{\bf G}}\setminus\{0\}$, ${\kappa}_1,\ldots,{\kappa}_{d-1}\in{\mathbb{R}}$, $\te, \te_1,\ldots,\te_d\in [-\pi,\pi)$, the sums on $\te$ and $\te_1,\ldots,\te_d$ are finite and non-empty, and $$\label{eq43}
\left\{ \begin{array}{l}
\omega_{\te_1,\ldots,\te_d} = \frac{\xi}{\Gamma(-u)} \mbox{ with } \xi\in ( {\mathbf{E}}\cup (\Gamma({\mathbb{Q}})\cdot {{\bf G}})) \setminus\{0\} \\
\hspace{5cm}\mbox{ if } v={\kappa}_1= \ldots = {\kappa}_{d-1}= \te_1=\ldots = \te_{d-1}=0,\\
\omega_{\te_1,\ldots,\te_d} \in \Gamma({\mathbb{Q}}) \cdot \exp( {\overline{\mathbb Q}}) \cdot {{\bf G}}\setminus\{0\} \mbox{ otherwise.}
\end{array}\right.$$
As in the introduction, in we let $\Gamma(-u) = 1$ if $u\in{\mathbb{N}}$. In the special case where $$P(z) = (1-z)^{\alpha} \exp\Big(\sum_{i=1}^k \frac{b_i}{(1-z)^{\alpha_i}}\Big)$$ with $\alpha,\alpha_1,\ldots,\alpha_k\in{\mathbb{Q}}$, $b_1,\ldots,b_k\in{\overline{\mathbb Q}}$, $\alpha_1 > 0$ and $b_1\neq 0$, Theorem \[theoasypn\] is consistent with Wright’s asymptotic formulas [@wright2] for $P_n$.
We shall now prove Theorem \[theoasypn\]; we distinguish between two cases (see §\[subsecentiere\] and \[subsecnonentiere\]), which lead to Eqns. and respectively. This distinction, based on the growth of $P_n$, is different from the one mentioned in the introduction (namely whether $E(z)$ plays a role as $z\to z_0\in{\mathbb{C}}$ or as $z\to\infty$, providing elements of ${\mathbf{E}}$ or $\Gamma({\mathbb{Q}})\cdot{{\bf G}}$ respectively). We start with the following consequence of Theorem \[thintrocce\], which is useful to study $E(z)$ as $z\to\infty$, in both §\[subsecentiere\] and §\[subsecdiff\].
\[lemEsomme\] For any $E$-function $E(z)$ there exist $K \geq 1$, $u_1,\ldots,u_K\in{\mathbb{Q}}$, $v_1,\ldots,v_K\in{\mathbb{N}}$, and pairwise distinct $\alpha_1,\ldots,\alpha_K\in{\overline{\mathbb Q}}$ such that $$\label{eqdevEnv}
E(z) = \sum_{k=1}^K \omega_{k} e^{\alpha_k z} z^{u_k} \log(z)^{v_k} (1+o(1))$$ as $|z|\to\infty$, uniformly with respect to $\arg(z)$, where $\omega_k \in \Gamma(-u_k) \cdot {{\bf G}}\setminus\{0\} $ with $\Gamma(-u_k)=1$ if $u_k\in{\mathbb{N}}$.
In Eq. we assume that a determination of $\log z$ is chosen in terms of $k$, with a cut in a direction where the term corresponding to $k$ is very small with respect to another one (except if $K=1$, but in this case the proof yields $v_1=0$ and $u_1\in{\mathbb{Z}}$).
For any $\alpha\in{\mathbb{C}}$, let $I_\alpha$ denote the set of all directions $\theta\in{\mathbb{R}}/2\pi{\mathbb{Z}}$ such that $E(z)$ has an asymptotic expansion in a large sector bisected by $\theta$, with $\Sigma$ having the least possible cardinality, $\alpha\in\Sigma$, and ${\textup{Re}\,}(\alpha' e^{i\theta}) \leq {\textup{Re}\,}(\alpha e^{i\theta})$ for any $\alpha'\in\Sigma$. This implies that in the direction $\theta$, the growth of $E(z)$ is comparable to that of $e^{\alpha z}$. Then $I_\alpha$ is either empty or of the form $[R_\alpha,S_\alpha] \bmod 2\pi$ with $R_\alpha\leq S_\alpha$. We denote by $\Sigma_0$ the set of all $\alpha\in{\mathbb{C}}$ such that $I_\alpha\neq\emptyset$; then $\Sigma_0$ is a subset of the finite set ${\Sigma}\subset{\overline{\mathbb Q}}$ constructed in §\[subsecnotationsinf\], so that $\Sigma_0$ is finite: we denote by $\alpha_1,\ldots,\alpha_K$ its elements, with $K\geq 1$.
If $K=1$ then $I_{\alpha_1} = {\mathbb{R}}/2\pi{\mathbb{Z}}$ and the asymptotic expansion is the same in any direction: $e^{-\alpha_1z} E(z)$ has (at most) a pole at $\infty$, and Lemma \[lemEsomme\] holds with $u_1\in{\mathbb{Z}}$, $v_1=0$, and $\omega_1\in{{\bf G}}$ (using Theorem \[thintrocce\]).
Let us assume now that $K\geq 2$. Then $S_{\alpha_k} - R_{\alpha_k} \leq \pi$ for any $k$, so that $E(z)$ admits an asymptotic expansion in a large sector that contains all directions $\theta\in I_{\alpha_k}$. Among all terms corresponding to $e^{\alpha_k z}$ in this expansion, we denote the leading one by $$\label{eqdefalk}
\omega_k e^{\alpha_k z} z^{u_k} (\log z)^{v_k}$$ with $u_k\in{\mathbb{Q}}$, $v_k\in{\mathbb{N}}$, and $\omega_k \in \Gamma(-u_k) \cdot {{\bf G}}\setminus\{0\} $ (using assertion $(iii)$ of Theorem \[thintrocce\]), where $\Gamma(-u_k)$ is understood as 1 if $u_k$ is a non-negative integer. These parameters are the ones in . To conclude the proof of Lemma \[lemEsomme\], we may assume that $\arg(z)$ remains in a small segment $I$, and consider the asymptotic expansion in a large sector containing $I$. Keeping only the dominant term corresponding to each $\alpha\in\Sigma$ in this expansion, we obtain $$\label{eqdevEde}
E(z) = \sum_{\alpha\in\Sigma} \omega'_{\alpha} e^{\alpha z} z^{u'_\alpha} (\log z)^{v'_\alpha} (1+o(1)).$$ To prove that is equivalent to as $|z|\to\infty$ with $\arg(z)\in I$, we may remove from both equations all terms corresponding to values $\alpha_k$ (resp. $\alpha\in\Sigma$) such that $I_{\alpha_k} \cap I = \emptyset$ (resp. $I_{\alpha} \cap I = \emptyset$), since they fall into error terms. Now for any $\alpha = \alpha_k$ such that $I_{\alpha} \cap I \neq \emptyset$, $E(z)$ admits an asymptotic expansion in a large sector containing $I_{\alpha} \cup I$ (since $I_\alpha$ has length at most $\pi$, and the length of $I$ can be assumed to be sufficiently small in terms of $E$). Comparing the dominating exponential term of this expansion in a direction $\theta \in I_{\alpha} \cap I$ with the ones of and , we obtain $ \omega'_{\alpha} = \omega_k$, $u'_\alpha = u_k$, and $v'_\alpha = v_k$. This concludes the proof of Lemma \[lemEsomme\].
$P(z)$ is an entire function {#subsecentiere}
----------------------------
If $P(z)$ is an entire function then $A(z)$ and $B(z)$ are polynomials; we denote by $\delta \geq 0$ and $d \geq 1$ their degrees, and by $A_\delta$ and $B_d$ their leading coefficients. We shall estimate the growth of the Taylor coefficients of $P(z)$ by the saddle point method. For any circle $C_R$ of center $0$ and radius $R$, Lemma \[lemEsomme\] yields $$\begin{aligned}
P_n&=\frac{1}{2 i \pi} \int_{C_R} \frac{A(z) \cdot E(B(z))}{z^{n+1}} \dd z\nonumber \\
&=\frac{1}{2 i \pi}
\sum_{k=1}^K \omega_k A_\delta B_d^{u_k} d^{v_k} \int_{C_R } e^{\alpha_k B(z)} \cdot z^{\delta+d u_k -n-1} (\log z)^{v_k} \cdot (1+o(1))\dd z \nonumber \end{aligned}$$ where the $o(1)$ is with respect to $R \to +\infty$ and is uniform in $n$; here $\log(z)$ is a fixed determination which depends on $k$ (see the remark after Lemma \[lemEsomme\]). We have to distinguish between the cases $\alpha_k=0$ and $\alpha_k\neq 0$. In the former case, the integral $$\frac{\omega_k}{2 i \pi}
\int_{C_R } z^{\delta+d u_k -n-1} (\log z)^{v_k} \cdot (1+o(1))\dd z$$ tends to $0$ as $R\to +\infty$ (provided $n$ is sufficiently large) and there is no contribution coming from this case.
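The starting point of this computation, the coefficient extraction $P_n = \frac{1}{2 i \pi}\int_{C_R} P(z) z^{-n-1} \dd z$, is easy to check numerically: the trapezoidal rule on a circle recovers Taylor coefficients to near machine precision. Here is a small Python sketch with the toy instance $A=1$, $E=\exp$, $B(z)=z^2$ (our choice of example, not taken from the paper).

```python
import cmath, math

def taylor_coeff(f, n, R=1.0, N=256):
    """Approximate (1/(2*pi*i)) * integral of f(z)/z^{n+1} over the circle |z| = R
    by the trapezoidal rule, which is spectrally accurate for periodic integrands."""
    s = 0j
    for m in range(N):
        z = R * cmath.exp(2j * math.pi * m / N)
        s += f(z) * z ** (-n)
    return s / N

# P(z) = exp(z^2): the coefficient of z^6 is 1/3!.
P6 = taylor_coeff(lambda z: cmath.exp(z * z), 6)
assert abs(P6 - 1.0 / 6.0) < 1e-12
# Odd coefficients of exp(z^2) vanish.
assert abs(taylor_coeff(lambda z: cmath.exp(z * z), 5)) < 1e-12
```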
Now $E(z)$ is not a polynomial (otherwise $P(z)$ would be a polynomial too), so that if $\alpha_k = 0$ for some $k$ then $K \geq 2$: there is always at least one integer $k$ such that $\alpha_k\neq 0$. For any such $k$, the function $$e^{\alpha_k B(z)} z^{\delta+d u_k -n-1} (\log z)^{v_k}$$ is smooth on $C_R $ (except on the cut of $\log z$) and the integral can be estimated as $n\to \infty$ by finding the critical points of $\alpha_k B(z)-n \log(z)$, i.e. the solutions $z_{1,k}(n), \ldots, z_{d,k}(n)$ of $zB'(z)=n/\alpha_k$. As $n\to\infty$, we have $z_{j,k}(n){\sim}(d B_{d} \alpha_k)^{-1/d} e^{2i \pi j/d} n^{1/d} \to \infty$, so that $\alpha_k B(z_{j,k}(n)){\sim}n/d$.
Moreover, denoting by $\Delta_{j,k}(n)$ the second derivative of $\alpha_k B(z)-n \log(z)$ at $z=z_{j,k}(n)$, we see that asymptotically $$\Delta_{j,k}(n)=\alpha_k B''(z_{j,k}(n))+\frac{n}{z_{j,k}(n)^2}
{\sim}d (d B_d \alpha_k)^{2/d}e^{-4i \pi j/d} n^{1-2/d}.$$ Then the saddle point method yields: $$P_n = \sum_{\alpha_k\neq 0} \omega'_k \sum_{j=0}^{d-1} \frac{1}{\sqrt {2\pi \Delta_{j,k}(n) }}
e^{\alpha_k B(z_{j,k}(n))} z_{j,k}(n)^{\delta+d u_k-n-1} (\log z_{j,k}(n))^{v_k} (1+o(1))$$ with $\omega'_k = \omega_k A_\delta B_d^{u_k} d^{v_k} \in {{\overline{\mathbb Q}}{^*}}\omega_k $. This relation yields $$P_n = \sum_{\alpha_k\neq 0} \frac{\omega''_k}{\sqrt{2\pi}} n^{-n/d} (ed B_d\alpha_k)^{n/d} n^{\frac{\delta}{d}+ u_k -\frac12}(\log n)^{v_k} \Big( \sum_{j=0}^{d-1} e^{2i\pi jn/d} + o(1)\Big)$$ with $\omega''_k \in {{\overline{\mathbb Q}}{^*}}\omega_k $. Now let ${\widetilde\alpha}= \max(|\alpha_1|,\ldots,|\alpha_K|)$ and consider the set ${{\mathcal K}}$ of all $k$ such that $|\alpha_k| = {\widetilde\alpha}$. For each $k\in{{\mathcal K}}$ we write $\alpha_k^{1/d} = {\widetilde\alpha}^{1/d} e^{i\te_k}$; then Stirling’s formula yields $$P_n = (2\pi)^{(1-d)/(2d)} n!^{-1/d} (dB_d{\widetilde\alpha})^{n/d} \sum_{k\in{{\mathcal K}}} \omega''_k n^{\frac{\delta}{d} +u_k -\frac12 + \frac1{2d}}(\log n)^{v_k} \sum_{j=0}^{d-1} e^{i (\te_k + \frac{2 \pi j}{d})n} (1 + o(1)).$$ Keeping only the dominant terms provides Eq. .
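The shape of this formula can be tested on the toy example $P(z)=e^{z^2}$ (so $d=2$, $B_2=1$, and $P_{2m}=1/m!$), for which it predicts $\log P_n = -\frac12\log n! + \frac{n}{2}\log 2 - \frac14\log n + O(1)$ along even $n$. The following Python sketch (illustrative only) checks that the remainder indeed stabilizes.

```python
import math

def r(m):
    """log P_{2m} minus the predicted n-dependent part, with n = 2m and
    P_{2m} = 1/m!. Should tend to a finite constant as m grows."""
    n = 2 * m
    return (-math.lgamma(m + 1)          # log P_{2m} = -log m!
            + 0.5 * math.lgamma(n + 1)   # + (1/2) log n!
            - (n / 2) * math.log(2)      # - (n/2) log 2
            + 0.25 * math.log(n))        # + (1/4) log n

# Successive values stabilize, confirming the predicted powers of n!, 2^n and n.
assert abs(r(2000) - r(1000)) < 1e-2
assert abs(r(4000) - r(2000)) < 5e-5
```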
$P(z)$ is not an entire function {#subsecnonentiere}
--------------------------------
Let us move now to the case where $P(z)$ is not entire. It has only a finite number of singularities of minimal modulus (equal to $q^{-1}$, say), and as usual the contributions of these singularities add up to determine the asymptotic behavior of $P_n$. Therefore, for simplicity we shall restrict in the proof to the case of a unique singularity $\rho$ of minimal modulus $q^{-1}$. We consider first two special cases, and then the most difficult one.
### $B(z)$ has a finite limit at $\rho$ {#subsec421}
Let us assume that $B(z)$ admits a finite limit as $z\to \rho$, denoted by $B(\rho)$; $\rho$ can be a singularity of $B$ or not. In both cases, as $z \to \rho $ we have $$B(z) = B(\rho) + \mathfrak{B} (z-\rho)^t (1+o(1))$$ with $t\in{\mathbb{Q}}$, $t\geq 0$, and $ \mathfrak{B}\in{{\overline{\mathbb Q}}{^*}}$ (unless $B$ is a constant; in this case the proof is even easier). Now all Taylor coefficients of $E(z)$ at $B(\rho)$ belong to ${\mathbf{E}}$, so that $$E(B(z)) {\sim}\eta (z-\rho)^{t'}$$ as $z\to\rho$, with $t'\in{\mathbb{Q}}$, $t'\geq 0$, and $\eta\in{\mathbf{E}}\setminus\{0\}$. On the other hand, if $\rho$ is a singularity of the algebraic function $A(z)$ then its Puiseux expansion yields $s\in{\mathbb{Q}}\setminus{\mathbb{N}}$, $\mathfrak{A} \in{{\overline{\mathbb Q}}{^*}}$ and a polynomial $\widetilde A$ such that $$A(z) = \widetilde A(z-\rho) + \mathfrak{A} (z-\rho)^s (1+o(1))$$ as $z\to\rho$; if $\rho$ is not a singularity of $A$ we have the same expression with $s\in{\mathbb{N}}$ and $\widetilde A = 0$. In both cases we obtain finally $p \in {\mathbb{Q}}\setminus{\mathbb{N}}$, $ \mathfrak{P} \in{\mathbf{E}}\setminus\{0\}$ and a polynomial $\widetilde P$ such that $$P(z) = \widetilde P(z-\rho) + \mathfrak{P} (z-\rho)^p (1+o(1)).$$ Using standard transfer results (see [@Bible], p. 393) this implies $$P_n {\sim}\frac{(-\rho)^{-p} \mathfrak{P} }{\Gamma(-p)} \rho^{-n}n^{-p-1}.$$ Therefore the singularity contributes to through a term in which $v={\kappa}_1= \ldots = {\kappa}_{d-1}= \te_1=\ldots = \te_{d-1}=0$ and $\rho^{-1} = q e^{i\te_d}$.
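The transfer step can be illustrated on the simplest algebraic example $P(z)=(1-z)^{1/2}$, i.e. $\rho=1$ and $p=1/2$: since $\Gamma(-1/2)=-2\sqrt{\pi}$, the estimate above gives $P_n\sim -n^{-3/2}/(2\sqrt{\pi})$. A short numerical sketch (our example, not from the paper):

```python
import math

def coeff(n):
    """n-th Taylor coefficient of (1-z)^{1/2}, via the ratio
    c_{k+1} = c_k * (k - 1/2) / (k + 1), starting from c_0 = 1."""
    c = 1.0
    for k in range(n):
        c *= (k - 0.5) / (k + 1)
    return c

n = 4000
predicted = -n ** (-1.5) / (2 * math.sqrt(math.pi))
assert abs(coeff(n) / predicted - 1) < 1e-3   # relative error is O(1/n)
```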
### $E$ is a polynomial {#subsec422}
In this case, $P(z)$ is an algebraic function (and not a polynomial) so that $$P_n{\sim}\frac{\omega}{\Gamma(-s)}\cdot n^{-s-1} \rho^{-n}$$ with $\omega \in {{\overline{\mathbb Q}}{^*}}\subset {\mathbf{E}}\setminus\{0\}$ and $s\in {\mathbb{Q}}\setminus{\mathbb{N}}$ determined by the Puiseux expansion of $P(z)$ around $\rho$ (using the same transfer result as above). Therefore each singularity $\rho = q^{-1} e^{-i\te_d}$ contributes to a term in with $v={\kappa}_1= \ldots = {\kappa}_{d-1}= \te_1=\ldots = \te_{d-1}=0$.
### The main part of the proof {#subsecdiff}
Let us come now to the most difficult part of the proof, namely the contribution of a singularity $\rho$ at which $B(z)$ does not have a finite limit (in the case where $E(z)$ is not a polynomial). As above we assume (for simplicity) that $\rho$ is the unique singularity of $P(z)$ of minimal modulus $q^{-1}$. As $z\to\rho$, we have $$\label{eqasyab}
A(z) {\sim}\mathfrak{A} (z-\rho)^{t/s} \mbox{ and }
B(z) {\sim}\mathfrak{B} (z-\rho)^{-\tau/\sigma}$$ with $\mathfrak{A} , \mathfrak{B} \in {{\overline{\mathbb Q}}{^*}}$, $s,t,\sigma,\tau\in \mathbb Z$, $s,\sigma,\tau>0$, and $\gcd(s,t) = \gcd(\sigma,\tau) = 1$. For any circle $C_R$ of center $0$ and radius $R<\vert \rho\vert$, we have (using Lemma \[lemEsomme\] as in §\[subsecentiere\]) $$\label{eqpnde}
P_n =\frac{1}{2 i \pi}
\sum_{k=1}^K \omega_k \int_{C_R } \frac{e^{\alpha_k B(z)}}{z^{n+1}}
\cdot A(z)B(z)^{u_k}\log(B(z))^{v_k} \cdot (1+o(1))\dd z$$ where $o(1)$ is with respect to $R \to \vert \rho\vert$ and is uniform in $n$.
If $\alpha_k=0$ for some $k$, then the corresponding term in has to be treated in a specific way, since the main contribution may come from the error term $o(1)$. For this reason we observe that in Lemma \[lemEsomme\], the term corresponding to $\alpha_k=0$ can be replaced with any truncation of the asymptotic expansion of $E(z)$, namely with $$\sum_{u = -U_0}^{U_1} \sum_{v=0}^{V} \omega_{u,v} z^{u/d} (\log z)^v + o(z^{-U_0/d})$$ where $d\geq 1$ and $U_0$ can be chosen arbitrarily large. Now the corresponding term in becomes $$\label{contribexcep}
\frac1{2i\pi} \int_{C_R } \frac1{z^{n+1}} \Big( \sum_{u = -U_0}^{U_1} \sum_{v=0}^{V} \omega_{u,v}A(z) B(z)^{u/d} (\log B(z))^v + o(A(z) B(z)^{-U_0/d})\Big)dz.$$ The point is that the function $ \omega_{u,v}A(z) B(z)^{u/d} (\log B(z))^v $ may be holomorphic at $z=\rho$, because $ \omega_{u,v}=0$ or because the singularities at $\rho$ of $A(z)$ and $B(z)^{u/d} (\log B(z))^v $ cancel out; in this case the corresponding integral over $C_R$ is $o(q'^n)$ for some $q'<q=|\rho|^{-1}$ so that it falls into error terms. If this happens for any $U_0$, for any $u$ and any $v$, then the term corresponding to $\alpha_k=0$ in is $o(q^nn^{-U})$ for any $U>0$, so that it falls into the error term of the expression we are going to obtain for $P_n$. Otherwise we may consider the maximal pair $(u,v)$ (with respect to lexicographic order) for which this function is not holomorphic; then is equal to $$\frac{ \omega'_{u,v}}{2 i \pi} \int_{C_R} \frac{(\rho-z)^{T}\log(\rho-z)^{v }}{z^{n+1}}
\cdot (1+o(1))\dd z$$ for some $T\in{\mathbb{Q}}$ and $ \omega'_{u,v} \in {{\overline{\mathbb Q}}{^*}}\omega_{u,v}\subset \Gamma({\mathbb{Q}}) \cdot {{\bf G}}$ (using assertion $(iii)$ of Theorem \[thintrocce\]). We obtain finally (see [@Bible], p. 387): $$\begin{cases}
\displaystyle \frac{ \omega'_{u,v}}{\Gamma(-T)}\rho^{T-n} n^{-T-1}\log(n)^{v} (1+o(1)) \quad \textup{if}\; T\not\in{\mathbb{N}},
\\
\\
\displaystyle\omega'_{u,v} \rho^{T-n} n^{-T-1} \log(n)^{v -1} (1+o(1)) \quad \textup{if}\; T\in{\mathbb{N}}\mbox{ (so that } v\geq 1\mbox{).}
\end{cases}$$ This contribution can either fall into the error term of , or give a term with $ {\kappa}_1= \ldots = {\kappa}_{d-1}= \te_1=\ldots = \te_{d-1}=0$.
Let us now study the terms in for which $ \alpha_k\neq 0$; since $E(z)$ is not a polynomial there is at least one such term. The function $$\frac{e^{\alpha_k B(z)}}{z^{n+1}}\cdot A(z)B(z)^{u_k}\log(B(z))^{v_k}$$ is smooth on $C_R $ (except on the cuts of $\log(B(z))$) and the integral can be estimated as $n\to \infty$ by finding the critical points of $\alpha_k B(z)-n \log(z)$, i.e. the solutions of $zB'(z)=n/\alpha_k$. For large $n$, any critical point $z$ must be close to $\rho$ (since $zB'(z)$ remains bounded on the part of the disk $|z| \leq |\rho|$ lying outside any fixed neighborhood of $\rho$). Now in a neighborhood of $z=\rho$ we have $$zB'(z) {\sim}-\frac{\rho \tau \mathfrak{B}}{\sigma} \cdot \frac{1}{(z-\rho)^{1+\tau/\sigma}}$$ so that we have $\tau+\sigma$ critical points $z_{j,k}(n)$, for $j=0, \ldots, \sigma+\tau-1$, with $$z_{j,k}(n) - \rho {\sim}e^{2 i \pi j\sigma/(\sigma+\tau)}\cdot \left(-\frac{\sigma n}{\rho \mathfrak{B} \tau \alpha_k}
\right)^{-\sigma/(\sigma+\tau)}.$$ Using and letting $\kappa=t/s \in \mathbb Q$ we deduce that $$A(z_{j,k}(n)) {\sim}\mathfrak{A} e^{2 i \pi j\sigma\kappa/(\sigma+\tau)}\cdot
\left(-\frac{\sigma n}{\rho \mathfrak{B} \tau \alpha_k}\right)^{-\sigma\kappa /(\sigma+\tau)} \neq 0.$$ Moreover we have $$\alpha_k B(z_{j,k}(n)) {\sim}\frac{-\sigma}{\tau} ( z_{j,k}(n) - \rho) \alpha_k B'(z_{j,k}(n)) {\sim}\frac{-\sigma n}{\rho \tau} ( z_{j,k}(n) - \rho) {\sim}{{\mathfrak D}_{j,k}}n^{\tau/(\sigma+\tau)}$$ with $$\label{eqdefdk}
{{\mathfrak D}_{j,k}}= \Big( \alpha_k \mathfrak{B} e^{2i\pi j}\Big)^{\sigma/(\sigma+\tau)} \Big(\frac{-\sigma}{\rho \tau}\Big)^{ \tau / (\sigma+\tau)}\neq 0.$$ To apply the saddle point method, we need to estimate the second derivative $\Delta_{j,k}(n)$ of $\alpha_kB(z)-n\log(z)$ at $z=z_{j,k}(n)$. We obtain $$\Delta_{j,k}(n)=\alpha_kB''(z_{j,k}(n))+\frac{n}{z_{j,k}(n)^2} {\sim}\frac{\tau(\sigma+\tau)}{\sigma^2}
(\alpha_k \mathfrak{B})^{-\sigma/(\sigma+\tau)} e^{-2i\pi j\frac{2\sigma+\tau}{\sigma+\tau}}\left(-\frac{\sigma}
{\rho \tau }\right)^{\frac{ 2\sigma+\tau}{ \sigma+ \tau}}
n^{\frac{ 2\sigma+\tau}{ \sigma+ \tau}}.$$ Finally, $$B(z_{j,k}(n))^{u_k}{\sim}({{\mathfrak D}_{j,k}}/ \alpha_k)^{u_k} n^{\tau u_k / (\sigma+\tau)}.$$ This enables us to apply the saddle point method. This yields a non-empty subset $J_k$ of $\{0,\ldots,\sigma+\tau-1\}$ such that the term corresponding to $\alpha_k$ in is equal to $$\sum_{j\in J_k} \frac{ \omega_k }{\sqrt{2\pi \Delta_{j,k}(n)}}
\frac{e^{\alpha_k B(z_{j,k}(n))}}{z_{j,k}(n)^{n +1}} A(z_{j,k}(n))B(z_{j,k}(n))^{u_k} \log(B(z_{j,k}(n)))^{v_k} (1+o(1)) .$$ Now for any pair $(j,k)$, $\alpha_k B(z_{j,k}(n))$ is an algebraic function of $n$ so that it can be expanded as follows as $n\to\infty$: $$\label{eqasydk}
\alpha_k B(z_{j,k}(n)) = \sum_{\ell=0}^{d'} {\kappa}_{j,k,\ell} n^{\ell / d} + o(1)$$ with ${\kappa}_{j,k,\ell} \in {\overline{\mathbb Q}}$, $0<d'<d$ and $d'/d = \frac{\tau}{\sigma+\tau}$, ${\kappa}_{j,k,d'} = {{\mathfrak D}_{j,k}}\neq 0$. Increasing $d$ and $d'$ if necessary, we may assume that they are independent from $(j,k)$. We denote by $({\kappa}_{d'},\ldots,{\kappa}_{1})$ the family $({\textup{Re}\,}{\kappa}_{j,k,d'},\ldots, {\textup{Re}\,}{\kappa}_{j,k,1})$ which is maximal with respect to lexicographic order (as $j$ and $k$ vary with $ \alpha_k\neq 0$ and $j\in J_k$), i.e. for which the real part of has maximal growth as $n\to\infty$. Among the set of pairs $(j,k) $ for which ${\textup{Re}\,}{\kappa}_{j,k,1} = {\kappa}_1$, …, ${\textup{Re}\,}{\kappa}_{j,k,d'} = {\kappa}_{d'}$, we define ${{\mathcal K}}$ to be the subset of those for which $(u_k,v_k)$ is maximal (with respect to lexicographic order), and let $(u,v)$ denote this maximal value. Then the total contribution to of all terms with $\alpha_k \neq 0$ is equal to $$\frac{n^{- \frac{\tau+2(1+\kappa)\sigma}{2\tau+2\sigma}}}{\sqrt{2\pi}} \rho^{-n} n^{\tau u / (\sigma+\tau)} \log(n)^{v }
e^{\sum_{\ell=1}^{d'}{\kappa}_\ell n^{\ell/d}} \Big( \sum_{(j,k)\in {{\mathcal K}}} \widehat{\omega}_{j,k} e^{{\kappa}_{j,k,0}}
e^{\sum_{\ell=1}^{d'} i {\textup{Im}\,}{\kappa}_{j,k,\ell} n^{\ell/d}} +o(1)\Big)$$ with $\widehat{\omega}_{j,k} \in {{\overline{\mathbb Q}}{^*}}\omega_k$. Since ${\kappa}_{d'} + i {\textup{Im}\,}{\kappa}_{j,k,d'} = {{\mathfrak D}_{j,k}}\neq 0$, this concludes the proof of Theorem \[theoasypn\].
Application to $E$-approximations {#sec:eapprox}
=================================
In this section we prove the results on $E$-approximations stated in the introduction, and discuss in §\[ssec:extended\] the generalization involving .
Examples of $E$-approximations {#ssec:example}
------------------------------
We start with an emblematic example. The diagonal Padé approximants to $\exp(z)$ are given by $Q_n(z)e^z- P_n(z)= \mathcal{O}(z^{2n+1})$ with $$Q_n(z)=\sum_{k=0}^n (-1)^{n-k}\binom{2n-k}{n}\frac{z^k}{k!} \quad \textup{and} \quad P_n(z)=Q_n(-z).$$ It is easy to prove that, for any $z\in \mathbb C$ and any $x$ such that $\vert x\vert<1/4$, $$\sum_{n=0}^\infty Q_n(z) x^n = \frac{e^{- z/2}}{\sqrt{1+4x}}e^{\frac z2\sqrt{1+4x}}.$$ This generating function can be written as $
\frac{e^{-z/2}}{\sqrt{1+4x}}f(z,x) + g(z,x),
$ where $f(z,x)$ and $g(z,x)$ are entire functions of $x$, and $f(z,-\frac 14)=f(-z,-\frac14)\neq 0$. Hence, the asymptotic behaviors of $Q_n(z)$ and $P_n(z)$ are given by $$Q_n(z)\sim e^{-z/2}f\Big(z,-\frac14\Big) 4^n\binom{-1/2}{n}
\quad \textup{and} \quad
P_n(z){\sim}e^{z/2}f\Big(z,-\frac14\Big) 4^n\binom{-1/2}{n}.$$ It follows in particular that $$\lim_{n\to +\infty} \frac{P_n(z)}{Q_n(z)} =e^z.$$ This proves that for any $z\in {\overline{\mathbb Q}}$, $e^z$ has $E$-approximations. Moreover, it is well-known that, up to a common sign, $n!P_n(1)$ and $n!Q_n(1)$ are respectively the numerator and denominator of the $n$-th convergent of the continued fraction of the number $e$. In other words, the convergents of $e$ are $E$-approximations of $e$.
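These claims are easy to test numerically. The following sketch (Python, exact rational arithmetic; an added illustration, not part of the original argument) sidesteps any normalization convention by recovering $P_n$ directly from the defining condition $Q_n(z)e^z-P_n(z)=\mathcal{O}(z^{2n+1})$, which determines $P_n$ as the degree-$n$ truncation of $Q_n(z)e^z$; it then checks that $P_n(1)/Q_n(1)$ approximates $e$ and that $n!\,Q_n(1)$ is an integer:

```python
from fractions import Fraction
from math import comb, e, factorial

def Q_coeffs(n):
    # coefficients of Q_n(z) = sum_{k=0}^n (-1)^(n-k) binom(2n-k, n) z^k / k!
    return [Fraction((-1) ** (n - k) * comb(2 * n - k, n), factorial(k))
            for k in range(n + 1)]

def P_coeffs(n):
    # the Pade condition Q_n(z) e^z - P_n(z) = O(z^{2n+1}) determines P_n
    # uniquely as the truncation of Q_n(z) e^z to degree n
    q = Q_coeffs(n)
    return [sum(q[k] * Fraction(1, factorial(m - k)) for k in range(m + 1))
            for m in range(n + 1)]

def ev(coeffs, z):
    return sum(c * z ** k for k, c in enumerate(coeffs))

n = 5
q = Q_coeffs(n)
Qn1, Pn1 = ev(q, 1), ev(P_coeffs(n), 1)
ratio = Pn1 / Qn1          # rational approximation of e

# the coefficients of z^{n+1}, ..., z^{2n} in Q_n(z) e^z - P_n(z) must vanish
tail = [sum(q[k] * Fraction(1, factorial(m - k)) for k in range(n + 1))
        for m in range(n + 1, 2 * n + 1)]
```

Running this for small $n$ reproduces (up to a common sign) the convergents $3$, $19/7$, $193/71, \ldots$ of $e$.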
As mentioned in the introduction, any element of ${{\rm Frac}\,}{{\bf G}}$ has $E$-approximations. To complete the proof of , let us prove this for any element of $\frac{ {\mathbf{E}}\cup \Gamma({\mathbb{Q}})}{ {\mathbf{E}}\cup \Gamma({\mathbb{Q}})} $ by constructing for any $\xi \in {\mathbf{E}}\cup \Gamma({\mathbb{Q}})$ a sequence $(P_n)$ as in Definition \[def:Eapprox\] with $\lim_{n\to\infty} P_n = \xi$.
If $\xi = F(\alpha)$ where $\alpha\in{\overline{\mathbb Q}}$ and $F(z)=\sum_{n\ge 0} \frac{a_n}{n!}z^n$ is an $E$-function, we define $P_n\in {\overline{\mathbb Q}}$ by $$\sum_{n=0}^{\infty} P_n z^n = \frac{1}{1-z} F(\alpha z).$$ Then, trivially, $$P_n=\sum_{k=0}^n \frac{a_k}{k!}\alpha^k \longrightarrow F(\al) = \xi.$$
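The mechanism here is simply that multiplying a power series by $\frac{1}{1-z}$ turns its coefficients into partial sums. A minimal sketch (assuming, purely for illustration, the $E$-function $F=\exp$, i.e. $a_n=1$, and a rational $\alpha$ so that the arithmetic is exact):

```python
from fractions import Fraction
from math import exp, factorial

def partial_sum_coeffs(a, alpha, N):
    # coefficients P_n of F(alpha*z)/(1-z), where F(z) = sum_k a_k z^k / k!:
    # dividing by (1-z) replaces each coefficient by the partial sum so far
    P, run = [], Fraction(0)
    for k in range(N + 1):
        run += Fraction(a[k]) * Fraction(alpha) ** k / factorial(k)
        P.append(run)
    return P

alpha = Fraction(1, 2)
P = partial_sum_coeffs([1] * 31, alpha, 30)   # F = exp, so P_n -> e^alpha
```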
If $\xi = \Gamma(\alpha)$ with $\alpha \in \mathbb Q\setminus \mathbb Z_{\le 0}$, we consider the $E$-function $$E_\alpha(z)=\sum_{n=0}^{\infty} \frac{z^n}{n!(n+\alpha)}$$ and define $P_n(\alpha)$ as announced in the introduction, by the series expansion (for $\vert z\vert <1$) $$\frac1{(1-z)^{\alpha+1}} E_\alpha\left(-\frac{z}{1-z}\right) = \sum_{n\ge 0} P_n(\alpha) z^n \in \mathbb Q [[z]].$$ Then $$P_n(\alpha)=\sum_{k=0}^n \binom{n+\al}{k+\al}\frac{(-1)^k}{k!(k+\al)}$$ (by direct manipulations) and, provided that $\alpha<1$, $$\lim_{n\to +\infty} P_n(\alpha) = \Gamma(\alpha) = \xi .$$ To see this, we start from the asymptotic expansion $$\label{eqex1}
E_\alpha(-z){\approx}\frac{\Gamma(\alpha)}{z^{\alpha}} - e^{-z}\sum_{n=0}^{\infty} (-1)^n
\frac{(1-\alpha)_n}{z^{n+1}}$$ in a large sector bisected by any $\theta\in(-\frac\pi2, \frac\pi2)$, which is a special case of Theorem \[theoprecis\] (proved directly in [@Michigan], Proposition 1). Since $\exp\big(-\frac z{1-z}\big)=\mathcal{O}(1)$, as $z\to 1$, $\vert z\vert <1$, it follows that $$\frac1{(1-z)^{\alpha+1}} E_\alpha\left(-\frac{z}{1-z}\right) =
\frac{\Gamma(\alpha)}{1-z} + \mathcal{O}\left(\frac1{\vert 1-z\vert^{\alpha}}\right)$$ for $z\to 1$, $\vert z\vert<1$. The result follows by standard transfer theorems since $\alpha<1$; this example is of the type covered by §\[subsecdiff\] with $\alpha_1=0$.
From the differential equation $zy''(z)+(\al+1-z)y'(z)-\al y(z)=0$ satisfied by $E_{\alpha}(z)$, we easily get the differential equation satisfied by $\frac1{(1-z)^{\alpha+1}} E_\alpha\left(-\frac{z}{1-z}\right)$: $$\begin{gathered}
\label{eq:eqdiff1}
\big(3z^3-z^4-3z^2+z\big)y''(z)+\big(5z^2\alpha-4z^3-2z^3\alpha+8z^2+1+\alpha-5z-4z\alpha\big)y'(z)\\
+\big(-1-2z^2-3z^2\alpha+2z-\alpha+4z\alpha-\alpha^2+2z\alpha^2-z^2\alpha^2\big)y(z)=0.\end{gathered}$$ This immediately translates into a linear recurrence satisfied by the sequence $(P_n(\alpha))$: $$\begin{gathered}
\label{eq:rec1}
(n+3)(n+3+\alpha)P_{n+3}(\alpha)-(3n^2+4n\alpha+14n +\alpha^2+9\alpha+17)P_{n+2}(\alpha)
\\
+(3n+5+2\alpha)(n+2+\alpha) P_{n+1} (\alpha)-(n+2+\alpha)(n+1+\alpha)P_n(\alpha)
=0\end{gathered}$$ with $P_0(\alpha)=\frac1\alpha$, $P_1(\alpha)=\frac{1+\alpha+\alpha^2}{\alpha(\alpha+1)}$ and $P_2(\alpha)=\frac{4+5\alpha+6\alpha^2+4\alpha^3+\alpha^4}{2\alpha(\alpha+1)(\alpha+2)}$.
Proof of {#preuvesubset2}
---------
The proof is very similar to that of [@gvalues] so we skip the details. Let $(P_n,Q_n)$ be $E$-approximations of $\xi\in{\mathbb{C}}{^*}$. If $(P_n)$ has the first asymptotic behavior of Theorem \[theoasypn\], then so does $(Q_n)$ with the same parameters $d$, $q$, $u$, $v$, and the sum is over the same non-empty finite set of $\te$. Therefore $\xi = \frac{g_\te \Gamma(-u_\te)}{g'_\te \Gamma(-u'_\te)} \in \Gamma({\mathbb{Q}})\cdot {{\rm Frac}\,}{{\bf G}}$, using Eq. .
Now if $(P_n)$ satisfies then so does $(Q_n)$ with the same parameters $q$, $u$, $v$, ${\kappa}_1$, …, ${\kappa}_{d-1}$ (since we may assume that $d$ is the same), and the same set of $(\te_1,\ldots,\te_d)$ in the sum. If $v = {\kappa}_1=\ldots={\kappa}_{d-1}=0$ and a term in the sum corresponds to $\te_1=\ldots=\te_{d-1}=0$, then $\xi = \frac{\omega_{0,\ldots,0,\te_d}}{\omega'_{0,\ldots,0,\te_d}}\in \frac{{\mathbf{E}}\cup(\Gamma({\mathbb{Q}})\cdot{{\bf G}})}{{\mathbf{E}}\cup (\Gamma({\mathbb{Q}})\cdot{{\bf G}})}$, else $\xi \in \Gamma({\mathbb{Q}}) \cdot \exp({\overline{\mathbb Q}}) \cdot {{\rm Frac}\,}{{\bf G}}$ (using Eq. ).
Extended $E$-approximations {#ssec:extended}
---------------------------
Let us consider the $E$-function $$E(z)=\sum_{n=1}^{\infty} \frac{z^n}{n!n}.$$ We shall prove that the sequence $(P_n)$ defined in the introduction by $$\frac{\log(1-z)}{1-z}-\frac{1}{1-z}E\left(-\frac z{1-z}\right) = \sum_{n=0}^{\infty} P_n z^n \in \mathbb Q[[z]]$$ provides, together with $Q_n=1$, a sequence of $E$-approximations of Euler’s constant in the extended sense of . It is easy to see that $$P_n= \sum_{k=1}^n (-1)^{k-1} \binom{n}{k}\frac{1}{k!k}-\sum_{k=1}^n \frac1k = \sum_{k=1}^n (-1)^{k} \binom{n}{k}\frac{1}{k} \Big(1-\frac1{k!}\Big),$$ where the second equality is a consequence of the identity $\sum_{k=1}^n \frac1k=\sum_{k=1}^n (-1)^{k-1} \binom{n}{k}\frac{1}{k}$. We now observe that $E(z)$ has the asymptotic expansion $$\label{eqex2}
E(-z) {\approx}- \gamma-\log(z) - e^{-z} \sum_{n=0}^{\infty} (-1)^n
\frac{n!}{z^{n+1}}$$ in a large sector bisected by any $\theta\in(-\pi,\pi)$ (see [@Michigan Prop. 1]; this is also a special case of Theorem \[theoprecis\]). Therefore, for $z\to1$, $\vert z\vert <1$, $$-\frac{1}{1-z} E\Big(-\frac{z}{1-z}\Big) + \frac{\log(1-z)}{1-z} =\frac{\gamma}{1-z} + \mathcal{O}(1).$$ As in §\[ssec:example\] in the case of $\Gamma(\alpha)$, a transfer principle readily shows that $$\lim_{n\to +\infty} P_n = \gamma.$$ Since $E(z)$ is holonomic, this is also the case of $\frac{\log(1-z)}{1-z}-\frac1{1-z} E\left(-\frac{z}{1-z}\right)$. The latter function satisfies the differential equation $$\begin{gathered}
\label{eq:eqdiff2}
\big(3z^3- z^4-3z^2+z\big)y''(z)
+\big(1-5z+8z^2-4z^3\big)y'(z)+\big(-2z^2+2z-1\big)y(z)=0.\end{gathered}$$ This immediately translates into a linear recurrence satisfied by the sequence $(P_n)$: $$\label{eq:rec2}
(n+3)^2P_{n+3}- (3n^2+14n+17)P_{n+2}+(n+2)(3n+5)P_{n+1}-(n+1)(n+2)P_{n}=0$$ with $P_0=0$, $P_1=0$, $P_2=\frac{1}{4}$. The differential equation and the recurrence relation are the case $\alpha=0$ of and respectively.
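As a sanity check, the formula for $P_n$, the initial values, the recurrence, and the classical identity $\sum_{k=1}^n \frac1k=\sum_{k=1}^n (-1)^{k-1}\binom{n}{k}\frac1k$ quoted above can all be confirmed in exact rational arithmetic (an added illustration, not part of the original text):

```python
from fractions import Fraction
from math import comb, factorial

def P(n):
    # P_n = sum_{k=1}^n (-1)^k binom(n,k) (1/k) (1 - 1/k!)
    return sum(Fraction((-1) ** k * comb(n, k), k) * (1 - Fraction(1, factorial(k)))
               for k in range(1, n + 1))

vals = [P(n) for n in range(16)]

def rec_ok(n):
    # (n+3)^2 P_{n+3} - (3n^2+14n+17) P_{n+2} + (n+2)(3n+5) P_{n+1} - (n+1)(n+2) P_n = 0
    return ((n + 3) ** 2 * vals[n + 3] - (3 * n ** 2 + 14 * n + 17) * vals[n + 2]
            + (n + 2) * (3 * n + 5) * vals[n + 1] - (n + 1) * (n + 2) * vals[n]) == 0

# the harmonic-sum identity used in the text
harmonic_ok = all(
    sum(Fraction(1, k) for k in range(1, n + 1))
    == sum(Fraction((-1) ** (k - 1) * comb(n, k), k) for k in range(1, n + 1))
    for n in range(1, 12))
```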
Let us now prove that any number with extended $E$-approximations is of the form stated in the introduction. Let $P(z)$ be given by . If there is only one term in the sum, Theorems \[theo:eapprox\] and \[theoasypn\] hold and the proof extends immediately, except that ${\mathbf{E}}$ has to be replaced with ${\mathbf{E}}\cdot \log({{\overline{\mathbb Q}}{^*}})$ in §\[subsec421\] and \[subsec422\], and therefore in and . Otherwise, we apply a variant of Lemma \[lemEsomme\] to each $E$-function $E_\ell(z)$, obtaining exponential terms $e^{\alpha_{k,\ell} z }$: for each $k$ we write sufficiently many terms in the asymptotic expansion before the error term $o(1)$ (and not only the dominant one as in §\[sec:asympPn\]). Theorem \[thintrocce\] asserts that all these terms are of the same form, but now the constants $\omega$ belong to ${\bf S}$. Combining these expressions yields $$P(z) = \sum_{k=1}^K \omega_k e^{\alpha_k C(z)} U_k(z) (\log V_k(z))^{v_k}(1+o(1))$$ as $z $ tends to some point (possibly $\infty$) at which $C$ is infinite; here $U_k$, $V_k$ are algebraic functions, $v_k\in{\mathbb{N}}$, and $\omega_k\in{\bf S}$. However there is no reason why $\omega_k$ would belong to $\Gamma({\mathbb{Q}})\cdot {{\bf G}}$ in general, since it may come from non-dominant terms in the expansions of $E_\ell(z)$, due to compensations. Upon replacing $\Gamma({\mathbb{Q}})\cdot {{\bf G}}$ with ${\bf S}$ (and ${\mathbf{E}}$ with ${\mathbf{E}}\cdot \log({{\overline{\mathbb Q}}{^*}})$ as above), the proof of Theorems \[theo:eapprox\] and \[theoasypn\] extends immediately.
To conclude this section, we discuss another interesting example, which was also mentioned in the introduction. It corresponds to the more general notion of extended $E$-approximations where the coefficients of the linear form are in ${\mathbf{E}}$ and not just in ${\overline{\mathbb Q}}$. Let us consider the $E$-function $F(z^2)=\sum_{n=0}^{\infty} z^{2n}/n!^2$. It is a solution of an $E$-operator $L$ of order $2$ with another solution of the form $G(z^2)+\log(z^2)F(z^2)$ where $G(z^2)=-2\sum_{n=0}^{\infty}\frac{1+\frac 12+\cdots +\frac 1n}{n!^2}z^{2n}$ is an $E$-function (in accordance with André’s theory). Then, $$F(1-z)=\sum_{n=0}^{\infty} \frac{(1-z)^n}{n!^2}=\sum_{k=0}^{\infty} \frac{A_k}{k!} z^k$$ with $$\label{eq:Ak}
A_k=(-1)^k \sum_{n=0}^{\infty} \frac{1}{n!(n+k)!}.$$ It is a remarkable (and known) fact that the sequence $A_k$ satisfies the recurrence relation $A_{k+1}=kA_k+A_{k-1}$, $A_0=F(1), A_1=-F'(1)$. This can be readily checked. It follows that $A_{k}=V_kF(1)-U_kF'(1)$ where the sequences of integers $U_k, V_k$ are solutions of the same recurrence.
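A short numerical check of these facts (an illustrative sketch; the series defining $A_k$ is truncated, which is harmless since its terms decay factorially):

```python
from math import factorial

def S(k, terms=25):
    # S_k = sum_{n>=0} 1/(n!(n+k)!); the A_k above equals (-1)^k S_k
    return sum(1.0 / (factorial(n) * factorial(n + k)) for n in range(terms))

A = [(-1) ** k * S(k) for k in range(12)]
F1, dF1 = S(0), S(1)      # F(1) and F'(1), since F'(1) = sum_n n/n!^2 = S_1

# integer solutions of the same recurrence X_{k+1} = k X_k + X_{k-1}
U, V = [0, 1], [1, 0]
for k in range(1, 11):
    U.append(k * U[k] + U[k - 1])
    V.append(k * V[k] + V[k - 1])
```

The assertions below confirm the recurrence for $A_k$, the relation $A_k=V_kF(1)-U_kF'(1)$, and the rapid convergence of $U_k/V_k$ to $F(1)/F'(1)$.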
Hence, the sequence $U_k/V_k$ is the sequence of convergents to $F(1)/F'(1)$ whose continued fraction is $[1;2,3,4,\ldots ]$. Moreover, we have $$\sum_{k=0}^{\infty} \frac{U_k}{k!}z^k = aF(1-z)+bG(1-z) + b \log(1-z) F(1-z)$$ $$\sum_{k=0}^{\infty} \frac{V_k}{k!}z^k = cF(1-z)+dG(1-z) + d \log(1-z) F(1-z)$$ for some constants $a,b,c,d$, because both generating functions are solutions of an operator of order $2$ obtained from $L$ by changing $z$ to $\sqrt{1-z}$. The conditions $V_0=1, U_0=0, V_1=0,U_1=1$ and $A_{k}=V_kF(1)-U_kF'(1)$ translate into a linear system in $a,b,c,d$ with solutions given by $$\begin{aligned}
a&=-\frac{g}{gf'-f^2-fg'}\in {\mathbf{E}}, \qquad b=\frac{f}{gf'-f^2-fg'} \in {\mathbf{E}},
\\
c&=-\frac{f+g'}{gf'-f^2-fg'} \in {\mathbf{E}}, \qquad d=\frac{f'}{gf'-f^2-fg'} \in{\mathbf{E}},\end{aligned}$$ where $f=F(1), f'=F'(1), g=G(1), g'=G'(1)$. We observe that $gf'-f^2-fg' \in{\overline{\mathbb Q}}^*$ because it is twice the value at $z=1$ of the Wronskian built on the linearly independent solutions $F(z^2)$ and $G(z^2)+\log(z^2)F(z^2)$. It follows that $U_k/V_k$ are extended $E$-approximations to the number $F(1)/F'(1)$ with “coefficients” in ${\mathbf{E}}$, but not in ${\overline{\mathbb Q}}$ (because the number $f$ was proved to be transcendental by Siegel).
[1]{}\[sec:biblio\]
Y. André, [*Séries Gevrey de type arithmétique $I$. Théorèmes de pureté et de dualité*]{}, Annals of Math. [**151**]{} (2000), 705–740.
Y. André, [*Une introduction aux motifs (motifs purs, motifs mixtes, périodes)*]{}, Panoramas et Synthèses [**17**]{} (2004), Soc. Math. France, Paris.
G. E. Andrews, R. A. Askey and R. Roy, [*Special Functions*]{}, The Encyclopaedia of Mathematics and its Applications, vol. 71, (G.-C. Rota, ed.), Cambridge University Press, Cambridge, 1999.
A. I. Aptekarev (editor), [*Rational approximants for Euler constant and recurrence relations*]{}, Sovremennye Problemy Matematiki [**9**]{} (“Current Problems in Mathematics”), MIAN (Steklov Institute), Moscow, 2007.
F. Beukers, [*Algebraic values of G-functions*]{}, J. Reine Angew. Math. [**434**]{} (1993), 45–65.
F. Beukers, [*A refined version of the Siegel-Shidlovskii theorem*]{}, Annals of Math. [**163**]{} (2006), no. 1, 369–379.
L. Comtet, [*Analyse combinatoire*]{}, tome 1, Presses Univ. de France, Coll. Le Mathématicien, 1970.
V. Ditkine and A. Proudnikov, [*Calcul Opérationnel*]{}, Editions Mir, 1979.
S. Fischler and T. Rivoal, [*On the values of $G$-functions*]{}, preprint arxiv 1103.6022 \[math.NT\], Commentarii Math. Helv., to appear.
P. Flajolet and R. Sedgewick, [*Analytic combinatorics*]{}, Cambridge University Press, 2009.
Kh. Hessami-Pilehrood and T. Hessami-Pilehrood, [*Rational approximations to values of Bell polynomials at points involving Euler’s constant and zeta values*]{}, J. Aust. Math. Soc. [**92**]{} (2012), no. 1, 71–98.
Kh. Hessami-Pilehrood and T. Hessami-Pilehrood, [*On a continued fraction expansion for Euler’s constant*]{}, J. Number Theory [**133**]{} (2013), no. 2, 769–786.
M. Kontsevich and D. Zagier, [*Periods*]{}, in: Mathematics Unlimited – 2001 and beyond, Springer, 2001, 771–808.
J. Lagarias, [*Euler’s constant: Euler’s work and modern developments*]{}, Bull. Amer. Math. Soc. [**50**]{} (2013), 527–628.
M. Loday-Richaud, [*Séries formelles provenant de systèmes différentiels linéaires méromorphes*]{}, in: Séries divergentes et procédés de resommation, Journées X-UPS, 1991, pp. 69–100.
J. P. Ramis, [*Séries Divergentes et Théories Asymptotiques*]{}, Panoramas et Synthèses [**21**]{} (1993), Soc. Math. France, Paris.
T. Rivoal, [*Rational approximations for values of derivatives of the Gamma function*]{}, Trans. Amer. Math. Soc. [**361**]{} (2009), 6115–6149.
T. Rivoal, [*Approximations rationnelles des valeurs de la fonction Gamma aux rationnels*]{}, J. Number Theory [**130**]{} (2010), 944–955.
T. Rivoal, [*On the arithmetic nature of the values of the Gamma function, Euler’s constant and Gompertz’s constant*]{}, Michigan Math. Journal [**61**]{} (2012), 239–254.
A. B. Shidlovskii, [*Transcendental Numbers*]{}, de Gruyter Studies in Mathematics [**12**]{}, 1989.
M. Waldschmidt, [*Transcendance de périodes : état des connaissances*]{}, Proceedings of the Tunisian Mathematical Society [**11**]{} (2007), 89–116.
E. M. Wright, [*On the coefficients of power series having exponential singularities (second paper)*]{}, J. Lond. Math. Soc. [**24**]{} (1949), 304–309.
S. Fischler, Équipe d’Arithmétique et de Géométrie Algébrique, Université Paris-Sud, Bâtiment 425, 91405 Orsay Cedex, France
T. Rivoal, Institut Fourier, CNRS et Université Grenoble 1, 100 rue des maths, BP 74, 38402 St Martin d’Hères Cedex, France
---
abstract: 'Densifying the network and deploying more antennas at each access point are two principal ways to boost the capacity of wireless networks. However, due to the complicated distributions of random signal and interference channel gains, largely induced by various space-time processing techniques, it is highly challenging to quantitatively characterize the performance of dense multi-antenna networks. In this paper, using tools from stochastic geometry, a tractable framework is proposed for the analytical evaluation of such networks. The major result is an innovative representation of the coverage probability as an induced $\ell_1$-norm of a Toeplitz matrix. This compact representation incorporates many existing analytical results on single- and multi-antenna networks as special cases, and its evaluation is almost as simple as the single-antenna case with Rayleigh fading. To illustrate its effectiveness, we apply the proposed framework to investigate two kinds of prevalent dense wireless networks, i.e., physical layer security aware networks and millimeter-wave networks. In both examples, in addition to tractable analytical results of relevant performance metrics, insightful design guidelines are also analytically obtained.'
author:
- '[^1]'
bibliography:
- 'bare\_conf.bib'
title: 'A Tractable Framework for Performance Analysis of Dense Multi-Antenna Networks'
---
Introduction
============
To cope with the ever-increasing mobile data traffic, there is a tremendous demand for boosting the capacity of wireless networks. One promising way is to exploit the spatial domain resources by deploying more antennas at transceivers, especially at the base station (BS) side, e.g., via the recently emerged “Massive MIMO” technique [@6375940]. Another effective way to increase the network capacity is via network densification [@7010535], which can significantly improve the area spectral efficiency (ASE). However, designing and evaluating dense multi-antenna networks is a highly challenging task, which may hinder their wide deployment.
The main difficulty to analytically characterize the network-level performance comes from the complicated signal and interference distributions, which depend on the applied multi-antenna transmission strategy, as well as the channel model. Previous studies have revealed that the gamma distribution is typically encountered when evaluating various multi-antenna systems. For example, it was shown in [@4712724; @6775036] that with Rayleigh fading the channel gain for the information signal is gamma distributed under different multi-antenna transmission techniques, e.g., zero forcing (ZF) and maximal ratio transmission (MRT) beamforming. For more general multi-antenna transmission strategies, gamma distribution was shown to be an accurate approximation of the channel gain [@5953530]. Furthermore, Nakagami fading will generally lead to a gamma distributed channel gain. While existing results are mainly for the Rayleigh fading scenario, i.e., with exponentially distributed channel gains, an analytical framework that can effectively handle gamma distributed channel gains is highly desirable for studying dense multi-antenna networks. On the other hand, with network densification, the distribution of the aggregated interference becomes intricate, which brings additional challenges to the performance evaluation. A random network model based on Poisson point processes (PPPs) has been adopted extensively to model the dense BS deployments. With the help of stochastic geometry, this model turns out to be tractable and can effectively characterize the aggregated interference [@6042301].
There have been some attempts to analytically evaluate multi-antenna wireless networks based on the random network model [@4712724; @5673756; @5351444; @6932503]. Taylor expansion was used in [@4712724] for approximating the interference power distribution in ad hoc networks. Analytical expressions provided in [@5673756; @5351444] were in complicated forms involving many special functions, e.g., Bell polynomials and beta functions. A more recent work [@6932503] adopted an upper bound for the cumulative distribution function (cdf) to handle the gamma distributed channel gains, which led to a closed-form expression for the coverage probability. Unfortunately, the available results, typically with approximations, are all in complicated forms, which cannot yield further insights for network design and optimization.
Recently, some promising results were produced in our previous works [@6775036; @7038201; @7412737], where closed-form expressions were derived for various performance metrics in multi-antenna heterogeneous networks. These results disclosed the potential of yielding a systematic way to analyze multi-antenna networks, and provided design guidelines for some specific network models and multi-antenna transmission techniques. In this paper, we shall extend the analyses in [@6042301; @6775036; @7038201; @7412737] to a more general framework, which is applicable to networks where the signal channel gain is assumed to be gamma distributed while the interference channel gains are with arbitrary distributions. In particular, the recursive relations between the $n$-th derivatives of the Laplace transform are exploited, based on which a novel representation of the coverage probability is derived, i.e., an induced $\ell_1$-norm of a Toeplitz matrix representation. With the proposed framework, the complexity of evaluating dense multi-antenna networks becomes comparable to the single-antenna case. Moreover, many analytical techniques developed for conventional single-antenna networks can be easily transplanted to the general multi-antenna setting.
To illustrate its effectiveness, the proposed framework is then applied to two example networks, i.e., physical layer security aware networks and millimeter-wave (mmWave) networks, for which fewer analytical results are available. With the new analytical tool, we are able to derive a new set of tractable results for these networks. With these results, we also investigate two critical design problems, i.e., the trade-off between the jamming and interference nulling in security aware networks, as well as the impact of the array size in mmWave networks.
A Unified Analytical Framework {#II}
==============================
Analytical Framework for Multi-Antenna Networks
-----------------------------------------------
Consider a dense multi-antenna wireless network, where the spatial locations of transmitters are modeled as a homogeneous PPP, denoted as $\Phi$ in $\mathbb{R}^2$ with density $\lambda_\mathrm{t}$. Each transmitter communicates with multiple single-antenna receivers with fixed transmit power. We focus on the performance analysis of the typical receiver at the origin, and the signal-to-interference-plus-noise ratio (SINR) is given by $$\label{SINR1}
\mathrm{SINR}=\dfrac{g_{x_0} r_0^{-\alpha}}{\sigma_\mathrm{n}^2 + \sum_{x\in\Phi^\prime}g_x \|x\|^{-\alpha}},$$ where $r_0=\Vert x_0\Vert$ is the distance from the typical receiver to its associated transmitter located at $x_0$, with the probability density function (pdf) $f_{r_0}(r)$. The noise power is normalized, depending on the system setting, and is denoted as $\sigma_\mathrm{n}^2$. The channel gains for the information signal and interference from the transmitter located at $x$ are denoted as $g_{x_0}$ and $g_{x}$, respectively. The signal channel gain $g_{x_0}$ is gamma distributed, i.e., $g_{x_0}\sim\mathrm{Gamma}(M,\theta)$, where $M$ and $\theta$ are shape and scale parameters of the gamma distribution. We assume $(g_x)_{x\in\Phi^\prime}$ is a family of independent and non-negative random variables with arbitrary distributions. The locations of the concerned interfering transmitters are denoted as $\Phi^\prime$, which can be composed of any PPP conditional on $x_0$. In particular, $\Phi^\prime$ can be a union of several different types of interferers that are distributed according to different PPPs $\Phi_j^\prime$, and each type of interferer has different densities $\lambda_{\mathrm{t},j}$ and interference channel gains $g_{x,j}$.
We focus on the coverage probability, defined as $$\label{coveragedef}
p_\mathrm{c}(\gamma)=\mathbb{P}(\mathrm{SINR}>\gamma),$$ where $\gamma$ denotes the SINR threshold. Many other typical network performance metrics, e.g., ASE, average throughput, and energy efficiency, can be analyzed based on the results for the coverage probability [@6775036; @7038201; @7412737; @haenggi2012stochastic].
In this section, we will provide a unified analytical framework for dense multi-antenna wireless networks. First, the coverage probability defined in can be written as $$p_\mathrm{c}(\gamma)=\mathbb{P}\left[g_{x_0}>\gamma r_0^\alpha\left(\sigma_\mathrm{n}^2 +I\right)\right],\label{coverageprob}
$$ where $I\triangleq\sum_{x\in\Phi^\prime}g_x \|x\|^{-\alpha}$. As mentioned before, one main difficulty of the analysis comes from the gamma distributed random variable $g_{x_0}$. Different from previous works that adopted approximations [@4712724; @6932503], in this paper, we will derive a compact and exact expression for this probability. According to the cdf of gamma distribution, the coverage probability is firstly rewritten as $$\begin{aligned}
p_\mathrm{c}(\gamma)&=\mathbb{E}_{r_0}\left\{\sum_{n=0}^{M-1}\frac{(\gamma r_0^\alpha/\theta)^n}{n!}\mathbb{E}_I\left[(\sigma_\mathrm{n}^2+I)^ne^{-\frac{\gamma r_0^\alpha}{\theta} (\sigma_\mathrm{n}^2+I)}\right]\right\}\nonumber\\
&=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}\frac{(-s)^n}{n!}\mathcal{L}^{(n)}(s)\right],\label{eq4}\end{aligned}$$ where $s\triangleq\gamma r_0^\alpha/\theta$, $\mathcal{L}(s)=e^{-s\sigma_\mathrm{n}^2}\mathbb{E}_I\left[e^{-sI}\right]$ is the Laplace transform of noise and interference. The notation $\mathcal{L}^{(n)}(s)$ stands for the $n$-th derivative of $\mathcal{L}(s)$. According to the probability generating functional (PGFL) of PPP, the Laplace transform $\mathcal{L}(s)$ can be expressed in a general exponential form as $$\begin{split}
\mathcal{L}(s)=&\,\exp\Bigg\{-s\sigma_\mathrm{n}^2-\sum_{j}\lambda_{\mathrm{t},j}\times\\
&\,\int_{\mathbb{R}^2}\left(1-\mathbb{E}_{g_{x,j}}[\exp(-sg_{x,j}\Vert x\Vert^{-\alpha})]\right)\mathrm{d}x\Bigg\}\\
=&\,\exp\{\eta(s)\},\label{Ls3}
\end{split}$$ where $\eta(s)$ is the exponent of the Laplace transform $\mathcal{L}(s)$. First, the recursive relations between $n$-th derivatives of the Laplace transform are illustrated in the following lemma.
\[lem1\] Define $x_n=\frac{(-s)^n}{n!}\mathcal{L}^{(n)}(s)$. Then $$x_n=\sum_{i=0}^{n-1}\frac{n-i}{n}q_{n-i}x_i,\quad q_k=\frac{(-s)^k}{k!}\eta^{(k)}(s).\label{nd}$$
See Appendix \[AA\].
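To see the recursion at work, consider the (assumed, purely illustrative) noise-only exponent $\eta(s)=-as$, for which $\mathcal{L}(s)=e^{-as}$ and the derivatives are available in closed form, so the output of the recursion can be compared term by term:

```python
from math import exp, factorial

def x_from_recursion(q, M):
    # Lemma: x_0 = L(s) = e^{eta(s)} = e^{q_0}; x_n = sum_{i<n} ((n-i)/n) q_{n-i} x_i
    x = [exp(q[0])]
    for n in range(1, M):
        x.append(sum((n - i) / n * q[n - i] * x[i] for i in range(n)))
    return x

# toy check: eta(s) = -a*s gives L(s) = e^{-a s}, hence
# x_n = (-s)^n/n! * L^{(n)}(s) = (a s)^n e^{-a s} / n!  in closed form
a, s, M = 0.7, 1.3, 8
q = [-a * s, a * s] + [0.0] * (M - 2)   # q_0 = eta(s), q_1 = (-s)eta'(s), q_k = 0 for k >= 2
x = x_from_recursion(q, M)
closed = [(a * s) ** n / factorial(n) * exp(-a * s) for n in range(M)]
```

Here $\sum_{n<M} x_n$ is exactly the coverage probability of a noise-limited link with a $\mathrm{Gamma}(M,\theta)$ signal gain.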
The calculation of the $n$-th derivatives commonly appears in the performance analysis of multi-antenna systems. However, direct computation leads to messy expressions [@5351444]. In contrast, the recursive relations in Lemma \[lem1\] enable us to express the $n$-th derivatives of $\mathcal{L}(s)$ in an elegant way, which leads to a compact matrix form of the coverage probability, as given in the following theorem.
($\ell_1$-Toeplitz Matrix Representation of the Coverage Probability)\[th1\] The coverage probability is given by $$p_\mathrm{c}(\gamma)=
\int_0^\infty f_{r_0}(r)\left\Vert\exp\left\{\mathbf{Q}_M(r)\right\}\right\Vert_1\mathrm{d}r,\label{frameexpr}$$ where $\mathbf{Q}_M$ is an $M\times M$ lower triangular Toeplitz matrix $$\mathbf{Q}_M=\left[{\begin{IEEEeqnarraybox*}[][c]{,c/c/c/c/c,}
q_0&{}&{}&{}&{}\\
q_1&q_0&{}&{}&{}\\
q_2&q_1&q_0&{}&{}\\
\vdots &{}&{}& \ddots &{}\\
q_{M-1}&\cdots&q_2& q_1 &q_0
\end{IEEEeqnarraybox*}} \right].\label{topmatrix}$$ The nonzero entries of $\mathbf{Q}_M$ are determined by .
See Appendix \[AA\].
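The algebra behind Theorem \[th1\] can be checked independently of any particular network model: since $\mathbf{Q}_M=q_0\mathbf{I}+\mathbf{N}$ with $\mathbf{N}$ strictly lower triangular (hence nilpotent), $\exp\{\mathbf{Q}_M\}$ is an exact finite sum, and its induced $\ell_1$-norm must coincide with $\sum_{n=0}^{M-1}x_n$ obtained from the recursion of Lemma \[lem1\]. The sketch below uses an assumed toy exponent $\eta(s)=-cs^{\delta}$ (for which all $q_k$ with $k\ge1$ are positive, so the first column of $\exp\{\mathbf{Q}_M\}$ dominates the norm):

```python
from math import exp

def q_from_eta(c, delta, s, M):
    # assumed toy exponent eta(s) = -c s^delta (heavy-tailed interference);
    # q_k = (-s)^k/k! * eta^{(k)}(s) = -c s^delta (-1)^k binom(delta, k)
    q, binom = [], 1.0
    for k in range(M):
        q.append(-c * s ** delta * (-1) ** k * binom)
        binom *= (delta - k) / (k + 1)      # binom(delta, k) -> binom(delta, k+1)
    return q

def exp_lower_toeplitz(q):
    # Q_M = q_0 I + N with N strictly lower triangular (nilpotent), so
    # exp(Q_M) = e^{q_0} * sum_{m=0}^{M-1} N^m / m!  -- an exact finite sum
    M = len(q)
    N = [[q[i - j] if i > j else 0.0 for j in range(M)] for i in range(M)]
    term = [[float(i == j) for j in range(M)] for i in range(M)]   # N^0 = I
    total = [row[:] for row in term]
    for m in range(1, M):
        term = [[sum(term[i][t] * N[t][j] for t in range(M)) / m for j in range(M)]
                for i in range(M)]
        total = [[total[i][j] + term[i][j] for j in range(M)] for i in range(M)]
    e0 = exp(q[0])
    return [[e0 * v for v in row] for row in total]

def l1_induced(A):
    # induced l1 norm = maximum absolute column sum
    return max(sum(abs(row[j]) for row in A) for j in range(len(A)))

M = 6
q = q_from_eta(c=1.0, delta=0.8, s=0.9, M=M)
pc_matrix = l1_induced(exp_lower_toeplitz(q))

# cross-check with the recursion of the lemma
x = [exp(q[0])]
for n in range(1, M):
    x.append(sum((n - i) / n * q[n - i] * x[i] for i in range(n)))
pc_recursion = sum(x)
```

Both routes give the same number, which illustrates why evaluating the $\ell_1$-Toeplitz representation costs little more than evaluating $\mathcal{L}(s)$ itself.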
--------------------------------------------------- --------------------------- --------------------------------------------------- ------------------------------------------------------------------ -----------------------------------------------------------------------------------------
**Network**                                         **Transmission scheme**     **Signal channel gain**                             **Interference channel gain**                                      **Point process of the interfering transmitters** $\Phi^\prime$
**Single-antenna networks** [@6042301]              --                          $\mathrm{Gamma}(1,1)$                               $\mathrm{Exp}(1)$                                                  $\mathcal{P}(r_0,\infty)$ with density $\lambda_\mathrm{t}$
**Throughput and energy efficiency** [@6775036]     MRT                         $\mathrm{Gamma}(N_\mathrm{t},1)$                    $\mathrm{Exp}(1)$                                                  $\mathcal{P}(r_0,\infty)$ with density $\lambda_\mathrm{t}$
**Interference coordination** [@7038201]            ZF beamforming              $\mathrm{Gamma}(\max(N_\mathrm{t}-K_{x_0},1),1)$    $g_{x,1}\sim\mathrm{Exp}(1)$, $g_{x,2}\sim\mathrm{Exp}(1)$         $\Phi^\prime_1$: $\mathcal{P}(r_0,\mu r_0)$ with density $\varepsilon\lambda_\mathrm{t}$; $\Phi^\prime_2$: $\mathcal{P}(\mu r_0,\infty)$ with density $\lambda_\mathrm{t}$
**$K$-tier multiuser MIMO HetNets** [@7412737]      SDMA                        $\mathrm{Gamma}(M_k-U_k+1,1)$                       $g_{x,j}\sim\mathrm{Gamma}(U_j,1)$                                 $\Phi^\prime_j$: $\mathcal{P}_j(r_j,\infty)$ with density $\lambda_{\mathrm{t},j}$
**Physical layer security aware networks**          Jamming & ZF beamforming    $\mathrm{Gamma}(N_{x_0},1)$                         $g_{x,1}\sim\mathrm{Gamma}(N_x,1)$, $g_{x,2}\sim\mathrm{Exp}(1)$   See Section \[IVA\]
**Millimeter-wave networks**                        Analog beamforming          $\mathrm{Gamma}(M,1/M)$                             --                                                                 See Section \[IVB\]
--------------------------------------------------- --------------------------- --------------------------------------------------- ------------------------------------------------------------------ -----------------------------------------------------------------------------------------

\[table1\]
$\mathcal{P}(a,b)$ denotes a PPP within a ring with inner radius $a$ and outer radius $b$.
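Such ring processes are also easy to simulate for Monte Carlo validation of the analytical results. A minimal sampling sketch; the density and radii are placeholder values:

```python
import numpy as np

def sample_ppp_annulus(lam, a, b, rng):
    """Homogeneous PPP of density lam on the ring a <= r <= b."""
    area = np.pi * (b**2 - a**2)
    n = rng.poisson(lam * area)
    # Inverse-CDF sampling of the radius makes points uniform in area:
    r = np.sqrt(rng.uniform(a**2, b**2, n))
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    return r * np.cos(theta), r * np.sin(theta)

rng = np.random.default_rng(0)
x, y = sample_ppp_annulus(lam=1e-2, a=1.0, b=100.0, rng=rng)  # placeholders
```
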
$$\label{exampleeq}
q_{k,i}=\frac{1}{P_k^\delta B_k^\delta}\sum_{j=1}^K\lambda_jP_j^\delta B_j^\delta\frac{\Gamma(U_j+i)}{\Gamma(U_j)\Gamma(i+1)}\frac{\delta}{i-\delta}\left(\frac{U_kB_k}{U_jB_j}\gamma\right)^i\times {}_2F_1\left(i-\delta,U_j+i;i+1-\delta;-\frac{U_kB_k}{U_jB_j}\gamma\right)$$
Compared to the complicated approximations in [@4712724; @5673756; @5351444; @6932503], the $\ell_1$-Toeplitz matrix representation in provides a much more compact form for the coverage probability. More importantly, it enables us to leverage various powerful tools from linear algebra, especially some nice properties of the lower triangular Toeplitz matrix, to provide insightful design guidelines for further network optimization. Such properties in the setting of small cell networks can be found in [@6775036].
Single-Antenna vs. Multi-Antenna Networks
-----------------------------------------
The proposed framework incorporates the single-antenna network [@6042301] as a simple special case. Assuming Rayleigh fading, the signal channel gain is exponentially distributed in the single-antenna case, i.e., $M=\theta=1$. Then, the expression in Theorem \[th1\] can be simplified as $$p_\mathrm{c}(\gamma)=
\int_0^\infty f_{r_0}(r)\mathcal{L}(s)\mathrm{d}r,$$ which is exactly the same as the classic result in [@6042301 Equation 2]. Note that, for single-antenna networks, the main task in deriving the coverage probability is to manipulate the Laplace transform $\mathcal{L}(s)$. It has been shown in [@6042301] that, under various assumptions on the interference channel gain $g$ and different point processes of the concerned interfering transmitters $\Phi^\prime$, $\mathcal{L}(s)$ (equivalently $\eta(s)$) can be derived in closed form. This also makes it possible to express the coverage probability in a closed form or as a simple integral expression.
When it comes to multi-antenna networks, Theorem \[th1\] is compatible with any specific form of $\eta(s)$ as long as the Laplace transform can be expressed as $\mathcal{L}(s)=\exp\{\eta(s)\}$. Furthermore, with the gamma distributed signal channel gain, the only additional task compared to single-antenna networks is to calculate $M-1$ derivatives of $\eta(s)$, which introduces little computational complexity and thus maintains tractability. This means that many manipulation tricks and steps developed for single-antenna networks carry over to the multi-antenna case. The tractability and effectiveness of the proposed framework will first be illustrated in Section \[IIC\] with some existing results as special cases, and then further demonstrated in Section \[III\] by developing new analytical results.
Examples {#IIC}
--------
When applying Theorem \[th1\] to specific multi-antenna networks, the only parameters to be determined are the nonzero entries $\{q_i\}_{i=0}^{M-1}$ in the matrix $\mathbf{Q}_M$. Thus, there are two main steps when applying the proposed framework:
- First, we derive the Laplace transform $\mathcal{L}(s)$ for the given distribution of $g$ and the specific point process for the interfering transmitters $\Phi^\prime$.
- Then, we calculate the $n$-th ($1\le n\le M-1$) derivatives of the exponent $\eta(s)$ of the Laplace transform to compose $\{q_i\}_{i=0}^{M-1}$ in the matrix $\mathbf{Q}_M$ according to .
The following example provides a closed-form expression for $\{q_i\}_{i=0}^{M-1}$ (and hence for $p_c(\gamma)$) in a general multiuser MIMO HetNet.
For a general $K$-tier multiuser MIMO HetNet with SDMA, as considered in [@7412737], the coverage probability can be expressed in a closed form as $$p_c(\gamma)=\sum_{k=1}^K\left\Vert\mathbf{Q}^{-1}_{M_k-U_k+1}\right\Vert_1.$$ The corresponding $\{q_{k,i}\}_{i=1}^{M_k-U_k}$ are provided in , where $\delta=\frac{2}{\alpha}$ and ${}_2 F_1 \left(a,b;c;z\right)$ is the Gauss hypergeometric function [@zwillinger2014table].
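As an illustration, entries of this form can be evaluated directly with a standard Gauss hypergeometric routine. The sketch below specializes to a single tier, where the transmit-power and bias factors $P^\delta B^\delta$ cancel; all numerical parameters are placeholders:

```python
import numpy as np
from scipy.special import gamma as Gamma, hyp2f1

def q_entry(i, U, lam, gamma_thr, delta):
    """Off-diagonal Toeplitz entry q_i for a single tier (K = 1), where
    the power and bias factors P^delta B^delta cancel."""
    comb = Gamma(U + i) / (Gamma(U) * Gamma(i + 1))
    return (lam * comb * delta / (i - delta) * gamma_thr**i
            * hyp2f1(i - delta, U + i, i + 1 - delta, -gamma_thr))

alpha = 4.0                      # path loss exponent (placeholder)
delta = 2.0 / alpha
vals = [q_entry(i, U=2, lam=1e-3, gamma_thr=1.0, delta=delta)
        for i in range(1, 4)]
```
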
Thanks to the proposed framework, this result is in a much more compact form than existing ones, and thus is amenable to further system analysis and optimization. Moreover, it applies to general multiuser MIMO HetNets.
As mentioned before, Theorem \[th1\] is a generalization of our previous results in [@6775036; @7038201; @7412737]. The corresponding distributions of the channel gains and the point processes of interfering transmitters are listed in Table \[table1\][^2]. By calculating the $n$-th derivatives of $\eta(s)$ and substituting them into the Toeplitz matrix, Theorem \[th1\] specializes to the analytical results therein. Such tractable expressions yield many system design insights, as specified below.
- In [@6775036], it was analytically shown that the network throughput scales with the BS density first linearly, then logarithmically, and finally converges to a constant. The energy efficiency first increases and then decreases as the BS density or antenna size grows.
- In [@7038201], a tractable coverage probability expression was derived with the proposed user-centric intercell interference nulling strategy, based on which the optimal intercell interference range was determined to further improve the network performance.
- Trade-off between ASE and link reliability in multiuser MIMO HetNets was studied in [@7412737]. Analytical results for ASE and coverage probability were given, which were incorporated in an efficient algorithm to find the optimal BS density that achieves the maximum ASE while guaranteeing a certain link reliability.
Applications of the Proposed Framework {#III}
======================================
In this section, we apply the proposed analytical framework to two newly emerging paradigms of multi-antenna networks. With more and more mobile devices connected to the network, information security becomes a primary concern in dense wireless networks. Meanwhile, the reduced coverage requirement of dense networks makes it possible to exploit the abundant bandwidth at mmWave bands. We will analytically investigate physical layer security aware networks and mmWave networks, in both of which multi-antenna transmissions play a critical role.
Physical Layer Security Aware Networks {#IVA}
--------------------------------------
While jamming is an effective way to enhance the network secrecy performance [@6587514], interference nulling is important to suppress co-channel interference in dense networks [@7038201], both of which rely on multi-antenna transmissions. In this part, we will analytically find the optimal balance between jamming and interference nulling.
### Network Model
We consider an ad hoc network consisting of legitimate nodes and eavesdroppers. The legitimate transmitters are modeled as a homogeneous PPP $\Phi$ with density $\lambda_{\rm t}$. Each transmitter is equipped with $N_\mathrm{t}$ antennas and has an intended receiver at a fixed distance $r_0$ in a random direction. The passive eavesdroppers also form a homogeneous PPP, with density $\lambda_\mathrm{e}$, which is independent of $\Phi$.
### Joint Jamming and Interference Nulling
We propose a joint jamming and interference nulling scheme. To avoid strong interference and possibly strong jamming signals from nearby transmitters, each legitimate receiver requests interference nulling from the interfering transmitters within a distance $d_0$, called the *coordination range*. Denote the number of requests received by the transmitter located at $x$ by $K_x$, which is random due to the random node locations, and it is possible that $K_x \geq N_\mathrm{t}$. Due to the limited spatial degrees of freedom, each transmitter can handle at most $N_\mathrm{t}-1$ requests. If a transmitter receives $K_x \geq N_\mathrm{t}$ requests, we assume it randomly chooses $N_\mathrm{t}-1$ receivers for interference suppression. After determining its interference nulling targets, each transmitter performs jamming-aided beamforming in the subspace orthogonal to its intended channel and to the channels of the $\min\left(K_x,N_\mathrm{t}-1\right)$ chosen receivers. Therefore, the jamming signal sent by the transmitter affects neither its own receiver nor the other $\min\left(K_x,N_\mathrm{t}-1\right)$ receivers, but it degrades the quality of service of the eavesdroppers and of all the other receivers. We denote by $N_x = N_\mathrm{t} - \min\left(K_x,N_\mathrm{t}-1\right)$ the total number of transmitted streams. Generally, increasing the coordination range $d_0$ suppresses more nearby interference, but fewer jamming signals are transmitted, which leads to a trade-off between interference nulling and jamming.
### Connection Outage Probability
Consider the typical receiver at the origin, whose transmitter is located at $x_0$ and receives $K_{x_0}$ interference nulling requests. Then, based on the proposed scheme, the SIR can be written in a form similar to , and the needed parameters in the framework are listed as follows.
- Signal channel gain: $g_{x_0}\sim\mathrm{Gamma}(N_{x_0},1)$;
- Point processes of the interfering transmitters and corresponding interference channel gains:\
$\Phi^\prime_\mathrm{out}=\mathcal{P}(d_0,\infty)$ with $g_x\sim\mathrm{Gamma}(N_x,1)$;\
$\Phi^\prime_\mathrm{in}=\mathcal{P}(0,d_0)$ with $g_x\sim\mathrm{Exp}(1)$.
The connection outage probability $p_\mathrm{co}$ [@6587514], defined as the probability that the SIR of a typical receiver is below a certain threshold $\gamma_l$, is presented in the following proposition.
\[Thm:pco\] The connection outage probability of the typical legitimate receiver is given by $$\label{eq:pco_Num}
p_{{\rm co}}=
1-\sum_{N_{x_{0}}=1}^{N_\mathrm{t}}p_{N}\left(N_{x_{0}}\right)p_{{\rm co}}\left(N_{x_{0}}\right),$$ where $$p_{N}\left(n\right)=
\begin{cases}
\frac{\left(\pi d_{0}^{2}\lambda_\mathrm{t}\right)^{N_\mathrm{t}-n}}{\left(N_\mathrm{t}-n\right)!}e^{-\pi d_{0}^{2}\lambda_\mathrm{t}}, & n=2,3,\cdots,N_\mathrm{t},\\
1-\sum_{i=2}^{N_\mathrm{t}}p_{N}\left(i\right), & n=1,
\end{cases}$$ $$\label{eq:pcoN_rgeq0}
p_{{\rm co}}\left(N_{x_{0}}\right)=1-
\left\Vert \exp\left\{-\pi\lambda_\mathrm{t}d_{0}^{2}\left[\mathbf{Q}_{N_{x_{0}}}-\mathbf{I}_{N_{x_0}}\right]\right\}\right\Vert _{1}.$$ The nonzero elements of $\mathbf{Q}_{N_{x_0}}$ are given by .
$$\label{eq:qi_r0}
q_k=\sum_{n=1}^{N_\mathrm{t}}p_{N}\left(n\right) \frac{\Gamma\left(n+k\right)}{\Gamma\left(n\right)\Gamma\left(k+1\right)} \frac{\delta}{\delta-k} \left[\left(\frac{r_0}{d_0}\right)^{\alpha}\gamma_l \frac{N_{x_{0}}}{n}\right]^{k}\times{}_{2}F_{1}\left(k-\delta,k+n;k+1-\delta; -\left(\frac{r_0}{d_0}\right)^{\alpha}\gamma_l \frac{N_{x_{0}}}{n} \right)$$
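The distribution $p_N(n)$ above is a Poisson distribution of the number of nulling requests, with all overload events collapsed into $n=1$; a quick sketch to evaluate and sanity-check it (parameter values are placeholders):

```python
import math

def p_N(n, Nt, lam_t, d0):
    """p_N(n): Poisson(pi*d0^2*lam_t) nulling requests, with all
    overload events collapsed into n = 1."""
    mean = math.pi * d0**2 * lam_t
    if 2 <= n <= Nt:
        return mean**(Nt - n) / math.factorial(Nt - n) * math.exp(-mean)
    if n == 1:
        return 1.0 - sum(p_N(i, Nt, lam_t, d0) for i in range(2, Nt + 1))
    raise ValueError("n must lie in 1..Nt")

# Placeholder parameters:
probs = [p_N(n, Nt=4, lam_t=1e-2, d0=5.0) for n in range(1, 5)]
```
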
$$\label{eq29}
\hat{q}_k=\frac{2\lambda\Gamma\left(k+\frac{1}{2}\right)\Gamma(M+k)\gamma^k}{\sqrt{\pi}d(k!)^2(\alpha k-2)\Gamma(M)}
\Bigg[y_k\left(-\gamma\right)-(\pi\lambda_\mathrm{t})^2R^{2-\alpha k}\int_0^{R^2}e^{-\pi\lambda_\mathrm{t}r}r^\frac{\alpha k}{2}J_k\left(-\frac{\gamma}{R^{\alpha}}r^\frac{1}{\delta}\right)\mathrm{d}r\Bigg]$$
The proof is omitted due to space limitation.
### Secrecy Outage Probability
Consider the eavesdropper located at $z$, the received SIR of this eavesdropper is given by $$\label{eq:SIR_e_def}
{\rm SIR}_{e,z}=\frac{\frac{P_{t}}{N_{x_{0}}}\tilde{g}_{0}\left\Vert x_{0}-z\right\Vert ^{-\alpha}}{\frac{P_{t}}{N_{x_{0}}}\tilde{g}_{x_{0}}\left\Vert x_{0}-z\right\Vert ^{-\alpha}+\sum_{x\in\Phi\backslash\left\{ x_{0}\right\} }\frac{P_{t}}{N_{x}}\tilde{g}_{x}\left\Vert x-z\right\Vert ^{-\alpha}},$$ where the corresponding parameters in the framework are given as follows.
- Signal channel gain: $\tilde{g}_{0}\sim{\rm Gamma}\left(1,1\right)$;
- Point processes of the interfering transmitters and corresponding interference channel gains:\
Point $x_0$ with $\tilde{g}_{x_{0}}\sim{\rm Gamma}\left(N_{x_{0}}-1,1\right)$;\
$\Phi^\prime=\mathcal{P}(0,\infty)\backslash\{x_0\}$ with $\tilde{g}_{x}\sim{\rm Gamma}\left(N_x,1\right)$.
Secrecy outage probability is defined as the probability that the SIR of at least one eavesdropper is above a certain threshold $\gamma_e$ [@6587514]. A tight upper bound on the secrecy outage probability $p_{\rm so}$ is presented in Proposition \[Thm:pso\], as the exact expression of $p_{\rm so}$ is intractable. The proof is omitted due to space limitation.
\[Thm:pso\] The secrecy outage probability $p_{\rm so}$ is upper bounded as $$\label{eq:pso_UB}
p_{\rm so} \leq 1- \sum_{N_{x_{0}}=1}^{N_\mathrm{t}} p_{N}\left(N_{x_{0}}\right)e^{ -\frac{\lambda_\mathrm{e}}{\lambda_\mathrm{t}} \frac{\left(1+{\gamma}_{e}\right)^{1-N_{x_{0}}} {\gamma}_{e}^{-\delta}N_{x_0}^{-\delta}} {\Gamma\left(1-\delta\right)\sum_{n=1}^{N_\mathrm{t}}p_{N}\left(n\right) \frac{\Gamma\left(n+\delta\right)}{\Gamma\left(n\right)n^{\delta}}} }.$$
The secrecy transmission capacity is adopted as the main performance metric, which is defined as the achievable rate of confidential messages per unit area under given connection and secrecy outage constraints. To obtain the secrecy transmission capacity $C_s$ for a fixed $d_0$, we first find the SIR threshold $\gamma_l^{\rm th}$ satisfying the equation $p_{\rm co}=\mu$ using , and then find the SIR threshold $\gamma_e^\mathrm{th}$ satisfying the equation $p_{\rm so}=\epsilon$ using . Thus, the secrecy transmission capacity can be written as $$C_s \left(d_0\right) = \left(1-\mu\right)\lambda_\mathrm{t}\left[\log_{2}\left(\frac{1+\gamma_{l}^{{\rm th}}}{1+\gamma_{e}^{{\rm th}}}\right)\right]^{+}.$$ The design goal is to find the optimal $d_0$ that maximizes the secrecy transmission capacity.
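The threshold-inversion step can be carried out with any scalar root-finder, since both outage probabilities are monotone in their thresholds. A sketch with a placeholder outage curve standing in for the analytical expression:

```python
import numpy as np
from scipy.optimize import brentq

mu = 0.1  # connection outage constraint

def p_co(gamma_l):
    """Placeholder outage curve; any monotone map of the threshold onto
    (0, 1) works for this inversion step."""
    return 1.0 - np.exp(-0.3 * gamma_l)

# Invert p_co(gamma) = mu on a bracketing interval:
gamma_l_th = brentq(lambda g: p_co(g) - mu, 1e-9, 1e3)

def C_s(g_l, g_e, lam_t):
    """Secrecy transmission capacity with the [x]^+ truncation."""
    return (1.0 - mu) * lam_t * max(np.log2((1.0 + g_l) / (1.0 + g_e)), 0.0)
```
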
![The secrecy transmission capacity with different $d_0$, with $\lambda_\mathrm{t}=10^{-2}$ m$^{-2}$, $\lambda_\mathrm{e}=10^{-3}$ m$^{-2}$, $r_0=1$ m, and $\alpha=4$. The connection outage constraint is $\mu=0.1$ while the secrecy outage constraint is $\epsilon=0.01$. []{data-label="fig1"}](./fig1){height="5.5cm"}
In Fig. \[fig1\], we show $C_s \left(d_0\right)$ as a function of $d_0$ according to the derived analytical results. We find that there is an optimal $d_0$ for each curve. The reason that increasing $d_0$ from $0$ increases $C_s$ is that nearby interference is critical for the legitimate receivers; setting a protection zone for each receiver can therefore significantly improve the performance of legitimate receivers. However, if $d_0$ is too large, $C_s$ decreases: a very large $d_0$ means that each receiver requests interference nulling from many transmitters. While the performance improvement for the legitimate receiver diminishes, few degrees of freedom are left for each transmitter to send jamming noise, so the eavesdroppers experience a better received SIR. Based on the tractable expressions for the outage probabilities, we can easily find the optimal $d_0$. As $d_0=0$ corresponds to the special case without interference nulling, Fig. \[fig1\] also shows that, with the optimal $d_0$, the proposed scheme achieves significant performance gains over the scheme based only on jamming [@6587514], which underlines the importance of interference management in jamming-assisted networks.
Millimeter Wave Cellular Networks {#IVB}
---------------------------------
In mmWave networks, directional antenna arrays are used both to combat the severe path loss and to synthesize narrow beams, which differentiates mmWave networks from conventional ones. However, how directional arrays affect the network performance has not been fully understood. In this subsection, we adopt the proposed framework to investigate the role of directional antenna arrays in mmWave cellular networks.
### Network Model
We assume that mobile users are distributed as a homogeneous PPP, which is independent of $\Phi$, and each user is associated with the nearest BS. In mmWave cellular networks, one unique characteristic is the blockage effect. It has been pointed out in [@6932503] that non-line-of-sight (NLOS) signals and NLOS interference are negligible in dense mmWave networks. Hence, we focus on the case where the typical user is associated with a line-of-sight (LOS) BS and the interference stems from LOS BSs, which are distributed according to the PPP $\mathcal{P}(r_0,R)$ under the *LOS ball* blockage model [@6932503], where $R$ is the LOS radius.
### Impact of Directional Arrays
With a uniformly random single path (UR-SP) channel model and analog beamforming, the SINR at the typical user can be expressed in the same form as , with the required parameters listed as follows [@7564903].
- Signal channel gain: $g_{x_0}=\left|\rho_{0}\right|^2\sim\mathrm{Gamma}\left(M,\frac{1}{M}\right)$;
- Point process of the interfering transmitters $\Phi^\prime=\mathcal{P}(r_0,R)$ and corresponding interference channel gains: $$\label{sinsin}
g_x=\left|\rho_x\right|^2\frac{\sin^2\left(\frac{d}{\lambda}\pi N_\mathrm{t}\varphi_x\right)}{N_\mathrm{t}^2\sin^2\left(\frac{d}{\lambda}\pi \varphi_x\right)}\triangleq \left|\rho_x\right|^2 G_\mathrm{act}(\varphi_x),$$
where $\varphi_x$ are independent uniformly distributed random variables over $\left[-1,1\right]$. The array gain function in is referred to as the *actual antenna pattern*. The main difficulty in analyzing the distribution of the SINR is the complicated distribution of $g_x$. In order to obtain a tractable analytical result, we adopt an approximation of the array gain function $G_\mathrm{act}$, referred to as the *cosine antenna pattern* $$\label{approxpattern}
G_\mathrm{cos}(x)=
\begin{cases}
\cos^2\left(\frac{\pi N_\mathrm{t}}{2}x\right)&|x|\le\frac{1}{N_\mathrm{t}},\\
0&\text{otherwise}.
\end{cases}$$ With the approximated cosine antenna pattern and Theorem \[th1\], a lower bound on the coverage probability can be derived that reveals the impact of directional antenna arrays, as shown in the following proposition.
\[coro2\] The coverage probability is tightly lower bounded by $$\label{b33}
p_\mathrm{c}^{\cos}(t)\ge \left(1-e^{-\pi\lambda_\mathrm{t}R^2}\right)e^{\beta_0t}\left(1+\sum_{n=1}^{M-1}\beta_nt^n\right),$$ which is a non-decreasing concave function of the array size $N_\mathrm{t}$, and $t=\frac{1}{N_\mathrm{t}}$, $$\beta_n=
\begin{dcases}
\frac{\hat{q}_0}{1-e^{-\pi\lambda_\mathrm{t}R^2}}&n=0,\\
\frac{\left\Vert \left(\hat{\mathbf{Q}}_M-\hat{q}_0\mathbf{I}_M\right)^n\right\Vert_1}{n!\left(1-e^{-\pi\lambda_\mathrm{t}R^2}\right)}&n\ge1.
\end{dcases}$$ The nonzero entries in $\hat{\mathbf{Q}}_M$ are determined by , and $$\begin{split}
y_k(x)=&J_k(x)\left[1-e^{-\pi\lambda_\mathrm{t}R^2}\left(1+\pi\lambda_\mathrm{t}R^2\right)\right]\\
&+\mathbf{1}(k=0)\left(
\pi\lambda_\mathrm{t}R^2-1+e^{-\pi\lambda_\mathrm{t}R^2}\right).
\end{split}$$ where $$J_k\left(x\right)={}_3F_2\left(k+\frac{1}{2},k-\delta,k+M;k+1,k+1-\delta;x\right),$$ with ${}_3F_2(a_1,a_2,a_3;b_1,b_2;z)$ denoting the generalized hypergeometric function [@zwillinger2014table] and $\mathbf{1}(\cdot)$ being the indicator function.
The proof is omitted due to space limitation.
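A sketch of how the coefficients $\beta_n$ are assembled from $\hat{\mathbf{Q}}_M$; the entries `q_hat` and system parameters are placeholders, not values derived from the proposition. Note that $\hat{\mathbf{Q}}_M-\hat{q}_0\mathbf{I}_M$ is strictly lower triangular and hence nilpotent, so only the first $M-1$ powers are nonzero:

```python
import numpy as np
from math import factorial, exp, pi

lam_t, R, M = 1e-3, 200.0, 3          # placeholder system parameters
norm_const = 1.0 - exp(-pi * lam_t * R**2)

q_hat = [-1.2, 0.4, 0.1]              # placeholder Toeplitz entries
Qh = np.zeros((M, M))
for i in range(M):
    for j in range(i + 1):
        Qh[i, j] = q_hat[i - j]

beta = [q_hat[0] / norm_const]
# Q_M - q_0 I is strictly lower triangular, hence nilpotent:
N0 = Qh - q_hat[0] * np.eye(M)
for n in range(1, M):
    beta.append(np.linalg.norm(np.linalg.matrix_power(N0, n), 1)
                / (factorial(n) * norm_const))

def p_c_lower(t):
    """The lower bound as a function of t = 1/N_t."""
    return norm_const * exp(beta[0] * t) * (1.0 + sum(
        beta[n] * t**n for n in range(1, M)))
```
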
From Proposition \[coro2\], we see that increasing the antenna array size always benefits the coverage probability, while the concavity means that the coverage gain from adding more antennas gradually diminishes as the array grows. Moreover, the lower bound is a product of an exponential function and a polynomial of degree $M-1$ in the inverse of the array size $t$. For the special case $M=1$, i.e., the Rayleigh fading channel, the lower bound reduces to a purely exponential one.
![Impact of antenna arrays in mmWave cellular networks when $R=200$ m, $\gamma=5$ dB, $P_\mathrm{t}=1$ W, $\lambda_\mathrm{t}=10^{-3}$ m$^{-2}$, $\beta=-61.4$ dB, and $\alpha=2.1$.[]{data-label="fig2"}](./fig2){height="5.5cm"}
Fig. \[fig2\] demonstrates that the analytical result in Proposition \[coro2\] matches the simulation result well. The performance gain with a larger array is mainly because increasing the array size narrows the interference beams, which reduces the probability that an interferer directs its main lobe towards the typical user. In addition, as stated before, when $M=1$ the lower bound reduces to an exponential one, which is linear on a logarithmic scale as shown in Fig. \[fig2\]. When the Nakagami parameter $M$ increases, the polynomial term takes effect and makes the lower bound concave.
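The actual and cosine antenna patterns are straightforward to compare numerically. A sketch assuming half-wavelength spacing $d/\lambda = 1/2$ (the array size is a placeholder):

```python
import numpy as np

Nt = 16                # number of antennas (placeholder)
d_over_lam = 0.5       # half-wavelength spacing (assumption)

def G_act(phi):
    """Actual array gain of a uniform linear array (Fejer kernel)."""
    num = np.sin(np.pi * d_over_lam * Nt * phi) ** 2
    den = Nt**2 * np.sin(np.pi * d_over_lam * phi) ** 2
    safe = np.where(den == 0, 1.0, den)
    return np.where(np.isclose(phi, 0.0), 1.0, num / safe)

def G_cos(x):
    """Cosine approximation: main lobe only, zero outside |x| <= 1/Nt."""
    return np.where(np.abs(x) <= 1.0 / Nt,
                    np.cos(np.pi * Nt * x / 2.0) ** 2, 0.0)

phi = np.linspace(-1.0, 1.0, 2001)
gain_gap = np.max(np.abs(G_act(phi) - G_cos(phi)))
```

The residual `gain_gap` is dominated by the sidelobes that the cosine pattern discards, which is the price paid for tractability.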
Conclusions
===========
This paper proposed a unified analytical framework based on the $\ell_1$-Toeplitz matrix representation for the coverage analysis of dense multi-antenna networks. A tractable expression for a general network model was first derived. Two examples, i.e., physical layer security aware networks and mmWave networks, were then provided to demonstrate the generality and effectiveness of the proposed framework. Overall, this paper provides a powerful toolbox for the evaluation and design of various dense multi-antenna wireless networks, which will find ample applications.
Proof of Lemma \[lem1\] and Theorem \[th1\] {#AA}
===========================================
Defining $x_n=\frac{(-s)^n}{n!}\mathcal{L}^{(n)}(s)$, the coverage probability can be expressed as $$\label{25}
p_\mathrm{c}(\gamma)=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}x_n\right],$$ where $x_0=\mathcal{L}(s)=\exp\{\eta(s)\}$ is given in Lemma \[lem1\]. Next, we express $x_n$ in a recursive form. It is obvious that $\mathcal{L}^{(1)}(s)=\eta^{(1)}(s)\mathcal{L}(s)$, and according to the Leibniz formula for the $n$-th derivative of a product of two functions, we have $$\mathcal{L}^{(n)}(s)=\frac{\mathrm{d}^{n-1}}{\mathrm{d}s^{n-1}}\mathcal{L}^{(1)}(s)=\sum_{i=0}^{n-1}{{n-1}\choose i} \eta^{(n-i)}(s)\mathcal{L}^{(i)}(s),$$ which gives $$\frac{(-s)^n}{n!}\mathcal{L}^{(n)}(s)=\sum_{i=0}^{n-1}\frac{n-i}{n}\frac{(-s)^{n-i}}{(n-i)!}\eta^{(n-i)}(s)\frac{(-s)^i}{i!}\mathcal{L}^{(i)}(s).$$ Therefore, the recursive relationship of $x_n$ is $$\label{recur}
x_n=\sum_{i=0}^{n-1}\frac{n-i}{n}q_{n-i}x_i,$$ where $$q_k=\frac{(-s)^k}{k!}\eta^{(k)}(s).$$ This completes the proof of Lemma \[lem1\].
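The recursion can be checked directly against the power-series expansion of $e^{Q(z)}$; a sketch with placeholder values of $q_k$:

```python
from math import exp

def x_coeffs(q, M):
    """x_0 = exp(q_0); x_n = sum_{i=0}^{n-1} ((n-i)/n) q_{n-i} x_i."""
    x = [exp(q[0])]
    for n in range(1, M):
        x.append(sum((n - i) / n * q[n - i] * x[i] for i in range(n)))
    return x

q = [0.3, -0.5, 0.2, -0.05]      # placeholder values of q_k
x = x_coeffs(q, 4)
```
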
Then, we define two power series as follows to solve for $x_n$, $$\label{eq40}
Q(z)\triangleq\sum_{n=0}^\infty q_nz^n,\quad
X(z)\triangleq\sum_{n=0}^\infty x_nz^n.$$ Using the properties that $Q^{(1)}(z)=\sum_{n=0}^{\infty}nq_nz^{n-1}$ and $Q(z)X(z)=\sum_{n=0}^\infty\sum_{i=0}^nq_{n-i}x_iz^n$, from , we obtain the differential equation $$X^{(1)}(z)=Q^{(1)}(z)X(z),$$ whose solution is $$X(z)=\exp\left\{Q(z)\right\}.\label{40}$$ Therefore, according to , , and , the coverage probability is given by $$\begin{split}
p_\mathrm{c}(\gamma)&=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}x_n\right]=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}\frac{1}{n!}\left.{X^{(n)}(z)}\right|_{z=0}\right]\\
&=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}\frac{1}{n!}\frac{\mathrm{d}^n}{\mathrm{d}z^n}\left.{e^{Q(z)}}\right|_{z=0}\right].
\end{split}$$ From [@henrici1974applied Page 14], the first $M$ coefficients of the power series $e^{Q(z)}$ form the first column of the matrix exponential $\exp\{\mathbf{Q}_M\}$, whose exponent is given in .
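This identification is easy to verify numerically: the first column of the matrix exponential of a lower-triangular Toeplitz matrix matches the first $M$ Taylor coefficients of $e^{Q(z)}$. A sketch with placeholder entries:

```python
import numpy as np
from math import exp
from scipy.linalg import expm

q = [0.3, -0.5, 0.2]             # placeholder values of q_k
M = len(q)
Q = np.zeros((M, M))
for i in range(M):
    for j in range(i + 1):
        Q[i, j] = q[i - j]

first_col = expm(Q)[:, 0]
# Taylor coefficients of e^{Q(z)} = e^{q0} exp(q1 z + q2 z^2) up to z^2:
coeffs = [exp(q[0]), exp(q[0]) * q[1], exp(q[0]) * (q[2] + q[1]**2 / 2)]
```
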
[^1]: This work was supported by the Hong Kong Research Grants Council under Grant No. 16210216.
[^2]: The physical meanings of the notations can be found in corresponding papers [@6775036; @7038201; @7412737], and the parameters for two examples in the next section are also provided.
[**Non-Abelian structures in compactifications of M-theory on seven-manifolds with $SU(3)$ structure**]{}

[**Ofer Aharony$^{a}$, Micha Berkooz$^{a}$, Jan Louis$^{b,c}$ and Andrei Micu$^{d}$[^1]**]{}

$^{a}$[*Department of Particle Physics,\
Weizmann Institute of Science, Rehovot 76100, Israel*]{}\
[Ofer.Aharony@weizmann.ac.il, Micha.Berkooz@weizmann.ac.il]{}

$^{b}$[*II. Institut f[ü]{}r Theoretische Physik der Universit[ä]{}t Hamburg\
Luruper Chaussee 149, D-22761 Hamburg, Germany*]{}\
[jan.louis@desy.de]{}

$^{c}$[*Zentrum für Mathematische Physik, Universität Hamburg,\
Bundesstrasse 55, D-20146 Hamburg*]{}

$^{d}$[*Physikalisches Institut der Universität Bonn\
Nussallee 12, 53115 Bonn, Germany*]{}\
[amicu@th.physik.uni-bonn.de]{}
[**ABSTRACT** ]{}
We study M-theory compactified on a specific class of seven-dimensional manifolds with $SU(3)$ structure. The manifolds can be viewed as a fibration of an arbitrary Calabi-Yau threefold over a circle, with a U-duality twist around the circle. In some cases we find that in the four dimensional low energy effective theory a (broken) non-Abelian gauge group appears. Furthermore, such compactifications are shown to be dual to previously analyzed compactifications of the heterotic string on $K3\times T^2$, with background gauge field fluxes on the $T^2$.
June 2008
Introduction
============
The study of possible space-time backgrounds of string theories has been an active field of research for almost 25 years. A specific subclass of backgrounds admits a geometrical interpretation in which the space-time manifold is the product space $$M_d\, \times\, Y_{10-d}\ ,$$ where $M_d$ is an infinitely extended $d$-dimensional manifold with Minkowskian signature while $Y_{10-d}$ is a (10$-d$)-dimensional compact manifold with Euclidean signature. In standard compactifications, $Y$ is constrained to be a Calabi-Yau manifold whose holonomy controls the amount of unbroken supersymmetry present in the string background. More generally, one can turn on background fluxes for various $p$-form fields in the compact directions, and then $Y$ is no longer constrained to be Ricci-flat. Such ‘generalized’ compactifications have been studied intensively in recent years [@grana].
It has been observed early on that these generalized compactifications can be discussed in terms of ‘manifolds with $G$-structure’ [@waldram]. Such manifolds admit a globally defined spinor (or tensor) which is left invariant by the subgroup $G$ of the structure group. Generically such manifolds have torsion and they can be characterized by a set of non-vanishing torsion classes [@joyce; @CS]. In string compactifications the number of invariant spinors is directly related to the number of supersymmetries present in the background. Calabi-Yau manifolds are a specific subclass of manifolds with $G$-structure where the torsion vanishes and the invariant spinor is covariantly constant with respect to the Levi-Civita connection.
String theories have the feature that their space-time backgrounds can be dual to each other. This is firmly established for dualities which hold in string perturbation theory. For example type IIA string theory compactified on a Calabi-Yau threefold $Y$ coincides with type IIB string theory compactified on the mirror Calabi-Yau $\tilde Y$. For dualities which involve the dilaton (the string coupling) in a non-trivial way, so far there is only (strong) evidence for the validity of the duality. An example of such a duality is the heterotic string compactified on $K3\times T^2$, which is believed to be identical to type IIA string theory compactified on a $K3$-fibred Calabi-Yau threefold [@K3d; @KLM].
An interesting question is the fate of these dualities for generalized compactifications. It has been shown in refs. [@GLMW; @GMPT; @GLW; @DFT] that mirror symmetry continues to hold in the supergravity limit for compactifications on manifolds with $SU(3)\times SU(3)$ structure. On the other hand, for the heterotic–type II duality mentioned above only partial results have been obtained so far [@CKKL; @LM3]. More specifically it has been shown in [@LM3] that in the supergravity limit a particular class of $SU(3)\times
SU(3)$-structure compactifications of type IIA is dual to the heterotic string compactified on $K3\times T^2$ with a specific choice of background fluxes. However, for some of the fluxes which can be turned on in the heterotic theory no dual type IIA compactification could be identified. These are fluxes which result in a gauged $N=2$ supergravity with vector multiplets carrying a non-Abelian charge.
Let us review this in a little more detail. At the level of the low-energy effective supergravity the heterotic – type II duality corresponds to a duality between $N=2$ supergravities in $d=4$. Such supergravities can be coupled to $N=2$ vector-, hyper- and tensor-multiplets.[^2] Turning on background fluxes or compactifying on manifolds with $G$-structure correspond to the deformation of an ungauged supergravity into a gauged or massive supergravity, with the flux and torsion playing the role of non-trivial gauge charges or mass parameters. For $SU(3)\times SU(3)$-structure compactifications of type II it was shown in [@LM2; @DDSV; @GLMW; @GLW; @DFT; @DFTV; @BC] that only gauged supergravities with charged hypermultiplets or massive tensor multiplets appear. It was left as a puzzling feature of such compactifications that charged vector multiplets could not be obtained. On the other hand in the heterotic $K3\times T^2$ compactification, non-Abelian gauge symmetries do appear precisely when fluxes on the $T^2$ are turned on [@LM1].
One of the goals of this paper is to resolve this puzzle. We find that non-Abelian gauge symmetries also appear on the type IIA side if instead of $SU(3)\times SU(3)$-structure compactifications of type II one considers compactifications of M-theory on 7-dimensional manifolds with $SU(3)$ structure. We argue that these are the duals of the heterotic compactifications with fluxes on the $T^2$. These are the main results of our paper.
In this paper we do not attempt to work out the low-energy effective action for a generic 7-dimensional $SU(3)$-structure manifold, but instead focus on a very specific subclass of $SU(3)$-structure manifolds which lead to non-Abelian gauge symmetries. More specifically, we consider 7-dimensional manifolds which can be viewed as a non-trivial fibration of a Calabi-Yau threefold $CY_3$ over a circle $S^1$. Furthermore, we impose that only the second cohomology $H^{(1,1)}$ of $CY_3$ is twisted when going around the $S^1$, but that the third cohomology $H^3$ (which governs the hypermultiplet sector) is left unchanged. This constraint leads to a hypermultiplet sector which is entirely determined by $H^3(CY_3)$ and therefore can be safely neglected for the purpose of this paper. It is precisely the twisted $H^{(1,1)}$ cohomology which induces the non-Abelian structure into the theory. When $CY_3$ is $K3$-fibred and M-theory on $CY_3$ is dual to heterotic string theory on $K3\times S^1$, twists of this type provide the dual of the $T^2$ fluxes on the heterotic side.
Let us describe the twisting in slightly more detail. Consistency requires that after going around the circle which takes us from 5 to 4 dimensions, $H^{(1,1)}$ is rotated by an element of the U-duality group [@Hull; @KM] of the five-dimensional theory which corresponds to M-theory compactified on $CY_3$ or the dual heterotic theory on $K3\times S^1$. In this case we have $\Gamma({\bf Z}) = SO(1,h^{(1,1)}-2,{\bf Z})$, which on the heterotic side is just the T-duality group. On the M-theory side this symmetry exists precisely for the dual K3-fibred Calabi-Yau manifolds [@KLM].
In four dimensions the result is the appearance of non-Abelian gauge symmetries as follows. The base of the K3-fibration is a ${\bf P_1}$ whose volume is identified with the four dimensional heterotic dilaton. Heterotic weak coupling corresponds to a large ${\bf P_1}$-base, and in that limit the low-energy limits of both theories have a $SO(2,h^{(1,1)}-1,{\bf R})$ symmetry in four dimensions. In this case, we can describe the twisting in the language of four dimensional supergravity as gauging[^3] an isometry inside $SO(2,h^{(1,1)}-1, {\bf R})$. This makes the four-dimensional gauge transformations non-commuting (non-Abelian).
At a generic point in field space, the non-Abelian gauge symmetry is spontaneously broken, giving a mass to some of the gauge bosons. We find that on the M-theory side the gauge boson masses are inversely proportional to the radius of the M-theory circle. In order to consistently keep these gauge bosons in the low energy effective action we need to require that these masses are smaller than the Kaluza-Klein masses of the $CY_3$. This means that the M-theory circle has to be larger than the radii of the Calabi-Yau, which in turn forces us into the M-theory regime of type IIA string theory. This is the reason that the non-Abelian structure is not visible in $SU(3)\times SU(3)$ compactifications of type II theories.
Let us stress that the flux on the heterotic side, and similarly the non-trivial monodromy in the M-theory compactification, lead to a non-trivial potential on the moduli space. We do not discuss here the stabilization of these moduli, which can be accomplished by adding additional ingredients. Instead, we just compute and compare the resulting low-energy effective actions, without attempting to solve their equations of motion.
The paper is organized as follows. In section \[sec:geo\] we discuss the Kaluza-Klein (KK) reduction of M-theory on a seven-dimensional manifold with $SU(3)$ structure. As a warm-up, we first recall in section \[5dM\] the properties of the five-dimensional background corresponding to the reduction of M-theory on a Calabi-Yau threefold. This sets the stage for the specific $S^1$-fibration we consider in section \[SU3\]. In section \[sec:KK\] we derive the low-energy effective supergravity by a Kaluza-Klein reduction from 11 to 4 dimensions, paying special attention to the gauging of the vector multiplets. In section \[sec:N2\] we rewrite the effective action in a form which shows the consistency with $N=2$ gauged supergravity. In section \[K3fibred\] we consider the specific case of a $K3$-fibred Calabi-Yau threefold which is the class of backgrounds dual to the heterotic string. Most of our explicit computations are done in the limit in which the scalar moduli space has a continuous isometry, as this makes the computations simpler; in section \[break\] we discuss what happens when we go away from this limit. In section \[hetsect\] we turn to the heterotic string compactified on $K3\times T^2$ and start by recalling a few generic properties of such backgrounds in section \[genpro\]. We then compare the mass scales in the dual backgrounds in section \[mapmas\], showing the necessity of going to the M-theory regime on the type II side when we turn on heterotic fluxes on the $T^2$. In section \[fluxmon\] we argue that the heterotic fluxes can likewise be viewed as a monodromy in the T-duality group. In section \[N2het\] we then recall the heterotic effective action as computed in [@LM1]. Finally, in section \[mcomparison\] we compare the effective actions on both sides and show that for a subset of torsion parameters they perfectly match. For the convenience of the reader we briefly recall the vector multiplet sector of (gauged) $N=2$ supergravity in appendix \[sg\]. 
Additional details of the vector multiplets in heterotic string compactifications are assembled in appendix \[vshet\].
M-theory compactifications on manifolds with $SU(3)$ structure {#sec:geo}
==============================================================
In this section we compactify M-theory on seven-dimensional manifolds with $SU(3)$ structure. By construction this leads to an $N=2$ supersymmetric effective theory in $d=4$. However, as already explained in the introduction, we do not consider the most general manifolds with $SU(3)$ structure but instead focus on a particular subclass of manifolds which lead to a low-energy supergravity with non-Abelian vector multiplets. For simplicity we further insist that the moduli space of the hypermultiplets coincides with that of a $CY_3 \times S^1$ compactification, where all scalars in hypermultiplets are gauge neutral. Thus, we do not pay attention to the hypermultiplets but only concentrate on the vector multiplet sector. The gaugings which appear in the hypermultiplet sector in general compactifications of M-theory on manifolds with $SU(3)$ structure and the corresponding prepotentials were derived in [@MPS; @eran], but a detailed analysis of the vector multiplet sector of these compactifications has not been carried out so far.
We begin with a short review of the compactification of M-theory on six-dimensional Calabi-Yau manifolds, and then proceed to the seven-dimensional case.
M-theory compactifications on Calabi–Yau threefolds {#5dM}
---------------------------------------------------
In order to set the stage let us briefly recall the structure of the five-dimensional $N=2$ supergravity[^4] which arises from compactifying M-theory on Calabi–Yau threefolds. Our discussion is based on references [@Gunaydin:1984ak; @CCAF; @AFT], but since we are only interested in the vector multiplet sector we (largely) ignore the hypermultiplets in this section.
The bosonic spectrum of eleven-dimensional supergravity is particularly simple and consists only of the metric $\hat G_{MN}$ and a three-form potential $\hat
C_3$. (We use hats $\,\hat{}\,$ in order to denote the eleven-dimensional quantities.) The eleven-dimensional action for these fields is given by (setting the eleven dimensional Newton’s constant to one) $$\label{S11}
S_{11} = \frac12 \int \Big[ \hat R *1 - \frac12 \hat F_4 \wedge * \hat F_4
- \frac16 \hat F_4 \wedge \hat F_4 \wedge \hat C_3 \Big] \; ,$$ where $\hat F_4 = d \hat C_3$ is the field strength of the three-form potential $\hat C_3$.
The five-dimensional vector fields arise from expanding $\hat C_3$ in terms of harmonic $(1,1)$-forms on the Calabi-Yau. More precisely we choose a basis $\omega_i$ of $H^{(1,1)}(CY_3)$ and expand according to $$\label{C35dexp}
\hat C_3 = A^i \wedge \omega_i + \ldots \; ,\qquad i=1,\ldots , h^{(1,1)}\ ,$$ where the $\ldots$ indicate further terms corresponding to scalar fields in hypermultiplets. One of the vector fields $A^i$ is identified with the graviphoton while the other $(h^{(1,1)}-1)$ are members of vector multiplets. Their (bosonic) superpartners correspond to Kähler deformations of the Calabi–Yau metric. More precisely, one expands also the Kähler form $J$ in terms of the basis $\omega_i$ $$\label{klrmb}
J\ =\ {\nu}^i\, \omega_i\ ,$$ such that the ${\nu}^i$ parameterize the Kähler deformations. In the five-dimensional low energy effective theory the ${\nu}^i$ appear as scalar fields. However, one of the Kähler moduli, the overall volume ${\mathcal{K}}$, is not part of any vector multiplet but instead is a member of the universal hypermultiplet. The remaining $(h^{(1,1)}-1)$ moduli are the scalar fields in vector multiplets.
Inserting and into and integrating over the Calabi-Yau manifold results in the five-dimensional $N=2$ effective action (for the bosonic fields that are not in hypermultiplets)[^5] $$\label{5daction}
S_5 = \int \left[ \tfrac12 R_5 *\mathbf{1}
- g^{(5)}_{\alpha\beta}\, d \varphi^\alpha \wedge * d\varphi^\beta -
\left. \tfrac14 g_{ij}\right |_{{\mathcal{K}}=1} F^i \wedge * F^j - \tfrac1{12}
{\mathcal{K}}_{ijk} F^i \wedge F^j \wedge A^k \right ]\; ,$$ where $F^i = dA^i$ and ${\mathcal{K}}_{ijk}$ are intersection numbers of the Calabi-Yau defined by the integral $$\label{Khatdfb}
{{\mathcal{K}}}_{ijk} = \int_{CY_3} \omega_i \wedge \omega_j \wedge \omega_k
\; .$$ To explain the other couplings in this action we need to be more explicit about the separation of the overall volume modulus ${\mathcal{K}}$ from the other Kähler moduli. Since the volume modulus is part of the universal hypermultiplet, it should not mix with the other quantities describing the vector multiplet moduli space. Therefore, all the terms in the vector multiplet action are evaluated on a hypersurface of constant ${\mathcal{K}}$ which we choose as ${\mathcal{K}}=1$. This is precisely the meaning of the matrix of gauge couplings $\left . g_{ij} \right|_{{\mathcal{K}}=1}$ in the action , which is equal to the metric on the Kähler moduli space [@Strominger] $$\label{defgb}
g_{ij} = \frac1{4 {\mathcal{K}}} \int_{CY_3} \omega_i \wedge *\omega_j
= - \frac{1}{4 {\mathcal{K}}} \left( {\mathcal{K}}_{ij} - \frac{{\mathcal{K}}_i {\mathcal{K}}_j}{4 {\mathcal{K}}}
\right) \; ,$$ evaluated on the hypersurface ${\mathcal{K}}=1$.[^6] Here the Calabi–Yau volume, ${\mathcal{K}}$, is defined as $$\label{CYvolb}
{\mathcal{K}}= \tfrac16 \int_{CY_3} J \wedge J \wedge J = \tfrac16 {\mathcal{K}}_{ijk} {\nu}^i
{\nu}^j {\nu}^k \; ,$$ and we also abbreviated $$\begin{aligned}
\label{cKdefsb}
{\mathcal{K}}_i =& \int_{CY_3} \omega_i\wedge J\wedge J =
{\mathcal{K}}_{ijk} {\nu}^j {\nu}^k\ ,\\
{\mathcal{K}}_{ij} =& \int_{CY_3} \omega_i\wedge\omega_j\wedge J =
{\mathcal{K}}_{ijk} {\nu}^k \; .
\end{aligned}$$
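The contractions defined above satisfy simple Euler-type identities following from the homogeneity of ${\mathcal{K}}$, namely $\partial {\mathcal{K}}/\partial {\nu}^i = \tfrac12 {\mathcal{K}}_i$ and $\partial {\mathcal{K}}_i/\partial {\nu}^j = 2 {\mathcal{K}}_{ij}$. As a sketch (our own illustration; the tensor with $h^{(1,1)}=2$ and ${\mathcal{K}}_{122}=1$ is an arbitrary choice, not a specific Calabi–Yau), these identities can be verified symbolically:

```python
# Check the Euler-type relations dK/dnu^i = K_i/2 and dK_i/dnu^j = 2 K_ij
# that follow from K = (1/6) K_ijk nu^i nu^j nu^k, K_i = K_ijk nu^j nu^k,
# K_ij = K_ijk nu^k.  The toy intersection tensor (h^{1,1}=2, K_122 = 1)
# is an illustrative assumption, not taken from a specific Calabi-Yau.
import sympy as sp

h = 2
nu = sp.symbols('nu1 nu2', positive=True)

# totally symmetric intersection numbers K_ijk
K3 = {(i, j, k): 0 for i in range(h) for j in range(h) for k in range(h)}
for perm in [(0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    K3[perm] = 1

K = sp.Rational(1, 6) * sum(K3[i, j, k] * nu[i] * nu[j] * nu[k]
                            for i in range(h) for j in range(h) for k in range(h))
K_i = [sum(K3[i, j, k] * nu[j] * nu[k] for j in range(h) for k in range(h))
       for i in range(h)]
K_ij = [[sum(K3[i, j, k] * nu[k] for k in range(h)) for j in range(h)]
        for i in range(h)]

for i in range(h):
    assert sp.simplify(sp.diff(K, nu[i]) - sp.Rational(1, 2) * K_i[i]) == 0
    for j in range(h):
        assert sp.simplify(sp.diff(K_i[i], nu[j]) - 2 * K_ij[i][j]) == 0
```

For this toy tensor the volume reduces to ${\mathcal{K}} = \tfrac12 \nu^1 (\nu^2)^2$, the prepotential characteristic of a fibration over a large base.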
Finally let us discuss the kinetic terms of the scalar fields in the action . Let us denote by $\varphi^\alpha$ the $(h^{(1,1)}-1)$ scalar fields which parameterize the hypersurface ${\mathcal{K}}=1$. The metric $g^{(5)}_{\alpha\beta}$ which appears in is therefore the induced metric on that hypersurface, which is given by [@Gunaydin:1984ak; @AFT] $$\label{5dmetric}
g^{(5)}_{\alpha\beta}\ = \
g_{ij}\ \frac{\partial {\nu}^i}{\partial\varphi^\alpha}
\frac{\partial {\nu}^j}{\partial\varphi^\beta} \big|_{{\mathcal{K}}=1}
\; , \qquad \alpha,\beta = 1, \ldots, h^{(1,1)}-1\ .$$
For the purpose of our paper it is of interest to also discuss possible (global) isometries of the moduli space of the scalars in the vector multiplets. Following [@Gunaydin:1984ak] let us consider the infinitesimal linear transformations $$\label{vgtrn}
{\nu}^i \to {\nu}^i - \epsilon {M}_j^i {\nu}^j \ ,$$ where the $M^i_j$ are constant and elements of a Lie Algebra. Since the space of scalar fields in vector multiplets is defined on the hypersurface ${\mathcal{K}}=1$ the transformation is constrained by the requirement $$\label{delK}
\delta {\mathcal{K}}= 0 \; .$$ Inserting into one arrives at [@Gunaydin:1984ak] $$\label{conn}
{M}_i^l {{{\mathcal{K}}}}_{jkl} + {M}_j^l {{{\mathcal{K}}}}_{kil} + {M}_k^l
{{{\mathcal{K}}}}_{ijl} = 0 \ ,$$ which states that ${\mathcal{K}}_{ijk}$ is an invariant tensor of the Lie Algebra. Inserting into and one also computes $$\label{cKtr}
\delta {\mathcal{K}}_{ij} = \epsilon {M}_i^k {\mathcal{K}}_{kj} + \epsilon {M}_j^k {\mathcal{K}}_{ik} \;
, \qquad
\delta {\mathcal{K}}_i = \epsilon {M}_i^j {\mathcal{K}}_j \; ,$$ and $$\label{ggtr}
\delta g_{ij} = \epsilon {M}_i^k g_{kj} + \epsilon {M}_j^k g_{ik} \; .$$
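A concrete solution of the invariance constraint can be exhibited in a toy model (our own sketch: $h^{(1,1)}=2$ with ${\mathcal{K}}_{122}=1$, mimicking the $K3$-fibred case with ${\mathcal{K}}_{11i}=0$; the diagonal generator $M=\mathrm{diag}(2,-1)$ is an illustrative choice). The check below confirms numerically that $M$ leaves ${\mathcal{K}}_{ijk}$ invariant and that the finite transformation ${\nu} \to e^{-\epsilon M}{\nu}$ preserves the volume:

```python
# Toy h^{1,1}=2 model with K_122 = 1 (and K_11i = 0, as for a K3 fibration).
# Verify the constraint M_i^l K_jkl + M_j^l K_kil + M_k^l K_ijl = 0 for
# M = diag(2,-1), and the invariance of K = (1/6) K_ijk nu^i nu^j nu^k
# under the finite transformation nu -> exp(-eps M) nu.
import itertools
import numpy as np

h = 2
K3 = np.zeros((h, h, h))
for p in [(0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    K3[p] = 1.0                 # K_122 = 1 and permutations

m = np.array([2.0, -1.0])       # eigenvalues of the diagonal generator M
M = np.diag(m)

# invariance constraint: K_ijk is an invariant tensor of M
for i, j, k in itertools.product(range(h), repeat=3):
    c = M[i] @ K3[j, k] + M[j] @ K3[k, i] + M[k] @ K3[i, j]
    assert abs(c) < 1e-12

def vol(nu):
    """K = (1/6) K_ijk nu^i nu^j nu^k"""
    return np.einsum('ijk,i,j,k->', K3, nu, nu, nu) / 6.0

nu = np.array([1.3, 0.7])
for eps in [0.1, 0.5, 2.0]:
    nu_t = np.exp(-eps * m) * nu    # exp(-eps M) nu for diagonal M
    assert np.isclose(vol(nu_t), vol(nu), atol=1e-12)
```

This generator is just the $SO(1,1)$ rescaling of the base versus the fibre volume, the simplest of the isometries discussed below.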
By assigning the transformation law also to the $A^i$ one immediately sees the invariance of the last two terms in the action . The invariance of the second term in is less obvious but has been established in [@Gunaydin:1984ak]. A quick intuitive argument goes as follows. The full kinetic term on the Calabi–Yau moduli space of Kähler deformations $g_{ij} \partial_\mu {\nu}^i
\partial^\mu {\nu}^j$ is clearly invariant under and . Since the second term in differs from the one above by a kinetic term for the volume modulus $\partial_\mu {\mathcal{K}}\partial^\mu
{\mathcal{K}}$, which is trivially invariant due to , it follows that the kinetic term for the Kähler moduli parameterizing the hypersurface ${\mathcal{K}}=1$ is also invariant under the transformation . Therefore the action has a global symmetry for any $M_j^i$ which solves the constraint .
For generic ${\mathcal{K}}_{ijk}$, eq. has no solutions, or in other words, a generic ${\mathcal{K}}_{ijk}$ is not an invariant tensor of any Lie Algebra. Let us therefore turn to a specific situation where global isometries do arise, which will be used in the next subsections. The case that we will discuss in detail is the special class of $K3$-fibred Calabi-Yau threefolds (over a ${\bf P_1}$ base) [@KLM]. If we denote by ${\nu}^1$ the volume of the base, then for this class of manifolds the intersection numbers obey ${\mathcal{K}}_{11i} = 0$. Furthermore, if the ${\bf P_1}$ is taken large, i.e. ${\nu}^1 \gg
{\nu}^{i\neq1}$ for fixed ${\mathcal{K}}$, then the moduli space is the scalar manifold [@Gunaydin:1984ak; @CCAF; @AFT] $$\label{g5d}
M_V = SO(1,1)\times \frac{SO(1,h^{(1,1)}-2)}{SO(h^{(1,1)}-2)}\ .$$ The isometry group of this space is $SO(1,1)\times SO(1,h^{(1,1)}-2)$ and in section \[K3fibred\] we discuss in detail the corresponding solutions of . A discrete subgroup of this isometry group, $SO(1,h^{(1,1)}-2,{\bf Z})$, is known as the U-duality group which is an exact symmetry of these compactifications.
Seven dimensional manifolds with $SU(3)$ structure {#SU3}
--------------------------------------------------
In the previous section we briefly reviewed Calabi-Yau compactifications of M-theory. Let us now turn to compactifications on seven-dimensional manifolds $X_7$ with $SU(3)$ structure. They can be characterized by a triplet of globally defined and $SU(3)$-invariant tensors $\{V, J, \Omega\},$ where $V$ is a one-form, $J$ is a two-form and $\Omega$ is a three-form [@waldram; @DAP]. This triplet is constrained to satisfy the compatibility relations $$\begin{aligned}
\label{SU3con}
& J \wedge J \wedge J = \tfrac{3 i}{4}\ \Omega \wedge \bar \Omega \
, \\
& \Omega \wedge J = V \lrcorner J\ =\ V \lrcorner \Omega\ =\ 0\ ,
\end{aligned}$$ where $\lrcorner$ denotes contraction of indices.
Due to the existence of the one-form $V$, one can define an almost product structure, in that locally the metric can be split as $$\label{metricsplit}
ds_7^2 (y,z) = ds_6^2 (y,z) + V^2(y,z)\ ,$$ where $y$ are the coordinates of the six-dimensional component $Y_6$ and $z$ is the coordinate of the one-dimensional component. On $Y_6$ the two-form $J$ defines an almost complex structure (by raising one index with the metric), with respect to which $J$ itself is a $(1,1)$-form. Similarly $\Omega$ is a $(3,0)$-form, and together they define the standard $SU(3)$ structure on a six-dimensional space.
The manifold $X_7$ can be characterized by the non-vanishing intrinsic torsion classes. They are defined by $dV, dJ,$ and $d\Omega$, and can be decomposed into irreducible $SU(3)$ representations. One finds 13 torsion classes denoted $R$, $c_{1,2}$, $V_{1,2,3}$, $W_{1,2}$, $A_{1,2}$, $T$ and $S_{1,2}$ in [@DAP], defined by $$\begin{aligned}
\label{su3torsion}
dV &=& R J + \bar{W_1} \lrcorner \Omega + W_1 \lrcorner \bar{\Omega} + A_1 +
V \wedge V_1 \; , {\nonumber}\\
dJ &=& \tfrac{2i}{3}\left( c_1 \Omega - \bar{c_1} \bar{\Omega} \right) + J
\wedge V_2 + S_1 + V\wedge \left[ \tfrac{1}{3} \left( c_2 + {\bar c}_2\right)
J + \bar{W_2}\lrcorner \Omega + W_2 \lrcorner \bar{\Omega} + A_2 \right]
,\ {\nonumber}\\
d\Omega &=& c_1 J \wedge J + J \wedge T + \Omega \wedge V_3 + V \wedge
\left[c_2 \Omega - 2 J \wedge W_2 + S_2 \right] \; .\end{aligned}$$
As we already stated we do not compactify on generic $SU(3)$ structure manifolds with all torsion classes non-zero. Instead we focus on manifolds which can be viewed as Calabi–Yau threefolds $CY_3$ fibred over a circle $S^1$. With these specifications our setup is closely related to the case of a six-dimensional torus $T^6$ fibred over a circle. Such backgrounds were discussed in detail in [@Hull; @KM] and in the following we can draw on their results.
We parameterize the $S^1$ direction by the coordinate $z \in [0, 1)$, while the radius of the circle is given by the value of the dilaton $e^{\phi}$, where $V
= e^{\phi} dz$. We further constrain the fibration such that when going around the $S^1$ only the second cohomology $H^{(1,1)}(CY_3)$ is twisted by a matrix $\gamma$, while the third cohomology $H^{3}(CY_3)$ is unaffected. In this way we ensure that the hypermultiplet sector, which is governed by $H^{3}(CY_3)$, coincides with that of a $CY_3\times S^1$ compactification. On the other hand, as we saw in the previous section, the vector multiplets are determined by $H^{(1,1)}(CY_3)$ and hence they do feel the twisting.
As in the previous section we denote the elements of $H^{(1,1)}$ by $\omega_i$ but now they also depend on the circle coordinate $z$, or in other words we have a set of $\omega_i(y,z)$. However, the structure of the fibration is not arbitrary but constrained by a consistency condition. If we choose a specific basis at (say) $z=0$, it rotates as we move in the $z$ direction. After a full circle, the $\omega_i$ must come back to a basis describing an equivalent theory, i.e. the five-dimensional theory returns to itself up to a discrete U-duality transformation [@Hull].[^7] We already briefly discussed the U-duality group $\Gamma({\bf Z})$ of M-theory compactified on $CY_3$ at the end of the last subsection and here it appears as the group of monodromies as we go around the circle[^8] $$\label{ofinite0}
\omega_i \to \gamma^j_i\, \omega_j \ ,\qquad
\gamma^i_j \in \Gamma({\bf Z})\ .$$ In principle, only this global information exists. However, it is convenient to choose an infinitesimal form of this relation by twisting the basis $\omega_i$ by a constant matrix, $M^i_j$, in the continuous group $\Gamma({\bf R})$ as we go along the circle. In this case we write $$\label{gammainf}
\gamma = e^{M},\qquad M \in \Gamma(\mathbf{R}) \; ,$$ and the infinitesimal version of becomes $$\label{infntsa}
\omega_i(y,z+\epsilon) =\, \omega_i(y,z) + \epsilon {M}_i^j \omega_j(y,z) \; .$$ Since on the Calabi–Yau slice the $\omega_i$ continue to be harmonic (and therefore closed), can also be expressed by the differential relation $$\label{do}
d \omega_i = {M}_i^j\, \omega_j \wedge dz \; .$$ Equations and hold whenever the monodromy is evenly distributed along the $S^1$. This turns out to be useful for carrying out a KK reduction even when there is no continuous isometry. In this case will in general not be a solution of the equations of motion, but it is still a useful ansatz for analyzing the compactification. A specific example where this ansatz gives a solution arises when we consider degenerations of the Calabi-Yau compactification in which a continuous version of the U-duality appears as an approximate global symmetry $\Gamma({\bf
R})$. As we discussed at the end of section \[5dM\] this situation occurs, for example, when the base in the $K3$-fibred $CY_3$ is large. In this case the matrix $M$ satisfies , and expresses a translation invariance along the $S^1$.
However, generically the full theory does not have the continuous symmetry and the only real information is the global data in the monodromy $\gamma$. The approach that we will take is first to discuss the situation with a continuous symmetry, obtain a quantitative understanding of what this process of twisting does, and afterwards indicate (in section \[break\]) why going away from this limit does not change the qualitative picture.
Let us define the Calabi-Yau intersection numbers exactly as in , but now with $z$-dependent $\omega_i(y,z)$. In this case the ${\mathcal{K}}_{ijk}$ can a priori also be $z$-dependent. However, inserting into , we see that precisely when there is an isometry the $z$-dependence cancels out due to . Note that the fact that the same matrix $M_i^j$ appears in and establishes the connection between an isometry in the space-time effective theory and the translational symmetry of the fibration of the $(1,1)$-forms along the $S^1$-circle.
In the Kaluza-Klein reduction which we perform in the next section we encounter the seven-dimensional integral $$\label{intno}
\hat{\mathcal{K}}_{ijk} = \int_{X_7} \omega_i \wedge \omega_j \wedge \omega_k \wedge dz
\; ,$$ which are the intersection numbers defined on the entire $X_7$. They coincide with the ${{\mathcal{K}}}_{ijk}$ precisely when holds. In this case the ${{\mathcal{K}}}_{ijk}$ are $z$-independent, and thus the integral in trivially factorizes. Note that the condition also arises in this case from the requirement of global consistency of $$\label{conseq}
\int_{X_7} d( \omega_i \wedge \omega_j \wedge \omega_k) = 0\; .$$ It is also useful to note that all the other Calabi–Yau moduli space quantities defined in equations , and have, due to , similar definitions in terms of seven-dimensional integrals. In particular the Calabi–Yau volume can also be defined as $$\label{volCY7}
{\mathcal{K}}= \tfrac16 \int_{X_7} J \wedge J \wedge J \wedge dz \; .$$ The volume of the full seven-dimensional manifold $X_7$ differs from this one by a dilaton factor, which, when the dilaton is independent of the $X_7$ coordinates is equal to $$\label{volX}
\hat {\mathcal{K}}= \tfrac16 \int_{X_7} J \wedge J \wedge J \wedge V = e^{\phi} {\mathcal{K}}\; .$$ In analogy with we expand $J$ according to $$\label{Jdef}
J\ =\ v^i\, \omega_i(y,z)\ ,$$ where the $v^i$ are independent of the internal coordinates, while the $z$-dependence now resides in the $\omega_i$. The $v^i$ will appear as scalar fields in the four-dimensional effective action. Note that $J$ is not invariant under translation in the $z$-direction, but it comes back to itself when we go all the way around the circle. This follows from the fact that we identify the manifold under $z\rightarrow z+1$ together with . As a consequence $J$ is globally defined on $X_7$.
As we will see in the next subsection, it is the $z$-dependence of the $\omega_i$ in which generates mass terms for the fields $v^i$ in the four-dimensional effective action. Let us note that this can also be seen from a Scherk-Schwarz point of view [@SS] where one first compactifies to five dimensions on Calabi–Yau manifolds as in the previous section and then, in the subsequent compactification to four dimensions, gives the five-dimensional scalar fields ${\nu}^i$ a monodromy as one moves around the circle such that their $z$-dependence is given by ${\nu}^i(z + \epsilon) = {\nu}^i(z) + \epsilon M_j^i\,
{\nu}^j(z)$. Thus the relation between ${\nu}^i$ and $v^i$ is simply ${\nu}^i(z) = (e^{z
M})_j^i\, v^j$.
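This Scherk-Schwarz relation between ${\nu}^i$ and $v^i$ can be illustrated numerically. The sketch below (our own check, reusing an arbitrary diagonal toy generator, for which the matrix exponential is elementwise) verifies that ${\nu}^i(z) = (e^{z M})_j^i\, v^j$ reproduces the monodromy rule ${\nu}^i(z + \epsilon) = {\nu}^i(z) + \epsilon M_j^i\, {\nu}^j(z)$ to first order in $\epsilon$, and that after a full circle the fields return to themselves up to $\gamma = e^{M}$:

```python
# Check the Scherk-Schwarz ansatz nu(z) = exp(z M) v against the
# infinitesimal monodromy nu(z+eps) = nu(z) + eps M nu(z) and the
# finite monodromy gamma = exp(M) after a full circle z -> z+1.
# M = diag(2,-1) is an illustrative assumption; for a diagonal
# generator the matrix exponential acts elementwise.
import numpy as np

m = np.array([2.0, -1.0])          # eigenvalues of the toy generator M
v = np.array([0.4, 1.1])           # four-dimensional moduli v^i

def nu(z):
    return np.exp(z * m) * v       # nu(z) = exp(z M) v for diagonal M

z, eps = 0.3, 1e-6
lhs = nu(z + eps)
rhs = nu(z) + eps * m * nu(z)      # infinitesimal monodromy
assert np.allclose(lhs, rhs, atol=1e-10)

# after a full circle z -> z+1 the fields return up to gamma = exp(M)
assert np.allclose(nu(z + 1.0), np.exp(m) * nu(z))
```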
Inserting eq. into eq. and using eq. and $\hat{\mathcal{K}}_{ijk} = {\mathcal{K}}_{ijk}$ we obtain ${\mathcal{K}}=\tfrac16 {\mathcal{K}}_{ijk}v^iv^jv^k$ exactly as in , but now in terms of the parameters $v^i$ instead of ${\nu}^i$. Similarly, the metric on the moduli space of Kähler deformations can be defined as $$\label{defg}
g_{ij} = \frac1{4 {\mathcal{K}}} \int_{X_7} \omega_i \wedge *\omega_j \; ,$$ with no dilaton prefactor, which is in agreement with the metric Ansatz we shall consider in the next section. One can show that it coincides with the metric given in eq. with the replacement ${\nu}^i\to v^i$ in and .
Before we turn to the details of the KK-reduction let us determine the non-trivial torsion classes in for the fibration characterized by eq. or equivalently by eq. . Using the expansion with the forms $\omega_i$ satisfying we find $$\label{dJ}
dJ = v^i M_i^j\, \omega_j\wedge dz\ ,$$ which shows that the ${M}_i^j$ parameterize the non-vanishing intrinsic torsion. Comparison with reveals that the only torsion classes which can be non-trivial are $A_2$ and $\mathrm{Re}~c_2$. Actually, for the case at hand, it can be shown that $\mathrm{Re}~c_2$ vanishes and the only torsion class which is present is $A_2$. This can be seen by writing in components and contracting with $J^{mn}$. Using the $SU(3)$ structure consistency relation $V_m J^{mn} = 0$, , and the fact that $A_2$ in is primitive, i.e. $(A_2)_{mn} J^{mn} = 0$, leaves us with the following relation for $\mathrm{Re}~c_2$ $$\label{Rec}
\mathrm{Re}~c_2 \sim v^i {M}_i^j (\omega_j)_{mn} J^{mn} \; .$$ For Calabi–Yau manifolds, the contraction of the $(1,1)$ forms $\omega_j$ with $J$ was computed in [@Strominger] and shown to be proportional to ${\mathcal{K}}_{jkl} v^k v^l$. Inserting this into the above equation, the vanishing of the torsion class $\mathrm{Re}~c_2$ is simply a consequence of the constraint . Note that for ${M}=0$ the two-form $J$ is closed, the fibration is trivial and $X_7$ is the product manifold $CY_3\times S^1$.
Kaluza-Klein reduction of M-theory on $X_7$ {#sec:KK}
-------------------------------------------
We can now proceed with one of the main parts of this paper, namely the compactification of M-theory, or rather eleven-dimensional supergravity, on seven-dimensional manifolds with $SU(3)$ structure. As explained before, we concentrate on the vector multiplet sector and ignore the hypermultiplets in our analysis.
The starting point is the eleven-dimensional action . Since on seven-dimensional manifolds with $SU(3)$ structure we can define an almost product structure we consider the following Ansatz for the metric $$\label{11dg}
G_{MN} = \left(
\begin{array}[h]{ccc}
e^{4\phi/3} \left(\frac{1}{{\mathcal{K}}} G_{\mu \nu} + A^0_\mu A^0_\nu \right) &
0 & - e^{4\phi /3} A^0_\mu \\
0 & e^{-2\phi/3} G_{mn} & 0 \\
-e^{4\phi /3} A^0_\nu & 0 & e^{4 \phi /3}
\end{array}
\right) \; ,$$ where $ G_{\mu \nu}$ denotes the four-dimensional metric, $G_{mn}$ is the metric on the Calabi–Yau manifold, $A^0_\mu$ is the four-dimensional graviphoton and $\phi$ the dilaton.[^9] The scalar fields arising from the Calabi-Yau metric correspond to the deformations of $J$ which we denoted by $v^i$ in , as well as the deformations of $\Omega$. The dilaton factors are chosen in such a way that we end up in the four-dimensional Einstein frame. The factor $1/{\mathcal{K}}$ – with ${\mathcal{K}}$ defined in – in front of the four-dimensional metric has been introduced to account for the additional Calabi–Yau volume factor which appears in front of the Einstein-Hilbert term after performing the integral over the internal manifold.
Next we expand the three-form potential according to $$\label{expC}
\hat C_3 = \tilde C_3 + B \wedge dz + \tilde A^i \wedge \omega_i + b^i
\omega_i \wedge dz +
\ldots
\; ,$$ where $\tilde C_3$ is a three-form in four dimensions, $B$ is a two-form, $\tilde A^i$ are vector fields and $b^i$ are scalars. The $\ldots$ stand for additional scalar fields that arise when $\hat C_3$ is expanded in a basis of $H^3(CY_3)$, which, together with the complex structure deformations, the dual of $B$, and the dilaton $\phi$, fill out $h^{(1,2)} +1$ hypermultiplets, and we omit them from our further discussion.[^10] We do keep the gravity multiplet which includes the graviton and the graviphoton $A^0$, and the $h^{(1,1)}$ vector multiplets which include the vector fields $\tilde A^i$ and the complex scalars $x^i = b^i + iv^i$. Note that compared to the five-dimensional case discussed in section \[5dM\], there is an additional vector multiplet and the Kähler moduli are complexified. Thus, in the four-dimensional effective action all Kähler moduli, including the Calabi–Yau volume, are in vector multiplets.
In the compactification process it is useful to keep track of the isometries of the internal manifold $X_7$ since they become gauge transformations in the effective theory. Let us first recall the situation for compactifications on $CY_3 \times S^1$. In this case there is an isometry corresponding to constant shifts $z\to
z+\epsilon$ of the $S^1$ coordinate. Promoting the parameter to be space-time dependent $\epsilon\to\epsilon(x^\mu)$, the compactification Ansatz given in eqs. and changes. Keeping $\hat C_3$ and the eleven-dimensional line-element $ds^2_{11}$ invariant induces the local gauge transformations $$\label{11diff}
A^0 \to A^0 + d \epsilon \; , \qquad
\tilde C_3 \to \tilde C_3 - B \wedge d \epsilon \; , \qquad
\tilde A^i \to \tilde A^{i} - b^i d \epsilon \; .$$ However, the fact that the fields $\tilde C_3$ and $\tilde A^i$ transform is an artefact of the expansion and one can define the gauge-invariant fields $$\begin{aligned}
\label{defCA}
C_3 = \tilde C_3 + B \wedge A^0 \; , \qquad A^i = \tilde A^i + b^i A^0 \; .\end{aligned}$$ In the case of a non-trivial fibration of the Calabi–Yau over the circle, as considered in eq. , these fields are no longer invariant. However, the main property of eq. is that the transformations of $C_3$ and $A^i$ do not contain the derivative of the transformation parameter $\epsilon$, and therefore we will keep the same definitions in the following. Using eqs. and we can easily see that the fields $A^i$ and $b^i$ arising from the expansion of $\hat
C_3$ acquire a non-trivial gauge transformation, but the transformation law of the graviphoton is unchanged.
Exactly as for $\hat C_3$, we also need to keep $J$, defined in eq. , gauge-invariant. Since the basis of $(1,1)$ forms $\omega_i$ changes according to eq. , we need to assign a transformation law similar to also to the fields $v^i$. Another way of saying this is that our background is not invariant under arbitrary shifts of $z$, but, as shifts of $z$ are gauge symmetries, we must assign a transformation law to the $v^i$. Altogether we thus have $$\begin{aligned}
\label{epsgtr}
A^0 & \to A^0 + d \epsilon \; ,\qquad
A^i \to A^i - \epsilon {M}_j^i A^j \; , \\
v^i & \to v^i - \epsilon {M}_j^i v^j\; , \qquad b^i \to b^i -
\epsilon {M}_j^i b^j \; .
\end{aligned}$$ Note that unlike the $N=2$ gauged supergravities encountered so far in string compactifications, the symmetry is not necessarily a Peccei-Quinn shift symmetry which is usually gauged in these cases. Moreover, this gauge symmetry is generically spontaneously broken due to the non-vanishing vacuum expectation values of the Kähler moduli $v^i$.
In addition, the four-dimensional effective theory sees the remnant of the three-form gauge invariance $\hat C_3 \to \hat C_3 + d
\Lambda_2$ which is manifest in the action . Choosing $\Lambda_2 = \eta^i \omega_i$ and using we obtain the following transformation laws $$\begin{aligned}
\label{etagtr}
A^0 & \to A^0 \; ,\qquad
A^i \to A^i + d \eta ^i + {M}_j^i \eta^j A^0 \; , \\
v^i & \to v^i \; , \qquad b^i \to b^i + {M}_j^i \eta^j \; .
\end{aligned}$$ The parameters $(\epsilon, \eta^i)$ together form $h^{(1,1)}+1$ local gauge parameters. From the transformations displayed in and we already see the non-Abelian character of the gauge transformations when $M \neq 0$.
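The non-Abelian character can be exhibited directly at the level of the algebra. For constant parameters one finds from the transformations above (a short check of ours, with $\eta'^i \equiv \epsilon\, M_j^i \eta^j$) $$[\delta_\epsilon , \delta_\eta]\, b^i \ =\ \epsilon\, M_j^i M_k^j\, \eta^k \ =\ \delta_{\eta'}\, b^i \ , \qquad
[\delta_\epsilon , \delta_\eta]\, A^i \ =\ \delta_{\eta'}\, A^i \ ,$$ so that, denoting by $T_0$ and $T_i$ the generators associated with $\epsilon$ and $\eta^i$, the gauge algebra closes as $[T_0, T_i] = M_i^j\, T_j$ and $[T_i, T_j] = 0$, the non-semisimple structure familiar from Scherk-Schwarz reductions.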
To derive the four-dimensional action we insert eqs. and into and perform the integrals over the internal manifold. Let us first concentrate on the last two terms in the action , and postpone the compactification of the Ricci scalar to the end of this section.
To make our task easier let us first compute the field strength $\hat F_4$ by taking the exterior derivative of eq. . Using and the definitions we find $$\begin{aligned}
\label{F4}
\hat F_4 = d \hat C_3 = \ & (d C_3 - B \wedge F^0) + H \wedge (dz - A^0) \\
& + (F^i - b^i F^0) \wedge \omega_i + D b^i \wedge \omega_i \wedge (dz -
A^0) + \ldots \, ,
\end{aligned}$$ where we defined $$\label{cd}
F^0 = d A^0 \ , \qquad
F^i = d A^i - {M}_j^i\, A^j \wedge A^0 \; , \qquad
D b^i = d b^i - {M}_j^i\, ( A^j - b^j A^0) \; .$$ The reason we have formally performed the expansion in the forms $dz -
A^0$ is that in this basis the metric is block diagonal, and therefore in computing $(\hat F_4)^2$ only the square of the individual terms in will appear and no mixed terms will be present. Note that in four dimensions $C_3$ is not a dynamical field and therefore we will discard its contribution in the following. In general, a proper dualization should be performed, but this has implications only for the hypermultiplet sector and is therefore not of interest for us. With this in mind we obtain $$\label{kin4}
\int_{X_7} \hat F_4 \wedge * \hat F_4 =
e^{-4 \varphi} H_3 \wedge * H_3 + 4 {\mathcal{K}}g_{ij} (F^i - b^i
F^0) \wedge * (F^j - b^j F^0) + 4 g_{ij} Db^i \wedge * Db^j + \ldots \; ,$$ where the metric $g_{ij}$ was defined in eq. and $\varphi$ denotes the four dimensional dilaton defined as $e^{-2\varphi} = e^{- 2 \phi} {\mathcal{K}}$. For the Chern-Simons term in one finds after a straightforward but somewhat lengthy calculation $$\begin{aligned}
\label{FwF}
\int_{X_7} \hat C_3 \wedge \hat F_4 \wedge \hat F_4\ =\ & 3 F^i \wedge F^j
b^k {\mathcal{K}}_{ijk} - 3 F^i \wedge F^0 b^j b^k {\mathcal{K}}_{ijk} \\
& + F^0 \wedge F^0 b^i b^j
b^k {\mathcal{K}}_{ijk}
+ 2 {M}_i^k A^i \wedge A^l \wedge F^j {\mathcal{K}}_{jkl} \; ,
\end{aligned}$$ where ${\mathcal{K}}_{ijk}$ are the Calabi–Yau intersection numbers defined in which can also be obtained from .
Let us check explicitly that the individual terms in eq. are invariant under the gauge transformations and . Under the quantities defined in eq. transform as $$\begin{aligned}
\label{trDbF}
\delta Db^i = -\epsilon {M}_j^i Db^j \; ,\qquad
\delta F^i = -\epsilon {M}_j^i F^j \;,\qquad
\delta F^0 = 0\ .\end{aligned}$$ Together with the transformation of the moduli space metric, this shows that the terms in eq. are (individually) invariant. Under the gauge transformation , the covariant derivatives $Db^i$ are invariant as can be checked from . The field strengths $F^i, F^0$ are not individually invariant, but the combinations $$\label{Finv}
\check F^i = F^i - b^i F^0 \; ,$$ which appear in , are invariant. This completes the proof of the gauge invariance of the expression .
We can similarly check the gauge invariance of . For the transformation it follows straightforwardly from eq. and the constraint that each term in is invariant individually. To check the invariance under the transformation is also straightforward but a bit more tedious. The important difference to note is that the gauge invariance only holds for the sum of all the terms in eq. but not for the individual terms. We come back to this issue in section \[sec:N2\].
The next step is to compactify the Ricci scalar in the action . For $CY_3\times S^1$ the answer is well known [@CCAF] and yields the kinetic terms for the moduli $v^i$, a contribution to the kinetic terms of the graviphoton $A^0$ and the kinetic term for the dilaton. For the case of a non-trivial fibration the moduli are charged under the isometry of the circle and the corresponding gauge transformation is given in . This in turn leads to a coupling of the moduli to the graviphoton and a scalar potential. The generic formulae for this case are worked out in [@SS] and we can borrow some of their results. One finds $$\label{Raction}
\tfrac12 \int_{X_7} \hat R *1 = \tfrac12 R_4 * \mathbf{1} - g_{ij} Dv^i
\wedge *Dv^j - {\mathcal{K}}F^0\wedge * F^0 - d\varphi\wedge *d\varphi -V\ .$$ This is a straightforward generalization of the result obtained in $CY_3
\times S^1$ compactifications, in that the derivatives for the charged moduli are replaced by covariant derivatives $$\label{Dv}
Dv^i = dv^i + v^j M_j^i A^0 \ .$$ The derivation of the scalar potential $V$ is less obvious and an explicit calculation of the internal Ricci scalar is necessary. Note that this gives in fact the only contribution to the potential as eqs. and contain no terms without four-dimensional derivatives. Let us therefore compute the scalar curvature for the internal part of the metric which can be read off from $$\label{gint}
G_{int} = e^{-2\phi/3} \left(
\begin{array}{cc}
G_{mn}(y,z) & 0 \\
0 & e^{2\phi} \\
\end{array}
\right) \; .$$ From the seven-dimensional point of view, the overall dilaton factor is irrelevant as this is just a constant, but it will be important for the normalization of the potential. Using the fact that the Ricci tensor of the Calabi–Yau slices vanishes we find $$\label{R7}
R_7 = - e^{-4\phi /3} \Big[ \partial_z(G^{mn} \partial_z G_{mn} ) + \tfrac14
(G^{mn} \partial_z G_{mn})^2 + \tfrac14 G^{mn} G^{pq} \partial_z G_{mp}
\partial_z G_{nq} \Big] \; .$$ In order to proceed we split the metric into a background piece $G_{mn}^0$, which is constant in $z$, and the moduli-dependent part $\Delta G_{mn}$, which does depend on $z$: $$\label{pert}
G_{mn} = G_{mn}^0 + \Delta G_{mn} \; .$$ As explained before, the fibration structure we consider is such that the complex structure deformation sector is not influenced by the additional $z$ direction, and we are only interested in the dependence on the Kähler moduli $v^i$. In complex coordinates they arise from the $(1,1)$ components of the metric via $$\Delta G_{a \bar b} = - i v^i (\omega_i)_{a \bar b} \; , \qquad
a,\bar b=1,2,3\ .$$ Using eq. we immediately find $$\label{ddg}
\partial_z \Delta G_{a \bar b} = -i v^i {M}_i^j (\omega_j)_{a \bar b} \; .$$ From the fact that $\omega_j$ is a harmonic $(1,1)$-form on the Calabi-Yau threefold, one shows, following Ref. [@Strominger], that $G^{a \bar b} (\omega_j)_{a \bar b} = \tfrac{i}2 {\mathcal{K}}_j/{\mathcal{K}}$, where eqs. and were used. Combining this with eq. gives $$G^{mn} \partial_z G_{mn} = {\mathcal{K}}_{jkl} {M}_i^j v^i v^k v^l =0 \; ,$$ as a consequence of the constraint . Therefore the only contribution to the four-dimensional potential comes from the last term in . Inserting eq. into eq. we arrive at $$\label{intR}
\tfrac12 \int_{X_7} R_7 = - \tfrac14 e^{-4\phi/3} {M}_i^k {M}_j^l v^i v^j
\int_{X_7}\omega_k \wedge * \omega_l \; .$$ Using eqs. and , and taking into account the rescaling of the four-dimensional metric, we finally obtain the potential (in the Einstein frame) $$\label{4dpot}
V = \frac1{\mathcal{K}}\, v^i v^j {M}_i^k {M}_j^l g_{kl} \; .$$
Consistency with $N=2$ supergravity {#sec:N2}
-----------------------------------
In order to check the consistency with $N=2$ supergravity (reviewed in appendix \[sg\]) we have to write the resulting four-dimensional action in the general form . Putting together eqs. , and we obtain the action in four dimensions for the bosonic fields in the gravity and vector multiplets $$\begin{aligned}
\label{S4}
S & = & \int_{M_4} \Big[\tfrac 12\, R *1 - g_{ij} Dx^i \wedge
*D{\bar x}^{j} -V
\\
& & \qquad + \tfrac14 \,\mathrm{Im} {\mathcal{N}}_{IJ} F^I \wedge *
F^J + \tfrac14\, \mathrm{Re} {\mathcal{N}}_{IJ} F^I \wedge F^J - \tfrac16\, {M}_i^l {\mathcal{K}}_{jkl}
A^i \wedge A^j \wedge dA^k\Big] \; , {\nonumber}\end{aligned}$$ where the metric $g_{ij}$ was defined in . It is a special Kähler metric derived from the Kähler potential given in for the prepotential $$\label{prep}
{\mathcal{F}}(X) = - \frac16\, \frac{{\mathcal{K}}_{ijk} X^i X^j X^k}{X^0} \; .$$ The $X^I, \ I=0, \ldots, h^{(1,1)}$, are projective coordinates which are related to the scalar fields via so-called special coordinates $x^i$ given by $$\label{prcoord}
x^i = \frac{X^i}{X^0} = b^i + i v^i \; , \qquad i=1, \ldots, h^{(1,1)} \; .$$ The prepotential also determines the gauge coupling matrix ${\mathcal{N}}$ via , and one finds $$\begin{aligned}
\label{Nexp}
\mathrm{Re}\, {\cal N}_{00} &= - \tfrac13{\mathcal{K}}_{ijk} b^i b^j b^k\ , \qquad
\mathrm{Re}\, {\cal N}_{i0} = \tfrac12{\mathcal{K}}_{ijk} b^j b^k\ , \qquad
\mathrm{Re}\, {\cal N}_{ij} = - {\mathcal{K}}_{ijk} b^k\ , \\
\mathrm{Im}\, {\cal N}_{00} &= - {\mathcal{K}}(1 + 4 g_{ij} b^i b^j) \ , \qquad
\mathrm{Im}\, {\cal N}_{i0} = 4{\mathcal{K}}g_{ij} b^j \ , \qquad
\mathrm{Im}\, {\cal N}_{ij} = -4 {\mathcal{K}}g_{ij} \ .
\end{aligned}$$ The field strengths in eq. are given by $$\label{strcon}
F^I = d A^I + \tfrac12 f^I_{JK}A^J\wedge A^K \ , \qquad\mathrm{with}\qquad
f_{IJ}^0 = 0 = f_{ij}^k \; , \qquad f_{i0}^j = - {M}_i^j \; ,$$ while the covariant derivatives read $$\begin{aligned}
\label{kvA}
Dx^i = dx^i - k^i_I A^I \ , \qquad\mathrm{with}\qquad k_0^j = - x^k {M}_k^j
\; , \qquad k_i^j = {M}_i^j \; .\end{aligned}$$ These holomorphic Killing vectors can be obtained via from the Killing prepotentials $$P_0 = - x^i M_i^j K_j \,, \qquad P_i = M_i^j K_j\ ,$$ where $K_j= \partial_j K$ is the first derivative of the Kähler potential. The consistency of the non-Abelian gauge algebra can be checked in that eq. is fulfilled and we have $$\label{algk}
[k_i,k_j] = 0 = [k_0,k_0] \ , \qquad [k_i,k_0] = - M_i^j k_j \ ,$$ corresponding to a semi-direct sum of two Abelian sub-algebras. [^11] Finally, using it is easy to see that the potential is consistent with .
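Two of the consistency statements above lend themselves to a quick symbolic cross-check. The sketch below (using sympy) first reproduces the gauge couplings for the simplest case $h^{(1,1)}=1$ with ${\mathcal{K}}_{111}=\kappa$, assuming the standard special-geometry formula ${\mathcal{N}}_{IJ} = \bar{\mathcal{F}}_{IJ} + 2i\,(\mathrm{Im}{\mathcal{F}}\cdot X)_I(\mathrm{Im}{\mathcal{F}}\cdot X)_J/(X\cdot\mathrm{Im}{\mathcal{F}}\cdot X)$ and the one-modulus metric $g_{11}=3/(4v^2)$, and then verifies the Killing-vector algebra $[k_i,k_0]=-M_i^j k_j$ for a generic constant matrix $M$:

```python
import sympy as sp

# --- gauge couplings from the cubic prepotential, one-modulus case ---
b, v, k = sp.symbols('b v kappa', real=True, positive=True)
x = b + sp.I*v                               # special coordinate x = b + i v
X0, X1 = sp.symbols('X0 X1')
Xs = (X0, X1)
F = -k*X1**3/(6*X0)                          # cubic prepotential with K_111 = kappa
Fij = sp.Matrix(2, 2, lambda i, j: sp.diff(F, Xs[i], Xs[j]))
Fij = Fij.subs({X0: 1, X1: x}).applyfunc(sp.expand)
ImF = Fij.applyfunc(sp.im)
Xv = sp.Matrix([1, x])
num = (ImF*Xv).applyfunc(sp.expand)
den = sp.expand((Xv.T*ImF*Xv)[0, 0])         # equals 4*kappa*v**3/3
# standard N=2 formula for the gauge coupling matrix
N = Fij.conjugate() + 2*sp.I*(num*num.T)/den

K = k*v**3/6                                 # Kaehler-class volume
g = sp.Rational(3, 4)/v**2                   # special Kaehler metric, one modulus
exp_re = sp.Matrix([[-k*b**3/3, k*b**2/2], [k*b**2/2, -k*b]])
exp_im = sp.Matrix([[-K*(1 + 4*g*b**2), 4*K*g*b], [4*K*g*b, -4*K*g]])
for i in range(2):
    for j in range(2):
        nij = sp.expand(N[i, j])
        assert sp.simplify(sp.re(nij) - exp_re[i, j]) == 0
        assert sp.simplify(sp.im(nij) - exp_im[i, j]) == 0

# --- Killing-vector algebra: [k_i, k_0] = -M_i^j k_j ---
n = 2
xs = sp.symbols('x1:3')
M = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'M{i}{j}'))
k0 = [-sum(xs[p]*M[p, j] for p in range(n)) for j in range(n)]   # k_0^j = -x^k M_k^j
kv = [[M[i, j] for j in range(n)] for i in range(n)]             # k_i^j = M_i^j
for i in range(n):
    br = [sum(kv[i][p]*sp.diff(k0[m], xs[p]) - k0[p]*sp.diff(kv[i][m], xs[p])
              for p in range(n)) for m in range(n)]
    rhs = [-sum(M[i, j]*kv[j][m] for j in range(n)) for m in range(n)]
    assert all(sp.expand(a - e) == 0 for a, e in zip(br, rhs))
```

All assertions hold, reproducing the real and imaginary parts of ${\mathcal{N}}_{IJ}$ quoted above; for $h^{(1,1)}>1$ the same check goes through with the full ${\mathcal{K}}_{ijk}$ as input data.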
Except for the last term in eq. everything looks like a standard $N=2$ gauged supergravity as spelled out in ref. [@N=2review]. The last term is also known, and has to be introduced in the action (in order to make it gauge-invariant) whenever the prepotential is not invariant under the gauge transformations, but transforms into a second order polynomial in $X$ with real coefficients [@dWvP]. Inserting the transformation into the definition of the projective coordinates we find that the prepotential changes as $$\label{Ftr}
\delta_{\eta} {\mathcal{F}}= - \frac12 \eta ^i {M}^l_i {\mathcal{K}}_{ljk} X^j X^k \; ,$$ which is precisely of the form with $C_{ijk} =\tfrac12{M}^l_i {\mathcal{K}}_{ljk}$. Note that for the specific structure constants given in eq. , the last term of eq. vanishes, which explains why such a term is absent from eq. .
Before we continue it is worthwhile to stress that the vector multiplet geometry on the M-theory side specified by the prepotential is exact, since the “dilaton” (the radius of the M-theory circle) is part of a hypermultiplet, and therefore cannot correct this geometry. The same holds for the gauging as specified in .
$K3$-fibred Calabi-Yau threefolds {#K3fibred}
---------------------------------
So far our discussion was generic, in that we did not specify the intersection numbers ${\mathcal{K}}_{ijk}$ and the matrix $M_i^j$. We did, however, assume that the seven-dimensional manifold $X_7$ is a Calabi-Yau threefold $CY_3$ fibred over a circle, and that the $CY_3$ is such that a continuous isometry of the form exists. In this section we discuss more concretely the specific case of $K3$-fibred Calabi-Yau threefolds; type IIA string theory compactified on such threefolds is dual to heterotic string theory compactified on $K3\times T^2$.
$K3$-fibred Calabi-Yau threefolds consist of $K3$ fibres over a ${\bf P_1}$ base [@KLM]. The volume of the base in string units is identified with the dilaton on the heterotic side. Furthermore, two additional two-cycles in the $K3$, related to the heterotic torus, can be singled out. Let us denote these three special cycles by $1$, $2$ and $3$, while the rest of the two-cycles are denoted by an index $a$. In the limit of a large ${\bf P_1}$ base (i.e. large heterotic dilaton) the prepotential becomes $$\label{IIAprep}
{\mathcal{F}}= \frac{X^1(X^2 X^3 - X^a X^a)}{X^0}$$ and so the only non-vanishing intersection numbers for the Calabi–Yau threefold are [@KLM] $$\label{intcy}
{\mathcal{K}}_{123} = -1 \; , \qquad {\mathcal{K}}_{1ab} = 2 \delta_{ab} \; , \qquad
a,b=4,\ldots,h^{(1,1)}\ .$$ Inserting eq. into eq. and computing the corresponding Kähler metric one sees that this factorizes and becomes the metric on the space $$\label{gIIA}
M_V = \frac{SU(1,1)}{U(1)}\times \frac{SO(2,h^{(1,1)}-1)}{SO(2)\times
SO(h^{(1,1)}-1)}\ .$$ The first factor is spanned by the coordinate $x^1$ which parameterizes the volume of the ${\bf P_1}$ base, while $x^2, x^3$ and $x^a$ span the second factor. We immediately see that $M_V$ has the continuous isometry group $SU(1,1)\times SO(2,h^{(1,1)}-1)$. As discussed above, in the same limit the five dimensional vector multiplet moduli space has the continuous isometry group $SO(1,1)\times SO(1,h^{(1,1)}-2)$.
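The factorization can be made explicit: for the prepotential above, the quantity $Y = i(\bar X^I {\mathcal{F}}_I - X^I \bar{\mathcal{F}}_I)$ entering the Kähler potential $-\ln Y$ splits into a product of a factor depending only on $v^1$ and a factor depending on $v^2, v^3, v^a$. A symbolic sketch (sympy, with one generic modulus $x^a=x^4$):

```python
import sympy as sp

# special coordinates x^i = b^i + i v^i, i = 1,...,4 (one generic modulus x^4 = x^a)
bs = sp.symbols('b1:5', real=True)
vs = sp.symbols('v1:5', real=True, positive=True)
xs = [bi + sp.I*vi for bi, vi in zip(bs, vs)]
xbs = [bi - sp.I*vi for bi, vi in zip(bs, vs)]

XI = sp.symbols('X0:5')
F = XI[1]*(XI[2]*XI[3] - XI[4]*XI[4])/XI[0]     # large-base prepotential
FI = [sp.diff(F, Z) for Z in XI]
subsX = {XI[0]: 1, **{XI[i]: xs[i - 1] for i in range(1, 5)}}
X = [sp.Integer(1)] + xs
Xb = [sp.Integer(1)] + xbs
# Y = i (Xbar^I F_I - X^I Fbar_I), evaluated on the special coordinates
Y = sp.I*sum(Xb[i]*FI[i].subs(subsX) - X[i]*sp.conjugate(FI[i].subs(subsX))
             for i in range(5))
Y = sp.expand(Y)

# Y factorizes as 8 v^1 (v^a v^a - v^2 v^3): the product structure of M_V
assert sp.simplify(Y - 8*vs[0]*(vs[3]**2 - vs[1]*vs[2])) == 0
```

Since $-\ln Y$ is then a sum of two terms, the mixed second derivatives between $x^1$ and the remaining coordinates vanish and the metric is block diagonal, as claimed.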
As a consequence, we expect that the constraint has non-trivial solutions. Indeed, solving eq. for the torsion parameters ${M}_i^j$, given the intersection numbers , we find that one can choose to express all matrix elements in terms of $\tfrac12
(h^{(1,1)} - 1) (h^{(1,1)}-2) +1$ independent parameters $$\label{indpar}
m_2 \equiv {M}_2^2 \; , \qquad m_a \equiv {M}_a^2 \; , \qquad
m_3 \equiv {M}_3^3 \; , \qquad \tilde m_a \equiv {M}_a^3 \; , \qquad
m_b^a \equiv - {M}_a^b \ ,$$ where $m^a_{b} = -m^b_{a}$. The other matrix elements are then given by $$\label{deppar}
\begin{aligned}
{M}_2^a = \tfrac12 \tilde m_a \; , & \qquad {M}_3^a = \tfrac12 m_a \; ,
\qquad {M}_a^a = - \tfrac12 M_1^1\ =\ \tfrac12 (m_2 + m_3) \;
,\\
& M_1^{2,3} = M_1^a = M_a^{1} = M_{2,3}^1 = M_2^3= M_3^2 = 0
\ .
\end{aligned}$$ Note that these solutions describe the mixing of $SO(1,1)\times
SO(1,h^{(1,1)}-2)$ into the gauge symmetry, i.e., we have accounted for the most general monodromy allowed on the circle. However, this is not the most general global symmetry of the four-dimensional theory, which can be as large as $SU(1,1)\times SO(2,h^{(1,1)}-1)$. In section \[hetsect\] we discuss how the parameters in are related to the dual heterotic background. Before we do so, let us return to the situation where the ${\bf P_1}$-base is not necessarily large.
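The parametrization just given can be checked directly against the constraint ${M}_i^l {\mathcal{K}}_{jkl} + {M}_j^l {\mathcal{K}}_{kil} + {M}_k^l {\mathcal{K}}_{ijl} = 0$. A symbolic sketch (sympy) for the smallest non-trivial case $h^{(1,1)}=5$, so that $a,b\in\{4,5\}$ and there is a single antisymmetric parameter $p\equiv m^4_5$:

```python
import sympy as sp
from itertools import permutations

h = 5                      # h^(1,1) = 5: indices 1,2,3 special, a,b in {4,5}
m2, m3, m4, m5, mt4, mt5, p = sp.symbols('m2 m3 m4 m5 mt4 mt5 p')

# intersection numbers K_123 = -1, K_1aa = 2 (fully symmetric)
K = [[[0]*h for _ in range(h)] for _ in range(h)]
def setK(i, j, k, val):    # 1-indexed; set all permutations
    for a_, b_, c_ in permutations((i - 1, j - 1, k - 1)):
        K[a_][b_][c_] = val
setK(1, 2, 3, -1)
setK(1, 4, 4, 2)
setK(1, 5, 5, 2)

# torsion matrix M_i^j in the parametrization above, with p = m^4_5
M = sp.zeros(h, h)
M[0, 0] = -(m2 + m3)                 # M_1^1
M[1, 1], M[2, 2] = m2, m3            # M_2^2, M_3^3
M[3, 1], M[4, 1] = m4, m5            # M_a^2 = m_a
M[3, 2], M[4, 2] = mt4, mt5          # M_a^3 = tilde m_a
M[1, 3], M[1, 4] = mt4/2, mt5/2      # M_2^a = tilde m_a / 2
M[2, 3], M[2, 4] = m4/2, m5/2        # M_3^a = m_a / 2
M[3, 3] = M[4, 4] = (m2 + m3)/2      # M_a^a
M[3, 4], M[4, 3] = -p, p             # antisymmetric a-block M_a^b = -m_b^a

# constraint M_i^l K_{jkl} + M_j^l K_{kil} + M_k^l K_{ijl} = 0 for all i,j,k
for i in range(h):
    for j in range(h):
        for k in range(h):
            S = sum(M[i, l]*K[l][j][k] + M[j, l]*K[l][k][i] + M[k, l]*K[l][i][j]
                    for l in range(h))
            assert sp.simplify(S) == 0
```

All $5^3$ components of the constraint vanish identically, consistent with the counting of $\tfrac12(h^{(1,1)}-1)(h^{(1,1)}-2)+1 = 7$ independent parameters.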
Breaking the continuous isometry {#break}
---------------------------------
So far our analysis assumed that the Calabi-Yau moduli space has a continuous isometry, or in other words that ${\mathcal{K}}_{ijk}$ are such that eq. has a solution. As we saw this is indeed the case for $K3$-fibred Calabi-Yau manifolds in the large ${\bf P_1}$ limit where the moduli space has a continuous $SO(1,h^{(1,1)}-2)$ symmetry. However, this symmetry is broken (for example, by non-zero intersection numbers ${\mathcal{K}}_{abc}$ or ${\mathcal{K}}_{23a}$) to a discrete subgroup $\Gamma({\bf Z}) = SO(1,h^{(1,1)}-2,{\bf Z})$ (which is the T-duality group of the heterotic string) for finite ${\bf P_1}$ volume. As we discussed before, the only information that we can really specify at finite ${\bf P_1}$ volume is an element $\gamma_i^j \in \Gamma({\bf Z})$, which rotates the $(1,1)$-forms $\omega_i$ as described in section \[SU3\]. Of course the absence of the continuous isometry also holds for compactifications on $CY_3\times S^1$ without any monodromy; in this case the corresponding continuous symmetry is broken to a discrete subgroup $\Gamma'(\bf Z)$, which is the T-duality group of the dual heterotic string on $K3\times
T^2$. Furthermore in four dimensions type IIA world-sheet instantons also contribute to the breaking of the continuous isometry.
Even though the continuous isometry of the Calabi-Yau moduli space is broken, we want to argue that our M-theory backgrounds retain a subgroup of this isometry. The key difference from the $CY_3\times S^1$ background is that the non-trivial monodromy has the effect that in the four-dimensional effective action part of the would-be isometry (which is indeed an isometry at infinite ${\mathbf P_1}$ volume) is gauged (see eqs. and ). Since it is part of a gauge symmetry, consistency requires that it must persist in the four-dimensional effective action for any value of the parameters, and in particular for finite $\mathbf{P_1}$ volume. To reiterate, this must be true even when the continuous symmetry is not present for the theory without the monodromy, or in the five-dimensional effective action.
In order to see in slightly more detail how this happens, let us first reconsider the computation of the four-dimensional effective action performed in section \[sec:KK\]. Without the isometry in the Calabi-Yau moduli space the intersection numbers ${\mathcal{K}}_{ijk}$ defined in eq. with $\omega_i$ obeying eq. are $z$-dependent and thus vary along the circle. Instead, it is the intersection numbers $\hat{\mathcal{K}}_{ijk}$ defined in that appear in the four-dimensional effective action. In general they no longer coincide with the ${\mathcal{K}}_{ijk}$, as they did in the presence of a Calabi-Yau isometry. Nevertheless, if we still require that the monodromy is evenly distributed along the circle, or in other words if we continue to impose for constant $M_i^j$, then eq. implies $$\label{Khcon}
{M}_i^l {{\hat {\mathcal{K}}}}_{jkl} + {M}_j^l {{\hat {\mathcal{K}}}}_{kil} + {M}_k^l {{\hat
{\mathcal{K}}}}_{ijl} = 0 \ .$$ Thus, the Ansatz with constant $M_i^j$ implies the presence of an isometry in the moduli space of $X_7$ even though the isometry of the fibred Calabi-Yau manifold is broken. The existence of this isometry can be viewed as a direct consequence of the gauge symmetry.
The KK-reduction of section \[sec:KK\] can be repeated, but now $\hat{\mathcal{K}}_{ijk}$ and the metric defined by appear. This metric coincides with the Calabi–Yau moduli space metric only for infinite $\mathbf{P_1}$, but differs for finite volume. Therefore the resulting four-dimensional effective action receives small corrections at finite volume. However, these corrections cannot lead to any qualitative changes, since already at large ${\bf P_1}$ volume all fields relevant for the gauging are massive, and the corrections just shift their precise mass spectrum.
It would be worthwhile to compute the low-energy effective action more explicitly and check its consistency with $N=2$ supergravity. Furthermore, arguments along the lines of Refs. [@TK-P; @AZ] should exist showing that the gauged symmetry is also protected against the breaking coming from the world-sheet instantons. We hope to return to these issues elsewhere.
Heterotic string theory compactified on $K3 \times T^2$ with $T^2$ fluxes {#hetsect}
=========================================================================
In this section we discuss the heterotic string compactified on $K3 \times T^2$ with the gauge fields having non-trivial flux on the $T^2$. More specifically we show that the dual background is related to the M-theory compactification discussed in the previous section. We begin by reviewing the heterotic compactification in sections \[genpro\]–\[N2het\], and we present the details of the duality map in section \[mcomparison\].
General properties {#genpro}
------------------
Consider heterotic string theory compactified on $K3 \times T^2$. In this subsection we analyze the effect of turning on gauge flux on the $T^2$ in the low-energy supergravity theory. In particular we want to show that turning on the flux breaks the corresponding gauge symmetry, giving the gauge field a mass proportional to the flux.
In ten dimensions the spectrum of the heterotic string includes a 2-form field $B$ and a gauge field $A$ with field strength $F$ (in either the $Spin(32)$ or the $E_8\times E_8$ gauge group). The 3-form field strength involves not just the 2-form field, but rather it takes the form: $$H^{het} = d B - \frac{\alpha'_{het}}4 \omega_3,$$ where $\omega_3$ is the Chern-Simons form[^12] $$\omega_3 = {\rm tr}(A \wedge dA + \frac{2}{3} A \wedge A \wedge A).$$ The ten-dimensional action includes kinetic terms proportional to $$\left[-\frac{1}{2} |H^{het}|^2 - \frac{\alpha'_{het}}{4} {\rm tr}(F^2) \right].$$
Suppose that the compactification to six dimensions on $K3$ breaks the gauge group such that it has a $U(1)^n$ factor, and consider a background where we turn on a flux for one of the corresponding $U(1)$ gauge fields $A^{a}$ on the $T^2$ ($a=1,\cdots,n$), $$\label{u1flux}
\int_{T^2} F^{a} \equiv f^{a} \neq 0.$$ The six dimensional action includes a term proportional to $$\label{6daction}
\int_{R^4\times T^2} \left[ \big(d B - \frac{\alpha'_{het}}{4} A^{a} \wedge
F^{a} \big)^2 + \frac{\alpha'_{het}}{2} (F^a)^2 \right],$$ such that the four dimensional action expanded around the flux background includes a term proportional to $$\label{4daction}
\int_{R^4} \left[ \big( d b - \frac{\alpha'_{het}}{4} f^{a} A^{a} \big)^2 +
\frac{\alpha'_{het} V(T^2)^2}{2} (F^a)^2 \right],$$ where $b$ is the scalar field arising from $\int_{T^2} B$, and $V(T^2)$ is the volume of the $T^2$. Naively the first term is not gauge-invariant, but in fact the gauge transformation (already in ten dimensions) acts also on the 2-form field, and this transformation in four dimensions takes the form $A^{a} \to A^{a} + d \Lambda^{a}$, $b \to b + \frac{\alpha'_{het}}{4} f^{a} \Lambda^{a}$ such that is gauge-invariant.
Both from the form of and from the form of the gauge transformation, we see that the $U(1)$ gauge symmetry is broken, since it acts non-linearly on the scalar field $b$. The gauge field acquires a mass proportional to $f^{a}$, and absorbs the scalar field $b$ via the Higgs mechanism. Using we see that the mass squared of the gauge field is proportional to $\alpha'_{het} (f^a)^2 / V(T^2)^2$. In the action written here we have set many fields to zero; the full results may be found in [@LM1].
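The gauge invariance of the covariant combination in the four-dimensional action can be checked in a few lines (a symbolic sketch treating the one-form components as commuting symbols; `alphap` stands for $\alpha'_{het}$):

```python
import sympy as sp

alphap, f = sp.symbols('alphap f')               # alphap stands for alpha'_het
db, dLam, A = sp.symbols('db dLambda A')         # one-form components as symbols

# the covariant combination appearing in the four-dimensional action
cov = lambda db_, A_: db_ - sp.Rational(1, 4)*alphap*f*A_

# gauge transformation: A -> A + dLambda, b -> b + (alpha'/4) f Lambda,
# hence db -> db + (alpha'/4) f dLambda
before = cov(db, A)
after = cov(db + sp.Rational(1, 4)*alphap*f*dLam, A + dLam)
assert sp.simplify(after - before) == 0          # db - (alpha'/4) f A is invariant
```

The shift of $b$ exactly compensates the transformation of $A^a$, which is the non-linear realization responsible for the Higgsing described above.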
In the previous section we saw that a similar Higgs mechanism in M-theory arises from the non-trivial fibration structure over the M-theory circle. In the following we argue why it is indeed necessary to go to the M-theory description on the dual type IIA side when we add the heterotic fluxes, and afterwards we make the correspondence between the M-theory and the heterotic Higgsing more precise.
Mapping the masses {#mapmas}
------------------
In order to map the Higgs mechanism described above to the type IIA side, we need to compute the mass of the massive vector, and describe it in the language of the type IIA string theory.
Let us first recall the mapping in the absence of fluxes between the heterotic string and the type IIA string. On the heterotic side, the $K3$ manifold is taken to be a fibration of $T^2_f$ over some base $B$. On the type IIA side we have a Calabi-Yau manifold which is a fibration of some ${\tilde K3}$ over $B$ (where we used fiber-wise the duality between the heterotic string theory on $T^4$ and the type IIA string theory on ${\tilde K3}$).
The relations between the parameters of the two theories are as follows (denoting the volume of a cycle by $V$, and not writing down all the numerical constants):
The mapping of the four dimensional Planck scales gives $$V(K3)V(T^2)/g_h^2l_h^8=V({\tilde K3})V(B)/g^2_{II}l^8_{II}.$$
For the mapping of the type IIA string to a wrapped heterotic five-brane we have $$V(T^2)V(T^2_f)/g^2_hl^6_h= 1/l^2_{II}.$$
The mapping of the heterotic string to a wrapped NS5-brane yields $$1/l^2_h=V({\tilde K3})/g^2_{II}l^6_{II}.$$
Finally, the integral of the heterotic $B$-field on the $T^2$ maps to the integral of the type IIA $B$-field on some 2-cycle $W$ in $\tilde K3$, leading to $$V(T^2)/l^2_h=V(W)/l^2_{II}.$$
Above we found that on the heterotic side the mass of the vector field that becomes massive after we turn on the flux is $$m^2=(f^{a})^2 l_h^2 / V(T^2)^2.$$ Translating this into type IIA string theory using the equations above, we find that the mass can be written as $$m^2 = (f^{a})^2 V({\tilde K3})/ (V(W)^2 g_{II}^2 l_{II}^2).$$
In particular, it involves a negative power of the type IIA string coupling, implying that it is not a perturbative state on the type IIA side. Rather, since its mass is proportional to the D0-brane mass $M_{D0}\simeq 1 / g_{II} l_{II}$, it involves, when lifted to M-theory, some non-trivial momentum on the M-theory circle. Thus, we cannot describe this flux purely in the language of type IIA supergravity (the gauge field is too massive to be included in the low-energy IIA description). The dual configuration must involve, when lifted to M-theory, non-trivial dependence on the M-theory circle.
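The translation of the mass formula performed above is short enough to verify symbolically; only the last two duality relations are needed (a sketch, ignoring numerical constants as in the text):

```python
import sympy as sp

f, l2, gII, Vw, VK3t = sp.symbols('f l_II g_II V_W V_K3t', positive=True)

# duality relations: 1/l_h^2 = V(K3t)/(g_II^2 l_II^6)  and  V(T2)/l_h^2 = V(W)/l_II^2
lh2 = gII**2*l2**6/VK3t                  # l_h^2 from the wrapped-NS5-brane map
VT2 = lh2*Vw/l2**2                       # V(T2) from the B-field map

# heterotic mass of the Higgsed vector: m^2 = f^2 l_h^2 / V(T2)^2
m2 = f**2*lh2/VT2**2
assert sp.simplify(m2 - f**2*VK3t/(Vw**2*gII**2*l2**2)) == 0
```

Eliminating $l_h$ and $V(T^2)$ indeed produces $m^2 = f^2 V(\tilde{K3})/(V(W)^2 g_{II}^2 l_{II}^2)$, with its characteristic $1/g_{II}^2$.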
The flux as a monodromy {#fluxmon}
-----------------------
We claim that the correct description of this flux on the type IIA side is given by the non-trivial fibration of the Calabi-Yau over the M-theory circle, described in the previous section. In order to make this identification more precise, let us move up one dimension, and consider the heterotic string theory on $K3\times S^1$, which is dual to M-theory on a Calabi-Yau manifold (this is simply the limit of the duality discussed in the previous subsection, when one of the heterotic circles is taken to be large). We will call the coordinate on this circle $x^5$, and denote the coordinate on the additional circle which we use to go down to four dimensions by $x^4$ (this may be identified with the $z$ coordinate which we used in section \[sec:geo\]).
In the $K3\times S^1$ compactification, each ten-dimensional gauge field $A_{\mu}^{a}$ leads to a scalar field $A_5^{a}$. One way to describe the flux that we are interested in is by taking this scalar field to have a non-trivial monodromy around the additional circle in the $x^4$ direction, $$\label{a5shift}
A_5^{a} = c f^{a} x^4 \Rightarrow A_5^{a}(x^4+2\pi R_4) \simeq
A_5^{a}(x^4) + 2\pi c f^{a} R_4$$ for some constant $c$. Note that the low-energy supergravity is invariant under any shift in the scalar field $A_5^{a}$; however, in the full heterotic string theory, due to the presence of charged states carrying momentum on the $x^5$ circle, there is only a discrete periodicity of the field $A_5^{a}$. Equation may be interpreted as saying that when we go around the $x^4$ circle, $A_5^{a}$ comes back to itself up to a shift by an integer multiple of its period (proportional to $f^a$).
In this language, we can think of the flux as a special case of a monodromy in the T-duality group. Recall that the heterotic string theory on $K3\times S^1$ has $n$ $U(1)$ vector fields $A_{\mu}^{a}$ coming from the ten-dimensional gauge group, and three additional vector fields coming from $g_{\mu 5}$, $B_{\mu 5}$ and the dual of $B_{\mu \nu}$. One combination of the three latter fields is in the graviton multiplet, while the other $n_V^{(5)}=n+2$ fields are in vector multiplets. Each of the vector multiplets contains a real scalar field; these $n_V^{(5)}$ fields are $A_5^{a}$, the radius of the $x^5$ circle, and the heterotic dilaton, and they span the manifold [@Gunaydin:1984ak; @AFT] $SO(1,n_V^{(5)}-1)/SO(n_V^{(5)}-1)\times {\bf R}$. The low-energy supergravity action is invariant under an $SO(1,n_V^{(5)}-1)\times SO(1,1)$ symmetry, where the first factor rotates the scalars (and all the vector fields except for the dual of $B_{\mu \nu}$), while the second factor shifts the dilaton. In the full heterotic string theory, only an $SO(1,n_V^{(5)}-1,{\bf Z})$ subgroup of this group is an exact symmetry – this is the T-duality group of the heterotic string on a circle. This group includes in particular the shifts in $A_5^{a}$ described in the previous paragraph. Thus, these shifts are a special case of a general $SO(1,n_V^{(5)}-1,{\bf Z})$ monodromy, where as we go around the circle the theory comes back to itself up to some $SO(1,n_V^{(5)}-1,{\bf Z})$ transformation.
It is now clear that in order to map the flux to the M-theory side, we need to consider backgrounds in which M-theory on a Calabi-Yau comes back to itself (as we go around the circle) up to some element of $SO(1,n_V^{(5)}-1,{\bf Z})$. These are precisely the backgrounds we considered in the previous section, so we claim that these are the correct type II duals of the heterotic compactification with flux. In the next two subsections we will check this proposal in detail, by mapping the four-dimensional effective actions of the two theories.
The low-energy effective action {#N2het}
-------------------------------
Let us briefly recall the low energy effective action for heterotic string compactifications on $K3 \times T^2$ with non-trivial background fluxes, which was derived in [@LM1]. In the spirit of the present paper we only focus on the vector multiplets and only review the low energy theory for fluxes of the gauge fields on $T^2$, as they lead to a non-Abelian gauge group in the effective four-dimensional theory. The main features of the ungauged theory are summarized in appendix \[vshet\].
The $n_v=n+3$ four-dimensional heterotic vector multiplets include the complex scalar fields $x^i = (s, u, t, n^a),~ a=4,\ldots, n_v$ which span the symmetric space , where $s$ denotes the dilaton/axion, $t$ and $u$ are the $T^2$ moduli and $n^a$ denote the scalars arising from the Wilson lines of the original heterotic gauge fields in the $T^2$ directions. The latter combine with the four-dimensional gauge fields $A^a$, which also originate from the ten-dimensional heterotic gauge fields. From the metric and the $B$-field we obtain four Kaluza-Klein gauge bosons $A^0,\ldots, A^3$ which play the role of the graviphoton and the superpartners of $s,t$ and $u$.[^13] In the absence of fluxes the gauge group is the Abelian group $[U(1)]^{(n_v+1)}$.
When we turn on background fluxes of the form $$\label{hetflux}
\int_{T^2} F^a = f^a \; ,$$ the four dimensional gauge group becomes non-Abelian (in the sense that different gauge transformations no longer commute), as in the general gauged supergravities discussed in the appendix. Note that this non-Abelian symmetry has nothing to do with the original $E_8
\times E_8$ or $SO(32)$ gauge symmetry in ten dimensions; it involves only fields in the Cartan subgroup of the original gauge group.
The action computed in [@LM1] is[^14] $$\label{Sh4}
S_{\mathrm{het}} = \int \left[ \tfrac12 R *1 + \tfrac14 I_{IJ} F^I \wedge *
F^J + \tfrac14 R_{IJ} F^I \wedge F^J - g_{ij} D x^i \wedge * D
\bar x^{\bar \jmath} - V \right] \ ,$$ which slightly differs from the action given in . The point is that from the heterotic viewpoint a different symplectic basis is more natural. More precisely, the gauge field $A_1$ is dualized relative to the formalism used in the appendix, which is the one we use for M-theory. In this basis the prepotential ${\mathcal{F}}$ does not exist but its derivatives are well defined [@CCDF; @dWKLL; @AFGNT]. So let us carefully go through the terms.
The non-trivial covariant derivatives in when we turn on the fluxes are given by $$\label{cdh}
\begin{aligned}
D t = & ~ \partial t - \sqrt 2\, n^a f^a A^1 + f^a A^a \; ,\\
D n^a = & ~ \partial n^a - \tfrac{1}{\sqrt 2}\, f^a (A^0 + u A^1) \; ,
\end{aligned}$$ which, using , corresponds to the Killing vectors $$\label{kvh}
k_0 = \tfrac{1}{\sqrt 2}\, f^a \partial_a \; , \qquad
k_1 = \tfrac{1}{\sqrt 2}\, f^a u\, \partial_a + \sqrt 2\, n^a f^a \partial_t \; ,
\qquad
k_a = - f^a \partial_t \ .$$ Finally, the metric $g_{ij}$ in is special Kähler and can be derived from .
As explained in appendix \[vshet\], the gauge couplings $I_{IJ}, R_{IJ}$, which are given in , cannot be derived directly from . In the ungauged case ($f^a=0$) one needs to perform an electric-magnetic duality transformation on the symplectic vector $X^I, {\mathcal{F}}_I$ given by $X^1 \to -{\mathcal{F}}_1$ and ${\mathcal{F}}_1 \to X^1$. Using this transforms the gauge couplings $I_{IJ}, R_{IJ}$ into a form consistent with and while the Kähler potential is left invariant. For the gauged case ($f^a\neq0$) this transformation is not straightforward and generates precisely a term of the form as we will see in the next subsection.
The non-Abelian field strengths in the heterotic basis are given by $$\begin{aligned}
\label{hetfs}
F^0 & = & d A^0 \; , {\nonumber}\\
F^1 & = & d A^1 \; , {\nonumber}\\
F^2 & = & d A^2 + f^a A^a \wedge A^1 \; , \\
F^3 & = & d A^3 - f^a A^a \wedge A^0 \; ,{\nonumber}\\
F^a & = & d A^a - f^a A^0 \wedge A^1 \; . {\nonumber}\end{aligned}$$ The equations can be understood as follows: recall that (when we do not turn on any non-trivial fields) $A^0$ and $A^1$ are linear combinations of $g_{\mu 4}$ and $g_{\mu 5}$, while $A^2$ and $A^3$ are linear combinations of $B_{\mu 4}$ and $B_{\mu 5}$. The non-Abelian terms in $F^2$ and $F^3$ follow from when including off-diagonal metric elements in the contractions. The non-Abelian term in $F^a$ arises just from off-diagonal contractions in the standard six dimensional kinetic term of $F^a$. By comparing with we see that the non-vanishing structure constants are $$\label{fvh}
f^2_{a1} = - f^3_{a0} = f^a_{01} = f^a\ .$$ Note that there is a slight subtlety when one takes the Killing vectors as given in and checks the consistency of with . The reason is that the structure constants correspond to a Lie algebra generated by $(T_0, ~T_1,~ T_2,~ T_3,~ T_a)$ obeying $$\label{alg}
\left[ T_0 , T_1 \right] = f^a T_a \; , \qquad \left[ T_0 , T_a \right] =
f^a T_3 \; , \qquad \left[ T_a , T_1 \right] = f^a T_2 \; ,$$ with all the other commutators vanishing. We see that $T_2$ and $T_3$ are central elements of the algebra and can therefore consistently be set to zero. This is precisely what happens in our case: the Killing vectors $k_2$ and $k_3$ vanish in , and therefore the last two commutators in are zero even though the corresponding structure constants are non-zero. This situation is encountered frequently in gauged supergravities; see for example [@hull1; @GRZ].[^15]
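Both statements, the Jacobi identity for the structure constants and the closure of the Killing vectors on the algebra with $k_2=k_3=0$, can be verified symbolically (a sketch with a single index $a$; the relative sign in $[k_0,k_1]=-f^a k_a$ reflects the usual opposite-sign convention between Killing vectors and abstract generators):

```python
import sympy as sp

fa = sp.Symbol('f')
ngen = 5                                  # T_0, T_1, T_2, T_3 and a single T_a
C = [[[sp.Integer(0)]*ngen for _ in range(ngen)] for _ in range(ngen)]
def setC(i, j, k, val):                   # [T_i, T_j] = C[i][j][k] T_k
    C[i][j][k], C[j][i][k] = val, -val
setC(0, 1, 4, fa)                         # [T_0, T_1] = f T_a
setC(0, 4, 3, fa)                         # [T_0, T_a] = f T_3
setC(4, 1, 2, fa)                         # [T_a, T_1] = f T_2

# Jacobi identity for the structure constants
for i in range(ngen):
    for j in range(ngen):
        for k in range(ngen):
            for l in range(ngen):
                J = sum(C[i][j][m]*C[m][k][l] + C[j][k][m]*C[m][i][l]
                        + C[k][i][m]*C[m][j][l] for m in range(ngen))
                assert sp.expand(J) == 0

# Killing vectors on (u, t, n); k_2 = k_3 = 0 represent the central elements
u, t, n = sp.symbols('u t n')
coords = (u, t, n)
k0 = {n: fa/sp.sqrt(2)}
k1 = {n: fa*u/sp.sqrt(2), t: sp.sqrt(2)*n*fa}
ka = {t: -fa}
def bracket(X, Y):                        # Lie bracket of vector fields
    return {c: sp.simplify(
        sum(X.get(d, 0)*sp.diff(Y.get(c, 0), d) - Y.get(d, 0)*sp.diff(X.get(c, 0), d)
            for d in coords)) for c in coords}
b01 = bracket(k0, k1)
# closure on the algebra with k_2 = k_3 = 0: [k_0, k_1] = -f k_a
assert all(sp.simplify(b01[c] + fa*ka.get(c, 0)) == 0 for c in coords)
assert all(val == 0 for val in bracket(k0, ka).values())   # [k_0, k_a] = f k_3 = 0
assert all(val == 0 for val in bracket(ka, k1).values())   # [k_a, k_1] = f k_2 = 0
```

The two brackets involving the central generators indeed vanish identically on the scalar manifold, as required for the consistent truncation $k_2=k_3=0$.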
Finally, the potential in the action is given by the standard formula with the Killing vectors inserted.
Comparison to M-theory {#mcomparison}
----------------------
In this section we wish to compare the heterotic flux compactification derived in the previous subsections, with the M-theory compactification of the previous section. For this we have to remember that in the ungauged case the map between heterotic and type IIA theories involves the non-trivial symplectic rotation . On the gauge fields this translates into the map $$\begin{aligned}
\label{Aid}
A^0_{\mathrm{het}} & \equiv & -A^2_{\mathrm{IIA}} \; , {\nonumber}\\
A^1_{\mathrm{het}} & \equiv & A^0_{\mathrm{IIA}} \; ,{\nonumber}\\
A^2_{\mathrm{het}} & \equiv & A^3_{\mathrm{IIA}} \; , \\
A^3_{\mathrm{het}} & \equiv & \tilde A^1_{\mathrm{IIA}} \; , {\nonumber}\\
A^a_{\mathrm{het}} & \equiv & \sqrt 2 A^a_{\mathrm{IIA}} \; , {\nonumber}\end{aligned}$$ where $\tilde A^1$ denotes the electric-magnetic dual of the vector field $A^1$ which appears in the type IIA compactification.
In order to compare the low-energy effective actions, we need to insert the ${M}_i^j$ into eq. and compare the resulting covariant derivatives and Killing vectors to the heterotic side as given in . We immediately see that there is no perfect match between all the M-theory parameters and the heterotic fluxes that we discussed thus far, and we will return to this point later.
However, let us first see for which subset of the M-theory torsion parameters the heterotic flux can be recovered. Indeed, choosing $$\begin{aligned}
\label{hetpar}
m_2= m_3= m_a = m_b^a = 0\ ,\end{aligned}$$ and leaving only $\tilde m_a\neq 0$ in eq. results in the non-trivial covariant derivatives $$\label{cdIIA}
\begin{aligned}
D_\mu x^3 = \partial_\mu x^3 + \tilde m_a(x^a A^0_\mu - A^a_\mu) \; , \\
D_\mu x^a = \partial_\mu x^a + \tfrac12 \tilde m_a (x^2 A^0_\mu - A^2_\mu
) \; ,
\end{aligned}$$ or equivalently the Killing vectors $$\begin{aligned}
\label{kspecial}
k_0^3 = - x^a \tilde m_a\ ,\qquad k_0^a = - \tfrac12 x^2 \tilde m_a\
,\qquad 2k_2^a = k_a^3=\tilde m_a \ .\end{aligned}$$ Comparison with eq. together with the identifications and shows a perfect match if we identify $$\label{fluxid}
\tilde m_a |_{\mathrm{IIA}} = - \sqrt2 f^a |_{\mathrm{heterotic}} \; .$$
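The closure of the Killing vectors can be cross-checked symbolically. A sketch (sympy; the index $a$ is truncated to the two sample values $a=4,5$, the coordinates are ordered as $(x^2,x^3,x^4,x^5)$, and `mt4`, `mt5` stand for $\tilde m_a$); the direct computation gives $[k_0,k_2]=\tfrac12\tilde m_a k_a$ as the only non-vanishing bracket, with all brackets of constant vectors vanishing:

```python
import sympy as sp

# Coordinates ordered as (x^2, x^3, x^4, x^5); mt4, mt5 stand for tilde-m_a.
x2, x3, x4, x5 = sp.symbols('x2 x3 x4 x5')
mt4, mt5 = sp.symbols('mt4 mt5')
coords = [x2, x3, x4, x5]

def bracket(k, l):
    """Lie bracket of two vector fields given as component lists."""
    return [sum(k[j]*sp.diff(l[i], coords[j]) - l[j]*sp.diff(k[i], coords[j])
                for j in range(4)) for i in range(4)]

half = sp.Rational(1, 2)
# Components (k^2, k^3, k^4, k^5) read off from the Killing vectors above:
k0 = [0, -(x4*mt4 + x5*mt5), -half*x2*mt4, -half*x2*mt5]
k2 = [0, 0, half*mt4, half*mt5]
k4 = [0, mt4, 0, 0]
k5 = [0, mt5, 0, 0]

# Closure: [k_0, k_2] = (1/2) tilde-m_a k_a; all other brackets vanish.
lhs = bracket(k0, k2)
rhs = [half*(mt4*u + mt5*v) for u, v in zip(k4, k5)]
assert all(sp.simplify(u - v) == 0 for u, v in zip(lhs, rhs))
assert all(c == 0 for c in bracket(k0, k4) + bracket(k0, k5) + bracket(k2, k4))
print("Killing vectors close")
```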
We can similarly compare the field strengths. Inserting eq. into we arrive at $$\begin{aligned}
\label{FIIA}
F^3 & = d A^3 + \tilde m_a A^0 \wedge A^a \; , \\
F^a & = d A^a - \tfrac12 \tilde m_a A^0 \wedge A^2 \; .
\end{aligned}$$ Comparing with eq. using eqs. and we see that the field strengths $F^3$ and $F^a$ above precisely correspond to $F^2$ and $F^a$ on the heterotic side. However, $F^1$ on the type IIA/M-theory side is Abelian while its counterpart (via ), $F^3$, on the heterotic side is non-Abelian. On the other hand, the M-theory side has an additional term (the last term in ) in the low-energy effective action. The reason for this mismatch is the fact that the two actions are computed in different symplectic frames. In the ungauged case (i.e. for $\tilde m_a=0$) one easily identifies a symplectic rotation which connects the two frames. In the gauged case (i.e. for $\tilde m_a\neq 0$) this is less straightforward and will occupy us for the rest of this section.[^16]
Let us first recall that the presence of the last term in eq. was due to the fact that the prepotential was not invariant under the gauge transformation . However in the heterotic frame all terms in eq. are invariant and this term is absent. For the choice of parameters the last term in eq. becomes (up to a total derivative) $$\label{AAF}
- \tfrac12 \tilde m_a A^2 \wedge A^a \wedge dA^1 \; .$$ In order to have the two sides match we have to exchange the gauge field $A^1$ with its magnetic dual.[^17] This is indeed possible as the gauge field $A^1$ appears only via its (Abelian) field strength $F^1=d A^1$ as can be seen from eqs. and . The easiest way to see how to do the dualization is to add a Lagrange multiplier $ - \tfrac12 F^1 \wedge d
{\tilde A}_1$ which enforces the Bianchi identity of $F^1$, and ${\tilde A}_1$ will become the magnetic dual of the gauge field $A^1$. The equation of motion for $F^1$ then reads $$\label{Feom}
\tfrac12\, \mathrm{Im} {\mathcal{N}}_{1J} * F^J + \tfrac12\, \mathrm{Re} {\mathcal{N}}_{1J} F^J
- \tfrac12 \tilde m_a A^2 \wedge A^a - \tfrac12 d {\tilde A}_1 = 0 \; .$$ Defining now the magnetic dual field strength $G_1$ as $$\label{G1}
G_1 = d {\tilde A}_1 + \tilde m_a A^2 \wedge A^a \; ,$$ the equation of motion for $F^1$ becomes $$\label{mgfs}
\tfrac12\, G_1 ~ = ~
\tfrac12\, {\textrm{Im} \,}{\mathcal{N}}_{1J} * F^J + \tfrac12\, {\textrm{Re} \,}{\mathcal{N}}_{1J} F^J
~ \equiv ~ \frac{\partial {\mathcal{L}}_{N=2}}{\partial F^1}\ \; ,$$ where ${\mathcal{L}}_{N=2}$ denotes the generic $N=2$ Lagrangian . This equation is precisely the definition of magnetic dual field strength in $N=2$ supergravities and from here on we can apply the general dualization procedure and transform the matrix of gauge couplings ${\mathcal{N}}$ as in with the matrices $U$, $V$, $Z$ and $W$ chosen such that $F^1
\to G_1$ in .
Clearly, now $G_1$ defined in eq. can be mapped to the heterotic field strength $F^3$ from eq. , via eqs. and . This ends the proof that the low-energy theories obtained from compactifying heterotic strings on $K3 \times T^2$ with fluxes turned on along $T^2$ and from compactifying M-theory on a seven-dimensional manifold with $SU(3)$ structure with only the fluxes $\tilde m_a$ non-vanishing, are indeed the same.
So far we discussed the duality for the parameter choice . However, our discussion in the previous section makes it clear that all the parameters $M^i_j$ on the M-theory side, which give rise to consistent backgrounds in the full M-theory[^18], correspond to $SO(1,n_V^{(5)}-1,{\bf Z})$ monodromies, and they can be described by such monodromies on the heterotic side as well. The specific monodromy we discussed above is simple on the heterotic side since it does not involve the metric: it is just a shift of the Wilson lines $A_5^{(i)}$ around the torus $T^n$ on which they live. Monodromies in an $SO(n,{\bf Z})$ subgroup of $SO(1,n_V^{(5)}-1,{\bf Z})$ may be identified as $SL(n,{\bf Z})$ transformations on this torus, which mix the various gauge fields and scalars; these were denoted by $M^a_b$ above. Generic monodromies (involving $m_2$, $m_3$ and $m_a$) do not have a purely geometrical description [@LMM]. For instance, the $m_a$ parameters are related by a T-duality (inverting the radius of one of the circles) to the $\tilde m_a$ parameters, so they may be viewed as a variation of the heterotic gauge fields $A^a$ (similar to ) along the T-dual circle. However, this “T-dual flux” does not have a geometrical description in the original heterotic language. Finally, note that a background with $m_2 + m_3 \ne 0$ is not consistent as it involves a twist with an element of $SO(1,1, {\bf Z})$ which is not part of the U-duality group in five dimensions. This can also be seen from the heterotic side as it would make the heterotic dilaton charged, which has not been observed so far in perturbation theory.
Conclusions
===========
In this paper we studied M-theory compactifications on seven-dimensional manifolds with $SU(3)$-structure. Specifically we considered a class of such manifolds which can be seen as Calabi–Yau threefolds fibred over a circle. The fibration structure is determined by a specific twist of the second cohomology of the Calabi–Yau as we go around the circle. The consistency of the procedure requires that a discrete isometry in the Calabi–Yau moduli space exists (which is an element of the U-duality group of M-theory compactified on the Calabi-Yau manifold). This is guaranteed for $K3$-fibered Calabi–Yau manifolds which correspond to backgrounds that are dual to the heterotic string compactified on $K3\times T^2$.
Since in such compactifications the second cohomology of the Calabi–Yau manifold governs the vector multiplet sector, the twisting leads to a gauged supergravity where a subset of the isometries of the vector multiplet moduli space is promoted to local gauge symmetries. A novel feature is that the Kähler moduli are charged, and not only their axionic superpartners as usually happens in $N=2$ string compactifications. Moreover, this gauging turns out to be non-Abelian, which so far had not been obtained in (smooth) compactifications of type IIA string theory or M-theory.
The fact that this gauging should exist is expected from the heterotic – type IIA duality. In heterotic $N=2$ backgrounds arising from $K3\times T^2$ compactifications with specific background fluxes, only the vector multiplets get charged and the potential has no dependence on the hypermultiplets. However, viewed from the dual type IIA perspective, the masses of the vector fields contain negative powers of the type IIA string coupling. Therefore, in order to consistently keep such states in the effective theory and at the same time ignore the KK states, one has to make sure that the type IIA string coupling is large relative to the size of the Calabi–Yau manifold. This forced us into the M-theory regime, and indeed the duals of the heterotic backgrounds were found among the M-theory backgrounds described above.
The general twisted compactification on the M-theory side contains additional parameters which do not map to fluxes on the heterotic side. However, since we can interpret all such compactifications as twists of the five dimensional theory (obtained from M-theory on the Calabi-Yau, or equivalently from the heterotic string theory on $K3\times S^1$) by an element of the heterotic T-duality group, they can all be described as T-folds on the heterotic side. It would be interesting to study these backgrounds further; work along these lines is in progress [@LMM].
**Acknowledgments**
This work was supported by G.I.F., the German-Israeli Foundation for Scientific Research and Development. The work of OA and MB was supported in part by the Israel-U.S. Binational Science Foundation, by a center of excellence supported by the Israel Science Foundation (grant number 1468/06), by a grant (DIP H52) of the German Israel Project Cooperation, by the European network MRTN-CT-2004-512194, by Minerva, and by the Einstein-Minerva Center for Theoretical Physics. The work of AM was supported by the FP6 Marie Curie Research Training Networks, the European Union 6th framework program MRTN-CT-2004-503069 “Quest for unification”, MRTN-CT-2004-005104 “ForcesUniverse”, MRTN-CT-2006-035863 “UniverseNet”, and the Deutsche Forschungsgemeinschaft (DFG) in the SFB-Transregio 33 “The Dark Universe”. The work of JL was supported by the European Union 6th framework program MRTN-CT-2004-503069 “Quest for unification”, and the Deutsche Forschungsgemeinschaft (DFG) in the SFB 676 “Particles, Strings and the Early Universe”.
OA and MB would like to thank Albion Lawrence for useful discussions. JL and AM thank Ron Reid-Edwards, Thomas Grimm, Danny Martinez, Eran Palti, Bastiaan Spanjaard, Daniel Waldram and Marco Zagermann for helpful conversations. JL thanks Chris Hull and the Institute for Mathematical Sciences, Imperial College London, for financial support and the kind hospitality during part of this work.
Vector multiplets coupled to $N=2$ supergravity {#sg}
===============================================
This appendix is a short review of $N=2$ supergravity in four dimensions [@dWvP; @N=2review]. A generic spectrum contains the gravitational multiplet, $n_V$ vector multiplets, $n_H$ hypermultiplets and $n_T$ tensor multiplets. In this paper we are interested only in the vector multiplet sector and therefore we discard the hyper- and tensor-multiplets.
The vector multiplets contain $n_V$ complex scalars $x^i,
i=1,\ldots,n_V$, which span a special Kähler manifold ${\mathcal M}_V$. This implies that the Kähler potential $K$ is not an arbitrary real function but is determined in terms of a holomorphic prepotential ${\mathcal{F}}$ according to [@dWvP] $$\label{Kspecial}
K=-\ln\Big[i \bar{X}^{I} (\bar x) {\mathcal{F}}_{I}(X)
- i X^{I} (x)\bar{{\mathcal{F}}}_{I}(\bar{X})\Big] \ .$$ The $X^{I}, I=0,\ldots, n_V$ are $(n_V+1)$ holomorphic functions of the scalars $x^i$, and ${\mathcal{F}}_{I}$ abbreviates the derivative, i.e. ${\mathcal{F}}_{I}\equiv
\frac{\partial {\mathcal{F}}(X)}{\partial X^{I}} $. Furthermore ${\mathcal{F}}(X)$ is a homogeneous function of degree $2$ in $X^{I}$, i.e. $X^{I} {\mathcal{F}}_{I}=2 {\mathcal{F}}$.
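These homogeneity properties are easy to check on a concrete example. A sketch (sympy) using the cubic prepotential ${\mathcal{F}} = -X^1 X^2 X^3/X^0$, a sample choice for illustration only:

```python
import sympy as sp

# Sample cubic prepotential F = -X^1 X^2 X^3 / X^0 (illustrative choice).
X = sp.symbols('X0:4')
F = -X[1]*X[2]*X[3]/X[0]

FI = [sp.diff(F, Xi) for Xi in X]                 # F_I = dF/dX^I
euler = sum(Xi*FIi for Xi, FIi in zip(X, FI))
assert sp.simplify(euler - 2*F) == 0              # X^I F_I = 2 F

# The F_I are homogeneous of degree one, as derivatives of a degree-two F:
t = sp.symbols('t')
scaled = {Xi: t*Xi for Xi in X}
assert all(sp.simplify(FIi.subs(scaled) - t*FIi) == 0 for FIi in FI)
print("homogeneity verified")
```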
The bosonic part of the (ungauged) $N=2$ action for vector multiplets is given by $$\label{asg}
S = \int \Big[ \frac12 R ^* {\bf 1} - g_{i\bar\jmath} dx^i \wedge * d
{\bar x}^{\bar\jmath} + \frac{1}{4}\, {\textrm{Im} \,}{\mathcal{N}}_{IJ} F^I\wedge * F^{J}
+ \frac{1}{4} \, {\textrm{Re} \,}{\mathcal{N}}_{IJ} F^I \wedge F^J
\Big] \ ,$$ where $g_{i\bar\jmath}=\partial_i\partial_{\bar\jmath} K$. In the ungauged case the field strengths are Abelian, $F^I = dA^I$, and the matrix of gauge couplings is given by $$\label{Ndef}
{\cal N}_{IJ} = \bar {\mathcal{F}}_{IJ} +2i\ \frac{\mbox{Im} {\mathcal{F}}_{IK}\mbox{Im}
{\mathcal{F}}_{JL} X^K X^L}{\mbox{Im} {\mathcal{F}}_{LK} X^K X^L} \ .$$ The equations of motion of the action (\[asg\]) are invariant under generalized electric-magnetic duality transformations. From (\[asg\]) one derives the equations of motion $$\label{eom}
\frac{\partial {\mathcal{L}}}{\partial A^I} = \tfrac12 d {G}_I = 0 \ , \qquad
{G}_I \equiv 2 \frac{\partial {\mathcal{L}}}{\partial F^I} = {\textrm{Re} \,}{\mathcal{N}}_{IJ} {F}^J + {\textrm{Im} \,}{\mathcal{N}}_{IJ} *F^J \, ,$$ while the Bianchi identities read $$d {F}^I = 0\ .$$ These equations are invariant under the generalized duality rotations[^19] $$\begin{aligned}
\label{FGdual}
F^{I}&\to&
U^I{}_J\, F^{J}+Z^{IJ}\,G_{ J}\ ,{\nonumber}\\
G_I&\to& V_I{}^J\,G_{J}+W_{IJ}\,F^{J}\ ,\end{aligned}$$ where $U$, $V$, $W$ and $Z$ are constant, real, $(n_V+1)\times(n_V+1)$ matrices which obey $$\begin{aligned}
\label{spc2}
U^{\rm T} V- W^{\rm T} Z &=& V^{\rm T}U - Z^{\rm T}W =
{\bf 1}\, ,{\nonumber}\\
U^{\rm T}W = W^{\rm T}U\,, && \quad Z^{\rm T}V= V^{\rm T}Z\ .\end{aligned}$$ Together they form the $(2n_V+2)\times(2n_V+2)$ symplectic matrix $$\label{uvzwg}
{\cal O}\ = \left(
\begin{array}{cc}
U & Z \\
[1mm] W & V
\end{array}
\right) \, .$$ Thus $(F^I,G_I)$ forms a $(2n_V+2)$-dimensional symplectic vector. Similarly, $(X^I,{\mathcal{F}}_I)$ transforms as a symplectic vector under (\[FGdual\]). The Kähler potential (\[Kspecial\]) is invariant under this symplectic transformation, while the matrix ${\mathcal{N}}$ transforms according to $$\label{nchange}
{\mathcal{N}}\to (V {\mathcal{N}}+ W) \,(U+ Z {\mathcal{N}})^{-1} \,.$$ The isometries of the scalar manifold ${\mathcal M}_V$ are global invariances of the scalar field sector, which can be “gauged” by mixing them with the local symmetries. These isometries are generated by holomorphic Killing vectors $k_I^i(x)$ via $$\label{kdef}
\delta x^i \ = \ \Lambda^I k_I^i(x) \ .$$ The $k_I^i(x)$ satisfy the Killing equation which in $N=2$ supergravity can be solved in terms of a Killing prepotential $P_I$ $$\label{Pdef}
k_I^i(x) = g^{i\bar j} \partial_{\bar j} P_I\; .$$ Gauging the isometries (\[kdef\]) requires the replacement of ordinary derivatives by covariant derivatives in the action $$\label{gaugeco}
\partial_\mu x^i \to {D}_\mu x^i = \partial_\mu x^i - k_I^i A_\mu^I\ ,$$ and the field strengths take the form $$\begin{aligned}
\label{strconA}
F^I = d A^I + f^I_{JK}A^J\wedge A^K \ .\end{aligned}$$ Consistency requires $$\label{commutator}
\big[k_I, k_J \big]\ =\ f_{IJ}^L\ k_L\; ,$$ where $k_I = k_I^j\partial_j$. Furthermore the potential $$\label{pot}
V = 2 e^K X^I\bar X^J g_{\bar \imath j}\, k_I^{\bar\imath} k_J^j$$ has to be added to the action in order to preserve supersymmetry.[^20] The bosonic part of the action of gauged $N=2$ supergravity is then given by $$\label{agsg}
S = \int \Big[ \frac12 R ^* {\bf 1} - g_{i{\bar\jmath}} Dx^i \wedge * D
{\bar x}^{\bar\jmath} + \frac{1}{4}\, {\textrm{Im} \,}{\mathcal{N}}_{IJ} F^I\wedge * F^{J}
+ \frac{1}{4} \, {\textrm{Re} \,}{\mathcal{N}}_{IJ} F^I \wedge F^J -V
\Big] \ .$$ The symplectic invariance of the ungauged theory is generically broken since the action now explicitly depends on the gauge potentials $A^I$ through the covariant derivatives $Dx^i$ and the non-Abelian field strengths $F^I$.
There is yet a further generalization of the above setup which was discussed in [@dWvP]. The isometries considered above need not leave the prepotential ${\mathcal{F}}$ invariant. For example, consider an isometry which leads to a change in the prepotential of the type $$\label{dF}
\delta {\mathcal{F}}= \Lambda^I C_{IJK} X^J X^K \; ,$$ for some real parameters $C_{IJK}$. Obviously, the imaginary part of the second derivative of this variation vanishes. From its definition we see that the imaginary part of the gauge coupling matrix $\mathrm{Im} {\mathcal{N}}$ is left invariant. However, $\mathrm{Re}{\mathcal{N}}$ does change, and so the action as defined in is not invariant. In order to restore gauge invariance, the following term has to be added to the action [@dWvP] $$\label{dL}
S \to S+ \int \tfrac13 C_{IJK} A^I \wedge A^J \wedge (d A^K - \frac38 f_{LM}^K
A^L \wedge A^M) \; .$$
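The statement that $\mathrm{Im}{\mathcal{N}}$ is invariant while $\mathrm{Re}{\mathcal{N}}$ shifts can be verified symbolically. A sketch (sympy; the STU-type prepotential and the sample point are illustrative choices), checking the $I=J=K=1$ component of the variation with a real constant $c$:

```python
import sympy as sp

# Check, for a sample prepotential at a sample point, that the real shift
# delta-F = c (X^1)^2 leaves Im N invariant and only shifts Re N.
X = sp.symbols('X0:4')
c = sp.Rational(3, 7)
point = {X[0]: 1, X[1]: 1 + 2*sp.I, X[2]: sp.I, X[3]: 2 + sp.I}

def gauge_couplings(F):
    """The matrix N_IJ, evaluated numerically at the sample point."""
    FIJ = sp.Matrix(4, 4, lambda I, J: sp.diff(F, X[I], X[J])).subs(point)
    Xv = sp.Matrix([point[x] for x in X])
    ImF = (FIJ - FIJ.conjugate())/(2*sp.I)
    v = ImF*Xv                                # (Im F)_{IK} X^K
    den = (Xv.T*ImF*Xv)[0]                    # (Im F)_{LK} X^K X^L
    return FIJ.conjugate() + 2*sp.I*(v*v.T)/den

F0 = -X[1]*X[2]*X[3]/X[0]
dN = sp.simplify(gauge_couplings(F0 + c*X[1]**2) - gauge_couplings(F0))

expected = sp.zeros(4, 4)
expected[1, 1] = 2*c                          # a purely real shift of N_11
assert sp.simplify(dN - expected).is_zero_matrix
print("Im N invariant; only Re N_11 shifts")
```

The second derivative of $c (X^1)^2$ is the real constant $2c$, so only the $\bar{\mathcal{F}}_{IJ}$ term of the gauge coupling matrix is affected.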
The vector multiplet sector of heterotic string compactifications on $K3 \times T^2$ {#vshet}
====================================================================================
In this appendix we review the structure of the vector multiplet sector of heterotic strings compactified on $K3 \times T^2$, following [@LM1]. For this setup, the vector multiplet sector is directly connected to the $T^2$ part of the compactification, while the $K3$ factor only breaks supersymmetry and may reduce the total number of vector multiplets. Therefore, for our purposes studying the $T^2$ factor will be enough. The initial non-Abelian gauge symmetry of the heterotic string is in general spontaneously broken to the maximal Abelian subgroup, and therefore we consider the resulting theory to be $N=2$ supergravity coupled to an arbitrary number $n_v$ of Abelian vector multiplets.
The vector fields in the vector multiplets have two origins: first, they can come from gauge fields in ten dimensions (and their number is arbitrary), and second, they arise as KK vector fields on the torus. In the latter class we have precisely four vector fields, two from the internal components of the metric – which we denote $A^0$ and $A^1$ – and two from the $B$-field – which we denote $A^2$ and $A^3$. One of these vector fields, or some combination of them, will be the graviphoton, while the rest will sit in vector multiplets. The vector fields from the first class we denote as $A^a$ ($a=4, \ldots , n_v$), and they are all part of vector multiplets.
The scalar fields in the vector multiplets span the coset space $$\label{vmshet}
{\mathcal M}_V = \frac{SU(1,1)}{U(1)} \otimes \frac{SO(2,n_v-1)}{SO(2) \times
SO(n_v-1)} \; .$$ The factor $SU(1,1)/U(1)$ corresponds to the dilaton and its superpartner, the axion dual to the four-dimensional $B$-field, while the second factor describes the scalar fields coming from the $T^2$ moduli (including the internal $B$-field) and from the internal components of the ten-dimensional gauge fields. These fields combine into the complex scalar fields $x^i = (s, u, t, n^a), ~ a=4, \ldots , n_v$, with $s$ being the heterotic dilaton $$\label{shet}
s = \frac{a}2 - \frac{i}2 e^{-\phi} \; ,$$ while the rest are given implicitly by $$\begin{aligned}
\label{utn}
A^a_1 & = & \sqrt2 \frac{n^a - \bar n^a}{u - \bar u} \; , \qquad A^a_2 =
\sqrt2 \frac{\bar u n^a - u \bar n^a}{u - \bar u} \; , {\nonumber}\\
B_{12} & = & \frac12 \left[(t + \bar t) - \frac{(n + \bar n)^a
(n- \bar n)^a}{u - \bar u} \right] \; ,\\
\sqrt G & = & - \frac{i}2
\left[ (t - \bar t) - \frac{(n- \bar n)^a (n - \bar n)^a}{u - \bar u}
\right] \; , {\nonumber}\\
G_{11} & = & \frac{2 i}{u - \bar u} \sqrt G \; , \qquad G_{12} = i \frac{u
+ \bar u}{u- \bar u} \sqrt G \; , {\nonumber}\end{aligned}$$ where $A^a_{1,2}$ denote the internal components of the gauge fields, $B_{12}$ is the internal $B$-field, while $G_{11}$ and $G_{12}$ are components of the torus metric and $G$ is its determinant.
From the $T^2$ compactification point of view, the dynamics of these fields is naturally described in terms of a $SO(2,n_v - 1)$ matrix $M^{IJ}$ which is given by $$\label{gcfhet}
M = \left(
\begin{array}[h]{ccc}
G^{-1} & - G^{-1} \hat B & - G^{-1} A \\
-{\hat B}^T G^{-1} & G + A^T A + {\hat B}^TG^{-1} \hat B & A + {\hat
B}^T G^{-1} A \\
-A^T G^{-1} & A^T + A^T G^{-1} \hat B & \mathbf{1}_{n_v-3} + A^T G^{-1} A
\\
\end{array}
\right) \; ,$$ where $\hat B_{ij} = B_{ij} + \tfrac12 A^a_i A^a_j$ with indices $i,j$ labeling the $T^2$ directions. The matrix $M$ as defined above leaves invariant the $SO(2,n_v-1)$ metric $$\label{eta}
\eta = \left(
\begin{array}[h]{ccc}
0 & \mathbf{1_2} & 0 \\
\mathbf{1_2} & 0 & 0 \\
0 & 0 & \mathbf{1_{n_v-3}} \\
\end{array} \right)$$ in that $M^{IJ} \eta_{JK} M^{KL} = \eta^{IL}$. Then, the kinetic terms of the moduli are given by $$\label{hetkin}
L_{kin} = \partial_\mu M^{IJ} \partial^\mu (M^{-1})_{IJ} \; ,$$ while the gauge kinetic function takes the form $$\label{gkfhet}
I_{IJ} \equiv {\rm Im} {\mathcal{N}}_{IJ} = \frac{s - \bar s}{2 i} (M^{-1})_{IJ} \; , \qquad
R_{IJ} \equiv {\rm Re} {\mathcal{N}}_{IJ} = - \frac{s + \bar s}2 \eta_{IJ} \; .$$ The connection to $N=2$ supergravity is not obvious in the above formulation. Moreover, it turns out that the natural symplectic basis in this case is one where no prepotential exists [@CCDF; @dWKLL; @AFGNT] and so the formulae of appendix \[sg\], and in particular the definition of the gauge coupling matrix , do not directly apply. However, one can explicitly compute using and and show that these kinetic terms can be derived from the Kähler potential $$\label{kpothet}
K = - \ln \left[i (\bar s- s) \left((u-\bar u) (t - \bar t) - (n-\bar n)^a
(n- \bar n)^a \right) \right] \; .$$ Moreover, one can show that using the general formalism of [@N=2review] the gauge coupling matrix can be obtained from the following holomorphic vector $$\label{holsec}
\left(X^I~ |~ F_I \right) = \left(-u,~ 1,~ t,~ ut - n^a n^a,~ \sqrt 2 n^a
~|\; - st,\; -s (ut - n^a n^a),~ su,\; -s,\; - \sqrt2 s n^a \right)\; ,$$ while, obviously, using this reproduces the Kähler potential . Alternatively, we can start from the type IIA prepotential with the projective coordinates given by $$\label{Xid}
X^0=1 \; , \quad X^1 = s \; , \quad X^2=u \; , \quad X^3 = t \; , \quad X^a
= n^a\; .$$ Using , one then computes the gauge coupling matrix ${\mathcal{N}}$. To go to the heterotic symplectic basis we perform the symplectic rotation with the matrices $U, ~V,~W,~Z$ in given by $$\begin{aligned}
\label{srot}
U & = & \left (
\begin{array}{ccccc}
0 & 0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 \\[.3cm]
0 & 0 & 0 & 0 & \sqrt2\; \mathbf{1_{n_v-3}} \\
\end{array}
\right) , \qquad
V= \left (
\begin{array}{ccccc}
0 & 0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 \\[.3cm]
0 & 0 & 0 & 0 & \tfrac1{\sqrt2} \mathbf{1_{n_v-3}} \\
\end{array}
\right) \\[.3cm]
& & \hspace{2cm}Z = - W = \left (
\begin{array}{ccccc}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\[.3cm]
0 & 0 & 0 & 0 & \mathbf{0_{n_v-3}} \\
\end{array}
\right)\; . {\nonumber}\end{aligned}$$ Note that these matrices are precisely the ones which transform the holomorphic section derived from the prepotential and into . Moreover, using the transformation of the gauge coupling matrix it is completely straightforward, but a bit tedious, to show that the gauge coupling matrix precisely reproduces . Finally, let us observe that since the matrices $Z$ and $W$ are non-vanishing this transformation is intrinsically a non-perturbative one in that it exchanges the gauge field $A^1$ with its magnetic dual, followed by certain relabelings and rescalings.
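As a quick numerical cross-check that the matrices above do form a symplectic transformation, one can verify the conditions $U^{\rm T}V - W^{\rm T}Z = \mathbf{1}$ etc. directly. A sketch (numpy; the $(n_v-3)$-dimensional $a$-block is truncated to a single entry for the test):

```python
import numpy as np

r2 = np.sqrt(2)
# The blocks U, V, Z, W above, with the a-block truncated to one entry.
U = np.array([[0, 0, -1, 0, 0],
              [1, 0,  0, 0, 0],
              [0, 0,  0, 1, 0],
              [0, 0,  0, 0, 0],
              [0, 0,  0, 0, r2]], dtype=float)
V = U.copy()
V[4, 4] = 1/r2
Z = np.zeros((5, 5))
Z[3, 1] = 1
W = -Z

I5 = np.eye(5)
# The symplectic conditions:
assert np.allclose(U.T @ V - W.T @ Z, I5)
assert np.allclose(V.T @ U - Z.T @ W, I5)
assert np.allclose(U.T @ W, W.T @ U) and np.allclose(Z.T @ V, V.T @ Z)

# Equivalently, the full matrix O preserves the symplectic form J.
O = np.block([[U, Z], [W, V]])
J = np.block([[np.zeros((5, 5)), I5], [-I5, np.zeros((5, 5))]])
assert np.allclose(O.T @ J @ O, J)
print("symplectic conditions verified")
```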
References
==========
For recents reviews see, for example,\
M. Graña, “Flux compactifications in string theory: A comprehensive review,” Phys. Rept. [**423**]{} (2006) 91 \[arXiv:hep-th/0509003\];\
M. R. Douglas and S. Kachru, “Flux compactification,” arXiv:hep-th/0610102;\
R. Blumenhagen, B. Körs, D. Lüst and S. Stieberger, “Four-dimensional string compactifications with D-branes, orientifolds and fluxes,” arXiv:hep-th/0610327;\
B. Wecht, “Lectures on Nongeometric Flux Compactifications,” Class. Quant. Grav. [**24**]{} (2007) S773 \[arXiv:0708.3984 \[hep-th\]\] and references therein.
J. P. Gauntlett, N. W. Kim, D. Martelli and D. Waldram, “Fivebranes wrapped on SLAG three-cycles and related geometry,” JHEP [**0111**]{} (2001) 018 \[arXiv:hep-th/0110034\];\
J. P. Gauntlett, D. Martelli, S. Pakis and D. Waldram, “G-structures and wrapped NS5-branes,” Commun. Math. Phys. [**247**]{} (2004) 421 \[arXiv:hep-th/0205050\];\
J. P. Gauntlett, D. Martelli and D. Waldram, “Superstrings with intrinsic torsion,” Phys. Rev. D [**69**]{}, 086002 (2004) \[arXiv:hep-th/0302158\].
S. Salamon, “Riemannian Geometry and Holonomy Groups”, Vol. 201 of [*Pitman Research Notes in Mathematics*]{}, Longman, Harlow, 1989;\
D. Joyce, “Compact Manifolds with Special Holonomy,” Oxford University Press, Oxford, 2000.
S. Chiossi and S. Salamon, “The Intrinsic Torsion of $SU(3)$ and $G_2$ Structures,” in *Differential geometry, Valencia, 2001*, pp. 115, arXiv:math.DG/0202282.
S. Kachru and C. Vafa, “Exact results for N=2 compactifications of heterotic strings,” Nucl. Phys. B [**450**]{}, 69 (1995) \[arXiv:hep-th/9505105\];\
S. Ferrara, J. A. Harvey, A. Strominger and C. Vafa, “Second Quantized Mirror Symmetry,” Phys. Lett. B [**361**]{}, 59 (1995) \[arXiv:hep-th/9505162\].
V. Kaplunovsky, J. Louis and S. Theisen, “Aspects of duality in N=2 string vacua,” Phys. Lett. B [**357**]{}, 71 (1995) \[arXiv:hep-th/9506110\];\
A. Klemm, W. Lerche and P. Mayr, “K3 Fibrations and heterotic type II string duality,” Phys. Lett. B [**357**]{} (1995) 313 \[arXiv:hep-th/9506112\];\
P. S. Aspinwall and J. Louis, “On the Ubiquity of K3 Fibrations in String Duality,” Phys. Lett. B [**369**]{}, 233 (1996) \[arXiv:hep-th/9510234\].
S. Gurrieri, J. Louis, A. Micu and D. Waldram, “Mirror symmetry in generalized Calabi–Yau compactifications,” Nucl. Phys. B [**654**]{} (2003) 61 \[arXiv:hep-th/0211102\];\
S. Gurrieri and A. Micu, “Type IIB theory on half-flat manifolds,” Class. Quant. Grav. [**20**]{} (2003) 2181 \[arXiv:hep-th/0212278\].
S. Fidanza, R. Minasian and A. Tomasiello, “Mirror symmetric SU(3)-structure manifolds with NS fluxes,” Commun. Math. Phys. [**254**]{} (2005) 401 \[arXiv:hep-th/0311122\];\
M. Grana, R. Minasian, M. Petrini and A. Tomasiello, “Supersymmetric backgrounds from generalized Calabi-Yau manifolds,” JHEP [**0408**]{} (2004) 046 \[arXiv:hep-th/0406137\];\
A. Tomasiello, “Topological mirror symmetry with fluxes,” JHEP [**0506**]{} (2005) 067
M. Graña, J. Louis and D. Waldram, “Hitchin functionals in N = 2 supergravity,” JHEP [**0601**]{} (2006) 008 \[arXiv:hep-th/0505264\];\
M. Grana, J. Louis and D. Waldram, “SU(3) x SU(3) compactification and mirror duals of magnetic fluxes,” JHEP [**0704**]{}, 101 (2007) \[arXiv:hep-th/0612237\].
R. D’Auria, S. Ferrara and M. Trigiante, “On the supergravity formulation of mirror symmetry in generalized Calabi-Yau manifolds,” Nucl. Phys. B [**780**]{} (2007) 28 \[arXiv:hep-th/0701247\].
G. Curio, A. Klemm, B. K[ö]{}rs and D. L[ü]{}st, “Fluxes in heterotic and type II string compactifications,” Nucl. Phys. B [**620**]{} (2002) 237 \[arXiv:hep-th/0106155\].
J. Louis and A. Micu, “Heterotic-type IIA duality with fluxes,” JHEP [**0703**]{} (2007) 026 \[arXiv:hep-th/0608171\].
J. Louis and A. Micu, “Type II theories compactified on Calabi–Yau threefolds in the presence of background fluxes,” Nucl. Phys. B [**635**]{}, 395 (2002) \[arXiv:hep-th/0202168\].
G. Dall’Agata, R. D’Auria, L. Sommovigo and S. Vaula, “D = 4, N = 2 gauged supergravity in the presence of tensor multiplets,” Nucl. Phys. B [**682**]{} (2004) 243 \[arXiv:hep-th/0312210\].
R. D’Auria, S. Ferrara, M. Trigiante and S. Vaula, “Gauging the Heisenberg algebra of special quaternionic manifolds,” Phys. Lett. B [**610**]{} (2005) 147 \[arXiv:hep-th/0410290\];\
R. D’Auria, S. Ferrara, M. Trigiante and S. Vaula, “Scalar potential for the gauged Heisenberg algebra and a non-polynomial antisymmetric tensor theory,” Phys. Lett. B [**610**]{} (2005) 270 \[arXiv:hep-th/0412063\].
D. Cassani and A. Bilal, “Effective actions and N=1 vacuum conditions from SU(3) x SU(3) compactifications,” JHEP [**0709**]{}, 076 (2007) \[arXiv:0707.3125 \[hep-th\]\];\
D. Cassani, “Reducing democratic type II supergravity on SU(3) x SU(3) structures,” arXiv:0804.0595 \[hep-th\].
J. Louis and A. Micu, “Heterotic string theory with background fluxes,” Nucl. Phys. B [**626**]{} (2002) 26 \[arXiv:hep-th/0110187\].
C. M. Hull, “Massive string theories from M-theory and F-theory,” JHEP [**9811**]{} (1998) 027 \[arXiv:hep-th/9811021\];\
A. Dabholkar and C. Hull, “Duality twists, orbifolds, and fluxes,” JHEP [**0309**]{}, 054 (2003) \[arXiv:hep-th/0210209\];\
C. M. Hull, “A geometry for non-geometric string backgrounds,” JHEP [**0510**]{} (2005) 065 \[arXiv:hep-th/0406102\];\
C. M. Hull and R. A. Reid-Edwards, “Flux compactifications of string theory on twisted tori,” J. Sci. Eng. [**1**]{} (2004) 411 \[arXiv:hep-th/0503114\];\
C. M. Hull and R. A. Reid-Edwards, “Flux compactifications of M-theory on twisted tori,” JHEP [**0610**]{} (2006) 086 \[arXiv:hep-th/0603094\].
N. Kaloper and R. C. Myers, “The O(dd) story of massive supergravity,” JHEP [**9905**]{}, 010 (1999) \[arXiv:hep-th/9901045\].
A. Micu, E. Palti and P. M. Saffin, “M-theory on seven-dimensional manifolds with SU(3) structure,” JHEP [**0605**]{} (2006) 048 \[arXiv:hep-th/0602163\].

E. Palti, “Aspects of moduli stabilisation in string and M-theory,” arXiv:hep-th/0608033.
M. Gunaydin, G. Sierra and P. K. Townsend, “Gauging The D = 5 Maxwell-Einstein Supergravity Theories: More On Jordan Algebras,” Nucl. Phys. B [**253**]{}, 573 (1985).

A. C. Cadavid, A. Ceresole, R. D’Auria and S. Ferrara, “Eleven-dimensional supergravity compactified on Calabi-Yau threefolds,” Phys. Lett. B [**357**]{}, 76 (1995) \[arXiv:hep-th/9506144\].

I. Antoniadis, S. Ferrara and T. R. Taylor, “N=2 Heterotic Superstring and its Dual Theory in Five Dimensions,” Nucl. Phys. B [**460**]{}, 489 (1996) \[arXiv:hep-th/9511108\].

A. Strominger, “Yukawa Couplings In Superstring Compactification,” Phys. Rev. Lett. [**55**]{} (1985) 2547;\
A. Strominger, “Special Geometry,” Commun. Math. Phys. [**133**]{} (1990) 163;\
P. Candelas and X. de la Ossa, “Moduli Space Of Calabi–Yau Manifolds,” Nucl. Phys. B [**355**]{}, 455 (1991).
G. Dall’Agata and N. Prezas, “N = 1 geometries for M-theory and type IIA strings with fluxes,” Phys. Rev. D [**69**]{} (2004) 066004 \[arXiv:hep-th/0311146\].
M. Dine and M. Graesser, “CPT and other symmetries in string / M theory,” JHEP [**0501**]{}, 038 (2005) \[arXiv:hep-th/0409209\].

J. Scherk and J. H. Schwarz, “Spontaneous Breaking Of Supersymmetry Through Dimensional Reduction,” Phys. Lett. B [**82**]{} (1979) 60;\
J. Scherk and J. H. Schwarz, “How To Get Masses From Extra Dimensions,” Nucl. Phys. B [**153**]{} (1979) 61.
S. Ferrara and S. Sabharwal, “Dimensional Reduction Of Type II Superstrings,” Class. Quant. Grav. [**6**]{} (1989) L77;\
“Quaternionic Manifolds For Type II Superstring Vacua Of Calabi-Yau Spaces,” Nucl. Phys. B [**332**]{} (1990) 317.
L. Andrianopoli, R. D’Auria, S. Ferrara, P. Fre and M. Trigiante, “R-R scalars, U-duality and solvable Lie algebras,” Nucl. Phys. B [**496**]{} (1997) 617 \[arXiv:hep-th/9611014\].
For a review of $N=2$ supergravity see, for example, L. Andrianopoli, M. Bertolini, A. Ceresole, R. D’Auria, S. Ferrara, P. Fre and T. Magri, “$N = 2$ supergravity and $N = 2$ super Yang-Mills theory on general scalar manifolds: Symplectic covariance, gaugings and the momentum map,” J. Geom. Phys. [**23**]{} (1997) 111 \[arXiv:hep-th/9605032\].
B. de Wit and A. Van Proeyen, “Potentials And Symmetries Of General Gauged N=2 Supergravity - Yang-Mills Models,” Nucl. Phys. B [**245**]{} (1984) 89.\
B. de Wit, P. G. Lauwers and A. Van Proeyen, “Lagrangians Of N=2 Supergravity - Matter Systems,” Nucl. Phys. B [**255**]{} (1985) 569.
A. K. Kashani-Poor and A. Tomasiello, “A stringy test of flux-induced isometry gauging,” Nucl. Phys. B [**728**]{} (2005) 135 \[arXiv:hep-th/0505208\].
L. Anguelova and K. Zoubos, “Five-brane instantons vs flux-induced gauging of isometries,” JHEP [**0610**]{} (2006) 071 \[arXiv:hep-th/0606271\].
A. Ceresole, R. D’Auria, S. Ferrara and A. Van Proeyen, “Duality transformations in supersymmetric Yang-Mills theories coupled to supergravity,” Nucl. Phys. B [**444**]{} (1995) 92 \[arXiv:hep-th/9502072\].
B. de Wit, V. Kaplunovsky, J. Louis and D. Lust, “Perturbative couplings of vector multiplets in N=2 heterotic string vacua,” Nucl. Phys. B [**451**]{} (1995) 53 \[arXiv:hep-th/9504006\].

I. Antoniadis, S. Ferrara, E. Gava, K. S. Narain and T. R. Taylor, “Perturbative Prepotential And Monodromies In N=2 Heterotic Superstring,” Nucl. Phys. B [**447**]{} (1995) 35 \[arXiv:hep-th/9504034\].

C. M. Hull, “New gauged N = 8, D = 4 supergravities,” Class. Quant. Grav. [**20**]{} (2003) 5407 \[arXiv:hep-th/0204156\].

M. Gunaydin, S. McReynolds and M. Zagermann, “The R-map and the coupling of N = 2 tensor multiplets in 5 and 4 dimensions,” JHEP [**0601**]{} (2006) 168 \[arXiv:hep-th/0511025\].
B. de Wit, H. Samtleben and M. Trigiante, “Magnetic charges in local field theory,” JHEP [**0509**]{}, 016 (2005) \[arXiv:hep-th/0507289\];\
M. de Vroome and B. de Wit, “Lagrangians with electric and magnetic charges of N=2 supersymmetric gauge theories,” JHEP [**0708**]{} (2007) 064 \[arXiv:0707.2717 \[hep-th\]\].
J. Louis, D. Martinez and A. Micu, in preparation.
[^1]: On leave from IFIN-HH Bucharest.
[^2]: A tensor multiplet can be dualized to a hypermultiplet or a vector multiplet, depending on the mass of the tensor.
[^3]: By gauging we mean that isometries of the scalar manifold are mixed into the gauge transformations, and not that new gauge fields are introduced.
[^4]: By $N=2$ we mean the minimal amount of supersymmetry possible in five dimensions, which reduces to $N=2$ in four dimensions.
[^5]: Here we only give the final result and refer the reader for further details to [@CCAF; @AFT].
[^6]: The same metric $g_{ij}$ will also appear in the four-dimensional effective action which we discuss in the next section. In this case it is the metric on a complex special Kähler manifold, since in $d=4$ the scalar fields in the vector multiplets are complex and furthermore they necessarily span a special Kähler manifold .
[^7]: By U-duality we broadly refer to the group of discrete gauge transformations of the theory. We implicitly assume that all discrete global symmetries are actually gauged [@Dine:2004dk].
[^8]: In the last section we noted that for compactifications which have a heterotic dual the U-duality group is $\Gamma({\bf Z}) = SO(1,h^{(1,1)}-2,{\bf Z})$, but the analysis of this section holds for arbitrary $\Gamma({\bf Z})$.
[^9]: The above Ansatz includes only zero modes, and therefore we omitted the off-diagonal components which involve one-forms on $CY_3$, since they lead to massive excitations.
[^10]: The couplings of the hypermultiplets in the $N=2$ low energy effective action can be found, for example, in [@FS; @LM2].
[^11]: We thank the referee of this paper for pointing out that is also a solvable Lie algebra (for a definition see, for example, [@Turin]).
[^12]: There is also a gravitational Chern-Simons term in $H^{het}$, which is of higher order in the Planck constant and will not play a role in our discussion.
[^13]: The details can be found in reference [@LM1].
[^14]: Compared to [@LM1] we have rescaled the metric by a factor $1/2$ and the gauge fields by a factor $1/\sqrt 2$ in order to agree with the conventions we use in type IIA compactifications.
[^15]: We thank Marco Zagermann for educating us on this subject and the referee of this paper for pointing out that for $T_2=T_3=0$ is also a nilpotent Lie algebra [@Turin].
[^16]: The following discussion should be straightforward in the framework of gauged supergravity as given in [@dWST].
[^17]: Recall that already in six dimensions, the duality between heterotic string theory on $T^4$ and type IIA string theory on $K3$ involves a dualization of the 2-form field.
[^18]: Namely, $e^M$ must be a member of the discrete U-duality group.
[^19]: This is often stated in terms of the self-dual and anti-self-dual parts of the field strength, $F^{\pm J}$, and the dual quantities $G^+_{I}\equiv{\cal N}_{IJ}F^{+J}$, $G^-_{I}\equiv\bar{\cal N}_{IJ}F^{-J}$.
[^20]: Note the factor 2 in front of the potential compared to [@N=2review], which comes from the different normalization we use in the action.
---
abstract: 'We study $D$ and $D_s$ mesons at finite temperature using an effective field theory based on chiral and heavy-quark spin-flavor symmetries within the imaginary-time formalism. Interactions with the light degrees of freedom are unitarized via a Bethe-Salpeter approach, and the $D$ and $D_s$ self-energies are calculated self-consistently. We generate dynamically the $D^*_0(2300)$ and $D_{s0}^*(2317)$ states, and study their possible identification as the chiral partners of the $D$ and $D_s$ ground states, respectively. We show the evolution of their masses and decay widths as functions of temperature, and provide an analysis of the chiral-symmetry restoration in the heavy-flavor sector below the transition temperature. In particular, we analyse the very special case of the $D$-meson, for which the chiral partner is associated with the double-pole structure of the $D^*_0(2300)$.'
address:
- 'Departament de Física Quàntica i Astrofísica and Institut de Ciències del Cosmos (ICCUB), Facultat de Física, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona, Spain'
- 'Institut für Theoretische Physik, Goethe Universität Frankfurt, Max von Laue Strasse 1, 60438 Frankfurt, Germany'
- 'Frankfurt Institute for Advanced Studies, Ruth-Moufang-Str. 1, 60438 Frankfurt am Main, Germany'
- 'Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans, 08193, Barcelona, Spain'
- 'Institut d’Estudis Espacials de Catalunya (IEEC), 08034 Barcelona, Spain'
author:
- Glòria Montaña
- Àngels Ramos
- Laura Tolós
- 'Juan M. Torres-Rincon'
bibliography:
- 'D-mesonChiralLetter.bib'
title: Impact of a thermal medium on $D$ mesons and their chiral partners
---
Charmed mesons, effective hadron theories, finite-temperature QFT, chiral symmetry, heavy-quark symmetry, chiral symmetry restoration
Introduction
============
The idea that chiral partners become degenerate above the chiral restoration temperature $T_\chi$ [@Hatsuda:1985eb; @Rapp:1999ej] has motivated a large number of works in which low-lying hadronic states of opposite parity have been studied in a thermal medium and their masses have been seen to merge at large temperatures $T>T_\chi$.
The canonical example resides in the light-meson sector, where the pseudoscalar isotriplet ($\pi$) and the scalar isoscalar ($\sigma$ meson) acquire similar masses above $T_\chi$. This system has been studied in the linear sigma model [@Bochkarev:1995gi], the (P)NJL model [@Klevansky:1992qe; @Florkowski:1993br; @Hansen:2006ee], the quark-meson model [@Tripolt:2013jra] and others. On the other hand, vector and axial-vector interactions, which have been studied in the (P)NJL model [@Sintes:2014lka] and the gauged linear-sigma model [@Pisarski:1995xu], for example, allow one to study the chiral-symmetry restoration of the $\rho$ and $a_1$ states [@Rapp:1999ej]. Opposite-parity diquarks also exhibit this degeneracy in the (P)NJL model [@Torres-Rincon:2015rma], and there are also indications from lattice-QCD calculations of the chiral restoration of opposite-parity baryons [@Aarts:2017rrl; @Aarts:2018glk].
In many of the theoretical models, the parity partners are fundamental degrees of freedom, e.g. $\pi$ and $\sigma$ in the linear sigma model [@Bochkarev:1995gi], and interactions in a thermal/dense medium dress them producing in-medium mass modifications. In another set of models, e.g. the NJL and PNJL model, the parity partners (either $0^+/0^-$ or $1^+/1^-$) are not part of the degrees of freedom of the Lagrangian, but are instead generated from few-body dynamics, like those implemented by the Bethe-Salpeter equation for a quark-antiquark pair. In this case, masses and decay widths seem to converge in the chirally-restored phase [@Hansen:2006ee].
All these models provide insights into the effects of chiral restoration, both below and above $T_\chi$. One should keep in mind, however, that—although well motivated by the QCD symmetries and dynamics—they are usually not the actual effective field theory (EFT) of QCD. In the light-meson sector, for instance, we know that the low-energy effective theory is chiral perturbation theory (ChPT) [@Gasser:1983yg]. It can lead to model-independent results, also at finite temperature. However, this approach is valid at low energies and temperatures, always below $T_\chi$, and only tentative indications of chiral-symmetry restoration can be expected from it.
Even if limited to $T<T_\chi$, this chiral approach is quite interesting because a combined picture of the chiral partners comes into play. The negative-parity partner $\pi$ is a degree of freedom of the Lagrangian [@Gasser:1983yg], whose vacuum mass is dressed by interactions with the whole set of (pseudo-) Goldstone bosons. However, the positive-parity partner ($\sigma$) is not part of the Lagrangian. In unitarized versions of ChPT [@Dobado:1989qm; @Dobado:1996ps] it can be associated with the $J^\pi=0^+$ resonant state, appearing in the scalar-isoscalar channel of the meson-meson scattering amplitude. This state—experimentally identified with the scalar $f_0(500)$ of the Particle Data Group [@Tanabashi:2018oca]—can be generated at finite temperature as well [@Dobado:2002xf; @Rapp:1995fv]. This scenario, where one of the chiral companions is a degree of freedom of the theory and the other a dynamically-generated state, is the one we consider in this work.
In this letter we focus on light-heavy meson systems and look for thermal effects on the properties of the $D$ and $D_s$ mesons and of their chiral partners. To this end, we extend previous results in a more complete and consistent approach using a hadronic EFT. The spirit is similar to Ref. [@Cleven:2017fun], where a chiral $SU(4)$ effective Lagrangian at leading order (LO) was used (see also [@Mishra:2003se] for a use of the same EFT). In the present work, however, we construct the interactions from an effective Lagrangian based on $SU(3)$ chiral and heavy-quark symmetries. This effective theory at next-to-leading order (NLO) has been well studied in vacuum, and its low-energy parameters have been fixed by lattice-QCD fits [@Guo:2009ct; @Liu:2012zya; @Guo:2018tjx]. The dynamics of the light-heavy meson systems is computed at finite temperature in the framework of the imaginary-time formalism (ITF), and we use unitarity and self-consistency as our guiding principles.
An important goal of this work is to study the spectroscopy of the heavy-light sector at finite temperature. This means that we are interested in accessing not only the masses and decay widths of the $D$ and $D_s$ mesons, but also those of the states which appear dynamically upon unitarization, namely the $D^*_0(2300)$ and $D_{s0}^*(2317)$. These scalar, positive-parity states can be associated with the chiral partners of the ground states. Therefore, we can describe the temperature dependence of their masses and widths in view of the possible restoration of chiral symmetry in heavy-light systems. Being limited to low temperatures (below $T_\chi$), we can only provide qualitative indications on how these states approach the chiral transition, without being able to describe what happens above it. Moreover, we discuss a peculiar new picture of chiral companions, as the $D^*_0(2300)$ is described by a double-pole structure. This is a new scenario for chiral-symmetry restoration, as one needs to study simultaneously the evolution with temperature of three states.
Effective Lagrangian and Unitarized Interactions at $T\neq0$
============================================================
At $T < T_\chi$ and assuming vanishing baryon density, the thermal medium is essentially composed of the lightest mesons of the pseudoscalar octet. Their interactions at low energies are governed by ChPT, based on chiral power counting. The heavy $J^\pi=0^-$ mesons, $D$ and $D_s$, propagate through this medium behaving as Brownian particles, suffering collisions with any of the light mesons. The interaction of the $D$-mesons with light particles is described by an effective Lagrangian based on both chiral and heavy-quark symmetries [@Kolomeitsev:2003ac; @Lutz:2007sk]. We use the version at NLO in the chiral expansion, similarly to [@Guo:2009ct; @Liu:2012zya; @Guo:2018tjx; @Geng:2010vw; @Abreu2011; @Tolos:2013kva; @Albaladejo:2016lbb].
The LO Lagrangian reads $$\begin{aligned}
\mathcal{L}_{\rm LO}&=\langle\nabla^\mu D\nabla_\mu D^\dagger\rangle-m_D^2\langle DD^\dagger\rangle-\langle\nabla^\mu D^{*\nu}\nabla_\mu D^{*\dagger}_{\nu}\rangle+m_D^2\langle D^{*\nu}D^{*\dagger}_{\nu}\rangle \nonumber \\
& +ig\langle D^{*\mu}u_\mu D^\dagger-Du^\mu D^{*\dagger}_\mu\rangle+\frac{g}{2m_D}\langle D^*_\mu u_\alpha\nabla_\beta D^{*\dagger}_\nu-\nabla_\beta D^*_\mu u_\alpha D^{*\dagger}_\nu\rangle\epsilon^{\mu\nu\alpha\beta} \ ,\end{aligned}$$ where $D$ denotes the antitriplet of $0^-$ $D$-mesons \[$D=\begin{pmatrix} D^0 & D^+ & D^+_s \end{pmatrix}$\], and similarly for the vector $1^-$ states \[$D^*_\mu=\begin{pmatrix} D^{*0} & D^{*+} & D^{*+}_s \end{pmatrix}_\mu$\] (not used in this work). The light mesons are encoded into $u_\mu=i(u^\dagger\partial_\mu u-u\partial_\mu u^\dagger)$, where $u$ is the unitary matrix of Goldstone bosons in the exponential representation. The bracket denotes the trace in flavor space and the connection of the covariant derivative $\nabla_\mu D^{(*)}=\partial_\mu D^{(*)} -D^{(*)}\Gamma_\mu$ reads $\Gamma_\mu=\frac{1}{2}(u^\dagger\partial_\mu u+u\partial_\mu u^\dagger)$.
The NLO Lagrangian is given by $$\begin{aligned}
\nonumber\label{eq:lagrangianNLO}
\mathcal{L}_{\rm NLO}=&-h_0\langle DD^\dagger\rangle\langle\chi_+\rangle+h_1\langle D\chi_+D^\dagger\rangle+h_2\langle DD^\dagger\rangle\langle u^\mu u_\mu\rangle \\ \nonumber
&+h_3\langle Du^\mu u_\mu D^\dagger\rangle+h_4\langle\nabla_\mu D\nabla_\nu D^\dagger\rangle\langle u^\mu u^\nu\rangle+h_5\langle\nabla_\mu D\{u^\mu,u^\nu\}\nabla_\nu D^\dagger \rangle \\ \nonumber
&+\tilde{h}_0\langle D^{*\mu}D^{*\dagger}_\mu\rangle\langle\chi_+\rangle-\tilde{h}_1\langle D^{*\mu}\chi_+D^{*\dagger}_\mu\rangle-\tilde{h}_2\langle D^{*\mu}D^{*\dagger}_\mu\rangle\langle u^\nu u_\nu\rangle \\
&-\tilde{h}_3\langle D^{*\mu}u^\nu u_\nu D^{*\dagger}_\mu\rangle-\tilde{h}_4\langle\nabla_\mu D^{*\alpha}\nabla_\nu D^{*\dagger}_\alpha\rangle\langle u^\mu u^\nu\rangle-\tilde{h}_5\langle\nabla_\mu D^{*\alpha}\{u^\mu,u^\nu\}\nabla_\nu D^{*\dagger}_\alpha\rangle,\end{aligned}$$ where $\chi_+=u^\dagger\chi u^\dagger+u\chi u$, with the quark mass matrix $\chi={\rm diag}(m_\pi^2,m_\pi^2,2m_K^2-m_\pi^2)$.
For more details we refer the reader to Refs. [@Geng:2010vw; @Abreu2011; @Liu:2012zya; @Tolos:2013kva]. The low-energy constants (LECs), $h_i$ with $i=0,\dots,5$, have been revisited in this work in view of the recent study [@Guo:2018tjx] based on lattice-QCD data.
The effective Lagrangian at LO+NLO provides the tree-level scattering amplitude for $D$ and $D_s$ mesons with light mesons, $$\begin{aligned}
\nonumber\label{eq:potential}
V^{ij}(s,t,u)=&\frac{1}{f_\pi^2}\Big[\frac{C_{\rm LO}^{ij}}{4}(s-u)-4C_0^{ij}h_0+2C_1^{ij}h_1\\
&-2C_{24}^{ij}\Big(2h_2(p_2\cdot p_4)+h_4\big((p_1\cdot p_2)(p_3\cdot p_4)+(p_1\cdot p_4)(p_2\cdot p_3)\big)\Big)\\ \nonumber
&+2C_{35}^{ij}\Big(h_3(p_2\cdot p_4)+h_5\big((p_1\cdot p_2)(p_3\cdot p_4)+(p_1\cdot p_4)(p_2\cdot p_3)\big)\Big)
\Big],\end{aligned}$$ where $p_1$ and $p_2$ ($p_3$ and $p_4$) are the momenta of the incoming (outgoing) mesons and $C_{{\rm LO},0,1,24,35}$ are the isospin coefficients (see Table II in [@Liu:2012zya]). The $i,j$ indices denote channels with given values of strangeness $S$ and isospin $I$.
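As an illustration of how the tree-level kernel above is evaluated, the sketch below codes $V^{ij}(s,t,u)$ for a single channel pair. The isospin coefficients `C` and the LECs `h` are passed in as plain numbers; the values used in the example are placeholders, not the Fit-2B set.

```python
import numpy as np

def kernel_V(s, u, p1, p2, p3, p4, C, h, fpi=0.0924):
    """Tree-level LO+NLO kernel V^{ij}(s,t,u) for one channel pair (i,j).
    C: dict of isospin coefficients; h: dict of LECs; energies in GeV."""
    # Minkowski product with (+,-,-,-) signature
    dot = lambda a, b: a[0]*b[0] - np.dot(a[1:], b[1:])
    lo = C["LO"]/4.0 * (s - u)
    nlo = (-4*C["0"]*h["h0"] + 2*C["1"]*h["h1"]
           - 2*C["24"]*(2*h["h2"]*dot(p2, p4)
                        + h["h4"]*(dot(p1, p2)*dot(p3, p4) + dot(p1, p4)*dot(p2, p3)))
           + 2*C["35"]*(h["h3"]*dot(p2, p4)
                        + h["h5"]*(dot(p1, p2)*dot(p3, p4) + dot(p1, p4)*dot(p2, p3))))
    return (lo + nlo) / fpi**2

# Example: with all h_i = 0 only the LO Weinberg-Tomozawa-like term survives.
```

With vanishing NLO constants the kernel reduces to the familiar $C_{\rm LO}(s-u)/(4f_\pi^2)$ structure, which provides a quick consistency check of any implementation.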
This amplitude is used as the kernel of an on-shell Bethe-Salpeter equation within a full coupled-channel basis, $T=V+VGT$, where $T$ is the unitarized amplitude and $G$ is the light-heavy two-body propagator, which contains medium effects (see Fig. \[fig:subfigA\]). In the ITF, after a Matsubara summation and a continuation to real energies, the loop reads $$\begin{aligned}
\label{eq:loopT}
G_{D\Phi}(E,\vec{p};T)&=&\int\frac{d^3q}{(2\pi)^3}\int d\omega\int d\omega'\frac{S_{D}(\omega,\vec{q};T)S_{\Phi}(\omega',\vec{p}-\vec{q};T)}{E-\omega-\omega'+i\varepsilon} \nonumber \\
&& \times [1+f(\omega,T)+f(\omega',T)],\end{aligned}$$ where $D$ denotes the heavy meson and $\Phi$ the light meson. The vacuum contribution in the expression above needs regularization, which is performed here in the cutoff scheme with a hard cutoff of $800$ MeV. In [@Guo:2018tjx] the dimensional regularization scheme is used with subtraction constants fitted to lattice-QCD data. We checked that our results for the scattering lengths at $T=0$ are consistent with them.
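On shell, the coupled-channel equation $T=V+VGT$ becomes an algebraic matrix relation at each energy, solved by a single matrix inversion. A minimal sketch, with invented numbers standing in for the kernel $V$ and the diagonal loop functions $G$ at a given $\sqrt{s}$:

```python
import numpy as np

def unitarize(V, G):
    """Solve the on-shell Bethe-Salpeter equation T = V + V G T in a
    coupled-channel basis: T = (1 - V G)^(-1) V.
    V: (n,n) kernel matrix; G: length-n diagonal two-body loop functions."""
    n = len(G)
    VG = V @ np.diag(G)
    return np.linalg.solve(np.eye(n) - VG, V)

# Schematic 2-channel example (illustrative numbers only):
V = np.array([[-5.0, 2.0],
              [ 2.0, -3.0]])
G = np.array([-0.01 + 0.002j, -0.008 + 0.0j])  # complex above threshold
T = unitarize(V, G)
```

In the actual calculation this inversion is performed channel by channel in $(S,I)$, with $G$ evaluated from Eq. (\[eq:loopT\]).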
At $T\neq 0$ the internal meson propagators receive medium corrections due to the light meson gas. In ChPT the pion mass and decay constant do not change appreciably with temperature up to two loops, and the same holds in unitarized extensions [@Schenk:1993ru; @Toublan:1997rr]. In addition, the pion damping rate is strongly suppressed at the temperatures explored in this paper, so we use the pion vacuum spectral function at all temperatures. For the $D$ meson, we account for its medium modification through a self-consistent scheme in which the $T$-matrix (Fig. \[fig:subfigA\]) is used to dress the propagator (Fig. \[fig:subfigB\]) with the $D$-meson self-energy (Fig. \[fig:subfigC\]), which reads $$\label{eq:selfE}
\Pi_{D}(E,\vec{p};T)=\int\frac{d^3q}{(2\pi)^3}\int d\Omega\frac{E}{\omega_\pi}\frac{f(\Omega,T)-f(\omega_\pi,T)}{E^2-(\omega_\pi-\Omega)^2+i\varepsilon}\Bigg(-\frac{1}{\pi}\Bigg){\rm Im\,}T_{D\pi}(\Omega,\vec{p}+\vec{q};T) \ .$$
The $D$-meson spectral function to be used in the loop function is therefore, $$\label{eq:specfunc}
S_{D}(\omega,\vec{q};T)=-\frac{1}{\pi}{\rm Im\,}\mathcal{D}_{D}(\omega,\vec{q};T)=-\frac{1}{\pi}{\rm Im\,}\Bigg(\frac{1}{\omega^2-\vec{q}\,^2-m_{D}^2-\Pi_{D}(\omega,\vec{q};T)}\Bigg) \ .$$ This set of equations is solved iteratively until self-consistency is obtained.
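The structure of the iteration can be illustrated with a schematic fixed-point toy, in which the self-energy feeds back into the dressed mass until convergence. The functional form of `Pi` below is invented purely for illustration; it stands in for the actual convolution of Eqs. (\[eq:selfE\])–(\[eq:specfunc\]).

```python
import numpy as np

def self_consistent_mass(m0, coupling, tol=1e-10, max_iter=200):
    """Toy fixed-point iteration: the self-energy Pi depends on the dressed
    propagator, which in turn depends on Pi.  Iterate until convergence."""
    Pi = 0.0
    for _ in range(max_iter):
        # schematic stand-in for Pi[S_D]; NOT the physical self-energy
        Pi_new = -coupling**2 / np.sqrt(m0**2 + Pi)
        if abs(Pi_new - Pi) < tol:
            return m0**2 + Pi_new, Pi_new  # dressed mass^2, self-energy
        Pi = Pi_new
    raise RuntimeError("no convergence")

m2, Pi = self_consistent_mass(m0=1.87, coupling=0.3)  # illustrative GeV inputs
```

In the physical calculation each step recomputes the loop $G_{D\Phi}$, the $T$-matrix and $\Pi_D$ on an energy-momentum grid; convergence is typically reached after a few iterations because the feedback is a mild correction.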
*(Diagrams: (a) the coupled-channel Bethe-Salpeter equation, $T=V+VGT$, for $D_i\Phi_i\to D_j\Phi_j$ scattering; (b) the Dyson equation for the dressed $D$-meson propagator, including a pion loop; (c) the $D$-meson self-energy $\Pi_D$.)*
Dynamically generated states at $T=0$
=====================================
Let us first discuss our findings at $T=0$. To do so, we analytically continue the energy to the complex-energy plane and look for poles in the appropriate Riemann sheet (RS) of the $T$-matrix, which correspond to bound, resonant and virtual states. The pole position $\sqrt{s_R}$ provides the pole mass, $M_R={\rm Re\,}\sqrt{s_R}$, and the half-width, $\Gamma_R/2=-{\rm Im\,}\sqrt{s_R}$. We also report the coupling, $|g_i|^{-2}=\partial T^{-1}_{ii} (s) /\partial s |_{s=s_R}$, of each pole to each of the channels $i$ to which it can couple.
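Schematically, extracting $M_R$, $\Gamma_R/2$ and $|g_i|$ from a pole can be coded as below. The single-channel toy amplitude is an assumption made for illustration; only the numerical pole position is borrowed from our results.

```python
import numpy as np

def pole_parameters(sqrt_sR):
    """Pole mass M_R and half-width Gamma_R/2 from the pole position."""
    return sqrt_sR.real, abs(sqrt_sR.imag)

def coupling(Tinv_ii, sR, eps=1e-6):
    """|g_i| from |g_i|^{-2} = |dT_ii^{-1}/ds| at s = s_R (finite differences).
    Tinv_ii: callable returning the inverse amplitude T_ii^{-1}(s)."""
    dTinv = (Tinv_ii(sR + eps) - Tinv_ii(sR - eps)) / (2*eps)
    return 1.0/np.sqrt(abs(dTinv))

# Toy single-channel amplitude with a pole at s_R and residue g^2:
g_true = 8.9
sqrt_sR = 2.0819 - 0.086j           # GeV; lower D0*(2300) pole position
sR = sqrt_sR**2
Tinv = lambda s: (s - sR)/g_true**2
```

In the full calculation $T^{-1}_{ii}$ is evaluated on the chosen Riemann sheet, and the derivative is taken at the complex pole position found numerically.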
In this letter we focus on the sectors $(S,I)=(0,\frac12)$—with three coupled channels, viz. $D\pi(2005.3)$, $D\eta(2415.1)$ and $D_s\bar{K}(2464.0)$—and $(S,I)=(1,0)$—with two coupled channels, $DK(2364.9)$ and $D_s\eta(2516.2)$—where the number in parentheses gives the corresponding threshold energy in MeV. We use the Fit-2B set of LECs from [@Guo:2018tjx], as it is the preferred one in that work and also the most similar to the one employed in [@Liu:2012zya]. Our results for the dynamically generated $0^+$ partners are summarized in Table \[tab:poles\].
  ------------------ --------------- ----------- ------------- -------------------- -------------------------
                     $(S,I)$         RS          $M_R$ (MeV)   $\Gamma_R/2$ (MeV)   $|g_i|$ (GeV)
  $D_0^*(2300)$      $(0,\frac12)$   $(-,+,+)$   $2081.9$      $86.0$               $|g_{D\pi}|=8.9$
                                                                                    $|g_{D\eta}|=0.4$
                                                                                    $|g_{D_s\bar{K}}|=5.4$
                                     $(-,-,+)$   $2521.2$      $121.7$              $|g_{D\pi}|=6.4$
                                                                                    $|g_{D\eta}|=8.4$
                                                                                    $|g_{D_s\bar{K}}|=14.0$
  $D_{s0}^*(2317)$   $(1,0)$         $(+,+)$     $2252.5$      $0.0$                $|g_{DK}|=13.3$
                                                                                    $|g_{D_s\eta}|=9.2$
  ------------------ --------------- ----------- ------------- -------------------- -------------------------
: Poles and the corresponding couplings to the coupled channels of the physical $D_0^*(2300)$ (first two poles) and $D_{s0}^*(2317)$ (last pole).[]{data-label="tab:poles"}
In the sector with $(S,I)=(0,\frac12)$ we find two poles in the complex energy plane. Both correspond to the experimental $D_0^*(2300)$ state [@Tanabashi:2018oca]. This double-pole structure has been previously analyzed in [@Guo:2018tjx; @Albaladejo:2016lbb]. The lower pole appears just above the first threshold in the $(-,+,+)$[^1] RS of the $T$-matrix around 2080 MeV. The nature of the higher pole is a bit more complicated. We find it above the $D_s\bar{K}$ threshold as a pole in the $(-,-,+)$ RS, but for some values of the parameters of the model [@Guo:2018tjx; @Albaladejo:2016lbb] the pole appears in the same sheet between the $D\eta$ and $D_s\bar{K}$ thresholds or below the $D\eta$ threshold[^2], strongly coupled to the $D_s\bar{K}$ channel in all cases.
Both poles have a considerable decay width, so they are not close to the real-energy axis. As we will see later, their reflection on the real axis leaves a peculiar structure, which one identifies with the experimental $D_0^*(2300)$. The lower pole couples mostly to $D\pi$, with a reasonably large coupling to $D_s\bar{K}$, whereas the higher one couples to all channels but most strongly to $D_s\bar{K}$.
In the $(S,I)=(1,0)$ sector the situation is somewhat clearer. We find a single pole on the real axis, which we identify with the bound state $D_{s0}^*(2317)$. It has sizable couplings to both $DK$ and $D_s \eta$, but it cannot decay into either of them, as the phase space is closed at $T=0$.
Spectral functions, masses, and widths at $T\neq 0$
===================================================
We now present the results of our study at finite temperature. The spectral functions of the $D$ and $D_s$ mesons follow the standard definition in terms of the retarded propagator, see Eq. (\[eq:specfunc\]). They are shown on the top panels of Fig. \[fig:spectral\] at zero trimomentum, as functions of the energy and for different temperatures (colored lines). The mass shift and widening of both states with temperature is evident, these effects being stronger for the $D$-meson, whose mass decreases considerably with $T$. The properties of the dynamically generated states are obtained directly from the imaginary part of the amplitudes $T_{ii}$, used as a proxy for their spectral shape. It is presented in the bottom panels of Fig. \[fig:spectral\], with $i$ denoting the channel to which the state couples most strongly, i.e. $D\pi$ ($D_s \bar{K}$) for the lower (higher) pole of the $D_0^*(2300)$ in the $(S,I)=(0,\frac12)$ sector, and $DK$ for the pole of the $D_{s0}^*(2317)$ in the $(S,I)=(1,0)$ sector. In the $S=0$ case peculiar structures appear, produced by the interplay of the resonance position with some nearby channel thresholds. Still, the evolution of the peak and width of the amplitudes with $T$ is evident. For the $S=1$ sector the situation is clearer, but one can observe that, in addition to the typical thermal widening, more strength is visible on the right-hand-side tail, producing a clearly asymmetric distribution. The reason is that the unitary $DK$ threshold is lowered due to the decrease of the $D$ mass and its widening with temperature, hence opening the phase space for decay into this channel at smaller energies.
![Upper plots: spectral functions of the $D$- (left) and $D_s$-mesons (right) at different temperatures from 0 to 150 MeV. Lower plots: imaginary part of the $D\pi\rightarrow D\pi$ and $D_s\bar{K}\rightarrow D_s\bar{K}$ scattering amplitudes in the $(S,I)=(0,\frac12)$ sector (left) and the $DK\rightarrow DK$ amplitude for $(S,I)=(1,0)$ (right) at the same values of the temperature.[]{data-label="fig:spectral"}](figs/Specfunc_D_Ds_temp_fonly "fig:") ![Upper plots: spectral functions of the $D$- (left) and $D_s$-mesons (right) at different temperatures from 0 to 150 MeV. Lower plots: imaginary part of the $D\pi\rightarrow D\pi$ and $D_s\bar{K}\rightarrow D_s\bar{K}$ scattering amplitudes in the $(S,I)=(0,\frac12)$ sector (left) and the $DK\rightarrow DK$ amplitude for $(S,I)=(1,0)$ (right) at the same values of the temperature.[]{data-label="fig:spectral"}](figs/ImT_D2300_Ds2317_temp_fonly "fig:")
Finally, in Fig. \[fig:masseswidths\] we represent the evolution of the masses and decay widths with temperature. In contrast to the $T=0$ case, whose results we presented in Table \[tab:poles\], we find the determination of the poles in the complex energy plane infeasible. Apart from complications tied to the analytic continuation of imaginary frequencies to the different RSs, a numerical pole search on the complex plane within the self-consistent scheme is computationally challenging.
Therefore, the mass and width are obtained from the position and the half-width at half-maximum of the peak of the spectral functions on the real-energy axis. For the ground states, $D$ and $D_s$, this method is fully acceptable, as the quasi-particle approximation is entirely justified. However, for the dynamically generated states—at least in the $S=0$ channel—this is more problematic, because their poles are located far from the real axis and the width is not a well-defined concept. In view of these problems, we establish the following strategy, the details of which will be given in a subsequent publication:
- For the lower resonance in the $(S,I)=(0,\frac12)$ sector we assume a Breit-Wigner-Fano shape [@Fano:1961zz], which takes into account the interaction between the resonance and the background corresponding to the higher resonance. The mass and width of the fit at $T=0$ are in very good agreement with the values of the pole mass and the width in Table \[tab:poles\].
- For the higher resonance in the $(S,I)=(0,\frac12)$ sector we subtract the background contribution of the lower resonance and then fit a Flatté-type distribution that describes the shape of resonances in the proximity of a threshold [@Flatte:1976xu], extended here to the three coupled-channel case.
- For the resonance in the $(S,I)=(1,0)$ sector we again fit a Breit-Wigner-Fano distribution, although a simple fit with a Breit-Wigner gives the same results for $T<120$ MeV.
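The peak-position/half-width extraction used for the ground states can be sketched as follows; the Breit-Wigner lineshape and the numbers below are purely illustrative, not fit results.

```python
import numpy as np

def peak_mass_width(E, spec):
    """Mass and width from the peak position and the half-width at
    half-maximum of a spectral shape sampled on the real-energy axis.
    Assumes a single dominant peak."""
    i0 = np.argmax(spec)
    half = spec[i0] / 2.0
    above = np.where(spec >= half)[0]
    left, right = E[above[0]], E[above[-1]]
    return E[i0], right - left  # (M, Gamma = FWHM)

# Illustrative check on an exact Breit-Wigner, M = 2.10 GeV, Gamma = 0.17 GeV:
E = np.linspace(1.5, 2.7, 6001)
M, G = 2.10, 0.17
bw = (G/2)**2 / ((E - M)**2 + (G/2)**2)
M_fit, G_fit = peak_mass_width(E, bw)
```

For asymmetric or threshold-distorted shapes this simple estimator is replaced by the Breit-Wigner-Fano and Flatté fits described above.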
![Temperature evolution of the mass (left panels) and width (right panels) of the chiral partners in the $(S,I)=(0,\frac12)$ sector (upper panels) and in the $(S,I)=(1,0)$ sector (lower panels). The ground-state $0^-$ partners are represented with circles and the dynamically generated $0^+$ partners, the two poles of the $D_0^*(2300)$ and the $D_{s0}^*(2317)$ pole, with upward/downward triangles and squares, respectively.[]{data-label="fig:masseswidths"}](figs/M_Width_temp_fonly.pdf)
From the results in Fig. \[fig:masseswidths\] and in comparison with previous works, we list the following observations:
1. The ground state $D$ mass has a sizable decrease of $\Delta m_D \sim 40$ MeV at the highest temperature, $T=150$ MeV. This reduction is consistent with, albeit twice as large as, that observed in [@Fuchs2006], where a more phenomenological approach is used to compute the $D$-meson propagator. Our reduction, on the other hand, is smaller than the one reported in Ref. [@Sasaki2014], which uses non-unitarized ChPT. However, in the $SU(4)$ effective approach of [@Cleven:2017fun] no significant modification is reported. In our present work the two poles of the $D^*_0(2300)$ show a more stable trend. They move slightly in opposite directions, moderately distancing from each other. Therefore, in this sector we cannot conclude that the masses of opposite-parity states become degenerate close to $T_\chi$, although the temperatures studied might still be too low for chiral symmetry restoration. In [@Buchheim2018] a large reduction in the mass of the positive-parity $D$-meson partner, of around 150 MeV, is found at $T=150$ MeV, but using a constant $D$ mass as an input of the sum-rule analysis. An even larger reduction of close to 200 MeV is seen in the results of [@Sasaki2014].
2. The width of all states in the non-strange sector increases with temperature. The ground state shows a width of around $\sim70$ MeV at $T =150$ MeV, consistent with [@Cleven:2017fun] and the estimates of Refs. [@Fuchs2006; @He:2011yi]. The extracted widths of the two poles of the $D_0^*(2300)$ increase moderately with respect to their vacuum values.
3. In the strangeness sector we observe a clearer picture. The parity partners seem to decrease their mass with temperature, more moderately in the case of the ground state, reaching a reduction of $\sim 7$ MeV for the $0^-$ state and $\sim 17$ MeV for the $0^+$ state at $T=150$ MeV. Consequently, a relative closeness between the strange chiral partners at high temperature is observed. However, they are still far from chiral degeneracy. These behaviours seem to be compatible with the low temperature trends seen in the linear-sigma model calculation of [@Sasaki2014].
4. The decay widths of both strange partners increase from zero at very different rates. The width of the $D_{s0}^* (2317)$ is around twice that acquired by the $D_s$ ground state at $T=150$ MeV. We note that, whereas the width of the latter is only due to medium effects, the $D_{s0}^* (2317)$ also contains the additional contribution of the decay into $DK$ states due to the reduction of the mass and the widening of the $D$-meson. We are not aware of any previous result to compare to in this sector.
Apart from the above comparisons with previous models, there is unfortunately no solid first-principles data to compare to. However, in spite of the difficulties in extracting spectral functions from finite-temperature lattice-QCD correlators, we can still aim at a qualitative comparison. A recent lattice-QCD calculation [@Kelly:2018hsi] presents the spectral functions of the $D$ and $D_s$ channels at different temperatures. The analysis in that paper concludes that no medium modification of the $D$ and $D_s$ ground states is seen up to $T_{\chi}$, where $T_{\chi} \simeq 185$ MeV in that work. Given the precision of the lattice-QCD data, this might well be in agreement with our findings here, as our $D$ ($D_s$) mass shift is only $2\%$ ($0.5\%$) of the mass itself. As a pion mass of $m_\pi \sim 380$ MeV is used in [@Kelly:2018hsi], it would be interesting to repeat our calculation with a heavier pion mass and analyze the effects on the charm-meson properties for temperatures $T<T_{\chi}$.
Conclusion
==========
In this letter we report our findings on the properties of heavy-light mesons at finite temperature. Using a thermal effective field theory based on chiral and heavy-quark symmetries at NLO, and relying on unitarized scattering amplitudes and self-consistency, we have obtained the temperature dependence of the spectral functions of the chiral partners $D$ and $D_0^*(2300)$, as well as those of the $D_s$ and $D_{s0}^*(2317)$ mesons.
From these spectral functions, we have extracted the dependence of the masses and widths of the mesons on temperature. In the $(S,I)=(0,\frac12)$ sector we do not observe a clear tendency towards chiral degeneracy, while in the $(S,I)=(1,0)$ sector we observe a slight convergence in mass of the chiral partners. However, we are limited by the low-temperature range of applicability of the hadronic effective theory and, judging from the results of effective models in the light sector [@Florkowski:1993br; @Hatsuda:1994pi], such degeneracy might occur at higher temperatures, $T>T_\chi$.
One of our main results is that the chiral partner of the $D$ meson, the $D_0^*(2300)$, has a double-pole structure in the complex-energy plane, and it is unclear at this point how the chiral symmetry restoration should be realized. Will both poles merge into a single one before becoming degenerate with the ground state? Or will only one pole survive and become degenerate with the ground state at $T>T_\chi$, while the other follows a different path?
Finally, we should mention that these results are important for a realistic analysis of heavy-ion collisions using appropriately medium-modified properties and/or heavy-flavor transport coefficients [@Tolos:2013kva; @Ozvenchuk:2014rpa; @Song:2015sfa; @Tolos:2016slr; @Das:2016llg]. This is mandatory to understand the mechanisms of charm production and properly characterise the deconfined and hadronic phases. We plan to address studies in that direction in the future.
Acknowledgements
================
J.M.T.-R. acknowledges the hospitality of the Institut de Ciències de l’Espai (CSIC) and the Universitat de Barcelona, where part of this work was carried out. He thanks Á. Gómez-Nicola and J.A. Oller for discussions on the subject.
G.M. and A.R. acknowledge support from the Spanish Ministerio de Economía y Competitividad (MINECO) under the project MDM-2014-0369 of ICCUB (Unidad de Excelencia “María de Maeztu”) and, with additional European FEDER funds, under the contract FIS2017-87534-P. G.M. also acknowledges support from the FPU17/04910 Doctoral Grant from MINECO. L.T. acknowledges support from the FPA2016-81114-P Grant from the Ministerio de Ciencia, Innovación y Universidades, the Heisenberg Programme of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Project No. 383452331, and THOR COST Action CA15213. L.T. and J.M.T.-R. acknowledge support from the DFG through Project No. 411563442 (Hot Heavy Mesons) and Project No. 315477589 - TRR 211 (Strong-interaction matter under extreme conditions).
[^1]: The notation indicates the RS of the loop function for each of the coupled channels ($+$ for first and $-$ for second).
[^2]: \[footnote\]We note that the $(-,-,+)$ RS is only connected to the real energy axis in the region between the $D\eta$ and the $D_s\bar{K}$ thresholds.
|
---
abstract: 'With recent advances in deep learning, neuroimaging studies increasingly rely on convolutional networks (ConvNets) to predict diagnosis based on MR images. To gain a better understanding of how a disease impacts the brain, such studies visualize the saliency maps of the ConvNet, highlighting the voxels within the brain that contribute most to the prediction. However, these saliency maps are generally confounded, i.e., some salient regions are more predictive of confounding variables (such as age) than of the diagnosis. To avoid such misinterpretation, we propose in this paper an approach that aims to visualize confounder-free saliency maps that only highlight voxels predictive of the diagnosis. The approach incorporates univariate statistical tests to identify confounding effects within the intermediate features learned by the ConvNet. The influence of the subset of confounded features is then removed by a novel partial back-propagation procedure. We use this two-step approach to visualize confounder-free saliency maps extracted from synthetic and two real datasets. These experiments reveal the potential of our visualization in producing unbiased model-interpretation.'
author:
- |
Qingyu Zhao$^{*,}$, Ehsan Adeli$^{*,}$, Adolf Pfefferbaum ,\
Edith V. Sullivan,
title: 'Confounder-Aware Visualization of ConvNets'
---
Introduction
============
The development of deep-learning technologies in medicine is advancing rapidly [@topol2019]. Leveraging labeled big data and enhanced computational power, deep convolutional neural networks have been applied in many neuroscience studies to accurately classify patients with brain diseases from normal controls based on their MR images [@topol2019; @Esmaeilzadeh2018]. State-of-the-art saliency visualization techniques are used to interpret the trained model and to visualize specific brain regions that significantly contribute to the classification [@Esmaeilzadeh2018]. The resulting saliency map therefore provides fine-grained insights into how the disease may impact the human brain.
Despite the promise of deep learning, there are formidable obstacles and pitfalls [@topol2019; @he2019]. One of the most critical challenges is the algorithmic bias introduced by the model towards confounding factors in the study [@pourhoseingholi2012control]. A confounding factor (or confounder) correlates with both the dependent variable (group label) and the independent variable (MR image), causing spurious associations. For instance, if the age distribution of the disease group differs from that of the normal controls, age might become a potential confounder, because one cannot differentiate whether the trained model characterizes neurodegeneration caused by the disease or by normal aging.
Since the end-to-end training scheme disfavors any additional intervention, controlling for confounding effects in deep learning is inherently difficult. This often leads to misinterpretation of the trained model during visualization: while some salient regions correspond to the true impact of the disease, others are potentially linked to the confounders. In this paper, we present an approach[^1] that identifies confounding effects within a trained ConvNet and removes them to produce a confounder-free visualization of the model. The central idea is first to detect confounding effects in each intermediate feature via univariate statistical testing. Then, the influence of confounded features is removed from the saliency map by a novel “partial back-propagation” operation, which can be intuitively explained by a chain-rule derivation on voxelwise saliency scores. This operation is efficiently implemented with a model-refactorization trick. We apply our visualization procedure to interpret ConvNet classifiers trained on a synthetic dataset with known confounding effects and on two real datasets, i.e., MRIs of 345 adults for analyzing the effects of Human Immunodeficiency Virus (HIV) on the brain and MRIs of 674 adolescents for analyzing sexual dimorphism. In all three experiments, our visualization shows the potential of producing unbiased saliency maps compared to traditional visualization techniques.
Confounder-Aware Saliency Visualization
=======================================
We base our approach on the saliency visualization proposed in [@Simonyan2013DeepIC]. Given an MR image $\mathcal{I}$ and a trained ConvNet model, saliency visualization produces a voxel-wise saliency map specific to $\mathcal{I}$ indicating important regions that strongly impact the classification decision. Without loss of generality, we assume a ConvNet model is trained for a binary classification task (pipeline generalizable to multi-group classification and regression), where the prediction output is a continuous score $s \in [0,1]$. Then, the saliency value at voxel $v$ is computed as the partial derivative $|\partial s / \partial \mathcal{I}_v|$. Intuitively, it quantifies how the prediction changes with respect to a small change in the intensity value at voxel $v$. Computationally, this quantity can be computed efficiently using back-propagation.
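To make the voxelwise saliency concrete, here is a minimal sketch using a toy linear classifier in place of a ConvNet; the sigmoid model, random weights, and the $8\times8$ “image” are our own illustration (not the paper's setup), chosen so the gradient $\partial s/\partial \mathcal{I}_v$ can be written in closed form rather than obtained by back-propagation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def saliency_map(image, weights):
    """Voxelwise saliency |ds/dI_v| for a toy linear classifier
    s = sigmoid(w . I).  For this model the gradient has the
    closed form ds/dI = s(1-s) * w, so no autograd is needed."""
    s = sigmoid(weights.ravel() @ image.ravel())
    grad = s * (1.0 - s) * weights          # chain rule through the sigmoid
    return np.abs(grad)

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
w = rng.normal(size=(8, 8))
sal = saliency_map(img, w)
# one non-negative saliency score per voxel
assert sal.shape == img.shape and (sal >= 0).all()
```

In a real ConvNet the same quantity would be obtained by one back-propagation pass with respect to the input image.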
As discussed, when the ConvNet is confounded, some salient regions may actually contribute to the prediction of confounding variables rather than the group label. To address this issue, we propose a two-step approach to remove confounding effects from the saliency map, enabling an unbiased interpretation of a trained ConvNet. To do this, we assume that a typical ConvNet architecture is composed of an encoder and a predictor. The encoder contains convolutional layers (including their variants and related operations such as pooling, batch normalization and ReLU) that extract a fixed-length feature vector $\boldsymbol{f}_i = [f_i^1,...,f_i^M] \in \mathbb{R}^M$ from the $i^{th}$ training image. The predictor, usually a fully connected network, takes the $M$ features as input and produces a prediction score $s_i$ for image $i$. To disentangle confounding effects from the saliency map, we propose in Section 2.1 to first test each of the $M$ features separately for confounding effects using a general linear model (GLM). Next, the influence of the subset of features with significant confounding effects can be removed from the saliency map by performing a novel partial back-propagation procedure based on an intuitive chain-rule derivation (Section 2.2).
![Our confounder-aware visualization is composed of two steps: (a) A GLM test is performed on each individual feature collected over all training images to detect confounding effects. (b) For each image, the model is refactorized to fix the value of confounded features, thereby enabling a partial back-propagation to derive a confounder-free saliency map. []{data-label="fig-1"}](fig1.png){width="\linewidth"}
Univariate Test for Identifying Confounding Effects
---------------------------------------------------
This section introduces a way to test for the presence of confounding effect within a specific feature. Let $\boldsymbol{f}^j=[f_1^j,...,f_N^j]$ denote the $j^{th}$ feature derived from all $N$ training images. Likewise, denote $\boldsymbol{s}=[s_1,...,s_N]$ as the $N$ prediction scores and $\boldsymbol{z}=[z_1,...,z_N]$ as a confounding variable (e.g., age of the $N$ subjects). In this work, we use GLM [@dobson1990glm] to perform a group-level statistical test for detecting whether the relationship between $\boldsymbol{s}$ and $\boldsymbol{f}^j$ is confounded by $\boldsymbol{z}$. Specifically, GLM decomposes the variance in $\boldsymbol{f}^j$ into variance explained by $\boldsymbol{s}$ and variance explained by $\boldsymbol{z}$. The model reads $$\boldsymbol{f}^j = \beta_0 + \beta_1 \boldsymbol{s} + \beta_2 \boldsymbol{z}.
\label{eq:glm}$$ We claim that feature $\boldsymbol{f}^j$ is confounded by $\boldsymbol{z}$ if the null hypothesis that the linear coefficient $\beta_2$ is zero can be rejected (e.g., $p<0.05$ by *t*-test). In other words, when the variance in $\boldsymbol{f}^j$ is partially explained by $\boldsymbol{z}$, $\boldsymbol{f}^j$ potentially contributes to the prediction of the confounder rather than the key variable of interest. This analysis can be extended to handle multiple confounding variables, where all confounders are included in the GLM as independent covariates. Then, $\boldsymbol{f}^j$ is confounded when the $p$-value for at least one confounder is significant. Note that this model is a specific instance of the mediation model [@MacKinnon2008], a popular model for confounding analysis. However, our model makes fewer assumptions, so it is more sensitive in detecting confounding effects than the mediation model. We also emphasize that such confounding analysis can only be performed on the feature level instead of the voxel level. Unlike features encoding geometric patterns that are commensurate within a group, voxel intensities are only meaningful within a neighborhood and vary across MRIs. As such, removing confounding effects based on feature analysis is prevalent in traditional feature-based (non-deep-learning) models [@Adeli2018; @park2018].
Repeating the above analysis for all $M$ features, we generate a binary mask $\boldsymbol{b}\in \{0,1\}^M=[b^1,...,b^M]$, where $b^j=0$ indicates the presence of a confounding effect in the $j^{th}$ feature and $b^j=1$ otherwise.
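A minimal sketch of this per-feature test and the resulting mask, assuming a plain OLS fit of the GLM and a fixed $t$-statistic cutoff of 1.96 (roughly $p<0.05$ for large $N$); the toy features and the pure-NumPy implementation are our own illustration, not the authors' code:

```python
import numpy as np

def confounder_mask(F, s, z, t_crit=1.96):
    """For each feature column f_j of F (N x M), fit the GLM
    f_j = b0 + b1*s + b2*z by OLS and flag the feature as confounded
    when the t-statistic of b2 exceeds t_crit.
    Returns mask with mask[j] = 0 for confounded features, 1 otherwise."""
    N, M = F.shape
    X = np.column_stack([np.ones(N), s, z])   # design matrix of the GLM
    XtX_inv = np.linalg.inv(X.T @ X)
    mask = np.ones(M)
    for j in range(M):
        beta = XtX_inv @ X.T @ F[:, j]
        resid = F[:, j] - X @ beta
        sigma2 = resid @ resid / (N - 3)      # residual variance, 3 params
        se_b2 = np.sqrt(sigma2 * XtX_inv[2, 2])
        if abs(beta[2]) / se_b2 > t_crit:     # H0: b2 = 0 rejected
            mask[j] = 0.0
    return mask

rng = np.random.default_rng(1)
N = 500
s = rng.normal(size=N)            # prediction scores
z = rng.normal(size=N)            # confounder, e.g. age
F = np.column_stack([
    2.0 * s + 0.1 * rng.normal(size=N),            # driven by s only
    1.0 * s + 1.0 * z + 0.1 * rng.normal(size=N),  # confounded by z
])
b = confounder_mask(F, s, z)
assert b[1] == 0.0   # the z-driven feature is flagged as confounded
```

A production implementation would typically use a statistics package for the coefficient tests, but the mask construction is the same.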
Visualization via Partial Back-Propagation
------------------------------------------
To generate a saliency map unbiased towards the subset of confounded features, we further investigate the voxelwise partial derivative. Based on the chain-rule, $$\frac{\partial s_i}{\partial \mathcal{I}_v}=\frac{\partial s_i(f_i^1,...,f_i^M)}{\partial \mathcal{I}_v}=\sum_{j=1}^M \frac{\partial s_i}{\partial f_i^j} \frac{\partial f_i^j}{\partial \mathcal{I}_v}.
\label{eq:chain_rule}$$ Eq. \[eq:chain\_rule\] factorizes the voxelwise partial derivative with respect to the $M$ features, where each $\partial s_i / \partial f_i^j$ quantifies the impact of the $j^{th}$ feature on the prediction. Therefore, to derive a confounder-free saliency map, we set this impact to zero for the confounded features. In doing so, the saliency score can be computed as $$\sum_{j=1}^M b^j \frac{\partial s_i}{\partial f_i^j} \frac{\partial f_i^j}{\partial \mathcal{I}_v}.
\label{eq:partial_bp}$$ Computationally, this corresponds to a partial back-propagation procedure, where the gradient is only back-propagated through the un-confounded features.
**The Refactorization Trick.** We show that performing the partial back-propagation for a training image $\mathcal{I}$ can be implemented by refactorizing the trained ConvNet model and then applying the original visualization pipeline of full back-propagation. As enforcing a zero $\partial f_i^j / \partial \mathcal{I}_v$ is equivalent to fixing $f_i^j$ to a constant value independent of the input image, we design a dummy layer $\mathcal{L}$ between the encoder and the predictor that performs $\mathcal{L}(\boldsymbol{x}) = \boldsymbol{x}\otimes \boldsymbol{b}_i \oplus ((1-\boldsymbol{b}_i) \otimes \boldsymbol{y}_i)$, where $\otimes$ and $\oplus$ denote element-wise operators, and $\boldsymbol{y}_i$ is a constant feature vector for image $i$ pre-computed by the trained ConvNet. As shown in Fig. \[fig-1\]b, the dummy layer fixes the value of confounded features while keeping un-confounded features dependent on the input image. As such, the partial back-propagation of Eq. \[eq:partial\_bp\] can be computed simply by running the full back-propagation on the refactorized model. Note that model refactorization is performed for each MR image independently to yield subject-specific saliency maps.
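The equivalence between the masked chain-rule sum and full back-propagation through the refactorized model can be checked numerically on a toy linear encoder and sigmoid predictor; all dimensions, weights, and the finite-difference check below are our own illustration, not the paper's network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
D, M = 16, 4                        # voxels, intermediate features
W1 = rng.normal(size=(M, D)) / np.sqrt(D)  # toy linear "encoder"
w2 = rng.normal(size=M)             # toy linear "predictor"
I = rng.normal(size=D)
b = np.array([1.0, 0.0, 1.0, 0.0])  # features 1 and 3 confounded

# Masked chain-rule sum (partial back-propagation):
f = W1 @ I
s = sigmoid(w2 @ f)
partial_grad = s * (1 - s) * ((w2 * b) @ W1)

# Refactorized model: the dummy layer fixes confounded features to the
# constant y = W1 @ I, then ordinary (full) back-propagation is run.
y = W1 @ I                          # pre-computed constant feature vector
def refactorized(I_var):
    f_var = b * (W1 @ I_var) + (1 - b) * y   # dummy layer L(x)
    return sigmoid(w2 @ f_var)

# finite-difference gradient of the refactorized model w.r.t. each voxel
eps = 1e-6
fd_grad = np.array([
    (refactorized(I + eps * np.eye(D)[v]) - refactorized(I)) / eps
    for v in range(D)])
assert np.allclose(fd_grad, partial_grad, atol=1e-4)
```

At the input image itself the dummy layer acts as the identity, so only the gradient path changes, not the prediction score.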
Experiments
===========
We first performed synthetic experiments, in which known confounding effects were injected into the image data, so that we could test whether the proposed approach can successfully remove those effects during visualization. Next, we applied the approach to two real datasets to visualize (1) the impact of HIV on brain structures while controlling for aging effects; and (2) sexual dimorphism during adolescence while controlling for puberty stage.
![Synthetic experiments: (a) Each synthetic image contains 4 Gaussians that are created differently between the two groups; (b) Average saliency map produced by the original visualization pipeline; (c) Widths of the two off-diagonal Gaussians are considered as confounders; (d) GLM identifies selective features, mainly in Blocks B and C, as confounded; (e) Removing the confounded features in the visualization leads to a confounder-free saliency map. []{data-label="fig-2"}](fig2.png){width="\linewidth"}
Synthetic Data
--------------
We first generated a synthetic dataset containing two groups. Each group consisted of 512 2D images (dimension: $32\times32$ pixels). Each image was generated by 4 Gaussians (Fig. \[fig-2\]a), whose widths were controlled by the standard deviation $\sigma$. For each image of Group 1, we sampled $\sigma$ from the uniform distribution $\mathcal{U}(2,6)$. Images of Group 2 generally had wider distributions, as we sampled from $\mathcal{U}(4,8)$ instead. To predict group labels from the synthetic images, we constructed a simple ConvNet with an encoder consisting of 3 stacks of $2\times2$ convolution/ReLU/max-pooling layers, producing 32 intermediate features. The fully connected predictor had one hidden layer of dimension 16 with `tanh` as the non-linear activation function. We trained the network for binary classification on the entire synthetic dataset, as the focus here was to interpret the trained model as opposed to measuring classification accuracy. With the trained ConvNet, we first applied the original visualization pipeline to each image and averaged the resulting subject-specific saliency maps. The average saliency map shown in Fig. \[fig-2\]b indicates that all 4 Gaussians contributed to the classification.
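The image-generation step described above can be sketched as follows; the $2\times2$ block layout and unit peak amplitude are our assumptions for illustration, since the paper specifies only the four Gaussians and the two $\sigma$ distributions:

```python
import numpy as np

def gaussian_block(size, sigma):
    """One 2-D Gaussian bump on a (size x size) grid, peaked at the centre."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

def synth_image(rng, group, size=32):
    """32x32 image of four Gaussians (blocks A-D in a 2x2 layout).
    Group 1 draws widths from U(2,6), group 2 from U(4,8)."""
    lo, hi = (2, 6) if group == 1 else (4, 8)
    half = size // 2
    img = np.zeros((size, size))
    for r in range(2):
        for c in range(2):
            sigma = rng.uniform(lo, hi)
            img[r*half:(r+1)*half, c*half:(c+1)*half] = gaussian_block(half, sigma)
    return img

rng = np.random.default_rng(3)
img = synth_image(rng, group=1)
assert img.shape == (32, 32)
```

A dataset is then built by drawing 512 such images per group and training the classifier on group labels.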
Next, we viewed the width of the two off-diagonal Gaussians, i.e., the standard deviations $\boldsymbol{\sigma}_B$ of Block B and $\boldsymbol{\sigma}_C$ of Block C as confounders. Based on Eq. \[eq:glm\], we then tested the presence of confounding effects in each of the 32 intermediate features with the following GLM: $\boldsymbol{f}^j = \beta_0 + \beta_1 \boldsymbol{s} + \beta_B \boldsymbol{\sigma}_B + \beta_C \boldsymbol{\sigma}_C$. The results revealed that all features extracted from Blocks B and C were detected as confounded ($p<0.05$ for either $\beta_B$ or $\beta_C$), while only features from Blocks A and D were identified as unconfounded ($p\geq0.05$ for both $\beta_B$ and $\beta_C$). We can see that our conservative test was sensitive in detecting confounding effects (no false negative but several false positives), thereby potentially removing some features representing true group difference. Such trade-off can be controlled by the $p$-value threshold used in the GLM tests. Finally, using the binary mask (yellow mask in Fig. \[fig-2\]d) for partial back-propagation, we produced a confounder-free average saliency map (Fig. \[fig-2\]e) that successfully removed the confounding effects.
Visualizing HIV Effects
-----------------------
The second experiment examined the impact of HIV on the human brain. The classification was performed on the T1-weighted MRI data of 223 control subjects (CTRL) and 122 HIV patients [@Adeli2018]. Participants ranged in age from 18 to 86 years, and there was a significant age difference between CTRL and HIV subjects (CTRL: $45\pm17$, HIV: $51\pm8.3$, *p*<0.001 by two-sample $t$-test). As HIV has frequently been suggested to accelerate brain aging [@cole2017increased], age is therefore a confounder that needs to be controlled for when interpreting the saliency map associated with the trained classifier.
**Preprocessing and Classification.** The MR images were first preprocessed by denoising, bias-field correction, skull stripping, affine registration to the SRI24 template (which accounts for differences in head size), and re-scaling to a $64\times64\times64$ volume [@Adeli2018]. Even though the present study focused on the visualization technique, we measured the classification accuracy as a sanity check via 5-fold cross-validation. To ensure that the classifier can reasonably learn the group difference between HIV and CTRL subjects, the training dataset was augmented by random shifting (within one-voxel distance) and rotation (within one degree) in all 3 directions, and by left-right flipping. Note that the flipping was based on the assumption that HIV infection affects the brain bilaterally [@Adeli2018]. The data augmentation resulted in a balanced training set of 1024 CTRLs and 1024 HIVs.
As the flipping removed left-right orientation, the ConvNet was built on half of the 3D volume containing one hemisphere. The encoder contained 4 stacks of $2\times2\times2$ 3D convolution/ReLU/batch-normalization/max-pooling layers yielding 4096 intermediate features. The fully connected predictor had 2 hidden layers of dimension (64, 32) with `tanh` as the non-linear activation function. An L2-regularization ($\lambda = 0.1$) was applied to all fully connected layers. Based on this ConvNet architecture, we achieved 73% normalized accuracy for HIV/CTRL classification, which is comparable to other recent studies on this dataset [@Adeli2018].
**Model Visualization.** To visualize the HIV effect, we re-trained the ConvNet on a dataset of 1024 CTRLs and 1024 HIVs augmented from the entire dataset of 345 MRIs. We first visualized the average saliency map produced by the original visualization pipeline. Since the ConvNet operated on only one hemisphere, we mirrored the resulting average saliency map to the other hemisphere to create a bilaterally symmetric display and overlaid it on the SRI24 T1 atlas (Fig. \[fig-3\]a). For comparison, we then visualized the confounder-free saliency map produced by our approach. Specifically, we tested each of the 4096 features with $\boldsymbol{f}^j = \beta_0 + \beta_1 \boldsymbol{s} + \beta_2 age$, and 804 were identified as confounded by age. Fig. \[fig-3\]b shows the saliency map after removing aging effects, and Fig. \[fig-3\]c shows that saliency at the posterior ventricle (red regions) was attenuated by our approach, indicating that those regions contained aging effects instead of HIV effects. This finding is consistent with the current understanding that ventricular volume increases significantly with age [@Kaye92].
![Visualization of ConvNets trained for HIV/CTRL classification (top row) and sexual dimorphism (bottom row). []{data-label="fig-3"}](fig3.png){width="\linewidth"}
Visualizing Sexual Dimorphism
-----------------------------
The third experiment aimed to improve understanding of the sexual dimorphism in brain development that emerges during adolescence. The classification was performed on the baseline T1 MR images of 334 boys and 340 girls (age 12-21) from the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA) [@Brown2015]. All subjects met the no-to-low alcohol drinking criteria of the study, and there was no significant age difference between boys and girls ($p$>0.5 by two-sample $t$-test). As the puberty stage [@Brown2015] of girls was significantly higher than that of boys during adolescence, the pubertal development score (PDS: boys 2.86$\pm$0.7, girls 3.41$\pm$0.6, *p*<0.001 by two-sample $t$-test) was a potential confounder of the study.
All experimental settings matched those of the previous HIV study. In this first attempt at predicting sex from the NCANDA data, we achieved 89.5% normalized accuracy based on a 5-fold cross-validation. The original saliency map produced for the ConvNet trained on the entire augmented dataset is shown in Fig. \[fig-3\]d. After testing for and removing PDS effects, the confounder-free saliency map is shown in Fig. \[fig-3\]e. Consistent with the existing adolescence literature, sex differences were mainly found in the temporal lobe [@Sowell2002]. Fig. \[fig-3\]f indicates that PDS effects mainly existed in the frontal and inferior parietal regions. Another interesting observation concerns the caudate, which has frequently been reported as proportionately larger in female participants across different ages [@MacKinnon2008]. As shown in our results, the saliency at the caudate region attenuated after removing confounding effects, suggesting a potential compounding effect of PDS in that region.
Conclusion and Discussion
=========================
In this paper, we introduced a novel approach for confounder-free visualization and interpretation of a trained ConvNet. By performing partial back-propagation with respect to a set of unconfounded intermediate features, the approach disentangles true group differences from confounding effects and produces unbiased saliency maps. We successfully illustrated its usage on a synthetic dataset with ground-truth confounding effects and on two real neuroimaging datasets. Because our approach is a type of post-hoc analysis with respect to a trained model, a further extension could integrate similar confounder-control procedures during model training to fully explore unbiased group differences within a dataset.
[^1]: Source code available at https://github.com/QingyuZhao/Confounder-Aware-CNN-Visualization.git
|
---
abstract: 'We propose a streaming submodular maximization algorithm “stream clipper” that in practice performs as well as the offline greedy algorithm on document/video summarization. It adds elements from a stream either to a solution set $S$ or to an extra buffer $B$ based on two adaptive thresholds, and improves $S$ by a final greedy step that starts from $S$ and adds elements from $B$. During this process, elements can be swapped out of $S$ if doing so yields improvements. The thresholds adapt when current memory utilization exceeds a budget, e.g., the lower threshold increases and elements below the new lower threshold are removed from the buffer $B$. While our approximation factor in the worst case is $1/2$ (as in previous work, and corresponding to the tight bound), we show that there are data-dependent conditions under which our bound falls within the range $[1/2, 1-1/e]$. In news and video summarization experiments, the algorithm consistently outperforms other streaming methods and, while using significantly less computation and memory, performs similarly to the offline greedy algorithm.'
author:
- Tianyi Zhou
- Jeff Bilmes
title: |
Stream Clipper:\
Scalable Submodular Maximization on Stream
---
|
---
abstract: 'A model for a static weak-field macroscopic medium is analyzed and the equation for the macroscopic gravitational potential is derived. This is a biharmonic equation which is a non-trivial generalization of the Poisson equation of Newtonian gravity. In the case of strong gravitational quadrupole polarization it essentially holds inside a macroscopic matter source. Outside the source the gravitational potential fades away exponentially. The equation is equivalent to a system of the Poisson equation and the nonhomogeneous modified Helmholtz equations. The general solution to this system is obtained by using Green’s function method and it does not have a limit to Newtonian gravity. In the case of insignificant gravitational quadrupole polarization the equation for the macroscopic gravitational potential becomes the Poisson equation with the matter density renormalized by a factor including the value of the quadrupole gravitational polarization of the source. The general solution to this equation, obtained by using Green’s function method, has a limit to Newtonian gravity.'
author:
- |
Giovanni Montani$^{(1)}$[^1], Remo Ruffini$^{(1)}$[^2] and Roustam Zalaletdinov$^{(1,2,3)}$[^3]\
\[5mm\] *$^{(1)}$ICRA, Departamento di Fisica, Universitá di Roma “La Sapienza"*\
*P.le Aldo Moro 5, Roma 00185, Italia*\
\[2mm\] *$^{(2)}$Department of Mathematics and Statistics, Dalhousie University*\
*Chase Building, Halifax, Nova Scotia, Canada B3H 3J5*\
\[2mm\] *$^{(3)}$Department of Theoretical Physics, Institute of Nuclear Physics*\
*Uzbek Academy of Sciences, Tashkent 702132, Uzbekistan, CIS*
---
Introduction {#problem}
============
General relativity as a classical theory of gravity is known to have some remarkable analogies with Maxwell’s classical macroscopic theory of electromagnetism (see, for example, [@MTW:1973] and references therein). The physical motivation and intuition for many of the problems posed in general relativity therefore originate in their electromagnetic analogies, where the physics and formalism are much easier to deal with. One such problem of primary importance is that of gravitational waves, which has been inspired and put forward mostly owing to our deep understanding of the structure and physics of electromagnetic waves. Some important issues in the physics of gravitational waves in general relativity, however, remain obscure. The questions of whether or not gravitational waves undergo refraction in a gravitating macroscopic (continuous) medium, whether the speed of a gravitational wave changes (i.e. slows down) in a material medium, and whether the phenomenon of gravitational polarization exists, have not been approached, or even properly posed, as yet.
To determine the structure of the macroscopic energy-momentum tensor resulting from averaging out a microscopic matter source, the problem of constructing a continuous (macroscopic) matter model for a given point-like (microscopic) matter distribution in general relativity has been formulated in [@MRZ:2001a]. The existing approaches have been considered, and a physical analogy with the similar problem in classical macroscopic electrodynamics has been pointed out. The procedure due to Szekeres [@Szek:1971], formulated in linearized general relativity on a Minkowski background space-time, has been analyzed; it constructs a tensor of gravitational quadrupole polarization by applying Kaufman’s method of molecular moments [@Kauf:1962] for the derivation of the polarization tensor in macroscopic electrodynamics, and derives an averaged field operator by utilizing an analogy between the linearized Bianchi identities and the Maxwell equations. The approach of Szekeres to constructing a tensor of gravitational quadrupole polarization is based on the following assumptions: (a) the linearized theory of gravity on a Minkowski background space-time; (b) the linearized field equations are taken as the linearized Bianchi identities, to employ an analogy between gravitation and electromagnetism; (c) the covariant method of molecular moments of Kaufman is applied to construct a tensor of quadrupole gravitational polarization. The procedure is shown to possess some inconsistencies; in particular, (1) it has only provided the terms linear in perturbations for the averaged field operator, which do not contribute to the dynamics of the averaged field, and (2) the analogy between electromagnetism and gravitation does break down upon averaging.
A macroscopic gravity approach in perturbation theory up to second order on a particular background space-time, taken to be a smooth weak gravitational field, has been applied to write down a system of macroscopic field equations [@MRZ:2001a], [@MRZ:2001b], [@MRZ:2001c]: Isaacson’s equations [@Isaa:1968a] with a source incorporating the quadrupole gravitational polarization tensor, Isaacson’s energy-momentum tensor of gravitational waves [@Isaa:1968b], the energy-momentum tensor of gravitational molecules and the corresponding equations of motion. The system of equations is shown to be underdetermined. A suitable set of material relations which relate all the tensors has been proposed [@MRZ:2001a], [@MRZ:2001b], [@MRZ:2001c], so that the full system of the field equations and the material relations becomes determined.
In this paper the system of equations is used to find a solution for the Szekeres model of gravitational quadrupole polarization. A model for the static weak-field macroscopic medium is analyzed and the equation for the macroscopic gravitational potential is derived. This is a biharmonic equation which generalizes the Poisson equation of Newtonian gravity. In the case of strong gravitational quadrupole polarization it essentially holds inside a macroscopic matter source. Outside the source the gravitational potential fades away exponentially. The equation is equivalent to a system of the Poisson equation and the nonhomogeneous modified Helmholtz equations. The general solution to this system is obtained by using Green’s function method. This solution does not have a limit to Newtonian gravity. In the case of insignificant gravitational quadrupole polarization the equation for the macroscopic gravitational potential becomes the Poisson equation with the matter density renormalized by a factor including the value of the quadrupole gravitational polarization of the source. The general solution to this equation has been obtained by using Green’s function method and it has a limit to Newtonian gravity.
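Schematically, the stated equivalence between the fourth-order equation and a Poisson/modified-Helmholtz pair can be displayed as follows; the symbols here ($\chi$ for the auxiliary field, $\mu$ for a polarization scale, $\kappa$ for the coupling) are purely illustrative and do not reproduce the paper's coefficients:

```latex
% A fourth-order (biharmonic-type) operator factorizes into two
% second-order ones, each solvable with its own Green's function:
(\Delta - \mu^{2})\,\Delta\phi = \kappa\rho
\quad\Longleftrightarrow\quad
\begin{cases}
\Delta\phi = \chi & \text{(Poisson-type)}\\[2pt]
(\Delta - \mu^{2})\,\chi = \kappa\rho & \text{(modified Helmholtz)}
\end{cases}
```

The modified Helmholtz operator is what produces the exponential fall-off of the potential outside the source mentioned above.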
Macroscopic Description of Gravity {#*mdg}
==================================
To consider the problem of a macroscopic description of gravitation, an approach of macroscopic gravity has been proposed earlier in [@Zala:1992], [@Mars-Zala:1997] (see [@Zala:1997], [@Zala:1998], [@Kras:1997] for a discussion of the problem and references therein, and [@Tava-Zala:1998] for a discussion of the physical status of general relativity as either a microscopic or a macroscopic theory of gravity). A covariant space-time volume averaging procedure for tensor fields [@Zala:1992], [@Mars-Zala:1997], [@Zala:1993] has been defined and proved to exist on arbitrary Riemannian space-times, with well-defined properties of the averages. Upon utilizing the averaging scheme, the macroscopic gravity approach has shown that (*i*) averaging out Cartan’s structure equations brings about the structure equations for the averaged (macroscopic) non-Riemannian geometry and the definition and properties of the correlation tensor, (*ii*) the averaged Einstein equations then become the macroscopic field equations and must be supplemented by a set of differential equations for the correlation tensor, and (*iii*) it is always possible to extract a field operator of the same form as that of Einstein’s equations for the Riemannian macroscopic metric tensor and its Ricci tensor, while all other non-Riemannian correlation terms go to the right-hand side of the averaged Einstein equations to give a geometric correction to the averaged (macroscopic) energy-momentum tensor. It has also been shown [@Zala:1997], [@Zala:1998], [@Zala:1996b] that only in the case of neglecting all correlations of the gravitational field do the averaged equations become the macroscopic Einstein equations for the Riemannian macroscopic metric tensor and its Ricci tensor with a continuous matter distribution.
This result reveals the physical status of using the so-called standard procedure in cosmology when one claims that the Einstein equations preserve their form after substitution of a discrete matter model by a continuous one.
In using the Einstein equations for a matter distribution in the form of a set of point-like mass constituents, there is a problem of adequate application, or validity, of the Einstein equations when such a matter distribution is substituted by a continuous matter distribution while the field operator in the left-hand side of the equations is kept unchanged. This problem as it stands in cosmology[^4] is called the averaging problem [@Zala:1992], [@Shir-Fish:1962], [@Scia:1971], [@Elli:1984], [@Zoto-Stoe:1992]. Indeed, let us consider the Einstein equations in the mixed form[^5] $$g^{\alpha \epsilon }r_{\epsilon \beta }-\frac{1}{2}\delta _{\beta }^{\alpha
}g^{\mu \nu }r_{\mu \nu }=-\kappa t_{\beta }^{\alpha \mathrm{(discrete)}}
\label{EE}$$ with $$t_{\beta }^{\alpha \mathrm{(discrete)}}(x)=\sum_{i}{t_{(i)}}_{\beta
}^{\alpha }[x-z_{(i)}(\tau _{(i)})] \label{dmd}$$ where ${t_{(i)}}_{\beta }^{\alpha }$ is an energy-momentum tensor for a point-like mass moving along its world line $z^{\mu
}=z_{(i)}^{\mu }(\tau _{(i)})$ parameterized by $\tau _{(i)}$, and $i$ labels the matter particles in the distribution (\[dmd\]). Changing the discrete matter distribution to a continuous (hydrodynamic) one in the right-hand side of (\[EE\]), which is the standard approach in cosmology [@Zala:1992], [@Shir-Fish:1962], [@Scia:1971], [@Elli:1984], [@Zoto-Stoe:1992] made phenomenologically on the basis of an assumption about the uniformity and isotropy of the distribution of galaxies, or clusters of galaxies, throughout the whole Universe, means an implicit averaging denoted here by $\langle \cdot \rangle $ $$t_{\beta }^{\alpha \mathrm{(discrete)}}(x)\rightarrow T_{\beta }^{\alpha
\mathrm{(hydro)}}(x)=\left\langle \sum_{i}{t_{(i)}}_{\beta }^{\alpha
}[x-z_{(i)}(\tau _{(i)})]\right\rangle \ . \label{aver-dmd}$$ Given a covariant averaging procedure $\langle \cdot \rangle $ for tensors on space-time, averaging out (\[EE\]) while taking (\[aver-dmd\]) into account brings $$\langle g^{\alpha \epsilon }r_{\epsilon \beta }\rangle -\frac{1}{2}\delta
_{\beta }^{\alpha }\langle g^{\mu \nu }r_{\mu \nu }\rangle =-\kappa T_{\beta
}^{\alpha \mathrm{(hydro)}}\ . \label{averEE:1}$$ An important point regarding the averaged equations (\[averEE:1\]) is that in this form they are just algebraic relations between components of the smoothed hydrodynamic energy-momentum tensor and the average products of the metric tensor by the Ricci tensor $\langle g^{\alpha \epsilon }r_{\epsilon
\beta }\rangle $ and cannot therefore be taken as field equations. By splitting the products out as $\langle g^{\alpha \epsilon }r_{\epsilon \beta
}\rangle =\langle g^{\alpha \epsilon }\rangle \langle r_{\epsilon \beta
}\rangle +C_{\beta }^{\alpha }$ where $C_{\beta }^{\alpha }$ is a correlation tensor, the averaged equations (\[averEE:1\]) become $$\langle g^{\alpha \epsilon }\rangle \langle r_{\epsilon \beta
}\rangle - \frac{1}{2}\delta _{\beta }^{\alpha }\langle g^{\mu \nu
}\rangle \langle r_{\mu \nu }\rangle =-\kappa T_{\beta }^{\alpha
\mathrm{(hydro)}}-C_{\beta }^{\alpha }+\frac{1}{2}\delta _{\beta
}^{\alpha }C_{\epsilon }^{\epsilon }. \label{averEE:2}$$ Here $\langle g^{\alpha \beta }\rangle $ and $\langle r_{\alpha
\beta }\rangle $ denote the averaged inverse metric and Ricci tensors, which are supposed to describe the gravitational field due to the matter distribution $T_{\beta }^{\alpha \mathrm{(hydro)}}$. A simple but important observation [@Zala:1997], [@Zala:1998] now is that the averaged Einstein equations (\[averEE:2\]) are still not “real” field equations, but merely a definition of the correlation tensor $C_{\beta }^{\alpha }$ as the difference between (\[averEE:1\]) and (\[averEE:2\]). The origin of this fundamental fact is that the average of the non-linear operator of (\[EE\]) on the metric tensor $g_{\rho
\sigma }$ is not equal in general[^6] to an operator of the same form on the average metric $\langle g_{\rho \sigma }\rangle $: $$\left\langle \left( g^{\alpha \epsilon }r_{\epsilon \beta
}-\frac{1}{2} \delta _{\beta }^{\alpha }g^{\mu \nu }r_{\mu \nu
}\right) [g_{\rho \sigma }]\right\rangle \neq \left( \langle
g^{\alpha \epsilon }\rangle \langle r_{\epsilon \beta }\rangle
-\frac{1}{2}\delta _{\beta }^{\alpha }\langle g^{\mu \nu }\rangle
\langle r_{\mu \nu }\rangle \right) [\langle g_{\rho \sigma
}\rangle ]\ . \label{aver:oper}$$ To restore their status as field equations, one must define the object $C_{\beta }^{\alpha }$ and find its properties using information from outside the Einstein equations.
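The inequality (\[aver:oper\]) is ultimately the statement that averaging does not commute with a nonlinear operator. A minimal numerical toy (Python; the fluctuating scalar field and the stand-in operator $F[g]=1/g$ are illustrative assumptions, not objects from the text) shows how the mismatch defines a correlation term:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "microscopic field": a scalar fluctuating around a smooth background.
g0, sigma = 1.0, 0.2
g = g0 + sigma * rng.standard_normal(200_000)

# A nonlinear stand-in for the field operator (the products g^{ae} r_{eb}
# in the Einstein tensor are likewise nonlinear in the metric).
def F(field):
    return 1.0 / field

avg_of_F = F(g).mean()    # <F[g]>: average of the operator
F_of_avg = F(g.mean())    # F[<g>]: operator of the average

# Their difference plays the role of the correlation tensor C:
C = avg_of_F - F_of_avg
print(avg_of_F, F_of_avg, C)
```

The difference $\langle F[g]\rangle -F[\langle g\rangle ]$ is generically non-zero for a fluctuating $g$, which is precisely why $C_{\beta }^{\alpha }$ must be specified by information beyond the averaged equations themselves.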
To resolve the averaging problem, and to consider it in a broader context as the problem of macroscopic description of gravitation, the approach of macroscopic gravity has been proposed [@Zala:1992], [@Mars-Zala:1997], [@Zala:1997], [@Zala:1998], [@Kras:1997], [@Zala:1993], [@Zala:1996b], [@Zala:1996] (see [@Zala:1997], [@Kras:1997] for discussion of the problem and references therein, [@Tava-Zala:1998] for discussion of the physical status of general relativity as either a microscopic or macroscopic theory of gravity). A covariant space-time volume averaging procedure for tensor fields [@Zala:1992], [@Mars-Zala:1997], [@Zala:1993] has been defined and proved to exist on arbitrary Riemannian space-times with well-defined properties of the averages. Upon utilizing the averaging scheme, the macroscopic gravity approach has shown that (*i*) averaging out Cartan’s structure equations brings about the structure equations for the averaged (macroscopic) non-Riemannian geometry and the definition and the properties of the correlation tensor $C_{\beta }^{\alpha }$, (*ii*) the averaged Einstein’s equations (\[averEE:2\]) then become the macroscopic field equations and they must be supplemented by a set of differential equations for the correlation tensor, (*iii*) it is always possible to extract the field operator of the form (\[EE\]) for the Riemannian macroscopic metric tensor $G_{\mu
\nu }$ and its Ricci tensor $M_{\mu \nu }$, with all other non-Riemannian correlation terms going to the right-hand side of (\[averEE:2\]) to give a geometric correction to the averaged energy-momentum tensor $T_{\beta }^{\alpha \mathrm{(hydro)}}$. It has also been shown [@Zala:1997], [@Zala:1998], [@Zala:1996b] that only when all correlations of the gravitational field are neglected do the averaged equations (\[averEE:2\]) become the macroscopic Einstein equations with a continuous matter distribution $$G^{\alpha \epsilon }M_{\epsilon \beta }-\frac{1}{2}\delta _{\beta }^{\alpha
}G^{\mu \nu }M_{\mu \nu }=-\kappa T_{\beta }^{\alpha \mathrm{(hydro)}},
\label{macroEE}$$ which reveals the physical status of using the standard procedure in cosmology of claiming (\[averEE:2\]) to be the Einstein equations (\[macroEE\]) after substitution of the matter model (\[aver-dmd\]). The physical meaning, dynamical role and magnitude of the gravitational correlations must be elucidated in various physical settings. There is some evidence that they cannot be negligible for cosmological evolution (see, for example, [@Bild-Futa:1991] for an estimation of the age of the Universe in a second-order perturbation approach).
Macroscopic media in general relativity {#media}
=======================================
Derivation of the macroscopic (averaged) Maxwell field operator in macroscopic electrodynamics is easily accomplished due to its linear field structure, and the main problem consists in the construction of models of macroscopic electromagnetic media (for example, diamagnetics, magnetics, waveguides, etc.) [@Lore:1916], [@Pano-Phil:1962], [@deGr-Sutt:1972], [@Jack:1975], which relates to the structure of the averaged current. In general relativity the problem of constructing macroscopic gravitating medium models is hardly elaborated, for the following reasons: (a) the existing mathematical and physical difficulties in establishing the form of the averaged (macroscopic) operator in (\[averEE:2\]) for the field equations of macroscopic gravity reduce the interest in developing macroscopic gravitating media; (b) posing a more or less realistic problem with discrete matter creates, on its own, mathematical and physical problems due to the nonlinearity and non-trivial geometry of gravitation (for example, the $N$-body problem, the problem of statistical description of gravity, etc.); and (c) relying on physically motivated phenomenological arguments (uniformity, isotropy, staticity, etc.), most applications of general relativity deal with *effective* continuous media even if the underlying physical model is discrete in nature, as in cosmology (see Section \[\*mdg\]) or in the description of extended bodies in general relativity (see [@Tava-Zala:1998] for discussion of the physical status of general relativity).
The kinetic approach in physics is known to provide a general scheme for introducing characteristics of continuous media with a *known* distribution function of a discrete configuration. But the advantages of such generality are often greatly weakened in particular applications by the difficulty of solving the Boltzmann equation to find a distribution function of interest. This applies to a great extent to general relativity where, despite the formulation of the general relativistic Boltzmann equation [@Cher:1962], [@Isra:1972], the kinetic approach still remains useful for general definitions and considerations rather than as a working tool (see, for example, [@Yodz:1971]) to derive a specific model of a macroscopic medium.
In the case of macroscopic electrodynamics, together with the volume space-time averaging on Minkowski space-time, the formalism of statistical distribution functions has been utilized (see [@deGr-Sutt:1972] and references therein); it is of importance for the mathematically well-posed derivation of the macroscopic theory and the general structure of the averaged current starting from the microscopic electrodynamics of point-like moving charges. Further application of the macroscopic theory usually requires mainly phenomenological considerations to establish material relations between the macroscopic average fields and the induction fields, which are necessary to make the otherwise underdetermined system of macroscopic equations determined. A correct derivation of material relations is known to require [@deGr:1969] averaging the microscopic equations with a given microscopic matter model while averaging the microscopic field equations. Though it is the only self-consistent way, the elaboration of such an approach still remains a challenge even for simple physical settings.
On the other hand, volume (space, time, space-time) averaging procedures maintain their importance and direct physical meaning in application to macroscopic settings, as well as their clarity and descriptiveness. A volume averaging is also known to be unavoidable in all macroscopic settings (including statistical approaches) [@Jack:1975], [@deGr:1969], [@Russ:1970], [@Robi:1973], and space-time averages of physical fields are known to have the physical meaning of directly measurable quantities [@Bohr-Rose:1933], [@DeWi:1962] (for discussion see [@Zala:1997], [@Zala:1998] and references therein). This greatly motivates and supports interest in applying approaches with various averaging schemes in physics despite the corresponding (mostly mathematical, not physical) difficulties in the rigorous formulation of averaging procedures.
This paper approaches the problem of constructing gravitating macroscopic media in general relativity by using an appropriate space-time averaging scheme.
Szekeres’ gravitational polarization tensor
===========================================
The approach of Szekeres [@Szek:1971] to construct a tensor of gravitational quadrupole polarization is based on the following assumptions: (a) the linearized theory of gravity on a Minkowski background is adopted; (b) the linearized field equations are taken to be the linearized Bianchi identities, to exploit an analogy between gravitation and electromagnetism; (c) the covariant method of molecular moments of Kaufman is applied to construct a tensor of quadrupole gravitational polarization.
The equations under consideration are the contracted Bianchi identities $$\label{bianchi}
C_{\mu \nu \rho \sigma}{}^{; \sigma} = \kappa J_{\mu \nu \rho}$$ where $C_{\mu \nu \rho \sigma}$ is the Weyl tensor interpreted as free gravitational field, $J_{\mu \nu \rho}$ is a kind of “matter current" for the energy-momentum tensor $t_{\mu \nu}$ $$\label{matter-current}
J_{\mu \nu \rho} = J_{[ \mu \nu ] \rho} = - (t_{\rho [ \mu ; \nu ]} -
\frac{1}{3} g_{\rho [ \mu} t_{, \nu ]}) ,$$ $$\label{matter-current-conserv}
J_{\mu \nu \rho}{}^{; \rho} = 0 .$$ Equations (\[bianchi\]) are analogous to the Maxwell equations $$\label{maxwell}
f_{\mu \nu }{}^{; \nu} = \frac{4 \pi}{c} j_{\mu}$$ with (\[matter-current-conserv\]) being comparable with the conservation of the electromagnetic current $j_{\mu}$ $$\label{em-current-conserv}
j_{\mu}{}^{; \mu} = 0 .$$
Let us consider a number of particles labelled by $i$ and having masses $m_{i}
$ and which are moving in their own effective gravitational field along world lines $z_{i}^{\mu }(\tau _{i})$. A physical parameter which characterizes such a distribution is a characteristic distance $l$ between neighboring particles. Then the corresponding microscopic energy-momentum tensor has the form $$t^{\mathrm{(micro)}\mu \nu }(x)=c^{-1}\sum_{i}\int m_{i}\frac{dz_{i}^{\mu }}
{d\tau _{i}}\frac{dz_{i}^{\nu }}{d\tau _{i}}\delta ^{4}[x-z_{i}^{\mu }(\tau
_{i})]d\tau _{i}. \label{micro}$$ Assume now that due to gravitation the particles form into groups, a kind of gravitational molecules, which will be labelled by the index $a$. From the physical point of view this means the presence of another parameter, a characteristic size (diameter) $L$ of such a molecule with $L\gg l$. It is a long-wavelength *macroscopic* parameter and its presence in a microscopic system will define the dynamics of the system on distances of order $L$. The microscopic energy-momentum tensor (\[micro\]) now becomes $$t^{\mathrm{(molec)}\mu \nu }(x)=c^{-1}\sum_{a}\sum_{i\in a}\int m_{i}
\frac{d\tau _{a}}{d\tau _{i}}\frac{dz_{i}^{\mu }}{d\tau _{a}}\frac{dz_{i}^{\nu }}
{d\tau _{a}}\delta ^{4}[x-y_{a}^{\mu }(\tau _{a})-s_{i}(\tau _{a})]d\tau _{a}
\label{molec}$$ where $y_{a}^{\mu }(\tau _{a})$ is a world line of the $a$-th molecule center of mass [@Szek:1971] and $s_{i}^{\mu }(\tau _{a})=z_{i}^{\mu
}(\tau _{i})-y_{a}^{\mu }(\tau _{a})$ is a vector connecting the $i$-th particle with the center of mass of the molecule containing this particle. Let us now apply the method of molecular moments of Kaufman [@Kauf:1962] to represent (\[molec\]) as a series expansion in powers of $s_{i}^{\mu }$, under the assumption that the effective gravitational field created by the moving gravitational molecules is weak and that the perturbations of the gravitational field due to relative oscillations of gravitating particles in molecules are small compared with the mean effective field. After averaging over a typical size of the gravitational molecule (for an averaging procedure see [@Zala:1992], [@Zala:1993]) one gets[^7], in accordance with the Szekeres procedure, $$\langle t^{\mathrm{(molec)}\mu \nu }\rangle =T^{\mathrm{(free)}\mu \nu
}+D^{\mu \nu \rho }{}_{,\rho }+Q^{\mu \nu \rho \sigma }{}_{,\rho \sigma }
\label{av-molec}$$ where $T^{\mathrm{(free)}\mu \nu }$ is the energy-momentum tensor of molecules, which has a form similar to (\[micro\]) with $i$ replaced by $a$, $D^{\mu \nu \rho }$ is the tensor of gravitational dipole polarization, which can be incorporated into the quadrupole term, and $Q^{\mu
\nu \rho \sigma }$ the tensor of gravitational quadrupole polarization $$Q^{\mu \nu \rho \sigma }=c^{-1}\langle \sum_{a}\int q_{a}^{\mu \nu \rho
\sigma }\delta ^{4}(x-y_{a})d\tau _{a}\rangle , \label{polar}$$ which has the symmetries of the Riemann tensor. The covariant gravitational quadrupole moment $q_{a}^{\mu \nu \rho \sigma }$ is defined as $$\begin{aligned}
q_{a}^{\mu \nu \rho \sigma } & = & g_{a}^{\mu \nu }u_{a}^{\rho }u_{a}^{\sigma
}-g_{a}^{\rho \nu }u_{a}^{\mu }u_{a}^{\sigma }-g_{a}^{\mu \sigma
}u_{a}^{\rho }u_{a}^{\nu }+g_{a}^{\rho \sigma }u_{a}^{\mu }u_{a}^{\nu
}+ \nonumber \\
& & u_{a}^{\mu }h_{a}^{\rho \nu \sigma }-u_{a}^{\rho }h_{a}^{\mu \nu \sigma
}+u_{a}^{\nu }h_{a}^{\sigma \mu \rho }-u_{a}^{\sigma }h_{a}^{\nu \mu \rho
}+k_{a}^{\mu \nu \rho \sigma }, \label{moment}\end{aligned}$$ where $$g_{a}^{\mu \nu }=\sum_{i}m_{i}\frac{d\tau _{a}}{d\tau _{i}}s_{i}^{\mu
}s_{i}^{\nu },$$ $$h_{a}^{\mu \nu \rho }=\frac{2}{3}\sum_{i}m_{i}\frac{d\tau _{a}}{d\tau _{i}}
s_{i}^{\mu }\left( \frac{ds_{i}^{\nu }}{d\tau _{a}}s_{i}^{\rho }-\frac{
ds_{i}^{\rho }}{d\tau _{a}}s_{i}^{\nu }\right) ,$$ $$k_{a}^{\mu \nu \rho \sigma }=\frac{2}{3}\sum_{i}m_{i}\frac{d\tau _{a}}{d\tau
_{i}}\left( \frac{ds_{i}^{\mu }}{d\tau _{a}}s_{i}^{\rho }\frac{ds_{i}^{\nu }
}{d\tau _{a}}s_{i}^{\sigma }-\frac{ds_{i}^{\mu }}{d\tau _{i}}s_{i}^{\rho
}s_{i}^{\nu }\frac{ds_{i}^{\sigma }}{d\tau _{a}}\right) .$$ Upon averaging (\[bianchi\]) over the typical size of a gravitational molecule the following equations were obtained: $${\langle C_{\mu \nu \rho \sigma }{}\rangle }^{,\sigma }=\kappa \langle
J_{\mu \nu \rho }^{\mathrm{(micro)}}\rangle \label{av-bianchi}$$ where $$\langle J_{\mu \nu \rho }^{\mathrm{(micro)}}\rangle =-\langle t_{\rho
\lbrack \mu }^{\mathrm{(micro)}}\rangle _{,\nu ]}+\frac{1}{3}\eta _{\rho
\lbrack \mu }\langle t^{\mathrm{(micro)}}\rangle _{,\nu ]},
\label{av-matter-current}$$ or $$P_{\mu \nu \rho \sigma }=\frac{1}{2}(-{Q_{\rho \sigma \epsilon \lbrack \mu }}
^{,\epsilon }-\frac{1}{3}\eta _{\rho \lbrack \mu }Q^{\gamma }{}_{\sigma
\gamma \epsilon }{}^{,\epsilon })_{,\nu ]} \label{polarization}$$ and $$\langle J_{\mu \nu \rho }^{\mathrm{(micro)}}\rangle =J_{\mu \nu \rho }^{
\mathrm{(free)}}-P_{\mu \nu \rho \sigma }{}^{,\sigma }.
\label{av-matter-current-2}$$ The expression (\[av-matter-current-2\]) is analogous to the expression for the averaged electromagnetic current $\langle j^{\mathrm{(micro)\mu }
}\rangle $ for a set of charged particles moving along their world lines in the effective electromagnetic field in accordance with the microscopic equation (\[maxwell\]) with $j^{\mathrm{(micro)\mu }}$ when particles are grouped into molecules [@Kauf:1962] $$\langle j^{\mathrm{(micro)\mu }}\rangle =j^{\mathrm{(free)\mu }}-cP^{\mu \nu
}{}_{,\nu } \label{av-current}$$ where the polarization tensor $P^{\mu \nu }$ is defined as an average of the quadrupole polarization moments of the molecules $p_{a}^{\mu \nu }$ (see [@Kauf:1962], [@Szek:1971] for details) $$P^{\mu \nu }=\langle \sum_{a}\int d\tau _{a}p_{a}^{\mu \nu }\delta
^{4}(x-y_{a})\rangle . \label{polarization-em}$$ Then equations (\[av-bianchi\]) can be rewritten as the macroscopic equations[^8] $$E_{\mu \nu \rho \sigma }{}^{,\sigma }=\kappa J_{\mu \nu \rho }^{\mathrm{
(free)}} \label{av-bianchi-2}$$ for the gravitational induction tensor $E_{\mu \nu \rho \sigma }$ defined as $$E_{\mu \nu \rho \sigma }=\langle C_{\mu \nu \rho \sigma }\rangle +\kappa
P_{\mu \nu \rho \sigma }. \label{gr-induction}$$ The macroscopic equations (\[av-bianchi-2\]) are analogous to the macroscopic Maxwell equations obtained by averaging the microscopic equations (\[maxwell\]) with $j^{\mathrm{(micro)}\mu }$, taking (\[av-current\]) into account, $$H^{\mu \nu }{}_{,\nu }=\frac{4\pi }{c}J^{\mathrm{(free)\mu }}
\label{av-maxwell}$$ for the electromagnetic induction tensor $H^{\mu \nu }$ defined as $$H^{\mu \nu }=\langle f^{\mu \nu }\rangle +4\pi P^{\mu \nu }.
\label{em-induction}$$ Unfortunately, at this point the analogy between electromagnetism and gravitation, which holds on the level of (\[bianchi\]), (\[matter-current-conserv\]) and (\[maxwell\]), (\[em-current-conserv\]), breaks down. Indeed, the formal similarity of (\[av-bianchi-2\]), (\[gr-induction\]) and (\[av-maxwell\]), (\[em-induction\]) does not amount to a structural analogy between averaged electromagnetism and gravitation: (A) the gravitational induction tensor $E_{\mu \nu \rho \sigma }$ no longer has the symmetries of the Weyl tensor, whereas $H_{\mu \nu }$ keeps the symmetries of $f_{\mu \nu }$; (B) it is constructed from the second derivatives of the polarization tensor $Q_{\mu
\nu \rho \sigma }$, in contrast with the linear algebraic structure of the electromagnetic induction tensor $H_{\mu \nu }$ in terms of the polarization tensor $P_{\mu \nu }$; it is thus impossible to proceed with the formulation of phenomenological material relations between $E_{\mu \nu \rho
\sigma }$ and $\langle C_{\mu \nu \rho \sigma }\rangle $ as is possible in electromagnetism (relations between $H_{\mu \nu }$ and $\langle f_{\mu \nu
}\rangle $, or amongst the fields $\mathbf{E}$, $\mathbf{D}$, $\mathbf{B}$, $\mathbf{H}$ and $\mathbf{J}$).
An even more important issue is that the analysis of the macroscopic field equation (\[av-bianchi\]) $$\label{av-bianchi-orders}
{
\begin{array}[t]{c}
{\langle C_{\mu \nu \rho \sigma}{} \rangle}^{, \sigma} \\
{\scriptscriptstyle {\mathcal{O}}(e)}
\end{array}
} = \kappa \langle J^{\mathrm{(micro)}}_{\mu \nu \rho} \rangle = {
\begin{array}[t]{c}
\kappa J^{\mathrm{(free)}}_{\mu \nu \rho} \\
{\scriptscriptstyle {\mathcal{O}}(1)}
\end{array}
} - {
\begin{array}[t]{c}
\kappa P_{\mu \nu \rho \sigma}{}^{, \sigma} \\
{\scriptscriptstyle {\mathcal{O}}(e^2)}
\end{array}
},$$ where $e$ is a parameter measuring the deviation from flat space, requires putting the orders of magnitude of all quantities into agreement, and it reveals that the linearized Weyl tensor should vanish under averaging, $\langle C_{\mu \nu \rho \sigma} \rangle = 0$. The Weyl tensor must be estimated in perturbation theory up to the second order, as was done for the matter current $J^{\mathrm{(micro)}}_{\mu \nu \rho}$ in the right-hand side of (\[bianchi\]). So in considering the physical features of the polarization tensor, neither the macroscopic equations (\[av-bianchi\]) nor any other field equations had in fact been employed [@Szek:1971].
On the basis of the expression $$\langle t_{\mu \nu }^{\mathrm{(micro)}}\rangle =T_{\mu \nu }^{\mathrm{(free)}
}+\frac{1}{2}Q_{\mu \rho \nu \sigma }{}^{,\rho \sigma }
\label{av-energy-momentum}$$ the following material relations have been suggested $$Q_{i0j0}=\langle G_{ij}\rangle N=\epsilon _{g}C_{i0j0} \label{gr-material}$$ where $N$ is the average number of molecules per unit volume, $G_{ij}$ is the quadrupole moment of a molecule $$G_{ij}=\int \rho (x)\delta x_{i}\delta x_{j}d^{3}x, \label{quadruple}$$ $\rho =\rho (x)$ is the matter density in molecules, $\delta x_{i}$ is a vector between neighboring particles of a molecule. The quantity $\epsilon
_{g}$ has been called the gravitational dielectric constant and in the Newtonian approximation is found to be $$\epsilon _{g}=\frac{1}{4}\frac{mA^{2}c^{2}}{\omega _{0}^{2}}N
\label{grav-dielectric-const}$$ where $A$ is the average linear dimension of a typical molecule, $m$ is the average mass of the molecules, $\omega _{0}^{2}$ is a typical frequency of harmonically oscillating particles in molecules.
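Both ingredients of the material relation (\[gr-material\]), the molecular quadrupole moment (\[quadruple\]) and the constant (\[grav-dielectric-const\]), are straightforward to evaluate for a discrete particle configuration. A minimal sketch (Python; the discrete sum replacing the integral in (\[quadruple\]) and all numerical values are illustrative assumptions, not quantities from the text):

```python
import numpy as np

def quadrupole_moment(masses, positions):
    """Discrete analogue of G_ij = integral of rho dx_i dx_j d^3x:
    G_ij = sum_k m_k dx_i dx_j, with dx measured from the center of mass."""
    masses = np.asarray(masses, dtype=float)
    positions = np.asarray(positions, dtype=float)
    com = (masses[:, None] * positions).sum(axis=0) / masses.sum()
    dx = positions - com
    return np.einsum("k,ki,kj->ij", masses, dx, dx)

def grav_dielectric_const(m, A, N, omega0, c=1.0):
    """epsilon_g = (1/4) * m * A**2 * c**2 * N / omega0**2."""
    return 0.25 * m * A**2 * c**2 * N / omega0**2

# A toy "molecule": two unit masses separated by unit distance along x,
# so G_xx = 2 * 1 * (1/2)**2 = 0.5 and all other components vanish.
G = quadrupole_moment([1.0, 1.0], [[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
print(G[0, 0])  # -> 0.5
```

The resulting $G_{ij}$ is symmetric by construction, as required of the quadrupole moment entering (\[gr-material\]).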
The Macroscopic Gravity Equations {#mgeqs}
=================================
A macroscopic gravity approach in perturbation theory up to the second order on a particular background space-time, taken to be a smooth weak gravitational field, is applied to write down a system of macroscopic field equations: Isaacson’s equations with a source incorporating the quadrupole gravitational polarization tensor, Isaacson’s energy-momentum tensor of gravitational waves and the energy-momentum tensor of gravitational molecules, and the corresponding equations of motion.
The gravitational field created by a number of particles represented by a microscopic energy-momentum tensor $t_{\beta }^{\alpha \mathrm{(micro)}}$ is defined by Einstein’s equation $$g^{\alpha \epsilon }r_{\epsilon \beta }-\frac{1}{2}\delta _{\beta }^{\alpha
}g^{\mu \nu }r_{\mu \nu }=-\kappa t_{\beta }^{\alpha \mathrm{(micro)}}
\label{ein}$$ where $\kappa =8\pi G/c^{4}$ is Einstein’s gravitational constant and $G$ is Newton’s gravitational constant. The Einstein equations (\[ein\]) for the microscopic distribution of gravitational molecules (\[molec\]) have the following form: $$g^{\alpha \epsilon }r_{\epsilon \beta }-\frac{1}{2}\delta _{\beta }^{\alpha
}g^{\mu \nu }r_{\mu \nu }=-\kappa t_{\beta }^{\alpha \mathrm{(molec)}}.
\label{ein-molec}$$
Averaging the left-hand side of the Einstein equations (\[ein-molec\]) following Isaacson’s high-frequency approximation approach [@Isaa:1968a], [@Isaa:1968b], using the averaging procedure [@Zala:1992], [@Zala:1993] (one can also use Isaacson’s averaging procedure [@Isaa:1968a], [@Isaa:1968b], see also [@Zala:1996]), and taking into account the expression [@Szek:1971] for the tensor of gravitational quadrupole polarization $Q^{\mu \nu \rho \sigma }$ in terms of the covariant gravitational quadrupole moment $q_{a}^{\mu \nu \rho \sigma
}$ (\[polar\]), brings the averaged Einstein equations to the form: $$R_{\mu \nu }^{(0)}-\frac{1}{2}g_{\mu \nu }^{(0)}R^{(0)}=-\kappa (T_{\mu \nu
}^{\mathrm{(free)}}+T_{\mu \nu }^{\mathrm{(GW)}}+\frac{1}{2}c^{2}Q_{\mu \rho
\nu \sigma }{}^{;\rho \sigma }), \label{av-ein-molec}$$ where $T_{\mu \nu }^{\mathrm{(GW)}}$ is Isaacson’s energy-momentum tensor of gravitational waves [@Isaa:1968a], [@Isaa:1968b] and $T_{\mu \nu }^{
\mathrm{(free)}}$ is the energy-momentum tensor of molecules $$T_{\mu \nu }^{\mathrm{(free)}}(x)=c^{-1}\sum_{a}\int m_{a}\frac{dy_{a}^{\mu }
}{d\tau _{a}}\frac{dy_{a}^{\nu }}{d\tau _{a}}\delta ^{4}[x-y_{a}^{\mu }(\tau
_{a})]d\tau _{a}. \label{free}$$ All terms in equation (\[av-ein-molec\]) can be shown to be of the same order of magnitude ${\mathcal{O}}(1/L^{2})$. The macroscopic equations (\[av-ein-molec\]) give the equations of motion for molecules[^9] $$T^{\mathrm{(free)}\mu \nu }{}_{;\nu }=0, \label{eq-motion-molec}$$ conservation of the energy-momentum of gravitational waves $$T^{\mathrm{(GW)}\mu \nu }{}_{;\nu }=0, \label{conserv-gw}$$ and an identity for the gravitational polarization $$Q_{\mu \nu \rho \sigma }{}^{;\nu \sigma \mu }=0. \label{ident-polar}$$
The system of equations (\[av-ein-molec\])-(\[ident-polar\]) is underdetermined because there are 20 unknown components of the tensor of gravitational polarization. It is possible to formulate two natural material relations. The first relation connects the traceless part of the quadrupole polarization tensor $$\begin{aligned}
\widetilde{Q}_{\mu \rho \nu \sigma } & = & Q_{\mu \rho \nu \sigma }+ \frac{1
}{2}(-g_{\mu \nu }P_{\rho \sigma }+g_{\mu \sigma }P_{\rho \nu }-g_{\rho
\sigma }P_{\mu \nu}+g_{\rho \nu }P_{\mu \sigma }) + \nonumber \\
& & \frac{1}{2}S(g_{\mu \nu }g_{\rho \sigma }-g_{\mu \sigma
}g_{\rho \nu }), \label{polar-decomp}\end{aligned}$$ where $P_{\rho \sigma }=Q^{\mu }{}_{\rho \mu \sigma }$, $S=P_{\rho }^{\rho }$ , with the traceless energy-momentum tensor of gravitational waves $T_{\mu
\nu }^{\mathrm{(GW)}}$ $$\frac{c^{2}}{2}\widetilde{Q}_{\mu \rho \nu \sigma }{}^{;\rho \sigma
}=\lambda T_{\mu \nu }^{\mathrm{(GW)}}, \label{material-1}$$ where $\lambda =\lambda (x)$ is the gravitational radiation polarization factor. Relation (\[material-1\]) can be shown to be always valid in the geometrical optics limit.
The second material relation connects the remaining part of the polarization tensor $Q_{\mu \rho \nu \sigma }$, namely its trace $P_{\rho \sigma }$, with a projection of the curvature tensor on the world line of an observer (the electric part of the curvature tensor) $$P_{\rho \sigma }=\epsilon R_{\mu \rho \nu \sigma }^{(0)}u^{\mu }u^{\nu },
\label{material-2}$$ where $u^{\mu }$ is the observer 4-velocity (4-velocity of the molecule center of mass) and $\epsilon =\epsilon (x)$ is the macroscopic medium polarization factor. The relation (\[material-2\]) can be shown to lead to the correct expression for the 3-tensor of the average quadrupole gravitational moment [@Szek:1971] so that $$P_{\mu \nu }=(P_{00}=0,P_{0i}=0,P_{ij}=\langle G_{ij}\rangle N).
\label{q-trace}$$ where $N$ is the average number of molecules per unit volume, $\langle
G_{ij}\rangle $ is the averaged quadrupole moment of a molecule (\[quadruple\]). Then the material relation [@Szek:1971] can be recovered in the form $$Q_{i0j0}=\langle G_{ij}\rangle N=\epsilon _{g}R_{i0j0}
\label{gr-material-improv}$$ that gives $\epsilon =\epsilon _{g}$ with the gravitational dielectric constant $\epsilon _{g}$ defined in [@Szek:1971] as (\[grav-dielectric-const\]).
Thus the system of equations (\[av-ein-molec\])-(\[conserv-gw\]), (\[material-1\]), (\[material-2\]) is fully determined and can be used to find the gravitational and polarization fields for the macroscopic gravitating systems.
The Static Weak-field Macroscopic Medium {#medium}
========================================
The averaged Einstein equations (\[av-ein-molec\]) have been derived under the assumption of a weak gravitational field, though they can be considered formally valid for any background metric $g_{\mu \nu }^{(0)}$ with a given tensor of gravitational quadrupole polarization $Q^{\mu \nu \rho
\sigma }$ and the material relations (\[material-1\]) and (\[material-2\]). The definition of the tensor of gravitational quadrupole polarization $Q^{\mu \nu \rho \sigma }$ (\[polar\]) adopted here is essentially valid on the flat space-time background, owing to the definitions used for the molecule’s center of mass [@Szek:1971], [@Syng:1956]. Therefore, the averaged Einstein equations (\[av-ein-molec\]) with (\[polar\]) can only be consistently applied in the framework of the weak-field approximation.
A model of static weak-field macroscopic medium with quadrupole gravitational polarization is considered here. The model is based on three assumptions.
\(1) Newtonian gravity conditions for the energy-momentum tensor of molecules $T_{\mu \nu }^{\mathrm{(free)}}$, $$T_{00}^{\mathrm{(free)}}\gg T_{ij}^{\mathrm{(free)}},\quad T_{00}^{\mathrm{(free)}}\gg T_{0i}^{\mathrm{(free)}},\quad T_{00}^{\mathrm{(free)}}=T_{\mu
\nu }^{\mathrm{(free)}}u^{\mu }u^{\nu }=\mu c^{2}, \label{newton}$$ where the observer 4-velocity (4-velocity of the molecule center of mass) $
u^{\nu }$ is $u^{\nu }=(1,0,0,0)$ and $\mu $ is the macroscopic matter density. The condition (\[newton\]) means that the gravitational field created by gravitational molecules is essentially Newtonian.
\(2) The macroscopic metric tensor $g_{\mu \nu }^{(0)}$ is static, $$\frac{\partial g_{\mu \nu }^{(0)}}{\partial t}=0, \label{staticity}$$ which means that the macroscopic matter density $\mu $ depends only on spatial coordinates, $\mu =\mu (x^{a})$.
\(3) The condition of the weak-field approximation for the macroscopic metric tensor $g_{\mu \nu }^{(0)}$, $$g_{\mu \nu }^{(0)}=\eta _{\mu \nu }+eh_{\mu \nu }, \label{weak}$$ where $\eta _{\mu \nu }$ is the flat space-time metric, $\eta _{0 0}=-1$, $\eta _{1 1}=1$, $\eta _{2 2}=1$, $\eta _{3 3}=1$ and $\eta _{\mu \nu }=0$ if $\mu \neq \nu $, $h_{\mu \nu }$ are arbitrary perturbation functions depending here only on spatial coordinates, and $e$ is the smallness parameter, $e\ll 1$.
In general relativity the conditions (\[newton\])-(\[weak\]) for a microscopic energy-momentum tensor $t_{\beta }^{\alpha \mathrm{(micro)}}$ and for the metric tensor $g_{00}=-\left( 1+\frac{2\varphi }{c^{2}}\right) $, $g_{0i}=0$, $g_{ij}=\delta _{ij}$, are known to lead, in the Newtonian limit of the Einstein equations (\[ein\]), to the Poisson equation for the Newtonian gravitational potential $\varphi =\varphi (x^{a})$ $$\Delta \varphi =4\pi G\mu . \label{poisson}$$
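As a concrete check of (\[poisson\]), the exact potential of a homogeneous sphere can be verified to satisfy it numerically. A minimal sketch (Python; the units $G=M=R=1$ are illustrative assumptions):

```python
import numpy as np

G, M, R = 1.0, 1.0, 1.0                 # illustrative units
mu = 3.0 * M / (4.0 * np.pi * R**3)     # uniform density of the sphere

def phi(r):
    """Exact Newtonian potential of a homogeneous sphere of mass M, radius R."""
    inner = -G * M * (3 * R**2 - r**2) / (2 * R**3)
    outer = -G * M / r
    return np.where(r < R, inner, outer)

def radial_laplacian(f, r, h=1e-4):
    """Delta f = f'' + (2/r) f' for spherically symmetric f, central differences."""
    df = (f(r + h) - f(r - h)) / (2.0 * h)
    d2f = (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2
    return d2f + 2.0 * df / r

r_in, r_out = np.array([0.5]), np.array([2.0])
print(radial_laplacian(phi, r_in), 4.0 * np.pi * G * mu)  # equal inside the source
print(radial_laplacian(phi, r_out))                       # ~0 outside the source
```

Inside the source $\Delta \varphi $ reproduces $4\pi G\mu $, and outside it vanishes, as the Poisson equation requires.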
The Newtonian limit of the macroscopic gravity equations (\[av-ein-molec\]) under conditions (\[newton\])-(\[weak\]) for the macroscopic tensor $
g_{\mu \nu }^{(0)}$ $$g_{0 0 }^{(0)}=-\left( 1+\frac{2\varphi }{c^{2}}\right), \quad g_{1 1}^{(0)}=1,
\quad g_{2 2}^{(0)}=1, \quad g_{3 3}^{(0)}=1, \quad g_{\mu \nu }^{(0)}=0, \quad \mu \neq \nu ,
\label{potential}$$ should yield a generalization of the Poisson equation that incorporates the effect of gravitational quadrupole polarization.
For the case of a static weak-field macroscopic medium, Isaacson’s energy-momentum tensor of gravitational waves $T_{\mu \nu }^{\mathrm{(GW)}}$ vanishes, $$T_{\mu \nu }^{\mathrm{(GW)}}=0, \label{isaacson's}$$ and no gravitational radiation polarization factor $\lambda $ is involved, $$\lambda =0. \label{lambda}$$ The macroscopic medium polarization factor $\epsilon $ is here $$\epsilon =\epsilon _{g} \label{epsilon}$$ with the gravitational dielectric constant $\epsilon _{g}$ defined [@Szek:1971] by (\[grav-dielectric-const\]).
The Equation for the Macroscopic Gravitational Potential {#potential_eq}
========================================================
Calculation of the equation for the macroscopic gravitational potential $
\varphi $ from the macroscopic gravity equations (\[av-ein-molec\]) under conditions (\[newton\])-(\[weak\]) for the macroscopic tensor $g_{\mu
\nu }^{(0)}$ (\[potential\]) yields the equation $$\Delta \varphi =4\pi G\mu +\frac{4\pi G\epsilon _{g}}{3c^{2}}\Delta
^{2}\varphi \label{macropotential}$$ where $\Delta ^{2}\varphi \equiv \Delta (\Delta \varphi )$ is the Laplacian of the Laplacian of $\varphi $. This is a non-trivial generalization of the Poisson equation for the gravitational potential $\varphi $ of Newtonian gravity (\[poisson\]). It is a biharmonic-type equation due to the presence of the term $\Delta ^{2}\varphi $. The equation (\[macropotential\]) involves a singular perturbation: for a vanishing gravitational dielectric constant, $\epsilon _{g}=0$, it reduces to the Poisson equation, but for $\epsilon _{g}\neq 0$ its operator structure changes to that of a fourth-order partial differential equation in $\varphi $, as compared with the second-order Poisson equation.
It is convenient to introduce the factor $$\frac{1}{k^{2}}=\frac{4\pi G\epsilon _{g}}{3c^{2}} \label{k2}$$ with $k$ having a physical dimension of inverse length, $\left[ k^{-2}\right]
=\mathrm{length}^{2}$. Then the equation (\[macropotential\]) takes the form $$\Delta \varphi =4\pi G\mu +\frac{1}{k^{2}}\Delta ^{2}\varphi .
\label{macropotential_k}$$ By using the definitions of the gravitational dielectric constant $\epsilon _{g}$ (\[grav-dielectric-const\]), the characteristic oscillation frequency of molecule’s constituents $\omega
_{0}^{2}$, the macroscopic matter density $\mu =3m/4\pi A^{3}$ and the average number of molecules per unit volume $N=3/4\pi D^{3}$, with $D$ the mean distance between molecules, the factor $k^{-2}$ can be shown to have the following form $$\frac{1}{k^{2}}=\frac{1}{4\theta }\left( \frac{A^{3}}{D^{3}}\right) A^{2}.
\label{k2A2}$$ Here the dimensionless factor $\theta $, $$\theta =\frac{\omega _{0}^{2}}{4\pi G\mu /3}, \label{theta}$$ reflects the nature of the field responsible for binding discrete matter constituents into molecules. If $\theta \approx 1$, the molecules of a self-gravitating macroscopic medium are considered to be gravitationally bound. For instance, in a macroscopic model of a galaxy as a self-gravitating medium consisting of gravitational molecules taken as double stars, $\theta \approx 1$, since such galactic molecules are gravitationally bound. If one takes the molecules to be of electron-proton type, like atoms, the factor is $\theta \approx 10^{40}$, which makes the factor $k^{-2}$ essentially insignificant.
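The claim that gravitationally bound molecules give $\theta \approx 1$ can be checked directly. The short script below is our own illustration: it assumes the constituents orbit at the Keplerian frequency, $\omega _{0}^{2}=Gm/A^{3}$ for a binary of mass $m$ and size $A$, and uses the density $\mu =3m/4\pi A^{3}$ from the text. With these inputs $\theta =1$ exactly, independently of the molecule mass and size.

```python
# Check that a gravitationally bound "molecule" gives theta ~ 1.
# Assumption (ours): the constituents orbit at the Keplerian frequency,
# omega_0^2 = G*m/A^3; the density mu = 3m/(4*pi*A^3) is as in the text.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
m = 2.0 * 1.989e30     # two solar masses (a double-star "molecule")
A = 1.496e13           # ~100 AU separation, in metres

omega0_sq = G * m / A**3                # Keplerian orbital frequency squared
mu = 3.0 * m / (4.0 * math.pi * A**3)   # macroscopic matter density

theta = omega0_sq / (4.0 * math.pi * G * mu / 3.0)
print(theta)   # 1.0 for a Keplerian binary, independent of m and A
```

The cancellation is exact because $4\pi G\mu /3=Gm/A^{3}=\omega _{0}^{2}$ by construction, which is the sense in which the binding field "sets" $\theta $.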
The dimensionless ratio $A/D$ reflects the structure of the macroscopic medium. If $(A/D)\approx 1$, the macroscopic medium behaves like a liquid or solid. If $(A/D)<1$, the macroscopic medium behaves like a gas. For the macroscopic galactic model at the present epoch the macroscopic medium is gas-like, since $(A/D)\approx 10^{-1}-10^{-2}$, which makes the factor $A^{3}/D^{3}$ of order $10^{-3}-10^{-6}$. However, for earlier times of galaxy formation this factor can be expected to be of a much greater order of magnitude, up to $1-10$.
It is useful to introduce a dimensionless factor $k^{-2}L^{-2}$, $$\frac{1}{k^{2}L^{2}}=\frac{1}{4\theta }\left( \frac{A^{3}}{D^{3}}\right)
\left( \frac{A^{2}}{L^{2}}\right) , \label{k2L2}$$ where $L$ is the characteristic scale of the macroscopic gravitational field. The dimensionless ratio $A/L$ reflects the scale of significant change in the macroscopic gravitational field. If $(A/L)\approx 1$, the macroscopic gravitational field changes significantly on the scale $L$. If $
(A/L)\ll 1$, the scale $L$ does not reflect the presence of gravitational molecules.
Thus, the structure of the factor $k^{-2}$ in (\[k2A2\]), or $k^{-2}L^{-2}$ in (\[k2L2\]), is model dependent, and its particular value is fully determined by a particular model of self-gravitating macroscopic matter. When a macroscopic medium has the factor $$\frac{1}{k^{2}L^{2}}=\frac{1}{4\theta }\left( \frac{A^{3}}{D^{3}}\right)
\left( \frac{A^{2}}{L^{2}}\right) \approx 1, \label{k2L2=1}$$ the equation for the macroscopic gravitational potential $\varphi $ is (\[macropotential\_k\]), which it is convenient to write in the following form $$\Delta ^{2}\varphi -k^{2}\Delta \varphi =-4\pi Gk^{2}\mu .
\label{macropotential_k2}$$ It can be rewritten as a system of two second-order partial differential equations $$\Delta \varphi =f, \label{L1-1}$$ $$\Delta f-k^{2}f=-4\pi Gk^{2}\mu , \label{L1-2}$$ for the unknowns $\varphi (x,y,z)$ and $f(x,y,z)$ with given $\mu (x,y,z)$ and $k^{2}$. In this case of significant gravitational quadrupole polarization of a macroscopic medium, which will be referred to as Case I, equation (\[L1-2\]) is singular with respect to the gravitational dielectric constant $\epsilon _{g}$ because $k^{2}\sim 1/\epsilon _{g}$ and the limit $\epsilon _{g}\rightarrow 0$ cannot be accomplished in a solution to (\[L1-1\]) and (\[L1-2\]). Equations (\[L1-1\]) and (\[L1-2\]) do not have a limit to the Poisson equation (\[poisson\]) of Newtonian gravity, nor does a solution to (\[L1-1\]) and (\[L1-2\]) have a limit to a solution to (\[poisson\]). It should be pointed out that the equations (\[L1-1\]) and (\[L1-2\]) are essentially valid either inside the macroscopic matter source, or outside the matter source in the close vicinity of the boundary of the macroscopic matter configuration, where the effect of strong gravitational quadrupole polarization is still significant, $$\frac{1}{k^{2}}\Delta ^{2}\varphi \approx \frac{1}{k^{2}L^{2}}\Delta \varphi \gg \Delta \varphi . \label{L1-3}$$ Once, asymptotically, $\Delta \varphi /k^{2}L^{2}$ becomes much less than $\Delta \varphi $, the equations (\[L1-1\]) and (\[L1-2\]) are no longer valid. See the general solution to equations (\[L1-1\]) and (\[L1-2\]) and its asymptotic behavior in Section \[case\_I\].
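To make the structure of Case I concrete, consider a point mass $M$ in vacuum. An illustrative spherically symmetric solution of the homogeneous fourth-order equation, bounded at the origin and vanishing at infinity, is $\varphi (r)=-(GM/r)\left( 1-e^{-kr}\right) $; this closed form is our own construction for illustration and is not quoted in the text. A symbolic check confirms that it satisfies $\Delta ^{2}\varphi -k^{2}\Delta \varphi =0$ for $r>0$:

```python
# Verify that phi(r) = -(G*M/r)*(1 - exp(-k*r)) solves the vacuum equation
# Lap(Lap(phi)) - k^2 * Lap(phi) = 0 for r > 0.  The closed form is an
# illustrative construction, not taken from the text.
import sympy as sp

r, k, G, M = sp.symbols('r k G M', positive=True)

def radial_laplacian(expr):
    """Laplacian of a spherically symmetric function: (1/r^2) d/dr (r^2 d expr/dr)."""
    return sp.diff(r**2 * sp.diff(expr, r), r) / r**2

phi = -(G * M / r) * (1 - sp.exp(-k * r))

lap_phi = sp.simplify(radial_laplacian(phi))   # = G*M*k^2*exp(-k*r)/r (Yukawa form)
residual = sp.simplify(radial_laplacian(lap_phi) - k**2 * lap_phi)
print(residual)   # 0
```

Note that $\Delta \varphi $ here is a pure Yukawa term, so the Newtonian $-GM/r$ piece is annihilated by the Laplacian and the screened piece carries the quadrupole-polarization correction.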
When a macroscopic medium has the factor $$\frac{1}{k^{2}L^{2}}=\frac{1}{4\theta }\left( \frac{A^{3}}{D^{3}}\right)
\left( \frac{A^{2}}{L^{2}}\right) \ll 1, \label{k2L2<1}$$ the equation (\[macropotential\_k\]) for the macroscopic gravitational potential $\psi $ can be rewritten effectively in the form of the Poisson equation $$\Delta \psi =4\pi G\mu \left( 1+\frac{1}{k^{2}L^{2}}\right) , \label{L2}$$ for the unknown $\psi (x,y,z)$ with given $\mu (x,y,z)$ and $k^{2}L^{2}$. This is possible because the $\Delta ^{2}\psi $ term in (\[macropotential\_k\]) is well approximated as $$\frac{1}{k^{2}}\Delta ^{2}\psi \approx \frac{1}{k^{2}L^{2}}\Delta \psi \ll
\Delta \psi . \label{L2-2}$$ In this case of insignificant gravitational quadrupole polarization of a macroscopic medium, which will be referred to as Case II, the equation (\[L2\]) is nonsingular with respect to the gravitational dielectric constant $
\epsilon _{g}$ because $1/k^{2}\sim \epsilon _{g}$ and the limit $\epsilon
_{g}\rightarrow 0$ can be accomplished in the solution to (\[L2\]). Equation (\[L2\]) does have a limit to the Poisson equation (\[poisson\]) of Newtonian gravity, and a solution to (\[L2\]) always has a limit to a solution to (\[poisson\]). It should be pointed out that the equation (\[L2\]) holds either inside the macroscopic matter source, or outside the matter source. A solution to equation (\[L2\]) gives the macroscopic gravitational potential $\psi (x,y,z)$ defined everywhere.
The equation (\[L2\]) also describes the asymptotic behavior, $L^{2}\rightarrow \infty $, of the macroscopic gravitational potential $\varphi (x,y,z)$ for a macroscopic matter distribution with strong gravitational quadrupole polarization. A solution to equations (\[L1-1\]) and (\[L1-2\]) must be matched to a solution to equation (\[L2\]) on a surface outside the source, between the source boundary and the nearby zone, to obtain the macroscopic gravitational potential $\varphi (x,y,z)$ in the asymptotic region and, as a result, to define it everywhere.
The Cases I and II have different equations and different physics, which is reflected by solutions to equations (\[L1-1\]) and (\[L1-2\]) and equation (\[L2\]).
Case I: The Strong Quadrupole Polarization {#case_I}
==========================================
The system of equations (\[L1-1\]) and (\[L1-2\]) can be solved by separation of variables. Consider the equation (\[L1-2\]) in the spherical coordinates $(x,y,z)\rightarrow (r,\theta ,\phi )$ when the Laplacian becomes $$\Delta =\frac{1}{r^{2}}\frac{\partial }{\partial r}\left( r^{2}\frac{
\partial }{\partial r}\right) -\frac{\hat{L}^{2}}{r^{2}},\quad \hat{L}^{2}=-\left( \frac{\partial ^{2}}{\partial \theta ^{2}}+\cot \theta \,\frac{\partial }{\partial \theta }+\frac{1}{\sin ^{2}\theta }\frac{\partial ^{2}}{\partial \phi ^{2}}\right) .
\label{laplacian}$$ Applying the method of separation of variables for the function $f(r,\theta
,\phi )$ in (\[L1-2\]), assuming the angular part is given by the spherical harmonics $Y_{m}^{l}(\theta ,\phi )$ [@Erde-etal:1953], $$f(r,\theta ,\phi )=\frac{S(r)}{r}Y_{m}^{l}(\theta ,\phi ),
\label{separation}$$ one can represent the Green function in a spherical harmonics expansion [@Arfk:1985] as $$G(r_{1},r_{2})=\sum_{l=0}^{\infty
}\sum_{m=-l}^{l}g_{l}(r_{1},r_{2})Y_{m}^{l}(\theta _{1},\phi
_{1})Y_{m}^{l}(\theta _{2},\phi _{2}), \label{green(sh)}$$ which gives the equation for the radial Green function $g_{l}(r_{1},r_{2})$ $$r_{1}\frac{d^{2}}{dr_{1}^{2}}[r_{1}g_{l}(r_{1},r_{2})]-k^{2}r_{1}^{2}g_{l}(r_{1},r_{2})-l(l+1)g_{l}(r_{1},r_{2})=-4\pi \delta (r_{1}-r_{2}).
\label{green(r)}$$ It has the solution [@Arfk:1985] $$g_{l}(r_{1},r_{2})=k\,i_{l}(kr_{<})k_{l}(kr_{>}),\quad r_{<}=\min (r_{1},r_{2}),\quad r_{>}=\max (r_{1},r_{2}), \label{green(r)_sol}$$ where $i_{l}$ and $k_{l}$ are the modified spherical Bessel functions of the first and second kind, and the solution for $f(r,\theta ,\phi )$ for $r_{1}>r_{2}$ has the form $$\begin{aligned}
f({{\mathbf{r}}_{1}}) & = & Gk^{2}\sum_{l=0}^{\infty
}\sum_{m=-l}^{l}ki_{l}(kr_{1})Y_{m}^{l}(\theta _{1},\phi _{1})\times \nonumber \\
& & \int \mu ({{\mathbf{r}}_{2}})k_{l}(kr_{2})Y_{m}^{l}(\theta _{2},
\phi_{2})d\phi _{2}\sin \theta _{2}d\theta _{2}\,r_{2}^{2}dr_{2}. \label{f(r)}\end{aligned}$$ This is a multipole expansion of $f({\mathbf{r}}_{1})$, whose particular structure is determined by the macroscopic matter distribution $\mu ({\mathbf{r}}_{2})$.
Now one can solve the Poisson equation (\[L1-1\]) with known $f(\mathbf{r})$ to obtain $\varphi (\mathbf{r})$. But rather than continuing to solve the system in the spherical harmonic representation, let us solve it in general form by the Green function method. The resulting solutions usefully illustrate the character of the macroscopic gravitational potential $\varphi (\mathbf{r})$ in the static weak-field approximation with the quadrupole moment tensor. One can always expand the general solution in spherical harmonics to obtain a multipole expansion.
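The radial functions entering (\[green(r)\_sol\]) can be checked numerically. The sketch below is our own sanity check, using scipy's modified spherical Bessel routines: by finite differences it verifies that $i_{l}(kr)$ and $k_{l}(kr)$ both solve the homogeneous radial equation $g''+(2/r)g'-\left[ k^{2}+l(l+1)/r^{2}\right] g=0$, which is why the combination $k\,i_{l}(kr_{<})k_{l}(kr_{>})$ is regular at the origin and decays at infinity.

```python
# Numerical check (ours, not from the text): i_l(kr) and k_l(kr) satisfy
# g'' + (2/r) g' - [k^2 + l(l+1)/r^2] g = 0 away from the source point.
import numpy as np
from scipy.special import spherical_in, spherical_kn

k, l = 2.0, 2
r = np.linspace(0.5, 5.0, 20001)
h = r[1] - r[0]

max_residual = {}
for name, fn in [("i_l", spherical_in), ("k_l", spherical_kn)]:
    g = fn(l, k * r)
    gp = np.gradient(g, h)                   # g'
    gpp = np.gradient(gp, h)                 # g'' (O(h^2) in the interior)
    res = gpp + 2.0 / r * gp - (k**2 + l * (l + 1) / r**2) * g
    # skip the edge points, where np.gradient falls back to one-sided stencils
    max_residual[name] = np.max(np.abs(res[2:-2])) / np.max(np.abs(g))

print(max_residual)   # both relative residuals are at the finite-difference level
```

Any overall normalization convention for $i_{l}$ and $k_{l}$ drops out of this check, since the equation is linear and homogeneous.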
The system of equations (\[L1-1\]) and (\[L1-2\]) must be supplemented by boundary conditions for the unknowns $f(\mathbf{r})$ and $\varphi (\mathbf{r})$ with given $\mu (\mathbf{r})$ and $k^{2}$: $$\lim_{\left\vert {\mathbf{r}}\right\vert \rightarrow 0^{+}}\varphi ({\mathbf{r}})\ {\mathrm{exists~and~is~bounded,}}\quad \lim_{\left\vert {\mathbf{r}}\right\vert \rightarrow \infty }\varphi ({\mathbf{r}})=0,\quad 0<\left\vert {\mathbf{r}}\right\vert <\infty , \label{phi_bc}$$ $$\lim_{\left\vert {\mathbf{r}}\right\vert \rightarrow 0^{+}}f({\mathbf{r}})\ {\mathrm{exists~and~is~bounded,}}\quad \lim_{\left\vert {\mathbf{r}}\right\vert \rightarrow \infty }f({\mathbf{r}})=0,\quad 0<\left\vert {\mathbf{r}}\right\vert <\infty . \label{f_bc}$$ The system of partial differential equations (\[L1-1\]) and (\[L1-2\]) with the boundary conditions (\[phi\_bc\]) and (\[f\_bc\]) is a Dirichlet boundary value problem on the interval $0<\left\vert \mathbf{r}\right\vert <\infty $. One can also consider two Dirichlet problems for the intervals $
(0,\left\vert {{\mathbf{r}}}_{0}\right\vert )$ and $(\left\vert {{\mathbf{r}}}
_{0}\right\vert ,\infty )$ where ${{\mathbf{r}}}_{0}$ is the radius vector of a macroscopic matter configuration.
To solve first the nonhomogeneous modified Helmholtz equation (\[L1-2\]), one needs to find the corresponding Green function $$\Delta G_{f}({\mathbf{r}}_{1}{\mathbf{,r}}_{2})-k^{2}G_{f}({\mathbf{r}}_{1},
{\mathbf{r}}_{2})=-4\pi \delta ({\mathbf{r}}_{1}-{{\mathbf{r}}_{2}}) \label{G(f)}$$ with the boundary condition $$\lim_{\left\vert {\mathbf{r}}_{1}\right\vert \rightarrow \infty }
G_{f}({\mathbf{r}}_{1}{\mathbf{,r}}_{2})=0. \label{G(f)_bc}$$ The Green function can be found [@Arfk:1985] to be $$G_{f}({\mathbf{r}}_{1}{\mathbf{,r}}_{2})=\frac{e^{-k\left\vert {\mathbf{r}}_{1}-
{\mathbf{r}}_{2}\right\vert }}{\left\vert {\mathbf{r}}_{1}-
{{\mathbf{r}}}_{2}\right\vert } \label{G(f)_exp}$$ and the solution for $f({\mathbf{r}})$ is $$f({\mathbf{r}}_{1})=Gk^{2}\int \frac{e^{-k\left\vert {\mathbf{r}}_{1}-{\mathbf{r}}
_{2}\right\vert }}{\left\vert {\mathbf{r}}_{1}-{\mathbf{r}}_{2}\right\vert }\mu
( {\mathbf{r}}_{2})dV_{2}. \label{f(r)_exp}$$ The second equation (\[L1-1\]) has the Green function $$G_{\varphi }({\mathbf{r}}_{1}{\mathbf{,r}}_{2})=\frac{1}{\left\vert {\mathbf{r}}
_{1}-{\mathbf{r}}_{2}\right\vert } \label{G(phi)_exp}$$ as the general solution to the Green equation, $$\Delta G_{\varphi }({\mathbf{r}}_{1}{\mathbf{,r}}_{2})=-4\pi \delta ({\mathbf{r}}
_{1}-{\mathbf{r}}_{2}), \label{G(phi)}$$ yielding the solution for $\varphi (\mathbf{r})$ $$\varphi ({\mathbf{r}}_{1})=-\frac{1}{4\pi }\int \frac{f({\mathbf{r}}_{2})}{
\left\vert {\mathbf{r}}_{1}-{\mathbf{r}}_{2}\right\vert }dV_{2}.
\label{phi(r)_exp}$$ Now the solution for the equation (\[macropotential\_k2\]) can be written as $$\varphi ({\mathbf{r}}_{1})=-\frac{Gk^{2}}{4\pi }\int \frac{dV_{2}}{\left\vert
{\mathbf{r}}_{1}-{\mathbf{r}}_{2}\right\vert }\int \frac{dV_{3}e^{-k\left\vert
{\mathbf{r}}_{2}-{\mathbf{r}}_{3}\right\vert }}{\left\vert {\mathbf{r}}_{2}-
{\mathbf{r}}_{3}\right\vert }\mu ({\mathbf{r}}_{3}). \label{phi(r)_tot}$$ If the macroscopic gravitational potential $\varphi (\mathbf{r})$ is calculated at the origin of the coordinate system, that is ${\mathbf{r}}_{1}=0$, ${\mathbf{r}}_{2}=\mathbf{R}$, the formula (\[phi(r)\_tot\]) becomes $$\varphi =-\frac{Gk^{2}}{4\pi }\int \frac{dV_{R}}{\left\vert \mathbf{R}
\right\vert }\int \frac{dV_{r}e^{-k\left\vert \mathbf{R}-\mathbf{r}
\right\vert }}{\left\vert \mathbf{R}-\mathbf{r}\right\vert }\mu (\mathbf{r}).
\label{phi(R)_tot}$$
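The exponential kernel in the inner integral is the Yukawa (screened Coulomb) Green function. As a quick symbolic check (ours), it satisfies the homogeneous modified Helmholtz equation away from the source point:

```python
# Verify that G_f = exp(-k*r)/r obeys  Lap(G_f) - k^2 * G_f = 0  for r > 0;
# the delta function in the Green equation is supported only at r = 0.
import sympy as sp

r, k = sp.symbols('r k', positive=True)
G_f = sp.exp(-k * r) / r

# Laplacian of a spherically symmetric function: (1/r) d^2/dr^2 (r * G_f)
lap = sp.diff(r * G_f, r, 2) / r
residual = sp.simplify(lap - k**2 * G_f)
print(residual)   # 0
```

The check is immediate because $rG_{f}=e^{-kr}$, whose second derivative reproduces $k^{2}rG_{f}$.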
The asymptotic form of the solution (\[phi(R)\_tot\]) as $\left\vert
\mathbf{R}\right\vert \rightarrow 0^{+}$ is given by $$\lim_{\left\vert \mathbf{R}\right\vert \rightarrow 0^{+}}\varphi \simeq
-4\pi Gk^{2}\mu (\mathbf{r})\left\vert {\mathbf{R}}\right\vert ^{4}.
\label{phi(0)}$$ It shows that the macroscopic gravitational potential $\varphi (\mathbf{r})$ has better analytic behavior as $\left\vert \mathbf{R}\right\vert \rightarrow 0^{+}$ than the gravitational potential of the Poisson equation.
The asymptotic form of the solution (\[phi(R)\_tot\]) as $\left\vert
\mathbf{R}\right\vert \rightarrow \infty $ is given by $$\lim_{\left\vert \mathbf{R}\right\vert \rightarrow \infty }\varphi \simeq
-Gk^{2}\frac{Q_{ij}R_{i}R_{j}e^{-k\left\vert \mathbf{R}\right\vert }}{\left\vert \mathbf{R}\right\vert ^{3}} \label{phi(inf)}$$ where $Q_{ij}$ is the total quadrupole moment of the macroscopic mass distribution $$Q_{ij}=\int \mu ({\mathbf{r}})r_{i}r_{j}dV_{r}. \label{Q}$$ The formula (\[phi(inf)\]) shows that the macroscopic gravitational potential $\varphi (\mathbf{r})$ fades out in the vicinity of the macroscopic matter source boundary over the characteristic distance $k^{-1}$. For distances $\left\vert \mathbf{R}\right\vert \gg k^{-1}$ the inequality (\[L1-3\]) does not hold and the inequality (\[L2-2\]) is valid. Therefore, to find a proper asymptotic form of the macroscopic gravitational potential $\varphi (\mathbf{r})$ for $\left\vert \mathbf{R}\right\vert \gg k^{-1}$, one should solve the equation (\[L2\]) and match its solution (\[psi(r)\_exp\]), see Section \[case\_II\], with the solution (\[phi(R)\_tot\]).
Case II: The Weak Quadrupole Polarization {#case_II}
=========================================
The equation (\[L2\]) can be solved by the Green function method. To solve the Poisson equation (\[L2\]), one needs to find the corresponding Green function $$\Delta G_{\psi }({\mathbf{r}}_{1}{\mathbf{,r}}_{2})=-4\pi \delta
({\mathbf{r}}_{1}- {\mathbf{r}}_{2}) \label{G(psi)}$$ with the boundary condition $$\lim_{\left\vert {\mathbf{r}}_{1}\right\vert \rightarrow \infty }G_{\psi }(
{\mathbf{r}}_{1}{\mathbf{,r}}_{2})=0. \label{G(psi)_bc}$$ The Green function can be found [@Arfk:1985] to be $$G_{\psi }({\mathbf{r}}_{1}{\mathbf{,r}}_{2})=\frac{1}{\left\vert {\mathbf{r}}_{1}-
{\mathbf{r}}_{2}\right\vert } \label{G(psi)_exp}$$ and the solution for $\psi (\mathbf{r})$ is $$\psi ({\mathbf{r}}_{1})=-G\left( 1+\frac{1}{k^{2}L^{2}}\right) \int \frac{\mu
({\mathbf{r}}_{2})}{\left\vert {\mathbf{r}}_{1}-{\mathbf{r}}_{2}\right\vert }
dV_{2}. \label{psi(r)_exp}$$ If the macroscopic gravitational potential $\psi (\mathbf{r})$ is calculated at the origin of the coordinate system, that is $\mathbf{r}_{1}=0$, $\mathbf{r}_{2}=\mathbf{R}$, the formula (\[psi(r)\_exp\]) becomes $$\psi =-G\left( 1+\frac{1}{k^{2}L^{2}}\right) \int \frac{\mu (\mathbf{r})}{
\left\vert \mathbf{R}-\mathbf{r}\right\vert }dV_{r}. \label{psi(R)_exp}$$
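As a worked example (ours, with purely illustrative numbers), for a uniform-density sphere of mass $M$ and radius $R_{0}$ the integral gives the central value $\psi (0)=-\tfrac{3}{2}\left( 1+1/k^{2}L^{2}\right) GM/R_{0}$, which a direct radial quadrature reproduces:

```python
# psi at the centre of a uniform sphere, from the Case II solution:
# psi(0) = -G_eff * int_0^R0 (mu/r) 4*pi*r^2 dr = -(3/2) G_eff M / R0,
# where G_eff = G * (1 + 1/(k^2 L^2)).  All numbers are illustrative.
import numpy as np

G, M, R0 = 1.0, 1.0, 1.0
k2L2 = 10.0                               # dimensionless combination k^2 L^2
G_eff = G * (1.0 + 1.0 / k2L2)
mu = 3.0 * M / (4.0 * np.pi * R0**3)      # uniform density

r = np.linspace(0.0, R0, 1001)
integrand = 4.0 * np.pi * mu * r          # (mu/r) * 4*pi*r^2, finite at r = 0
# trapezoidal rule (exact here, since the integrand is linear in r)
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

psi0_numeric = -G_eff * integral
psi0_exact = -1.5 * G_eff * M / R0
print(psi0_numeric, psi0_exact)
```

The weak-polarization correction enters only through the overall factor $G_{\mathrm{eff}}$, in line with the renormalization interpretation below.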
The behavior near the center and at infinity is essentially the same as for the Newtonian potential, up to renormalization by the factor $\left( 1+1/k^{2}L^{2}\right) $. The multipole expansion [@Land-Lifs:1975] for the macroscopic gravitational potential $
\psi $ with $L\sim R_{0}$ has the form $$\psi =-G\left( 1+\frac{1}{k^{2}L^{2}}\right) \left\{ \frac{M}{R_{0}}+\frac{1
}{6}D_{ij}\frac{\partial }{\partial X_{i}}\frac{\partial }{\partial X_{j}}
\frac{1}{R_{0}}+...\right\} . \label{psi(R)_multi}$$ The factor $\left( 1+1/k^{2}L^{2}\right) $ renormalizes the whole potential, contributing to every term in the multipole expansion.
Acknowledgements {#acknowledgements .unnumbered}
================
Roustam Zalaletdinov would like to thank Remo Ruffini for hospitality and hosting his NATO-CNR Fellowship in ICRA.
[99]{} C.W. Misner, K.S. Thorne and J.A. Wheeler, *Gravitation* (W.H. Freeman, San Francisco, 1973).
G. Montani, R. Ruffini and R. Zalaletdinov, *Nuovo Cim.* **115B** (2001) 1343; *e-print* gr-qc/0012080.
P. Szekeres, *Ann. Phys. (NY)* **64**, 599 (1971).
A.N. Kaufman, *Ann. Phys. (NY)* **18**, 264 (1962).
G. Montani, R. Ruffini and R. Zalaletdinov, in: *Online Proc. of the 9th Marcel Grossmann Meeting*, Rome, Italy, July 2000 (http://www.icra.it/MG9/ Proceedings.htm, 2001), 11 p.
G. Montani, R. Ruffini and R. Zalaletdinov, in: *Proc. of the 9th Marcel Grossmann Meeting*, Rome, Italy, July 2000 (World Scientific, Singapore, 2001), 3 p., to appear.
R.A. Isaacson, *Phys. Rev.* **166**, 1263 (1968).
R.A. Isaacson, *Phys. Rev.* **166**, 1272 (1968).
R.M. Zalaletdinov, *Gen. Rel. Grav.* **24** (1992) 1015.
M. Mars and R.M. Zalaletdinov, *J. Math. Phys.* **38** (1997) 4741.
R.M. Zalaletdinov, *Bull. Astron. Soc. India* **25** (1997) 401.
R.M. Zalaletdinov, *Hadronic J.* **21** (1998) 170.
A. Krasiński, *Inhomogeneous Cosmological Models* (Cambridge University Press, Cambridge, 1997).
R. Tavakol and R. Zalaletdinov, *Found. Phys.* **28** (1998) 307.
R.M. Zalaletdinov, *Gen. Rel. Grav.* **25** (1993) 673.
R.M. Zalaletdinov, in: *Proc. of the 7th Marcel Grossmann Meeting on General Relativity*, Stanford, USA, July 1994, Part A, eds. R.T. Jantzen and G. Mac Keiser (World Scientific, Singapore, 1996), p. 394.
M.F. Shirokov and I.Z. Fisher, Astron. Zh. **39** 899 (1962) (in Russian) \[English translation: Sov. Astron. - A.J. **6** 699 (1963) \].
D.W. Sciama, *Modern Cosmology* (CUP, Cambridge, 1971), Chapter 8.
G.F.R. Ellis, in *General Relativity and Gravitation* , eds. B. Bertotti, F. de Felici and A. Pascolini (Reidel, Dordrecht, 1984), p. 215.
N.V. Zotov and W.R. Stoeger, Class. Quantum Grav. **9** 1023 (1992).
P. Yodzis, *Inter. J. Theor. Phys.* **3** (1971) 331.
Yu.G. Ignatiev, in: *Gravitatsya i Teoriya Otnositel’nosti (Gravitation and Relativity Theory)*, Issue 14-15 (Kazan State University Press, Kazan, 1978) (in Russian).
R.M. Zalaletdinov, *Gen. Rel. Grav.* **28** (1996) 953 .
H.A. Lorentz, *The Theory of Electrons* (Teubner, Leipzig, 1916).
S. Bildhauser and T. Futamase, *Gen. Rel. Grav.* **23** (1991) 1251.
W.K.H. Panofsky and M. Phillips, *Classical Electricity and Magnetism* (Addison-Wesley, Reading, 1962).
S.R. de Groot and L.G. Suttorp, *Foundations of Electrodynamics* (North-Holland, Amsterdam, 1972).
J.D. Jackson, *Classical Electrodynamics* (John Wiley & Sons, New York, 1975).
N.A. Chernikov, *Dokl. Akad. Nauk SSSR* **144** (1962) 544 \[*Soviet Physics-Doklady* **7** (1962) 428\].
W. Israel, in: *General Relativity*, ed. L. O’Raifeartaigh (Clarendon Press, Oxford, 1972).
S.R. de Groot, *The Maxwell Equations* (North-Holland, Amsterdam, 1969).
G. Russakoff, *Amer. J. Phys.* **38** (1970) 1188.
F.N.H. Robinson, *Macroscopic Electrodynamics* (Pergamon Press, Oxford, 1973).
N. Bohr and L. Rosenfeld, Mat.-fys. Medd. Dan. Vid. Selsk. **12**, no. 8 (1933) \[English translation in *Selected Papers of Léon Rosenfeld*, eds. R.S. Cohen and J.J. Stachel (D. Reidel, Dordrecht, 1979) p. 357\].
B.S. DeWitt, in *Gravitation: An introduction to current research*, ed. L. Witten (Wiley, New York, 1962), p. 266.
L. Bel, *Ann. Inst. Henri Poincaré* **17** (1961) 37.
J.L. Synge, *Relativity: The Special Theory* (North-Holland, Amsterdam, 1956).
A. Erdelyi *et al*, *Higher Transcendental Functions*, v. II (Dover, New York, 1953).
G. Arfken, *Mathematical Methods for Physicists*, 3rd Edition (Academic Press, New York, 1985).
L.D.Landau and E.M. Lifshitz, *The Classical Theory of Fields*, 4th Edition (Pergamon Press, Oxford, 1975).
[^1]: E-mail: montani@icra.it
[^2]: E-mail: ruffini@icra.it
[^3]: E-mail: zala@icra.it
[^4]: For discussion on the other physical settings on general relativity facing the same problem see [@Tava-Zala:1998].
[^5]: The mixed form is preferable here because it contains only products of metric by curvature. On the contrary, the covariant or contravariant forms of the Einstein equations have triple products of metric by metric by curvature.
[^6]: It should be noted that the inequality (\[aver:oper\]) has been observed in all possible averaging settings, for example, for a volume space-time averaging in [@Zala:1992], [@Shir-Fish:1962], in the framework of a kinetic approach in [@Yodz:1971] and for a statistical ensemble averaging in [@Igna:1978]. Relations between different averaging procedures are discussed in [@Zala:1997], [@Zala:1998].
[^7]: No explicit averaging procedure was used in [@Szek:1971]; the averaged relations and equations were written on the basis of heuristic considerations rather than a rigorous analysis.
[^8]: Gravitational macroscopic equations similar to (\[av-bianchi\]) are known to have been proposed first in [@Bel:1961].
[^9]: Under the assumption that the background metric in the left-hand side of (\[av-ein-molec\]) represents a weak gravitational field on the flat background, one can use the covariant derivatives with respect to this metric in all relations instead of partial derivatives with respect to the flat metric.
---
abstract: 'Over the next decade, improvements in cosmological parameter constraints will be driven by surveys of large-scale structure in the Universe. The information they contain is encoded in a hierarchy of correlation functions, and tools to utilize the two-point function are already well-developed. But the inherent non-linearity of large-scale structure suggests that further information will be embedded in higher correlations, of which the bispectrum is currently the most accessible. Extracting this information is extremely challenging: it requires accurate theoretical modelling and significant computational resources to estimate the covariance matrix describing correlations between different configurations of Fourier modes. We investigate whether it is possible to reduce the covariance matrix without significant loss of information by using a proxy that aggregates the bispectrum over a subset of Fourier configurations. Specifically, we study the constraints on $\Lambda$CDM parameters from combining the power spectrum with (*a*) the modal decomposition of the bispectrum, (*b*) the line correlation function and (*c*) the integrated bispectrum. We forecast the error bars achievable on $\Lambda$CDM parameters in a future galaxy survey that measures one of these proxies and compare them to those obtained from measurements of the Fourier bispectrum, including simple estimates of their degradation in the presence of shot noise. Our results demonstrate that the modal bispectrum performs as well as the Fourier bispectrum, even with considerably fewer modes than Fourier configurations. The line correlation function has good performance but does not match the modal bispectrum. The integrated bispectrum is comparatively insensitive to changes in the background cosmology. We find that the addition of bispectrum data can improve constraints on bias parameters and the normalization $\sigma_8$ by a factor between 3 and 5 compared to power spectrum measurements alone. For other parameters, improvements of up to $\sim$ 20% are possible. Finally, we use a range of theoretical models to explore how the sophistication required for realistic predictions varies with each proxy.'
author:
- |
\
Astronomy Centre, School of Mathematical and Physical Sciences, University of Sussex, Brighton BN1 9QH, United Kingdom
bibliography:
- 'paper.bib'
date: 'Accepted XXX. Received YYY; in original form ZZZ'
title: |
Towards optimal cosmological parameter recovery\
from compressed bispectrum statistics
---
\[firstpage\]
Cosmology: theory, Large-scale structure of the Universe
Introduction {#sec:intro}
============
Constraints on cosmological parameters have improved significantly over the last two decades, driven by high-precision data from the cosmic microwave background (‘CMB’) temperature and polarization anisotropies [@Bennett:2003ca; @Ade:2013ydc]. But the capacity of CMB observations to sustain this rate of progress is now nearly exhausted. Measurements of the temperature anisotropy have become limited by cosmic variance down to very small scales, and therefore future large-scale measurements will furnish little new information. Meanwhile, on small scales, cosmological information begins to be erased by astrophysical processes. Modest improvements may still come from better polarization data, perhaps shrinking current uncertainties by a factor of a few, but eventually these measurements will also approach the limit of cosmic variance. Further progress will be possible only with new sources of information. In the decade 2020–2030 we expect such a source to be provided by surveys of cosmological large-scale structure—but only if the information these surveys contain can be extracted and understood [@Silk:2016srn].
The statistical information contained in a galaxy survey is carried by its hierarchy of correlation functions, of which typically only a few lowest-order functions can be measured accurately. Tools to extract information from the two-point function were developed early and are now mature. The development of tools to extract information from higher-order correlation functions has proceeded more slowly [@Fry:1983cj; @Goroff:1986ep; @Scoccimarro:2000sn; @Sefusatti:2006pa], but because structure formation is non-linear it is likely that these carry an important fraction of the information content. To make good use of our investment in costly observational programmes it will be necessary to find a means of using information from at least the three-point function.
What are the challenges? A first difficulty arises from combinatorics. We write the matter overdensity at time $t$ as $\delta(\bx,t) = \delta\rho(\bx, t) / \bar{\rho}(t)$, where $\delta\rho(\bx, t) = \rho(\bx,t) - \bar{\rho}(t)$ is the density perturbation and $\bar{\rho}(t)$ is the uniform background. Allowing angle brackets $\langle \cdots \rangle$ to denote an ensemble average, statistical homogeneity makes the two- and three-point functions $\langle \delta(\bx) \delta(\bx + \br) \rangle$ and $\langle \delta(\bx) \delta(\bx + \br_1) \delta(\bx + \br_2) \rangle$ independent of the origin $\bx$. After translation to Fourier space this enforces conservation of momentum for the wavenumbers that participate in the expectation value,
$$\begin{aligned}
\label{eq:defP}
\langle \delta(\bk_1) \delta(\bk_2) \rangle
& = (2\pi)^3 \DiracD(\bk_1 + \bk_2) P(k) , \\
\label{eq:defB}
\langle \delta(\bk_1) \delta(\bk_2) \delta(\bk_3) \rangle
& = (2\pi)^3 \DiracD(\bk_1 + \bk_2 + \bk_3) B(k_1, k_2, k_3) ,\end{aligned}$$
where $k = |\bk_1| = |\bk_2|$ is the common magnitude of the wavenumbers appearing in the two-point function. In Equations (\[eq:defP\])–(\[eq:defB\]) and in the remainder of this paper we suppress the time $t$ labelling the hypersurface of evaluation. Isotropy makes the power spectrum $P$ a function only of $k$, while the bispectrum $B$ is a function of the three wavenumbers $k_1$, $k_2$, $k_3$ subject to the closure condition $\bk_1 + \bk_2 + \bk_3 = 0$. Therefore a fixed volume of space yields many more distinct configurations of the bispectrum than of the power spectrum. If we choose to measure all of them then we must provide an estimate for their covariance, and beyond the Gaussian approximation this typically requires simulations. Since we require at least as many simulations as the number of independent covariances, the number of simulations to be performed grows at least linearly in the number of configurations. This makes it very expensive to use more than a fraction of the available bispectrum measurements.
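The combinatoric growth is easy to quantify. The short counting script below is our own illustration: wavenumbers are binned in units of the fundamental mode $k_{\mathrm{f}}$, and a binned triangle $(k_1 \leq k_2 \leq k_3)$ is kept whenever it can close.

```python
# Count binned power spectrum modes versus closed bispectrum triangles up
# to a cutoff n_max (wavenumber bins in units of the fundamental mode k_f).
def count_configurations(n_max):
    n_power = n_max                        # one P(k) estimate per bin
    n_bispectrum = sum(
        1
        for k1 in range(1, n_max + 1)
        for k2 in range(k1, n_max + 1)
        for k3 in range(k2, n_max + 1)
        if k3 <= k1 + k2                   # triangle (closure) condition
    )
    return n_power, n_bispectrum

for n_max in (10, 50, 100):
    print(n_max, count_configurations(n_max))
# the number of triangles grows roughly like n_max**3, so covariance
# estimation quickly becomes the dominant cost
```

The cubic growth of the triangle count relative to the linear growth of the power spectrum bins is the combinatoric problem described above.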
Second, we must estimate typical values for $B(k_1, k_2, k_3)$ in a particular cosmological model. While such estimates are already necessary for the power spectrum $P(k)$, accurate estimates for the bispectrum are substantially more challenging. There are two key reasons. No matter what methods we use, the algebraic complexity associated with high-order correlation functions is usually worse than at lower order. Also, many of our standard tools have a reduced range of validity as we move up the correlation hierarchy. We must therefore work harder to obtain trustworthy predictions from our models, and in some cases we can do so only by giving up analytic methods altogether.
These problems have hampered the development of a toolkit that would make use of bispectrum measurements routine. Nevertheless, they are difficulties of practice and not obstructions of principle—if necessary, we could determine both covariances and typical values of $P$ or $B$ from simulations, at least over a certain range of scales. But such determinations would require a very large number of realizations. The sheer computational resource entailed by this strategy makes it unattractive on timescales of interest for surveys such as Euclid, DESI, or LSST.
To build a practical methodology we must cut the size of the covariance matrices and avoid simulations where possible. Simulations are not needed when analytic methods suffice to predict $P$ or $B$, or when a Gaussian approximation to the covariance is acceptable. Meanwhile, an obvious way to reduce the number of configurations is simply not to measure them all. Depending how aggressively we choose to cut, this may mean accepting a significant loss of information. A more nuanced option is to aggregate groups of configurations into weighted averages, effectively *compressing* the data carried by the bispectrum rather than discarding it. Such averages could be computed directly. But there are also observables whose statistics can naturally be expressed as weighted averages of this kind. Measuring these will often be simpler than measuring amplitudes of the Fourier bispectrum—simultaneously reducing the effort required to estimate and invert their covariance matrices. We describe these observables as ‘proxies’ or ‘proxy statistics’ for the full Fourier bispectrum.
Each proxy represents a compromise between (*a*) information loss due to compression, (*b*) the type of Fourier configurations over which it aggregates, and therefore the physics to which it is sensitive, and (*c*) its accessibility to analytical modelling, either for covariances or to estimate typical measurements. In this paper we select three proxies that have already been described in the literature and characterize their performance in each of these categories. Our aim is not to find an optimal proxy for any particular measurement, but rather to demonstrate that their use represents a feasible strategy for upcoming surveys without unacceptable degradation in information recovery.
Our principal results are forecasts for the parameter error bars achievable from combinations of the galaxy power spectrum and bispectrum, or its proxies. The parameter set we study comprises the background quantities of a $\Lambda$CDM model with evolving dark energy, supplemented by two parameters describing the bias model [@McDonald:2009dh]. We study how these forecasts change when they are estimated using the complete non-Gaussian covariance matrix or its Gaussian approximation. We characterize their dependence on the method used to predict typical values for $P(k)$ and $B(k_1, k_2, k_3)$ by sampling the results using tree-level and one-loop standard perturbation theory (‘SPT’), and an implementation of the halo model. We compare these estimates with values measured directly from simulations. These results can be used to determine, for each observable, the degree of modelling sophistication that is required to obtain accurate forecasts.
Our analysis does not include the effect of survey geometry or incompleteness, or redshift-space effects, and should be regarded as a determination of the performance of each proxy under idealized conditions. We include a simple analysis that indicates how our results would change in the presence of shot noise.
Fisher forecasts including Fourier bispectrum measurements have previously been reported by [@Sefusatti:2006pa], assuming $1{,}015$ bispectrum configurations and measuring covariances from a suite of $6{,}000$ mock catalogues generated by the algorithm [@Scoccimarro:2001cj] and second-order Lagrangian perturbation theory (‘2LPT’). Their results suggested that the bispectrum contains significant cosmological information. For comparison, in our analysis we use $95$ bispectrum configurations in order to keep the size of the covariance matrix within plausible bounds, and measure it directly from a suite of full simulations.
More recently, [@Chan:2016ehg] estimated the extra constraining power of Fourier bispectrum measurements by computing their contribution to the signal-to-noise, but did not make forecasts for error bars on cosmological parameters. They found that the bispectrum contributed up to a $\sim 30\%$ increase in signal-to-noise above the power spectrum and concluded that the information gain would be modest, perhaps being principally useful to break degeneracies. One of our aims is to clarify the relationship between this conclusion and the more nuanced outcomes found by [@Sefusatti:2006pa]. We find that estimates based on signal-to-noise alone generally give only a rough indication compared to the full Fisher calculation because they do not account for variations in the sensitivity to background cosmology between observables.
Our presentation is organized as follows. In Section \[sec:estimators\] we introduce the three bispectrum proxies to be studied in the remainder of the paper. These are: (*a*) the *modal bispectrum*, which can be regarded as an alternative to the Fourier bispectrum obtained by exchanging the Fourier modes $\e{\im \bk \cdot \bx}$ for an alternative basis [@Fergusson:2010ia; @Reganetal2012]; (*b*) the *line correlation function*, which samples three-point statistics of the phase of the density fluctuation [@Obreschkow:2012yb; @Wolstenhulme:2014cla], and (*c*) the *integrated bispectrum* [@Chiang:2014oga], which measures variation of the power spectrum in subsampled regions. Each of these measures can be expressed as a weighted average over particular configurations of the Fourier bispectrum.
In Sections \[sec:predict-ib\]–\[sec:predict-modal\] we explain how each proxy can be predicted using the halo model or a flavour of SPT. In Section \[sec:galaxy-bias\] we explain our prescription to obtain the biased galaxy density field from the underlying matter density field, which is the quantity predicted by these analytic models. In Section \[sec:estimation\] we describe our procedure to recover estimates for each proxy statistic from simulations, and in Section \[sec:comparison\] we compare these estimates (and estimates for their derivatives with respect to the cosmological parameters) with theoretical predictions. Readers familiar with the measures of 3-point correlations described in Section \[sec:estimators\] and the modelling technologies of Section \[sec:modelling\] may choose to begin reading at this point. In Section \[sec:covariance\] we present signal-to-noise estimates for the information content of each proxy. Our Fisher forecasts appear in Section \[sec:paramEstim\]. In Section \[sec:discussion\] we collect a number of topics for discussion, including the compression efficiency of each proxy statistic and the impact of shot noise on our forecasts. We conclude in Section \[sec:conclusions\].
Our Fourier convention is $f(\bx) = \int \D^3 k \, (2\pi)^{-3} f(\bk) \e{\im \bk \cdot \bx}$. To avoid confusion we distinguish the Dirac $\delta$-function $\DiracD(\bx)$ or $\DiracD(\bk)$ and the Kronecker symbol $\Kronecker_{ij}$ from the matter overdensity $\delta \equiv \delta\rho / \rho$.
The Fourier bispectrum and its proxies {#sec:estimators}
======================================
In this section we introduce the proxy statistics to which we compare the Fourier bispectrum. This has already been defined—together with the power spectrum—in Equations –. We describe the integrated bispectrum in Section \[ssec:iB\], the line correlation function in Section \[ssec:lcf\] and the modal decomposition of the bispectrum in Section \[ssec:modal\]. Each of these represents a possible compression of the Fourier bispectrum, in the sense described in Section \[sec:intro\].
Integrated bispectrum {#ssec:iB}
---------------------
The integrated bispectrum (or ‘position-dependent power spectrum’) was developed by @Chiang:2014oga as a tool to search for primordial non-Gaussianity in large-scale structure. It has several convenient features: it is easily estimated using standard power-spectrum codes and it has a clear physical interpretation. As we shall see in Section \[sec:predict-ib\], it represents a weighted average of the Fourier bispectrum dominated by ‘squeezed’ configurations—that is, wavenumbers $(\bk_1, \bk_2, \bk_3)$ where one $k_i$ is much smaller than the other two. If we assume $k_3 \ll k_1, k_2$ then the bispectrum $\langle \delta(\bk_1) \delta(\bk_2) \delta(\bk_3) \rangle$ expresses correlations between a single long-wavelength mode $\delta(\bk_3)$ and the two-point function $\langle \delta(\bk_1) \delta(\bk_2) \rangle$. This makes it sensitive to ‘local-type’ non-Gaussianity produced by inflationary models with more than one active field. However, because gravitational collapse correlates modes with comparable wavenumbers, the bispectrum produced during mass assembly is typically concentrated away from squeezed configurations. For this reason it is not clear how sensitive the integrated bispectrum might be to the cosmological parameters that influence this assembly process.
To define the integrated bispectrum, divide the total survey volume into $N_s$ cubic subvolumes, each of volume $V_s \equiv L_s^3$ and centred at positions $\br_L$. Compute the power spectrum and average overdensity for each subvolume, which we denote $P(\bk,\br_L)$ and $\bar{\delta}(\br_L)$, respectively. (The power spectrum $P(\bk, \br_L)$ may depend on the orientation of $\bk$ if the subvolumes are not isotropic.) Finally, the integrated bispectrum is defined to be the expectation of $P(\bk, \br_L) \bar{\delta}(\br_L)$, averaged over the orientation of $\bk$, $$\iB(k)
\equiv
\int \frac{\D^2 \hat{k}}{4\pi}
\big\langle P(\bk, \br_L)\, \bar{\delta}(\br_L) \big\rangle_{N_s}
. \label{eq:iB}$$ The notation $\langle \cdots \rangle_{N_s}$ indicates that the expectation is to be taken over all subvolumes.
To compute this expectation we Taylor expand $P(\bk, \br_L)$ in powers of $\bar{\delta}(\br_L)$ [@Chiang:2014oga]. The leading contribution is $$\big\langle P(\bk, \br_L)\, \bar{\delta}(\br_L) \big\rangle_{N_s}
\approx
\left. \frac{\D P(\bk)}{\D \bar{\delta}} \right|_{\bar{\delta}=0}
\sigma^2_{L}
, \label{eq:iB-lowest}$$ where $\sigma^2_{L}\equiv \langle\bar{\delta}^2(\br_L)\rangle_{N_s}$ is the variance in mean overdensity over the subvolumes. Therefore, at lowest order, the integrated bispectrum describes variation of the power spectrum in response to changes in the large-scale overdensity. [^1] We conclude that measurements of $\iB$ contain both the power spectrum and its variance. Since these can be measured directly, any new information contained in the integrated bispectrum must reside in its normalized component [@Chiang:2014oga], $$\ib(k)
\equiv
\frac{\iB(k)}{P(k)\, \sigma^2_{L}}
\approx
\left. \frac{\D \ln P(k)}{\D \bar{\delta}} \right|_{\bar{\delta}=0}
, \label{eq:ib}$$ where the approximate equality applies when only the lowest-order contribution from the Taylor expansion need be retained. This is the linear response approximation. The quantity $\D\ln P(k)/\D\bar{\delta}$ is the *linear response function* and provides a good approximation to $\ib$ for large $k$.
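To illustrate how this estimator operates in practice, the following sketch subdivides a periodic density cube, measures $P(\bk,\br_L)$ and $\bar{\delta}(\br_L)$ in each subvolume with an FFT, and accumulates their product. The function `position_dependent_power`, its grid conventions, and its binning are our own illustrative assumptions, not the measurement pipeline used later in this paper.

```python
import numpy as np

def position_dependent_power(delta, box_size, n_sub, n_bins=8):
    """Estimate iB(k) ~ <P(k, r_L) * delta_bar(r_L)>_{N_s} over cubic
    subvolumes of a periodic overdensity cube `delta` (illustrative sketch)."""
    N = delta.shape[0]
    m = N // n_sub                     # grid points per subvolume side
    L_s = box_size / n_sub             # subvolume side length
    V_s = L_s**3

    # radial k-bins for the subvolume Fourier grid
    k = 2 * np.pi * np.fft.fftfreq(m, d=L_s / m)
    kmag = np.sqrt(sum(np.meshgrid(k**2, k**2, k**2, indexing="ij")))
    edges = np.linspace(kmag[kmag > 0].min(), kmag.max() / 2, n_bins + 1)
    counts = np.histogram(kmag.ravel(), bins=edges)[0]

    acc = np.zeros(n_bins)
    delta_bars = []
    for i in range(n_sub):
        for j in range(n_sub):
            for l in range(n_sub):
                sub = delta[i*m:(i+1)*m, j*m:(j+1)*m, l*m:(l+1)*m]
                dbar = sub.mean()                       # mean overdensity
                dk = np.fft.fftn(sub) * (V_s / m**3)    # continuum FFT convention
                pk3d = np.abs(dk)**2 / V_s              # P(k, r_L) on the grid
                pk, _ = np.histogram(kmag.ravel(), bins=edges,
                                     weights=pk3d.ravel())
                acc += (pk / np.maximum(counts, 1)) * dbar
                delta_bars.append(dbar)
    iB = acc / n_sub**3                                 # subvolume average
    sigma_L2 = np.var(delta_bars)                       # sigma_L^2
    return 0.5 * (edges[:-1] + edges[1:]), iB, sigma_L2
```

Dividing `iB` by the mean power spectrum and `sigma_L2` would then give the normalized statistic $\ib(k)$.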
Line Correlation Function {#ssec:lcf}
-------------------------
Equation shows that the power spectrum is sensitive only to information carried by the amplitude of each Fourier mode. In contrast, higher-order statistics generally encode information carried by both amplitudes and phases. Phase correlations are an exclusive signature of non-Gaussian density fields. For instance, they may arise through processes in the primordial Universe or from mode coupling in the non-linear regime of gravitational collapse. Therefore, unlike the amplitudes, phases directly probe cosmological information that is absent from the two-point function.
With this motivation, @Obreschkow:2012yb proposed the line correlation function (often abbreviated as ‘LCF’). It measures a subset of three-point phase correlations of the density field—specifically, correlations between collinear points, each separated by a distance $r$. @Obreschkow:2012yb demonstrated that the LCF is a robust tracer of filamentary structures, and showed that it could be used as a phenomenological tool to distinguish between cold and warm dark matter scenarios. Subsequent work established its connection to conventional higher-order statistics [@Wolstenhulme:2014cla; @Eggemeier:2015ifa; @Eggemeier:2016asq].
The line correlation function can be understood as follows: for a given density field $\delta(\bx)$ in some volume $V$, its real-space phase field $\epsilon_r(\bx)$ smoothed on a scale $r$ satisfies $$\label{eq:estimators.epsilon}
\epsilon_r(\bx)
=
\int \frac{\D{^3 k}}{(2\pi)^3} \epsilon(\bk) \e{\im\bk\cdot\bx} W(k | r)
\equiv
\int \frac{\D{^3 k}}{(2\pi)^3}\frac{\delta(\bk)}{|\delta(\bk)|}
\e{\im\bk\cdot\bx} W(k | r)
,$$ where $W(k | r)$ is the Fourier transform of the smoothing window function. We take this to be a spherical top-hat in $k$-space, $W(k | r) \equiv \Theta(1-k\,r/2\pi)$, where $\Theta(x)$ denotes the Heaviside step function. The phase at $\bk=0$ is defined so that $\epsilon(\bZero) \equiv 0$. Following @Obreschkow:2012yb the LCF is defined by $$\LCF(r)
\equiv
\left(\frac{r^3}{V}\right)^{3/2}
\frac{V^3}{(2\pi)^9}
\int \frac{\D^2 \hat{r}}{4\pi}
\int \frac{\D^3 x}{V} \;
\big\langle \epsilon_{r}(\bx)\, \epsilon_{r}(\bx+\br)\, \epsilon_r(\bx-\br) \big\rangle
\label{eq:l1}
,$$ where the factor $V^3/(2\pi)^9$ represents a volume regularization. After taking Fourier transforms we require the three-point function of the $\epsilon_r(\bk)$ in order to evaluate this integral. @Wolstenhulme:2014cla and @Eggemeier:2016asq demonstrated that, at lowest order in the expansion of the probability density function for Fourier phases, this three-point function is directly related to the Fourier bispectrum. Therefore the LCF must contain some fraction of the information in $B$, but because $\LCF(r)$ is an average over specific collinear configurations it represents a compression. Specifically, the number of LCF bins will vary linearly with changes in the effective cut-off on Fourier modes.
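The smoothed phase field can be sketched numerically as follows. The helper `smoothed_phase_field` and its grid conventions are illustrative assumptions; the sketch keeps only the unit-modulus Fourier phases, applies the top-hat window $\Theta(1 - k\,r/2\pi)$, and sets $\epsilon(\bZero) = 0$ as in the text.

```python
import numpy as np

def smoothed_phase_field(delta, box_size, r):
    """Real-space phase field eps_r(x): Fourier phases delta(k)/|delta(k)|
    cut at k <= 2*pi/r by a spherical top-hat, with eps(0) = 0.
    (Illustrative sketch; grid conventions are assumptions.)"""
    N = delta.shape[0]
    dk = np.fft.fftn(delta)
    amp = np.abs(dk)
    # unit-modulus phases; guard against division by zero amplitude
    eps = np.where(amp > 0, dk / np.where(amp > 0, amp, 1.0), 0.0)
    eps.flat[0] = 0.0                      # phase at k = 0 defined to vanish
    k = 2 * np.pi * np.fft.fftfreq(N, d=box_size / N)
    kmag = np.sqrt(sum(np.meshgrid(k**2, k**2, k**2, indexing="ij")))
    eps *= (kmag <= 2 * np.pi / r)         # top-hat window Theta(1 - k r / 2 pi)
    return np.fft.ifftn(eps)
```

Triples of this field evaluated at collinear points, averaged over positions and orientations, then build up the LCF estimator.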
Modal bispectrum {#ssec:modal}
----------------
Our final proxy is a ‘modal’ expansion of the three-point function. This is very similar to the Fourier bispectrum, except that we exchange the Fourier basis $\e{\im\bx\cdot\bk}$ for a set of alternative modes that are better adapted to the structure of $B$. The exchange is helpful if we can represent the bispectrum to the same accuracy using fewer modes than required by the Fourier representation. This approach was originally developed by @FergussonShellard2009 and @Reganetal2010 to analyse microwave background data, and subsequently applied to large-scale structure by @Fergusson:2010ia and @Reganetal2012.
In the alternative basis we represent the Fourier bispectrum in the form $$B(k_1,k_2,k_3)
\approx
\Bmodal(k_1,k_2,k_3)
\equiv
\frac{1}{w(k_1,k_2,k_3)}
\sum_{n=0}^{\nmax-1} \beta_n^{Q} Q_n(k_1,k_2,k_3)
, \label{eq:Bmodal}$$ where the $Q_n$ are basis functions that span the space of configurations compatible with a triangle condition on $(k_1, k_2, k_3)$, but can otherwise be chosen freely provided they are linearly independent. The $\beta_n^Q$ are numbers that we describe as ‘modal coefficients’. They can be regarded as averages of the Fourier bispectrum over a set of configurations picked out by the corresponding $Q_n$. The function $w(k_1,k_2,k_3)$ is an arbitrary weight that will be chosen in Section \[sec:predict-modal\].
If the $Q_n$ form a complete basis we expect $B$ and $\Bmodal$ to become equivalent in the limit $\nmax \rightarrow \infty$. In this limit the modal expansion is merely a reorganization of the Fourier representation. But if we select the lowest $Q_n$ to average over the most relevant Fourier configurations then it may be possible to represent a typical $B$ using only a small number of modes. [^2] Taking $\nmax$ to be of order this number, the outcome yields useful compression whenever $\nmax \ll \Ntriangles$, where $\Ntriangles$ is the number of Fourier configurations contained in the volume under discussion. At least for reasonably smooth bispectra, @Schmittfull:2012hq found that this could be done with no more than modest loss of signal.
Given a choice of $Q_n$ we may redefine the basis by taking arbitrary linear combinations. For example, we will use this freedom in Section \[sec:predict-modal\] to obtain a basis for which the $\beta$-coefficients are uncorrelated. The covariance matrix in this redefined basis is especially simple.
Such a redefinition can be performed using an invertible matrix $\lambda_{mn}$. We define $R_n \equiv \sum_m \lambda_{nm}^{-1} Q_m$. The $\beta$-coefficients in the $R$-basis now satisfy $\beta_n^R \equiv \sum_m \lambda_{mn} \beta_m^Q$. Since the $Q$- and $R$-bases are reorganizations of each other, the modal bispectrum defined using either basis is equivalent, $$B(k_1, k_2, k_3)
\approx
\frac{1}{w(k_1, k_2, k_3)}
\sum_{n=0}^{\nmax-1} \beta_n^Q Q_n(k_1, k_2, k_3)
=
\frac{1}{w(k_1, k_2, k_3)}
\sum_{n=0}^{\nmax-1} \beta_n^R R_n(k_1, k_2, k_3) .
\label{eq:Brecdef}$$
Predicting typical values and covariances for the proxies {#sec:modelling}
=========================================================
In this section we explain how to obtain predictions for the typical values and covariances of $\ib(k)$, $\LCF(r)$ and $\beta^R_m$ in a given cosmological model. This can be done with different degrees of sophistication, corresponding—for example—to truncations at different levels in the loop expansion of standard perturbation theory [@Bernardeau:2001qr], or by using fitting functions calibrated to match the output of simulations [@Mead:2015yca]. Since each proxy aggregates a different group of Fourier configurations, and these configurations vary in their response to features of the background cosmology, the sophistication needed to adequately capture the behaviour of the proxies may vary.
This is both a challenge and an opportunity. Proxies that require delicate modelling to obtain accurate predictions are harder to use, and may be expensive to deploy in a parameter-estimation Monte Carlo. In favourable cases, however, the payoff will be sensitive discrimination between nearby cosmological models. On the other hand, proxies that can be modelled robustly using simple methods are easy to use and cheap to deploy, but may offer correspondingly coarse discrimination. We study these trade-offs by contrasting predictions made using tree-level and one-loop SPT, and the halo model. For the halo-model power spectrum we choose the implementation [@Mead:2015yca]. For the halo-model bispectrum we use the standard formulae given by @CooraySheth2002 with a Sheth–Tormen mass function [@Sheth:1999mn] and Navarro–Frenk–White halo profile [@Navarro:1995iw]. In Section \[sec:comparison\] we study the performance of each method compared to numerical estimates extracted directly from simulations, which enables us to characterize the minimum adequate sophistication for each proxy. For simplicity our analysis is framed in terms of the underlying dark matter density field, although in Section \[sec:galaxy-bias\] we explain how this can be extended to predict galaxy clustering.
To compute a likelihood for a given proxy, either for the purposes of parameter estimation or to make forecasts, we require an estimate for the covariance between different configurations. Therefore the minimum sophistication needed to adequately predict this covariance matrix will play an additional role in determining the relative expense of each proxy. In practice the covariance matrix is typically estimated by taking measurements from a large suite of simulations or 2LPT catalogues, or, if this cannot be done, by falling back to a Gaussian approximation. Full simulations give accurate results, but are expensive enough that assembling sufficient independent realizations to determine the inverse covariance is often not feasible. In comparison, catalogues based on 2LPT are significantly cheaper but become inaccurate in the non-linear regime, while the Gaussian prediction breaks down even earlier and may miss cross-correlations that significantly affect the outcome.
The relative importance of these cross-correlations varies between proxies. In Sections \[sec:covariance\]–\[sec:paramEstim\] we estimate their significance by comparing results obtained using simulation-based and Gaussian covariances. We describe our procedure to estimate covariance matrices from the simulations in Section \[sec:comparison\], but collect formulae for the Gaussian approximation here.
For comparison, the Gaussian covariance for the power spectrum and Fourier bispectrum, measured on a grid of spacing $\Delta k$ with fundamental frequency $\kf=2\pi/V^{1/3}$, can be written $$\label{eq:CovGauss-P}
\CovGauss[P(k_i),P(k_j)] \approx \Kronecker_{ij} \frac{2\kf^3}{4\pi k_i^2 \Delta k} P^2(k_i) ,$$ where $\Kronecker_{ij}$ is the Kronecker symbol, and $$\label{eq:CovGauss-B}
\CovGauss[B(\bk_1,\bk_2,\bk_3),B(\bq_1,\bq_2,\bq_3)] \approx
\Kronecker_{\bk,\bq}
\frac{\BispectrumDegeneracy \pi \kf^3}{k_1 k_2 k_3 (\Delta k)^3}
P(k_1)P(k_2)P(k_3) .$$ The Kronecker symbol $\Kronecker_{\bk,\bq}$ should be interpreted to equal unity if the triangles defined by $\{ \bk_1, \bk_2, \bk_3 \}$ and $\{ \bq_1, \bq_2, \bq_3 \}$ are equal, and zero otherwise. The degeneracy factor $\BispectrumDegeneracy$ equals unity for a scalene triangle, two for an isosceles triangle and six for an equilateral triangle.
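The two diagonal Gaussian expressions above translate directly into code. The following is a minimal sketch; the function names are ours, and `P` in the bispectrum case is an assumed callable returning the power spectrum.

```python
import numpy as np

def cov_gauss_P(k, P, kf, dk):
    """Diagonal Gaussian power-spectrum covariance:
    Cov[P(k_i), P(k_j)] = delta_ij * 2 kf^3 P(k_i)^2 / (4 pi k_i^2 dk)."""
    return np.diag(2 * kf**3 * P**2 / (4 * np.pi * k**2 * dk))

def cov_gauss_B_diag(k1, k2, k3, P, kf, dk):
    """Diagonal Gaussian bispectrum covariance for one triangle (k1, k2, k3),
    including the degeneracy factor: 1 (scalene), 2 (isosceles), 6 (equilateral)."""
    degeneracy = {3: 1, 2: 2, 1: 6}[len({k1, k2, k3})]
    return (degeneracy * np.pi * kf**3 / (k1 * k2 * k3 * dk**3)
            * P(k1) * P(k2) * P(k3))
```

Off-diagonal entries vanish in this approximation, which is what makes the Gaussian covariance trivial to invert.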
Integrated bispectrum {#sec:predict-ib}
---------------------
To evaluate the expression we first establish its relation to the underlying 3-point function. The overdensity within the subvolume labelled by $\br_L$ can be written $$\delta(\bk,\br_L)=\int \frac{\D^3 q}{(2\pi)^3} \delta(\bk - \bq) W_L(\bq) \e{-\im \bq \cdot \br_L }
,$$ where $W_L(\bq)=V_s \prod_{i=1}^3 \sinc(q_i L_s/2)$ is the Fourier transform of the cubic window function with side length $L_s$, and $\sinc x \equiv (\sin x) / x$. The power spectrum in this subvolume is $P(\bk, \br_L) \equiv \langle |\delta(\bk, \br_L)|^2 \rangle / V_s$ and the mean overdensity is $\bar{\delta}(\br_L) \equiv \delta(\bZero,\br_L)/V_s$. Combining these with equation yields [@Chiang:2014oga] $$\iBtheory(k)
= \frac{1}{V_s^2}\int \frac{\D^2\hat{k}}{4\pi} \int \frac{\D^3 q_1}{(2\pi)^3} \int
\frac{\D^3 q_2}{(2\pi)^3} \Btheory(\bk-\bq_1,-\bk+\bq_1+\bq_2,-\bq_2)
W_L(\bq_1)W_L(-\bq_1-\bq_2)W_L(\bq_2)\,. \label{eq:iBtheory}$$ Because $\sinc x$ is strongly peaked for $|x| \lesssim \pi$ the window functions $W_L$ effectively constrain the $q_i$ integrals to $q_i \lesssim 1/L_s$. Since $k \gtrsim 1/L_s$ within each subvolume, the integral receives significant contributions only from squeezed configurations of the Fourier bispectrum whose long-wavelength mode is of order the subvolume scale or larger, because in the limit $q_1, q_2 \ll k$ we have $\Btheory(\bk-\bq_1,-\bk+\bq_1+\bq_2,-\bq_2) \approx \Btheory(\bk,-\bk,-\bq_2)$.
@Chiang:2014oga computed the linear response function using and tree-level SPT, and verified that it reproduces equation to within $2\%$ for $k \gtrsim 0.2\,h\,\Mpc^{-1}$. For our purposes we require accurate estimates at smaller $k$, and therefore we perform a numerical integration using directly. The integral is 8-dimensional and its evaluation is challenging; we implement it using the algorithm provided by the package [@Hahn:2016ktb]. To make the integration time feasible we densely sample $\Btheory$ on a 3-dimensional cubic mesh in coordinates $(k_1, k_2, \mu_{12})$, where $\mu_{12} \equiv ( k_1^2 + k_2^2 - k_3^2 ) / (2 k_1 k_2)$ is the cosine of the angle between $\bk_1$ and $\bk_2$ and can be used in place of the third wavenumber $k_3$. We construct a 3-dimensional cubic spline that interpolates between lattice points and use this spline to evaluate the integrand. To validate this procedure we have verified that our numerical results match the analytic prediction from the linear response function at large $k$.
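The tabulate-and-interpolate strategy can be illustrated as follows. Here a toy bispectrum `B_toy` stands in for the true $\Btheory$, and `scipy`'s linear `RegularGridInterpolator` stands in for the cubic spline used in our implementation; all names and grid choices are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def B_toy(k1, k2, mu):
    # stand-in for a real bispectrum, smooth in (k1, k2, mu12)
    return k1 * k2 * (1.0 + mu**2)

def mu12(k1, k2, k3):
    """Cosine of the angle between k1 and k2 for a closed triangle,
    used in place of the third wavenumber k3."""
    return (k1**2 + k2**2 - k3**2) / (2 * k1 * k2)

# sample the bispectrum densely on a cubic (k1, k2, mu12) mesh ...
k_grid = np.linspace(0.01, 1.0, 50)
mu_grid = np.linspace(-1.0, 1.0, 50)
table = B_toy(k_grid[:, None, None], k_grid[None, :, None],
              mu_grid[None, None, :])

# ... and interpolate between lattice points inside the integrand
B_interp = RegularGridInterpolator((k_grid, k_grid, mu_grid), table)
pt = np.array([0.3, 0.4, mu12(0.3, 0.4, 0.5)])
```

Evaluating `B_interp` inside the 8-dimensional integrand avoids recomputing the bispectrum at every quadrature point.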
Although we have not written subvolume labels explicitly, $\sigma_{L}^2$ and all power spectra in refer to subsampled quantities, and therefore should be computed by appropriate convolution with the subvolume window function $W_L(\bq)$.
This procedure yields good results for tree-level and one-loop SPT, but does not perform well when applied to the halo model. In this case we do not recover equivalence between our evaluation of and the linear response function, which we compute by numerical differentiation of the power spectrum. We interpret this disagreement as an indication that the standard halo model makes inconsistent predictions for the modulation of the power spectrum with $\bar{\delta}$, or the squeezed limit of the bispectrum, or both. Moreover, comparison of the halo-model $\ib$ computed using to our simulations shows poor agreement, suggesting that estimates based on will be inaccurate. Therefore, for the halo model only, we estimate $\ib$ by assuming the linear response approximation and computing $\D\ln P / \D\bar{\delta}$. We calculate the derivative using the simulation-calibrated formula proposed by @Chiang:2014oga, $$\label{eq:ibsqlim_halo}
\frac{\D\ln \Phalo(k)}{\D\bar{\delta}} =
\frac{13}{21}\frac{\D\ln \Phalo(k)}{\D\ln \sigma_8} + 2 -
\frac{1}{3}\frac{\D\ln k^3 \Phalo(k)}{\D\ln k} \, ,$$ which gives reasonable agreement with our simulations.
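This response formula is simple to evaluate numerically. In the sketch below, `P(k, sigma8)` is an assumed power-spectrum callable (in our case it would wrap the halo-model prediction), and the two logarithmic derivatives are taken by central differences.

```python
import numpy as np

def dlnP_ddeltabar(P, k, sigma8, eps=1e-3):
    """Linear response d ln P / d delta_bar from the calibrated formula
    (13/21) dlnP/dln(sigma8) + 2 - (1/3) dln(k^3 P)/dln(k).
    P(k, sigma8) is an assumed callable; derivatives by central differences."""
    dlog = np.log((1 + eps) / (1 - eps))
    # logarithmic derivative with respect to sigma8
    d_s8 = (np.log(P(k, sigma8 * (1 + eps)))
            - np.log(P(k, sigma8 * (1 - eps)))) / dlog
    # logarithmic derivative of k^3 P with respect to k
    kp, km = k * (1 + eps), k * (1 - eps)
    d_k = (np.log(kp**3 * P(kp, sigma8))
           - np.log(km**3 * P(km, sigma8))) / dlog
    return (13.0 / 21.0) * d_s8 + 2.0 - d_k / 3.0
```

For a pure power law $P \propto \sigma_8^2 k^n$ both derivatives are exact, which provides a convenient consistency check.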
In the absence of shot noise, the Gaussian covariance for estimates of $\ib$ constructed from data can be written $$\CovGauss\big[\ib(k_i),\ib(k_j) \big]
=
\frac{V_s}{V N_{ks}}
\frac{1}{\sigma_L^2}
\Kronecker_{ij} .
\label{eq:ib_covg}$$ In this expression $V_s$ is the volume of a subsampled region and $V$ denotes the total survey volume. The quantity $N_{ks}=2\pi k^2 \Delta k V_s$ is the number of Fourier modes in a subvolume $k$-bin.
Line correlation function {#sec:predict-lcf}
-------------------------
@Wolstenhulme:2014cla used tree-level SPT to predict the line correlation function. Their result was generalized to an arbitrary bispectrum by @Eggemeier:2016asq, who gave the formula $$\label{eq:models.lcf-perturbative}
\LCFtheory(r)
\simeq \Big(\frac{r}{4\pi}\Big)^{9/2}
\iint\displaylimits_{\substack{|\bk_1|,|\bk_2|,\\|\bk_1+\bk_2|\leq 2\pi/r}}
\D{^3 k_1}\,\D{^3 k_2} \;
\Bphasetheory(k_1,k_2,k_3)
j_0\big(\left|\bk_1-\bk_2\right| r\big)\,,$$ where $j_0(x)=\sin(x)/x$ is the spherical Bessel function of order zero and the integrals over $\bk_1$ and $\bk_2$ are cut off at the scale $k_i = 2\pi/r$. The quantity $\Bphase$ is defined by $$\Bphase(k_1,k_2,k_3)
\equiv
\frac{B(k_1,k_2,k_3)}{\sqrt{P(k_1)\,P(k_2)\,P(k_3)}}
\label{eq:phaseB}$$ and gives the dominant contribution to the bispectrum of the phase field $\epsilon(\bk) = \delta(\bk) / |\delta(\bk)|$ in the limit of large volume $V$. For smaller volumes there are corrections scaling as powers of $V^{-1/2}$ compared to the dominant term [@Eggemeier:2016asq].
To evaluate we must perform a 6-dimensional integral. We use a strategy similar to that described in Section \[sec:predict-ib\], by sampling the bispectrum over a cubic lattice and interpolating between lattice sites. The integration is again performed using .
In the special case of tree-level SPT, @Wolstenhulme:2014cla showed that could be reduced to a 3-dimensional integral, $$\begin{split}
\label{eq:models.lcf-tree2}
\LCFtree(r) = \mbox{} & 16\pi^2\Big(\frac{r}{4\pi}\Big)^{9/2}
\int_0^{\frac{2\pi}{r}}\D{k_1}\,k_1^2
\int_0^{\frac{2\pi}{r}}\D{k_2}\,k_2^2
\int_{-1}^{\mu_{\mathrm{cut}}}\D{\mu}\,F_2^{(s)}(k_1,\,k_2,\,\mu)\,
\sqrt{\frac{\Ptree(k_1) \Ptree(k_2)}{\Ptree(|\bk_1+\bk_2|)}}\, \\
& \mbox{} \times \left[j_0{\big(\left|\bk_2-\bk_1\right|r\big)}+2j_0{\big(\left|\bk_1+2\bk_2\right|r\big)}\right]\,,
\end{split}$$ where $\Ptree$ is the tree-level power spectrum, and the upper limit of the $\mu$-integral is chosen to guarantee $|\bk_1 + \bk_2| \leq 2 \pi / r$. That requires $$\mu_{\text{cut}} =
\min\left\{
1,
\max\left\{
-1,
\frac{(2\pi/r)^2-k_1^2-k_2^2}{2 k_1 k_2}
\right\}
\right\} .$$ Equation is useful because it provides a means to test the accuracy of our 6-dimensional integrations, and the 3-dimensional interpolations they entail. We have compared estimates for the tree-level line correlation function using both and and find good agreement.
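In code, $\mu_{\text{cut}}$ is a simple clamp of the geometric bound to the physical range; a one-line sketch (the helper name is ours):

```python
from math import pi

def mu_cut(k1, k2, r):
    """Upper limit of the mu-integral: enforces |k1 + k2| <= 2*pi/r,
    clamped to the physical range [-1, 1]."""
    return min(1.0, max(-1.0, ((2 * pi / r)**2 - k1**2 - k2**2) / (2 * k1 * k2)))
```

When the bound is interior, $k_1^2 + k_2^2 + 2 k_1 k_2 \mu_{\text{cut}} = (2\pi/r)^2$ exactly, so the constraint $|\bk_1+\bk_2| \leq 2\pi/r$ is saturated at the integration limit.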
To determine the Gaussian covariance we require the two-point function of the phase field, $$\langle \epsilon(\bk_1) \epsilon(\bk_2)\rangle
= \frac{(2\pi)^3}{V}\,\DiracD(\bk_1+\bk_2) .$$ It follows that, in the absence of shot noise, the covariance between estimators for the line correlation function on scales $r_i$ and $r_j$ can be written [@Eggemeier:2016asq] $$\label{eq:models.lcf-cov}
\CovGauss\big[ \LCF(r_i),\LCF(r_j) \big] = \frac{(r_i r_j)^{9/2}}{V^3}
\iint\displaylimits_{\substack{|\bk_1|,|\bk_2|,\\|\bk_1+\bk_2| \leq 2\pi/r}}
\frac{\D^3 k_1}{\kf^3}\,\frac{\D^3 k_2}{\kf^3}
\Big(
j_0(|2\bk_1+\bk_2|\,r_i)
\big[
2j_0(|\bk_1-\bk_2|\,r_j)+j_0(|2\bk_1+\bk_2|\,r_j)
\big]
+
r_i \leftrightarrow r_j
\Big) ,$$ where $\kf = 2\pi/V^{1/3}$ denotes the fundamental frequency (defined above equation ), and $r = \max\{r_i,\,r_j\}$. Note that is not diagonal; the integral that defines the line correlation function depends on a range of Fourier modes for any scale $r_i$, and any Fourier modes that are common between $\LCF(r_i)$ and $\LCF(r_j)$ will contribute a nonzero covariance. Moreover, equation shows that the Gaussian covariance is independent of redshift and all cosmological parameters.
Modal bispectrum {#sec:predict-modal}
----------------
It was explained in Section \[ssec:modal\] that the modal decomposition is defined by choice of a basis $Q_n$ that samples groups of relevant Fourier configurations. The structure and ordering of the $Q_n$ determine those configurations we wish to prioritize. But unless we carefully adjust the $Q_n$ they will be correlated, and these correlations will be inherited by the $\beta_n^Q$. The outcome is that the covariance matrix for estimators of the $\beta_n^Q$ is rather complex.
To avoid this we redefine the basis, as in equation , to simplify the covariance matrix for estimators of the corresponding $\beta_n^R$. The construction proceeds in stages. First, consider the expected signal-to-noise with which it is possible to measure a single mode $Q_n/w$ from . Using a Gaussian approximation for the noise this can be written $$6\left(\SignalToNoise \right)^2_{Q_n}
=
\int
\frac{\D^3 k_1}{(2\pi)^3}
\frac{\D^3 k_2}{(2\pi)^3}
\frac{\D^3 k_3}{(2\pi)^3}
(2\pi)^3
\frac{\DiracD(\bk_1 + \bk_2 + \bk_3)}{w(k_1, k_2, k_3)^2}
\frac{Q_n(k_1, k_2, k_3)^2}{P(k_1) P(k_2) P(k_3)} .
\label{eq:Qn-signal-to-noise}$$ We are free to choose the weight $w$ to simplify this integral. We define $$w(k_1, k_2, k_3) = \sqrt{\frac{k_1 k_2 k_3}{P(k_1) P(k_2) P(k_3)}} ,
\label{eq:weight-function}$$ after which the computation of the expected signal-to-noise reduces to $$6\left(\SignalToNoise \right)^2_{Q_n}
=
\llangle Q_n | Q_n \rrangle .$$ To write this and similar expressions economically we have introduced the notation $$\llangle f | g \rrangle
\equiv
\int
\frac{\D{^3 k_1}}{(2\pi)^3}
\frac{\D{^3 k_2}}{(2\pi)^3}
\frac{\D{^3 k_3}}{(2\pi)^3}
(2\pi)^3 \DiracD(\bk_1 + \bk_2 + \bk_3)
\frac{f(\bk_1, \bk_2, \bk_3) g(\bk_1, \bk_2, \bk_3)}{k_1 k_2 k_3}
\label{eq:inner-product}$$ for any $f$ and $g$. In the special case that these depend only on the wavenumbers $k_i$ and not their orientations $\hat{\bk}_i$ some of the angular integrations are trivial and we obtain the simpler expression $$\llangle f | g \rrangle
\equiv
\frac{1}{8\pi^4} \int_{\TriangleRegion} \D{k_1} \, \D{k_2} \, \D{k_3} \;
f(k_1, k_2, k_3) g(k_1, k_2, k_3) .
\label{eq:inner-product-simple}$$ Here, $\TriangleRegion$ represents the set of points $(k_1, k_2, k_3)$ where lines of length $k_1$, $k_2$ and $k_3$ can be arranged to form a triangle, i.e. $2 \max \{ k_i \} \leq \sum_i k_i$; for details, see @Fergusson:2009nv. In principle the integral can be taken over all $k_i$, but in practice it will be cut off at upper and lower limits $\kmax$ and $\kmin$. The expressions and can be regarded as an inner product on the $Q_n$ that weights each contributing Fourier configuration according to its individual signal-to-noise.
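The region $\TriangleRegion$ is straightforward to sample numerically. The following sketch (the helper names are ours) tests the closure condition and enumerates ordered configurations on a discrete grid between $\kmin$ and $\kmax$:

```python
import numpy as np

def in_triangle_region(k1, k2, k3):
    """Closure condition for V_T: lines of length k1, k2, k3 can be
    arranged into a triangle iff 2*max(k_i) <= k1 + k2 + k3."""
    return 2 * max(k1, k2, k3) <= k1 + k2 + k3

def triangle_configs(kmin, kmax, n):
    """Enumerate ordered (k1 <= k2 <= k3) grid configurations inside V_T
    (a sketch of how the integration domain can be sampled)."""
    ks = np.linspace(kmin, kmax, n)
    return [(a, b, c) for a in ks for b in ks for c in ks
            if a <= b <= c and in_triangle_region(a, b, c)]
```

Restricting to ordered triples avoids counting permutations of the same physical configuration more than once.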
Second, the $R$-basis is chosen to be diagonal with respect to this inner product. As we will see below, because the resulting $R_n$ modes are orthogonal when weighted by signal-to-noise, the covariance matrix for estimators of the coefficients $\beta_n^R$ becomes diagonal under the same approximation of Gaussian noise used to determine the weighting in . Specifically, we define $$\llangle Q_m | Q_n \rrangle
\equiv
\gamma_{mn}
\equiv
\frac{(\kmax - \kmin)^3}{8\pi^4} \bar{\gamma}_{mn} .
\label{eq:gamma-matrix-def}$$ It is sometimes preferable to express results in terms of $\bar{\gamma}_{mn}$, which is independent of $\kmin$ and $\kmax$. For any suitable $Q$-basis both $\gamma_{mn}$ and $\bar{\gamma}_{mn}$ will be symmetric and positive-definite and may be factored into the product of a matrix and its transpose. Therefore there exists a matrix $\lambda_{mn}$ such that $\bar{\gamma}_{mn} = \sum_r \lambda_{mr} \lambda_{nr}$. Application of with $\lambda_{mn}$ as the transformation matrix yields $R_n = \sum_{n'} \lambda_{nn'}^{-1} Q_{n'}$, and these modes are orthogonal in the sense $$\llangle R_m | R_n \rrangle
=
\frac{(\kmax - \kmin)^3}{8\pi^4} \Kronecker_{mn} .
\label{eq:R-basis-inner-product}$$
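The construction of the $R$-basis reduces to a Cholesky factorization of the Gram matrix. The following is a minimal numerical sketch, using toy monomial modes on a coarse grid as stand-ins for the basis of Appendix \[app:poly\]; the grid size, mode choice and `nmax` are illustrative only, and we orthonormalize with respect to $\llangle \cdot | \cdot \rrangle$ directly rather than keeping the $(\kmax - \kmin)^3/8\pi^4$ normalization.

```python
import numpy as np

# Toy Q -> R orthogonalization. Grid, mode choice and nmax are illustrative,
# not the paper's production basis.
nmax = 4
kmin, kmax = 0.01, 0.3
k = np.linspace(kmin, kmax, 40)
k1, k2, k3 = np.meshgrid(k, k, k, indexing="ij")
# triangle region: 2 max{k_i} <= sum_i k_i
tri = 2 * np.maximum(np.maximum(k1, k2), k3) <= k1 + k2 + k3

def inner(f, g):
    """Discrete version of eq. (inner-product-simple) over the triangle region."""
    dk = k[1] - k[0]
    return np.sum(f * g * tri) * dk**3 / (8 * np.pi**4)

# symmetrized monomials in (k1 + k2 + k3) as stand-in Q-modes
Q = [(k1 + k2 + k3)**n for n in range(nmax)]

gamma = np.array([[inner(Qm, Qn) for Qn in Q] for Qm in Q])
lam = np.linalg.cholesky(gamma)        # gamma = lam @ lam.T
lam_inv = np.linalg.inv(lam)

# R_n = sum_m (lam^-1)_{nm} Q_m are orthonormal under the inner product
R = [sum(lam_inv[n, m] * Q[m] for m in range(nmax)) for n in range(nmax)]
gram_R = np.array([[inner(Rm, Rn) for Rn in R] for Rm in R])
```

Because $\lambda$ is lower-triangular, each $R_n$ mixes only $Q_0, \ldots, Q_n$, so truncating the expansion at $\nmax$ is consistent in either basis.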
Whether we work with the $Q$- or $R$-basis, we must predict the corresponding $\beta$-coefficients for each model of interest. In practice the extra matrix operations needed to obtain the $R$-basis mean that it is simplest to perform calculations in the $Q$-basis, before translating to the $R$-basis to interpret the results. We adopt this procedure whenever concrete calculations using the modal decomposition are required. We use the $Q$-basis constructed by @Fergusson:2009nv. (The details are summarized in Appendix \[app:poly\].) It is not intended to prioritize any single class of Fourier configurations, but rather attempts to provide a good description of reasonably smooth bispectra over a range of shapes and scales.
To extract the $\beta_n^Q$ we use . Assuming can be interpreted as an equality, we conclude that for an arbitrary bispectrum $\Btheory(k_1, k_2, k_3)$ $$\llangle w \Btheory | Q_m \rrangle
=
\sum_{n=0}^{\nmax-1} \beta_n^{Q,\theory} \gamma_{nm} .
\label{eq:Qbasis-project}$$ Finally, the individual $\beta_n^Q$ should be extracted by contraction with the inverse matrix $\gamma_{mn}^{-1}$. If the bispectrum has no angular dependence then the inner product can be computed using the simplified expression , which yields $$\beta_n^{Q,\theory}
=
\frac{1}{8\pi^4}
\sum_m \gamma^{-1}_{nm}
\int_{\TriangleRegion} \D{k_1} \, \D{k_2} \, \D{k_3} \;
\sqrt{k_1 k_2 k_3}
\Bphasetheory(k_1, k_2, k_3) Q_m(k_1, k_2, k_3) ,
\label{eq:modalBT}$$ where we have used the quantity $\Bphase$ defined in . The $\beta_n^{R,\theory}$ may be obtained by the transformation $\beta_n^R = \sum_m \lambda_{mn} \beta^Q_m$. The appearance of the phase bispectrum in is a consequence of our choice of weight $w$.
Equation would continue to apply were we to change the definition of the ‘inner product’ $\llangle \cdot | \cdot \rrangle$, and an analogue of would continue to give the individual $\beta_n^{Q,\theory}$. Our choice of signal-to-noise weighting in $\llangle \cdot | \cdot \rrangle$ is important only for construction of the $R$-modes and the covariance inherited by the $\beta_n^{R,\theory}$.
In practice, equation requires evaluation of a 3-dimensional integral over the region $\TriangleRegion$. To implement it we compute $wB$ on a $200^3$ cubic lattice in $(k_1, k_2, k_3)$ and estimate the integral by volume-weighted cubature over this lattice. Some work is required to account for irregular boundary orientations; we give these details in Appendix \[app:voxel\].
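The structure of this cubature can be illustrated with a plain midpoint rule; the production calculation uses a $200^3$ lattice with the boundary corrections of Appendix \[app:voxel\], which this coarser sketch omits. A convenient check is that the volume of $\TriangleRegion$ inside $[0, K]^3$ is exactly $K^3/2$.

```python
import numpy as np

# Midpoint-rule cubature over the triangle region (illustrative resolution;
# the paper's 200^3 lattice with sub-cell boundary handling is not reproduced).
K, n = 0.3, 100
edges = np.linspace(0.0, K, n + 1)
mids = 0.5 * (edges[:-1] + edges[1:])
k1, k2, k3 = np.meshgrid(mids, mids, mids, indexing="ij")
tri = 2 * np.maximum(np.maximum(k1, k2), k3) <= k1 + k2 + k3

def triangle_integral(f):
    """Approximate the integral of f over the triangle region by cell sums."""
    return np.sum(f * tri) * (K / n)**3

# check against the exact volume of the region, K^3 / 2
vol = triangle_integral(np.ones_like(k1))
```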
Finally we compute the covariance of estimators for the $\beta_n^R$ coefficients under the assumption of Gaussian covariance for the bispectrum estimator $\delta(\bk_1) \delta(\bk_2) \delta(\bk_3)
V^{-1} \Kronecker_{\bk_1 + \bk_2 + \bk_3, \bZero}$. Using equation , and with $R$ exchanged for $Q$, we obtain $$\label{eq:modalcov-pre}
\begin{split}
\langle \beta_m^R \beta_n^R \rangle
& =
(2\pi)^3 \delta(\bZero)
\frac{6}{V^2}
\frac{(8\pi^4)^2}{(\kmax - \kmin)^6}
\int \frac{\D{^3 k_1} \, \D{^3 k_2} \, \D{^3 k_3}}{(2\pi)^9}
(2\pi)^3 \delta(\bk_1 + \bk_2 + \bk_3)
\frac{R_m(k_1, k_2, k_3) R_n(k_1, k_2, k_3)}{k_1 k_2 k_3} ,
\\
& =
\frac{6}{V}
\frac{(8\pi^4)^2}{(\kmax - \kmin)^6}
\llangle R_m | R_n \rrangle .
\end{split}$$ The weighting for each Fourier configuration matches the signal-to-noise, making this correlator diagonal as a consequence of our construction of the $R$-basis. Therefore we conclude $$\label{eq:modalcov}
\CovGauss(\beta_m^R, \beta_n^R) =
\frac{6}{V}\frac{8 \pi^4}{(\kmax-\kmin)^3}\Kronecker_{mn} .$$ As with the line correlation function, this covariance is independent of redshift and cosmological parameters. If we were to abandon the approximation of Gaussian covariance then would no longer be proportional to exactly $\llangle R_m | R_n \rrangle$. In this case the amplitude of the diagonal elements would be modified, and non-diagonal components would appear.
Galaxy bias {#sec:galaxy-bias}
-----------
The discussion in Sections \[sec:predict-ib\]–\[sec:predict-modal\] was framed in terms of the dark matter overdensity $\delta$, but this is not what is measured by surveys of large-scale structure. Instead, they record the abundance of galaxies or some other population of tracers whose density responds to the dark matter density but need not match it.
On large scales the relation between the galaxy ($\delta_g$) and dark matter ($\delta$) density fields is well-described by the linear model $\delta_g = b_1 \delta$ [@Kaiser1984; @Fry:1992vr]. The *linear bias parameter* $b_1$ may be redshift-dependent, and varies between different populations of galaxies. On small scales the overdensities are larger, and both non-linear and non-local corrections become important. To obtain a satisfactory description we must typically include terms at least quadratic in $\delta$ [@Fry:1992vr; @Smith:2006ne], together with terms involving the tidal gravitational field [@Catelan:2000vn; @McDonald:2009dh; @Chan:2012jj; @Baldauf:2012hs].
In what follows we assume the local Lagrangian bias model, in which the galaxy overdensity at early times is taken to be a local function of the dark matter overdensity. At later times the bias is determined by propagating this relationship along the dark matter flow. @McDonald:2009dh demonstrated that this implies the Eulerian galaxy overdensity at the time of observation can be written $$\label{eq:models.bias}
\delta_g(\bx)
=
b_1 \delta(\bx)
+ \frac{1}{2}b_2 \big[
\delta^2(\bx)
- \langle\delta^2(\bx)\rangle
\big]
+ \frac{1}{2}b_{s^2} \big[
s^2(\bx)
-\langle s^2(\bx)\rangle
\big]
+ \cdots ,$$ where ‘$\cdots$’ denotes terms of third order and higher that we have not written explicitly. The field $s^2(\bx) =
s^{ij}(\bx)\,s_{ji}(\bx)$ is a contraction of the tidal tensor, defined by $s_{ij}(\bx) \equiv
\left[\partial_i\partial_j\nabla^{-2}-\frac{1}{3}\Kronecker_{ij}\right] \delta(\bx)$. Therefore, up to second order in $\delta$, we require two additional redshift- and population-dependent bias parameters: the *quadratic bias* $b_2$, as well as the *non-local bias* $b_{s^{2}}$. In the local Lagrangian model the non-local bias satisfies $b_{s^2} = - 4 (b_1 - 1)/7$ [@Chan:2012jj; @Baldauf:2012hs], although in more general biasing prescriptions it could be allowed to vary independently.
After translating to Fourier space it follows that the tree-level galaxy power spectrum can be written $$\Pgaltree(k)
=
b_1^2 \Ptree(k)
.
\label{eq:Ptree}$$ To obtain a consistent result at one-loop we should include the unwritten third-order contributions in , which generate multiplicative renormalizations of the linear power spectrum in the same way as the ‘13’ terms of one-loop SPT. @McDonald:2009dh showed that these could be collected into a single new parameter which we denote $b_{3\text{nl}}$ to match @Gil-Marin:2014sta. Therefore $$\begin{split}
\Pgalloop(k)
= \mbox{} &
b_1^2 \Ploop(k)
+ 2 b_1 b_2 P_{b2}(k)
+ 2 b_1 b_{s^2} P_{bs2}(k)
+ b_2^2 P_{b22}(k)
+ 2 b_2b_{s^2} P_{b2,bs2}(k)
+ b_{s^2}^2 P_{bs22}(k)
\\
& \mbox{}
+ 2 b_1 b_{3\text{nl}} \sigma_3^2(k) \Ptree(k)
.
\label{eq:models.Pg-1loop}
\end{split}$$ @Saito:2014qha showed that in the local Lagrangian model $b_{3\text{nl}}$ satisfies $b_{3\text{nl}} =
32 (b_1-1) / 315$. Explicit expressions for all terms appearing in were given by @McDonald:2009dh. Note that contributions from the non-linear bias appear only in the one-loop power spectrum.
In contrast to the power spectrum, the bispectrum receives corrections from non-linear bias terms even at tree-level. Specifically, $$\label{eq:models.Bg-tree}
\Bgaltree(\bk_1,\,\bk_2,\,\bk_3)
=
b_1^3 \Btree(\bk_1, \bk_2, \bk_3)
+ b_1^2 \Ptree(k_1) \Ptree(k_2) \left[
b_2
+ b_{s^2} S_2(\bk_1,\bk_2)
\right]
+ \text{cyclic}
,$$ where $S_2(\bk_1,\bk_2) \equiv (\bk_1 \cdot \bk_2)^2 / (k_1 k_2)^2 - 1/3$ is the kernel appearing in the Fourier transform of the contracted tidal field, $s^2(\bk) = (2\pi)^{-3} \int \D{^3 q} \, S_2(\bq, \bk - \bq) \delta(\bq) \delta(\bk - \bq)$.
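A minimal sketch of the tree-level galaxy bispectrum, taking the dark-matter $\Ptree$ and $\Btree$ as user-supplied callables; the function names and any toy $P$, $B$ used to exercise the code are illustrative, not the paper's SPT expressions.

```python
import numpy as np

def S2(kv1, kv2):
    """Tidal kernel S2(k1, k2) = (k1.k2)^2 / (k1^2 k2^2) - 1/3."""
    mu = np.dot(kv1, kv2) / (np.linalg.norm(kv1) * np.linalg.norm(kv2))
    return mu**2 - 1.0 / 3.0

def bs2_local_lagrangian(b1):
    """Non-local bias in the local Lagrangian model: b_s2 = -4 (b1 - 1) / 7."""
    return -4.0 * (b1 - 1.0) / 7.0

def B_gal_tree(kv1, kv2, kv3, P, B, b1, b2, bs2):
    """Eq. (models.Bg-tree): tree-level galaxy bispectrum from callables for the
    dark-matter P(k) and B(k1, k2, k3), plus the bias parameters b1, b2, b_s2."""
    k1, k2, k3 = (np.linalg.norm(v) for v in (kv1, kv2, kv3))
    out = b1**3 * B(k1, k2, k3)
    # quadratic and tidal bias terms, cyclically permuted over the wavevectors
    for va, vb in ((kv1, kv2), (kv2, kv3), (kv3, kv1)):
        ka, kb = np.linalg.norm(va), np.linalg.norm(vb)
        out += b1**2 * P(ka) * P(kb) * (b2 + bs2 * S2(va, vb))
    return out
```

In the local Lagrangian model one would set `bs2 = bs2_local_lagrangian(b1)`; in a more general biasing prescription `bs2` is varied independently.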
To obtain the galaxy bispectrum consistently at one loop one should compute the dark matter overdensity to fourth order in perturbation theory and develop the bias expansion to the same order. This procedure has been adumbrated in the literature [@Assassi:2014fva] but not developed completely. Therefore to obtain an estimate of the one-loop bispectrum we make the approximation $$\label{eq:models.Bg-1loop}
\Bgalloop(\bk_1,\,\bk_2,\,\bk_3)
=
b_1^3 \Bloop(\bk_1,\,\bk_2,\,\bk_3)
+ b_1^2 \Ploop(k_1) \Ploop(k_2) \left[
b_2
+ b_{s^2} S_2(\bk_1,\bk_2)
\right]
+
\text{cyclic}
.$$ This is consistent with the prescriptions used by @Gil-Marin:2014sta and @Baldauf:2016sjb.
The outcome of this discussion is that, to predict the integrated bispectrum, line correlation function, or modal bispectrum for the galaxy density field, we should make the replacements $\Ptheory(k)\rightarrow \Pgaltheory(k)$ and $\Btheory(k_1,k_2,k_3) \rightarrow \Bgaltheory(k_1,k_2,k_3)$ where necessary in equations , and .
To obtain theory predictions at tree-level we use equations and , whereas to obtain predictions at one-loop we use equations and . Finally, to evaluate predictions using the halo model we apply equations and , but with $\Ploop\rightarrow \Phalo$ and $\Btree\rightarrow \Bhalo$ for the dark matter correlations.
Estimating bispectrum proxies from simulations {#sec:estimation}
==============================================
In this section we briefly describe our simulations and explain how they are used to estimate the Fourier bispectrum and its proxies $\ib$, $\LCF$ and $\beta_n^Q$.
Simulations {#ssec:simulations}
-----------
Our measurements are based on two sets of simulations: (1) $200$ simulations containing dark matter only, with a fixed choice of fiducial cosmological parameters; (2) a total of $60$ simulations constructed by varying one cosmological parameter at a time, with four realizations per model including the fiducial set. These simulations were performed on the supercomputer at the University of Zurich and were described in @Smith:2008ut and @Smith:2012uz. Each set uses a comoving boxsize of $L = 1500\,h^{-1}\,\Mpc$ and contains $N = 750^3$ particles. Initial conditions for the particles were set at redshift $z=49$ using second-order Lagrangian perturbation theory acting on a realization of a Gaussian random field [@Crocce:2006ve] with transfer functions from [@Seljak:1996is]. The particles are evolved to $z=0$ under the influence of gravity using the code [@Springel:2005mi], modified to allow a time-evolving equation of state for dark energy.
Parameter $\theta$ $\Omega_m$ $\Omega_b$ $w_0$ $w_a$ $\sigma_8$ $n_s$ $h$
-------------------- ------------- -------------- ------------ ------------ ------------ ------------- -------------
Fiducial value $0.25$ $0.040$ $-1.0$ $0.0$ $0.8$ $1.00$ $0.70$
$\Delta\theta$ $\pm\,0.05$ $\pm\,0.005$ $\pm\,0.2$ $\pm\,0.1$ $\pm\,0.1$ $\pm\,0.05$ $\pm\,0.05$
: Fiducial values of the cosmological parameters, together with the stepsize $\Delta\theta$ used to vary each parameter in the simulations. We perform one simulation with offset $+\Delta\theta$ and one with offset $-\Delta\theta$, giving two offset simulations per parameter. With seven parameters and four realizations per model this gives $4 + 2 \times 7 \times 4 = 60$ simulations in the suite. The bias parameters are assumed to be $b_1 = 1$ and $b_2 = 0$.
\[tab:parameters\]
The fiducial cosmological parameters correspond to a flat $\Lambda$CDM model and are summarized in Table \[tab:parameters\]. Specifically, $\Omega_m$ and $\Omega_b$ are the matter and baryon density parameters; $w_0$ and $w_a$ parametrize the equation of state for dark energy, viz. $w(a) \equiv w_0 + (1-a)\,w_a$; $\sigma_8$ is the amplitude of density fluctuations smoothed on a scale $8\,h^{-1}\,\Mpc$; $n_s$ is the spectral index of the primordial power spectrum; and $h$ is the dimensionless Hubble parameter. We collectively write these as a vector $\theta_\alpha$ with index $\alpha$ labelling the different parameters. To construct set (2) each parameter is offset by $+\Delta\theta_\alpha$ and $-\Delta\theta_\alpha$, with all other parameters held fixed. The stepsizes $\Delta \theta_\alpha$ are listed in Table \[tab:parameters\]. To reduce noise when estimating parameter derivatives, we construct initial conditions for each of the four realizations using the same Gaussian random field as its fiducial partner. Since we vary over seven cosmological parameters this gives a total of $4 + 2 \times 7 \times 4 = 60$ simulations in the suite.
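Given measurements of a statistic in the $\pm\Delta\theta_\alpha$ runs, parameter derivatives can be estimated by matched-seed central differences. A sketch follows; the array shapes (`n_params`, `n_realizations`, `n_bins`) are hypothetical bookkeeping, not the paper's data format.

```python
import numpy as np

def central_derivatives(stat_plus, stat_minus, dtheta):
    """Central-difference estimate of d<stat>/dtheta_alpha.

    stat_plus, stat_minus : arrays of shape (n_params, n_realizations, n_bins)
        holding the statistic measured in the +Delta and -Delta runs. Matched
        initial seeds mean the difference is taken realization-by-realization
        before averaging, which cancels much of the sample variance.
    dtheta : array of shape (n_params,) with the step sizes Delta theta_alpha.
    """
    diff = (stat_plus - stat_minus) / (2.0 * dtheta[:, None, None])
    return diff.mean(axis=1)   # average over realizations
```

The central difference is accurate to $O(\Delta\theta^2)$ and is exact for any statistic that depends quadratically on the parameter.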
Density field
-------------
To compute the overdensity field in each simulation we use the cloud-in-cell assignment scheme to distribute particles over a regular Cartesian grid. We apply a fast Fourier transform and extract the discrete real-space density field by deconvolving the cloud-in-cell window function. The result is $$\delta^{\text{disc}}(\bk) = \frac{\delta^\text{grid}(\bk)}{\WCIC(\bk)}
,
\quad \text{where} \quad
\WCIC(\bk)
=
\prod_{i=1}^3 \left[\frac{\sin{\left(\pi k_i/ 2\kNy\right)}}{\pi k_i/ 2\kNy}\right]^2
.$$ The labels ‘disc’ and ‘grid’ denote Fourier-space fields in the full volume $V$ and on the cloud-in-cell grid, respectively. The Nyquist frequency $\kNy = \pi \Ngrid / L$ is determined by the number of grid cells per dimension. For our numerical results we use $\Ngrid = 512$.
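A sketch of the deconvolution step for a gridded field; the grid size and function name here are illustrative.

```python
import numpy as np

def deconvolve_cic(delta_grid_k, L):
    """Divide a gridded Fourier density by the CIC window W_CIC(k).

    delta_grid_k : np.fft.fftn of the CIC-assigned overdensity grid.
    """
    N = delta_grid_k.shape[0]
    k_ny = np.pi * N / L
    kf = 2 * np.pi / L
    kvec = np.fft.fftfreq(N, d=1.0 / N) * kf
    # W_CIC = prod_i sinc^2(k_i / (2 k_Ny)), with np.sinc(x) = sin(pi x)/(pi x)
    w1d = np.sinc(kvec / (2 * k_ny))**2
    W = w1d[:, None, None] * w1d[None, :, None] * w1d[None, None, :]
    return delta_grid_k / W
```

Note that $\WCIC$ never vanishes on the grid (its smallest value, $(2/\pi)^2$ per dimension, occurs at the Nyquist frequency), so the division is well-defined.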
Estimating the power spectrum {#sec:stdPEst}
-----------------------------
Given a realization of the $\delta$-field within a simulation volume $V = L^3 = (2\pi)^3 \DiracD(\bZero)$, a simple estimator for the power at wavevector $\bk_1$ can be written $\hat{\Pestimator}(\bk_1,\bk_2) = \delta(\bk_1)\delta(\bk_2)\Kronecker_{\bk_1,-\bk_2}/V$. [^3] Unfortunately this procedure is very noisy. An improved estimate can be obtained by summing over a set of modes satisfying the closure criterion $\sum_{i}\bk_i=\bZero$ within a thin $\bk$-shell. Since we are working in finite volume the available modes are discretized in units of the fundamental frequency $\kf = 2\pi/L$, and therefore the thin-shell average should be written $$\label{eq:estimPSE}
\hat{P}(k)
= \frac{1}{V_P(k)}
\int \D{^3 q_1} \, \D{^3 q_2} \; \DiracD(\bq_1 + \bq_2)
\hat{\Pestimator}(\bq_1,\bq_2)
\tilde\Pi_k(\bq_1)\tilde\Pi_k(\bq_2)
,$$ where $\Delta k \geq \kf$ represents a bin width, and we have introduced the binning function $\tilde\Pi_k(\bq)$ which is defined to be unity if $|\bq|\in [k-\Delta k/2, k+\Delta k/2]$ and zero otherwise. Finally, the quantity $V_P$ represents the volume of the spherical shell accounting for discretization, $$V_P(k)
\equiv \int \D{^3 q_1} \, \D{^3 q_2} \; \DiracD(\bq_1 + \bq_2)
\tilde\Pi_{k}(\bq_1)\tilde\Pi_k(\bq_2)
=
\int \D{^3 q} \; \tilde\Pi^2_k(\bq)
=
\int \D{^3 q} \; \tilde\Pi_k(\bq)
=
4 \pi k^2 \Delta k
\bigg[
1
+ \frac{1}{12}
\Big(
\frac{\Delta k}{k}
\Big)^2
\bigg]
.$$
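A direct gridded implementation of the shell-averaged estimator might look as follows; the grid size, bin edges and normalization conventions are illustrative. The white-noise check uses the fact that unit-variance noise has a flat spectrum equal to the grid cell volume.

```python
import numpy as np

def estimate_pk(delta_x, L, k_edges):
    """Shell-averaged power spectrum, eq. (estimPSE), for a gridded field.
    Returns bin-averaged k and P(k) in continuum normalization."""
    N = delta_x.shape[0]
    vcell = (L / N)**3
    # continuum convention: delta(k) = sum_x delta(x) e^{-ik.x} * Vcell
    delta_k = np.fft.fftn(delta_x) * vcell
    kf = 2 * np.pi / L
    kvec = np.fft.fftfreq(N, d=1.0 / N) * kf
    kx, ky, kz = np.meshgrid(kvec, kvec, kvec, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    p_raw = np.abs(delta_k)**2 / L**3          # single-mode estimator P-hat
    ks, pk = [], []
    for lo, hi in zip(k_edges[:-1], k_edges[1:]):
        shell = (kmag >= lo) & (kmag < hi)     # binning function Pi-tilde
        ks.append(kmag[shell].mean())
        pk.append(p_raw[shell].mean())
    return np.array(ks), np.array(pk)

# sanity check: white noise of unit variance has P(k) = V_cell for every shell
rng = np.random.default_rng(0)
N, L = 32, 100.0
ks, pk = estimate_pk(rng.standard_normal((N, N, N)), L,
                     2 * np.pi / L * np.array([2.0, 6.0, 10.0]))
```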
Estimating the bispectrum {#sec:stdEst}
-------------------------
In analogy with the power spectrum, an estimator for a single configuration of the Fourier bispectrum can be written $\hat{\Bestimator}(\bk_1,\bk_2,\bk_3) = \delta(\bk_1)\delta(\bk_2)\delta(\bk_3)\Kronecker_{\bk_1+\bk_2+\bk_3,\bZero}/V$. \[This expression was already used in Section \[sec:predict-modal\] to obtain the Gaussian covariance for estimators of the $\beta_n^R$.\] To obtain an acceptable signal-to-noise we should again average over a set of configurations whose wavenumbers lie within suitable discretized $\bk$-shells. After doing so we obtain the estimator $$\hat{B}(k_1,k_2,k_3)
=
\frac{1}{V_B(k_1,k_2,k_3)}
\int \D{^3 q_1} \, \D{^3 q_2} \, \D{^3 q_3} \;
\DiracD(\bq_1 + \bq_2 + \bq_3)
\hat{\Bestimator}(\bq_1,\bq_2,\bq_3)
\tilde\Pi_{k_1}(\bq_1)\tilde\Pi_{k_2}(\bq_2)
\tilde\Pi_{k_3}(\bq_3)
,
\label{eq:BSE}$$ where the normalization $V_B$ should now be evaluated using [@Sefusatti:2006pa; @Joachimietal2009] $$V_B(k_1,k_2,k_3)
\equiv
\int \D{^3 q_1} \, \D{^3 q_2} \, \D{^3 q_3} \;
\DiracD(\bq_1 + \bq_2+ \bq_3)
\tilde\Pi_{k_1}(\bq_1)\tilde\Pi_{k_2}(\bq_2)\tilde\Pi_{k_3}(\bq_3)
\approx
8 \pi^2 k_1 k_2 k_3 (\Delta k)^3
.$$ Dividing by the square of the fundamental cell volume shows that the number of configurations scales as $\Ntriangles(k_1,k_2,k_3)=V_B(k_1,k_2,k_3)/\kf^6\propto
N_1N_2N_3$, where $N_i\equiv k_i/\kf$ is the length of the side $k_i$ in units of the fundamental mode. Hence, if we scale the configuration by $k_i \rightarrow \lambda k_i$ then the number of available configurations scales as $\lambda^3$.
@Sefusatti2005, @Fergusson:2010ia and @Scoccimarro:2015bla observed that could be implemented efficiently by rewriting the Dirac $\delta$-function using its Fourier representation, $(2\pi)^3\DiracD(\bq)=\int \D{^3 x} \, \e{\im \bq\cdot\bx}$, and factorizing the dependence on the $\bq_i$. This yields $$\hat{B}(k_1,k_2,k_3)
=
\frac{\kf^3}{(2\pi)^6 V_B(k_1,k_2,k_3)}
\int \D{^3 x} \; \Dfactor{k_1}(\bx) \Dfactor{k_2}(\bx) \Dfactor{k_3}(\bx) ,
\quad \text{where} \quad
\Dfactor{k}(\bx)
\equiv
\int \D{^3 q} \; \e{\im \bx\cdot\bq} \delta(\bq)\tilde\Pi_{k}(\bq)
.
\label{eq:BSE-factorized}$$ Similarly, $$V_B(k_1,k_2,k_3) = \int \frac{\D{^3 x}}{(2\pi)^3} \Pi_{k_1}(\bx) \Pi_{k_2}(\bx) \Pi_{k_3}(\bx)
,$$ where $\Pi_k(\bx)$ is the inverse Fourier transform of $\tilde\Pi_k(\bq)$.
Equation is numerically more efficient than a direct implementation of , because it requires only three Fourier transforms to compute $\Dfactor{k}$ for each wavenumber in the triplet $\{ k_1, k_2, k_3 \}$. Moreover, once each $\Dfactor{k}$ has been obtained it can be re-used for any configuration that shares the same wavenumber. In spite of this improvement, however, it remains a formidable computational challenge to estimate all bispectrum configurations contained within a large volume $V$. Different strategies have been employed to make the calculation feasible. One option is to coarsely bin configurations with binning width equal to several times the fundamental mode. This drastically reduces the number of configurations to be measured. An alternative is to search only among a limited subset of configurations. This may be helpful if we wish to search for specific physical effects, but risks overlooking important signals if we are searching blindly. In either case the analysis is unlikely to be optimal because information is lost.
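A compact sketch of the factorized estimator in discrete grid units; the shell placement, bin width and normalization conventions here are illustrative.

```python
import numpy as np

def bispectrum_fft(delta_x, L, k_triplet, dk):
    """Single-configuration bispectrum estimate via the factorized form of
    eq. (BSE-factorized): filter delta into k-shells, inverse-FFT, multiply.
    Returns (B_hat, n_triangles)."""
    N = delta_x.shape[0]
    kf = 2 * np.pi / L
    kvec = np.fft.fftfreq(N, d=1.0 / N) * kf
    kx, ky, kz = np.meshgrid(kvec, kvec, kvec, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    delta_k = np.fft.fftn(delta_x)
    D, I = [], []
    for kc in k_triplet:
        mask = (np.abs(kmag - kc) <= dk / 2).astype(float)
        D.append(np.fft.ifftn(delta_k * mask).real)   # D_k(x) of the text
        I.append(np.fft.ifftn(mask).real)             # real-space binning function
    num = np.sum(D[0] * D[1] * D[2])   # sum of delta triples over closed triplets / N^6
    den = np.sum(I[0] * I[1] * I[2])   # = N_triangles / N^6
    # restore continuum normalization, delta(k) = delta_grid(k) * (L/N)^3,
    # and divide by V = L^3 as in the single-configuration estimator
    b_hat = (L / N)**9 / L**3 * (num / den)
    return b_hat, den * N**6
```

Dividing the two FFT-based sums reproduces the mean over closed triplets, and `den * N**6` recovers $\Ntriangles$, which can be compared against the continuum approximation $V_B/\kf^6 \approx 8\pi^2 k_1 k_2 k_3 (\Delta k)^3 / \kf^6$.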
Estimating the integrated bispectrum
------------------------------------
Our procedure to estimate the integrated bispectrum is based directly on its definition. We separate the total volume into $N_s$ subvolumes, enumerated by the labels $i = 1, \ldots, N_s$. We compute the mean overdensity $\hat{\bar{\delta}}_i$ and power spectrum $\hat{P}(k)_i$ within each subvolume. Finally, we average the product $\hat{P}(k)_i \hat{\bar{\delta}}_i$ over all subvolumes. Therefore, $$\widehat{\iB}(k) = \frac{1}{N_s} \sum_{i=1}^{N_s}\hat{P}(k)_i \hat{\bar{\delta}}_i .$$ The normalized integrated bispectrum can be obtained by rescaling, $$\widehat{ib}(k) = \frac{\widehat{\iB}(k)}{\hat{P}(k)\hat{\sigma}_L^2} ,$$ where here $\hat{P}(k) = \sum_{i=1}^{N_s} \hat{P}(k)_i /N_s$ is the average subvolume power spectrum and $\hat{\sigma}_L^2 = \sum_{i=1}^{N_s} \hat{\bar{\delta}}_i^2 /N_s$ is the average variance of the mean overdensity.
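A sketch of the subvolume estimator for a gridded field; the subvolume power spectrum helper, grid size and bin edges are illustrative.

```python
import numpy as np
import itertools

def subvolume_pk(delta_sub, Ls, k_edges):
    """Shell-averaged power spectrum of one subvolume (continuum normalization)."""
    n = delta_sub.shape[0]
    dk = np.fft.fftn(delta_sub) * (Ls / n)**3
    kf = 2 * np.pi / Ls
    kvec = np.fft.fftfreq(n, d=1.0 / n) * kf
    kx, ky, kz = np.meshgrid(kvec, kvec, kvec, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    p = np.abs(dk)**2 / Ls**3
    return np.array([p[(kmag >= lo) & (kmag < hi)].mean()
                     for lo, hi in zip(k_edges[:-1], k_edges[1:])])

def integrated_bispectrum(delta_x, L, n_side, k_edges):
    """Normalized integrated bispectrum ib(k) from n_side^3 subvolumes."""
    N = delta_x.shape[0]
    n, Ls = N // n_side, L / n_side
    pks, dbars = [], []
    for i, j, k in itertools.product(range(n_side), repeat=3):
        sub = delta_x[i*n:(i+1)*n, j*n:(j+1)*n, k*n:(k+1)*n]
        dbars.append(sub.mean())                  # mean overdensity in subvolume
        pks.append(subvolume_pk(sub, Ls, k_edges))
    pks, dbars = np.array(pks), np.array(dbars)
    iB = (pks * dbars[:, None]).mean(axis=0)      # <P_i(k) deltabar_i>
    return iB / (pks.mean(axis=0) * (dbars**2).mean())
```

A convenient internal check is the exact scaling $\widehat{ib}[2\delta] = \widehat{ib}[\delta]/2$: the numerator is cubic in $\delta$ while the denominator is quartic.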
Estimating the line correlation function
----------------------------------------
A procedure to estimate the line correlation function was outlined by @Eggemeier:2016asq. We evaluate $$\hat{\ell}(r)
=
\Big(
\frac{r^3}{V}
\Big)^{3/2}
\hspace{-1.5em}
\sum_{\substack{|\bk_1|,|\bk_2|,\\|\bk_1+\bk_2| \leq 2\pi/r}}
\hspace{-1.5em}
\overline{j_0}(|\bk_1-\bk_2|r)\,
\epsilon(\bk_1)\,\epsilon(\bk_2)\,\epsilon(-\bk_1-\bk_2)
,
\label{eq:est1}$$ where $\overline{j_0}(|\bk|r)$ denotes an average of $j_0(kr)$ taken over the volume of a fundamental $k$-space cell centred at $\bk$. The sum scales as $\sim (2L/r)^6$, making its evaluation fast on large scales but challenging on small ones, where the sum includes the majority of Fourier modes. On scales below $\sim 105\,h^{-1}\,\Mpc$ we find that the real space estimator described by @Eggemeier:2016asq becomes more efficient and therefore we use it within that regime. For scales accessible to both schemes we verified that both estimators yield the same result.
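A direct-sum sketch of the estimator for a small grid. Two simplifications relative to the text are flagged in the comments: $j_0$ is evaluated at the exact mode separation rather than cell-averaged as $\overline{j_0}$, and the zero mode, whose phase is undefined for a mean-zero field, is dropped.

```python
import numpy as np
import itertools

def line_correlation(delta_x, L, r):
    """Direct-sum estimate of l(r), eq. (est1), for a gridded field.
    Simplifications: j0 at the exact mode separation (not the cell-averaged
    j0-bar of the text) and the zero mode of the phase field is excluded."""
    N = delta_x.shape[0]
    kf = 2 * np.pi / L
    dk = np.fft.fftn(delta_x)
    amp = np.abs(dk)
    eps = np.divide(dk, amp, out=np.zeros_like(dk), where=amp > 0)  # phase field
    ncut = int(2 * np.pi / r / kf)        # |k| <= 2 pi / r in units of kf
    modes = [n for n in itertools.product(range(-ncut, ncut + 1), repeat=3)
             if 0 < sum(c * c for c in n) <= ncut**2]
    total = 0.0 + 0.0j
    for n1 in modes:
        for n2 in modes:
            n3 = tuple(-(a + b) for a, b in zip(n1, n2))
            if not 0 < sum(c * c for c in n3) <= ncut**2:
                continue
            kdiff = kf * np.sqrt(sum((a - b)**2 for a, b in zip(n1, n2)))
            # j0(x) = sin(x)/x = np.sinc(x / pi); negative indices wrap to -k
            total += np.sinc(kdiff * r / np.pi) * eps[n1] * eps[n2] * eps[n3]
    return float((r**3 / L**3)**1.5 * total.real)
```

The double sum over modes makes this implementation practical only on large scales, which mirrors the scaling $\sim (2L/r)^6$ noted above; on small scales a real-space estimator is preferable.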
Estimating the modal bispectrum {#sec:modalEst}
-------------------------------
Equation shows that an estimate of the modal coefficient $\beta^{Q}_m$ requires evaluation of $\llangle w \hat{\Bestimator}|Q_n\rrangle$, where $\hat{\Bestimator}$ is the bispectrum estimator defined in Section \[sec:stdEst\]. Using equation , writing the $\delta$-function using its Fourier representation, and factorizing the integral as described in Section \[sec:stdEst\], we find $$\label{eq:wBhat_Qn}
\llangle w \hat{\Bestimator} | Q_n \rrangle
=
\frac{1}{V}
\int \D{^3 x} \;
\Mfactor{n_1}(\bx) \Mfactor{n_2}(\bx) \Mfactor{n_3}(\bx)
,
\quad \text{where} \quad
\Mfactor{n}(\bx)
\equiv
\int \frac{\D{^3 k}}{(2\pi)^3}
\e{\im\bk\cdot\bx}
\frac{q_{n}(k)}{\sqrt{k \hat{P}(k)}}\delta(\bk)
.$$ Here, $q_n(k)$ is a polynomial used in the construction of the modes $Q_n$; see Appendix \[app:poly\]. Equation shows that the computation can be reduced to a single 3-dimensional integral over the $\Mfactor{n}(\bx)$, which are themselves weighted Fourier transforms of $\delta$. Finally, $\beta_m^Q$ can be estimated by contracting with the inverse inner product matrix $\gamma_{mn}^{-1}$ defined in , $$\hat{\beta}_m^{Q}
= \sum_{n=0}^{\nmax-1}
\llangle w \hat{\Bestimator} | Q_n\rrangle\gamma^{-1}_{nm}
.
\label{eq:betaEst}$$ To obtain the corresponding $R$-basis coefficients requires a further linear transformation $$\hat{\beta}_n^R=\sum_{m}\lambda_{m n}\hat{\beta}_m^{Q} ,$$ where $\lambda_{mn}$ is the matrix defined above . As explained in Section \[sec:predict-modal\], we generally perform numerical calculations in the $Q$-basis in order to preserve the simplicity of , but present results in the $R$-basis because their covariance properties make these coefficients simpler to interpret. In either basis, the measured coefficients can be used to reconstruct the bispectrum for any required Fourier configuration using equation .
Note that, because the matrix $\gamma_{nm}$ can be tabulated, measuring a single modal coefficient has the same computational complexity as measuring a single configuration of the Fourier bispectrum.
Choice of bins
--------------
In Table \[tab:binning\] we summarize the parameters used in implementing estimators for each of these statistical quantities. The power spectrum and Fourier bispectrum are binned by averaging over shells of width $\Delta k$ as explained in Sections \[sec:stdPEst\]–\[sec:stdEst\]. For the same reasons we also average the subvolume power spectra used to construct the integrated bispectrum. The line correlation function and modal coefficients do not involve averaging over shells, but instead are evaluated using equations and which are themselves aggregates over groups of configurations. For each statistic we report the minimum and maximum $k$-modes that contribute, and the total number of measurements or bins. Note that the bispectrum bin width corresponds to $\Delta k = 8\,\kf$.
In what follows we will label the Fourier configurations for the bispectrum using the scheme of @Gil-Marin:2016wya. We assign the label (or ‘index’) zero to the equilateral configuration with $k_1 = k_2 = k_3 =
\kmin$. The remaining configurations are ordered so that $k_1 \leq k_2 \leq k_3$ and $k_3 \leq k_1 + k_2$. Their labels are assigned by sequentially increasing $k_3$, $k_2$ and $k_1$ (in this order) and incrementing the index for each valid triangle.
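The indexing scheme can be made explicit with a short enumeration, here with wavenumbers expressed in units of the bin width; the function name and units are illustrative.

```python
def enumerate_configurations(n_min, n_max, step=1):
    """Enumerate bispectrum configurations in the ordering described in the
    text: k1 <= k2 <= k3 with k3 <= k1 + k2, labels assigned by sequentially
    increasing k3, then k2, then k1; index 0 is the equilateral configuration
    (n_min, n_min, n_min). Wavenumbers are in units of the bin width."""
    configs = []
    for k1 in range(n_min, n_max + 1, step):
        for k2 in range(k1, n_max + 1, step):
            # k3 runs fastest, capped by both the triangle condition and n_max
            for k3 in range(k2, min(k1 + k2, n_max) + 1, step):
                configs.append((k1, k2, k3))
    return configs
```

With `n_min = 1` the first few configurations are $(1,1,1)$, $(1,1,2)$, $(1,2,2)$, $(1,2,3)$, and so on.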
In our measurements of the integrated bispectrum we split the simulation box into $125$ subcubes, corresponding to a side of $300\,h^{-1}\,\Mpc$. This increases $\kmin$ by a factor of five compared to the full box. Finally, for the line correlation function we use a non-regular $r$-spacing, spanning the range from $10$ to $200\,h^{-1}\,\Mpc$. The first seven bins are separated by $2.5\,h^{-1}\,\Mpc$, which doubles to $5\,h^{-1}\,\Mpc$ for the next eleven and to $10\,h^{-1}\,\Mpc$ for the remaining twelve bins.
$\Delta k$ \[$h\,\Mpc^{-1}$\] $\kmin$ \[$h\,\Mpc^{-1}$\] $\kmax$ \[$h\,\Mpc^{-1}$\] $\Nbin$
--------- ------------------------------- ---------------------------- ---------------------------- ---------
$P$ $0.010$ $0.004$ $0.300$ $30$
$B$ $0.034$ $0.004$ $0.302$ $95$
$\beta$ $-$ $0.004$ $0.302$ $50$
$\ib$ $0.010$ $0.021$ $0.306$ $29$
$\LCF$ $-$ $0.016$ $0.314$ $30$
: Shell widths $\Delta k$ used to average estimators for the power spectrum and bispectrum (where used), together with minimum and maximum modes $\kmin$, $\kmax$ and the total number of bins or measurements $\Nbin$.
\[tab:binning\]
Comparison of theoretical predictions and simulations {#sec:comparison}
=====================================================
In this section we present estimates of the typical values for each bispectrum proxy introduced in Section \[sec:estimators\], and implemented using the formulae of Section \[sec:estimation\]. We derive these from the 200 simulations of our fiducial cosmology in set (1)—see Section \[ssec:simulations\]—at redshifts $z = 0$, $z = 0.52$ and $z=1$. Also, using the simulation set (2) we determine how each proxy responds to changes in the cosmological parameters (Section \[sec:derivs\]). These measurements enable us to characterize the accuracy of the theoretical predictions for these typical values discussed in Section \[sec:modelling\]. Finally, in Section \[sec:ngcovariance\] we discuss measurements of the covariances and cross-covariances for each pair of proxies.
Mean values in the fiducial cosmology {#sec:means}
-------------------------------------
### Comparison of measurements and theoretical predictions {#sec:means.comparison-measurement-theory}
![*Top row*: Measurements of the bispectrum as a function of configuration index (see text), estimated from $200$ simulations at redshifts $z=0$, $0.52$ and $1$. We compare these measurements to the theoretical estimates of Section \[sec:modelling\]: the tree-level predictions are shown as dashed light-blue lines, the one-loop predictions are shown as solid red lines, and the halo model predictions are shown as short-dashed dark-blue lines. Black crosses mark the measured values. *Middle row*: One-loop and halo model predictions relative to the tree-level prediction. *Bottom row*: Differences between measurements and theoretical predictions (i.e., $\Delta B = \Bdata-\Btheory$), normalized to the corresponding $1\sigma$ standard deviation in the value.[]{data-label="fig:bspec"}](plots/bspec)
![Same configuration as Fig. \[fig:bspec\], showing values for the Fourier bispectrum reconstructed from the modal coefficients $\beta_n^Q$ using equation . In the bottom row we plot differences computed using $\Delta B = \Bmodal - \Btheory$.[]{data-label="fig:bspec-modes"}](plots/bspec_modes)
In Figs. \[fig:bspec\]–\[fig:linecorr\] we show measurements of each proxy for all three redshifts, averaged over the $200$ different realizations. We do not explicitly display our power spectrum measurements, which have been well-studied by previous authors [e.g. @Makino:1991rp; @Lokas:1995xf; @Scoccimarro:1996se; @Scoccimarro:1997st; @Scoccimarro:2000gm; @Smith:2002dz; @Seljak:2000gq; @Peacock:2000qk; @Scoccimarro:2001cj; @Mead:2015yca]. In each figure, the top row contrasts our measurements with the tree-level, one-loop and halo model predictions. The middle row displays the one-loop and halo model predictions relative to the tree-level prediction, and the bottom row shows the difference between the measurements and the theoretical prediction in units of the standard deviation of the estimate.
We find that both of the SPT predictions are more accurate at large scales and high redshifts. The halo model prediction is a better match at low redshift. The differences between each theoretical estimate and the typical values measured from simulation are broadly consistent with previous analyses; see @Scoccimarro:1997st [@Scoccimarro:2000gm; @Schmittfull:2012hq; @Lazanu:2015rta].
In Fig. \[fig:bspec-modes\] we plot the Fourier bispectrum reconstructed from using our measurements of the $\beta_n^Q$ coefficients. This is easier to interpret than the $\beta$-values themselves. The scatter between predicted and measured values (most clearly visible in the bottom row) is similar to the scatter for the directly-measured Fourier bispectrum (Fig. \[fig:bspec\]), and indicates that differences between the reconstructed and directly-measured values are small. We give a more detailed analysis of the accuracy of the modal bispectrum in Section \[sec:reconstructions\].
We give values for the normalized integrated bispectrum in Fig. \[fig:ibspec\]. Except for a few $k$-bins the error bars are too large to show any preference for a particular theoretical model. In contrast to Figs. \[fig:bspec\]–\[fig:bspec-modes\], the bottom row shows that tree-level SPT is a good match to the measured $\ib$ at all three redshifts. Conversely, the halo model prediction is a better match at high redshift. Our theoretical predictions are consistent with those reported by @Chiang:2014oga, but our measured values have larger error bars because we work with a smaller simulation volume.
![Same configuration as Fig. \[fig:bspec\], showing values for the normalized integrated bispectrum. Error bars show the $1\sigma$ interval.[]{data-label="fig:ibspec"}](plots/ibspec)
Finally, we present our measurements of the line correlation function in Fig. \[fig:linecorr\]. The one-loop and halo-model predictions appearing here are new, and have not previously been studied. The most striking feature is the discrepancy between the halo model and SPT-based predictions in the smallest $r$-bins. This is consistent with the analyses of @Wolstenhulme:2014cla and @Eggemeier:2016asq, which both found differences between the tree-level prediction and values measured from simulation on scales with $r \lesssim 30\,h^{-1}\,\Mpc$. The agreement is good for larger $r$.
![Same configuration as Fig. \[fig:bspec\], showing values for the line correlation function at scale $r$. Error bars show the $1\sigma$ interval.[]{data-label="fig:linecorr"}](plots/linecorr)
The bottom panels of Figs. \[fig:bspec\]–\[fig:linecorr\] show that our theoretical predictions are accurate within a restricted range of scales. Outside this range it becomes progressively more difficult to model the observables. This mis-modelling should be regarded as an additional source of systematic error—a *theory error*—when forecasting constraints, or analysing data, using any of these theoretical models. In particle phenomenology such theory errors are routinely estimated when performing fits to data, but their use in cosmology is less common. In this paper we construct Fisher forecasts for parameter error bars using both SPT-based models and the halo model. Comparison of these error bars enables us to estimate the impact of theoretical uncertainties on future constraints that incorporate three-point statistics (see Section \[sec:theory-dep\]).
An alternative prescription for estimating theory errors was used by @Baldauf:2016sjb and @Welling:2016dng. In their approach the theoretical uncertainty in one-loop SPT is estimated from the next-order term in the loop expansion. We find that this prescription gives noticeably larger estimates than the difference between one-loop SPT and the values we measure from simulations. Therefore, although @Baldauf:2016sjb and @Welling:2016dng concluded that (for example) constraints on some types of primordial non-Gaussianity would be weakened significantly after accounting for theory errors, our numerical comparison suggests that the attainable error may degrade by less than their analysis implies.
### Accuracy of modal reconstruction {#sec:reconstructions}
![Modal bispectra reconstructed using $10$ modes (blue) and $50$ modes (red) at redshifts $z=0$, $0.52$ and $1$. The lower panels show the ratio of the directly-measured Fourier bispectrum to the modal reconstruction.[]{data-label="fig:bspecrecon"}](plots/bspecrecon)
Comparison of Figs. \[fig:bspec\] and \[fig:bspec-modes\] demonstrates that the Fourier bispectrum reconstructed from our measurements of the $\beta^Q_n$ accurately reproduces the correct amplitude and shape dependence. This information is embedded in the modal coefficients. For example, the zeroth basis mode $R_0 \propto Q_0$ is a constant and therefore $\beta^R_0 \propto \beta^Q_0$ captures information about the mean amplitude of the Fourier bispectrum over all configurations—or, equivalently, the skewness of $\delta$. The next few modes are slowly varying functions of configuration. Taken together, these low-order modes carry the principal amplitude information and for reasonably smooth bispectra we expect they exhibit the strongest dependence on background cosmological parameters. The higher modes capture more subtle detail. As with any basis decomposition, their inclusion increases the accuracy of the reconstruction.
To see this in detail, consider a reconstruction using only $\nmax = 10$ modes. In Fig. \[fig:bspecrecon\] we plot the Fourier bispectrum reconstructed in this way (blue line) compared to the reconstruction using $\nmax = 50$ described above (red line). Black crosses mark the measured data points. In the lower panel we plot the ratio between these measured values and the reconstructions. The accuracy is good whether we use $\nmax = 10$ or $\nmax = 50$, but the scatter is smaller for $\nmax = 50$. We conclude that, in this case, the first 10 modes are sufficient to capture the main behaviour of the Fourier bispectrum, but extra modes are helpful if we wish to reproduce the precise configuration dependence to within $\lesssim 10\%$ accuracy.
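As a stand-alone illustration of why a handful of modes can dominate the reconstruction, the sketch below projects a smooth one-dimensional function onto an orthonormal Legendre basis and compares partial sums. The basis and target function are toy stand-ins for the separable $Q_n$ modes and the Fourier bispectrum, not the objects used in our pipeline:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Gauss-Legendre quadrature makes the inner products effectively exact
x, w = leggauss(64)

# Orthonormal Legendre basis on [-1, 1]: R_n(x) = sqrt((2n+1)/2) P_n(x)
nmax = 10
basis = [np.sqrt((2 * n + 1) / 2) * legval(x, [0] * n + [1])
         for n in range(nmax)]

f = np.exp(-x**2) * (1 + 0.5 * x)                     # smooth stand-in signal
betas = np.array([np.sum(w * f * R) for R in basis])  # modal coefficients

def reconstruct(n_modes):
    """Partial sum of the modal expansion using the first n_modes modes."""
    return sum(b * R for b, R in zip(betas[:n_modes], basis[:n_modes]))

err3 = np.max(np.abs(f - reconstruct(3)))
err10 = np.max(np.abs(f - reconstruct(10)))
assert err10 < err3        # extra modes refine the configuration dependence
```

The first few slowly varying modes already fix the overall amplitude; the higher modes only refine the detailed shape, mirroring the behaviour seen in Fig. \[fig:bspecrecon\].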
Derivatives with respect to cosmological parameters {#sec:derivs}
---------------------------------------------------
![Derivatives of the Fourier bispectrum and its proxies with respect to the parameters at $z=0.52$. The four columns show (from left to right) the derivatives of: $B$, $\Bmodal$ (reconstructed from $\beta$), $\ib$ and $\LCF$. As in Figs. \[fig:bspec\]–\[fig:linecorr\], measured values are in black, while light blue dashed, red solid and dark blue short-dashed lines are the tree-level, 1-loop and halo model predictions, respectively.[]{data-label="fig:derivatives"}](plots/derivatives)
In the remainder of this paper our aim is to obtain Fisher forecasts of error bars for a parameter set $\theta_\alpha$, where the index $\alpha$ labels one of the cosmological parameters of Table \[tab:parameters\]. For this purpose the role of a theoretical model is to predict the derivatives of observables with respect to each parameter, and the accuracy of the forecast depends on the reliability of these predictions. In this section we study how well our three theoretical models reproduce the derivatives estimated from our simulation suite. We compute the derivative of some estimator $\hat{X}$ at wavenumber $k$ with respect to a parameter $\theta_\alpha$ by the rule $${\frac{\D{\hat{X}}(k\,|\,\B{\theta})}{\D{\theta_{\alpha}}}}
=
\hat{\bar{X}}(k\,|\,\B{\theta})
{\frac{\D{\ln{\hat{X}}}(k\,|\,\B{\theta})}{\D{\theta_{\alpha}}}}
,$$ where $\hat{\bar{X}}(k|\B{\theta})$ is the average over the $200$ fiducial simulations of set (1) (described in Section \[ssec:simulations\]) for $X \in
\{P, B, \beta, \ib, \LCF \}$, and the logarithmic derivative with respect to $\theta_\alpha$ is computed using $$\label{eq:means+derivs.logderiv}
{\frac{\D{\ln{\hat{X}}}(k \mid \B{\theta})}{\D{\theta_{\alpha}}}}
=
\frac{1}{4}
\sum_{i=1}^4
\frac{\hat{X}^{(i)}(k \mid \B{\theta} + \Delta\theta_{\alpha})
- \hat{X}^{(i)}(k \mid \B{\theta} - \Delta\theta_{\alpha})}
{2\Delta\theta_{\alpha} \hat{X}^{(i)}(k \mid \B{\theta})}
.$$ The sum is over the four realizations used in simulation set (2), and the derivative is constructed using the $+\Delta\theta_\alpha$ and $-\Delta\theta_\alpha$ offset simulations described in Section \[ssec:simulations\]. The advantage of the logarithmic derivative is that both realizations in the numerator on the right-hand side share initial conditions with their fiducial partner in the denominator. Therefore, division by the fiducial estimate $\hat{X}^{(i)}(k \mid \B{\theta})$ minimizes dependence on the specific realization. [^4]
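The cancellation of realization scatter in the logarithmic-derivative rule above can be seen with a toy observable whose noise is shared between the offset and fiducial runs, mimicking matched initial conditions; the model, amplitudes and names are illustrative rather than part of our pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.linspace(0.1, 0.3, 5)
theta, dtheta = 1.0, 0.01

def model(th, noise):
    # toy observable: amplitude linear in the parameter; 'noise' mimics
    # realization scatter shared by the +/- offset and fiducial runs
    return th * k**2 * (1.0 + noise)

noises = [0.2 * rng.standard_normal(k.size) for _ in range(4)]

# average of central differences, each normalized by its own fiducial run
logderiv = np.mean(
    [(model(theta + dtheta, n) - model(theta - dtheta, n))
     / (2 * dtheta * model(theta, n)) for n in noises], axis=0)

xbar = theta * k**2          # mean over the (noise-free) fiducial ensemble
deriv = xbar * logderiv      # estimate of dX/dtheta

# the shared noise cancels exactly, leaving the true derivative k**2
assert np.allclose(deriv, k**2)
```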
In Fig. \[fig:derivatives\] we plot the derivatives of each observable with respect to the cosmological parameters at $z=0.52$. Our forecasts use three redshift bins, but the other bins behave similarly to the $z=0.52$ bin shown here, and the statements made below can be taken to apply at all three redshifts. We do not include the power spectrum, for which the derivatives appeared in @Smith:2012uz.
To simplify comparison of the modal bispectrum with the Fourier bispectrum, Fig. \[fig:derivatives\] plots derivatives of the reconstructed bispectrum rather than derivatives of $\beta_n^Q$ or $\beta_n^R$. Comparison of the first two columns shows that the cosmology-dependence is accurately captured using $\nmax = 50$, either for theoretical predictions or the measured values.
There is a significant spread in performance of the theoretical models, with tree-level SPT and the halo model generally offering the poorest match. For the derivatives with respect to $\Omega_m$, $\Omega_b$, $n_s$ and $h$ these models give similar predictions. The probable reason is that, in the standard halo model, the halo mass function and halo profile are fixed to the fiducial cosmology. Only the input power spectrum is taken to vary with the cosmological parameters, and since it matches the tree-level SPT prediction its derivatives will be equal. Therefore the halo-model derivatives will differ from those of tree-level SPT only via a (possibly scale-dependent) prefactor. More complex halo models with cosmology-dependent halo parametrizations have been studied (see, e.g., @Mead:2016zqy for an application to dark energy models). However, determining which variation of the halo model captures the cosmological parameter dependence of the bispectrum most accurately is outside the scope of this paper. We simply note that, if the halo model is to be used for analysis or forecasting of the Fourier bispectrum, its implementation should be chosen with care because its performance depends on these details.
The derivatives of the integrated bispectrum are shown in the third column of Fig. \[fig:derivatives\]. The error bars on the measured values are again too large to show a clear preference for any model; indeed, they are generally so large that the measurement is not significantly different from zero. These results are consistent with those reported by @Chiang:2015pwa for a range of values of $\Omega_m$, $\sigma_8$ and $n_s$. We conclude that the integrated bispectrum is rather insensitive to the background cosmology and is therefore a comparatively poor tool to constrain it. While this means we must expect a Fisher forecast to predict weaker error bars for the parameters of Table \[tab:parameters\], this insensitivity could be an advantage if the intention is to use the integrated bispectrum as a probe of other physics. For example, in addition to the background cosmology we may wish to use the large-scale structure bispectrum to constrain *primordial* three-point correlations produced by inflation on squeezed configurations. Insensitivity to the background cosmology would reduce the likelihood of degeneracies in these measurements.
The last column of Fig. \[fig:derivatives\] shows the derivatives of the line correlation function. As for the typical values discussed above, the values predicted by our theoretical models are significantly discrepant with the measured values in the smallest $r$ bins. Also, the derivative with respect to the dark energy parameter $w_a$ is particularly discrepant for the halo model. One possible explanation is the construction of the halo model as described above, with its fixed halo mass function and halo profile. Alternatively, it is possible that the halo model power spectrum and bispectrum that we use are subtly inconsistent in a way that produces inaccuracies in the line correlation function on small scales.
Non-Gaussian covariance {#sec:ngcovariance}
-----------------------
![Correlation matrices for (clockwise from top left) $P+B$, $P+\beta$, $P+\LCF$ and $P+\ib$, at redshift $z=0$. In each panel, the lower-left quadrant contains the power spectrum auto-correlation ($P \times P$), while the upper-right quadrant contains the auto-correlation of the corresponding 3-point correlation measure. The upper-left and lower-right quadrants contain the cross-covariance. []{data-label="fig:correlation"}](plots/correlation.png){width="\textwidth"}
The analytic, Gaussian covariance of each proxy is most accurate at high redshifts and on large scales, where the matter fluctuations are more nearly Gaussian and therefore more accurately described by the power spectrum alone. At low redshifts and on small scales, however, the Gaussian approximation fails due to non-linear evolution of matter fluctuations. This evolution generates additional contributions to the covariance through higher-order $n$-point correlations.
The simplest and most robust approach to obtain accurate non-Gaussian covariances has been to analyse large suites of simulations. This method was used by @Takahashi:2009bq, @Takahashi:2009ty, [@Blot:2015cvj], and @Klypin:2017iwu to study the non-Gaussian covariance of the power spectrum. Other authors have performed analogous studies for the bispectrum [@Sefusatti:2006pa; @Chan:2016ehg], the real-space partner of the integrated bispectrum [@Chiang:2015eza], and the line correlation function [@Eggemeier:2016asq]. In this section, we present our measurements of the non-Gaussian covariance for each proxy, estimated from our suite of simulations. We also discuss the cross-covariance between pairs of proxies.
In Sections \[sec:covariance\] and \[sec:paramEstim\] we quantify the impact of these complex non-diagonal covariances on estimates of signal-to-noise and Fisher forecasts.
We plot correlation matrices for the measurements $P+B$, $P+\beta$, $P+\ib$, and $P+\LCF$ in Fig. \[fig:correlation\]. We show measurements only at $z=0$ where differences between the Gaussian and non-Gaussian covariances are largest.
The correlation coefficient $\CorrMatrix_{ij}$ between two data bins $i$ and $j$ is defined to satisfy $\CorrMatrix_{ij} \equiv \hat{\CovMatrix}_{ij}/\sqrt{\hat{\CovMatrix}_{ii}\,\hat{\CovMatrix}_{jj}}$, where $\hat{\CovMatrix}$ is the covariance matrix estimated from the simulation suite, $$\hat{\CovMatrix}_{ij}
=
\frac{1}{\Nreal}
\sum_{n=1}^{\Nreal}
\Big[
\hat{S}_i^{(n)}
- \hat{\bar{S}}_i
\Big]
\Big[
\hat{S}_j^{(n)}
- \hat{\bar{S}}_j
\Big]
,
\label{eq:covariance-def}$$ and $\Nreal = 200$ is the number of realizations. To measure an auto-covariance the data vector $S$ contains all measurements of a single proxy, $S = ( X_{a,1}, \ldots, X_{a,n})$; to measure a cross-covariance it contains all measurements from a pair, $S = ( X_{a,1}, \ldots, X_{a,n_1}, X_{b,1}, \ldots, X_{b,n_2} )$, where $X_a, X_b \in \{ P, B, \beta, \ib, \LCF \}$. The correlation matrix measures the degree of coupling between different measurements. Its elements take values between $-1$ (where the bins are fully anti-correlated) and $+1$ (where the bins are fully correlated). A value of zero corresponds to independent measurements. For comparison, the Gaussian covariance matrices for $P$, $B$, $\beta$ and $\ib$ are diagonal, whereas for $\LCF$ there are correlations between neighbouring bins with similar $r$ because it is a real-space statistic and therefore includes contributions from many Fourier configurations. In the Gaussian approximation the cross-covariance between $P$ and any bispectrum proxy is zero.
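The covariance estimator above and the associated correlation matrix amount to a few lines of linear algebra. A minimal sketch with synthetic realizations (the coupling matrix and bin counts are toy choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n_real, n_bin = 200, 6

# synthetic realizations with a known coupling between bins
L = np.tril(0.3 * rng.standard_normal((n_bin, n_bin))) + np.eye(n_bin)
data = rng.standard_normal((n_real, n_bin)) @ L.T   # rows = realizations

mean = data.mean(axis=0)
dev = data - mean
cov = dev.T @ dev / n_real          # 1/N_real normalization, as in the text
sig = np.sqrt(np.diag(cov))
corr = cov / np.outer(sig, sig)     # correlation coefficients r_ij

assert np.allclose(np.diag(corr), 1.0)
assert np.all(np.abs(corr) <= 1.0 + 1e-12)
```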
For $P+B$ (upper-left panel of Fig. \[fig:correlation\]) the correlation matrix has an approximate block structure due to the ordering of the 95 triangle configurations that we measure. The blocks correspond to groups of adjacent configurations with shared values of $k_1$ or $k_2$. While the power spectrum $P(k)$ shows mild correlations between different bins at high $k$, the bispectrum exhibits much stronger correlations. There are also non-zero cross-correlations between power spectrum and bispectrum bins. The correlation between power spectrum and bispectrum tends to be higher when $P(k)$ and $B(k_1, k_2, k_3)$ have wavenumber bins that overlap. Similarly, the correlation between different bispectrum bins is higher when the configurations share at least one wavenumber. However, even configurations that have no wavenumbers in common can be strongly correlated, with correlation coefficient as large as $\sim 0.8$, due to non-linear growth.
In the upper-right panel of Fig. \[fig:correlation\] we present measurements of the correlation coefficients for $P+\beta^R$. These have not previously been reported. As explained in Section \[sec:predict-modal\] these measurements apply to the $R$-basis, for which the covariance matrix is *constructed* to be diagonal in the Gaussian approximation. We find that only the first two modes are correlated with the majority of $P(k)$ bins. This is reasonable because the lowest modes probe the most scale-independent features of the phase bispectrum. The remainder show low-to-moderate correlation or anti-correlation due to non-linear effects.
Correlation measurements for the integrated bispectrum appear in the lower-left panel of Fig. \[fig:correlation\]. The $\ib(k)$ measurements show stronger auto-correlations than $P(k)$ as $k$ increases, while the $P \times \ib$ cross-correlation is relatively featureless. This indicates that the two data sets are nearly independent. Similarly, we find that the $P \times \LCF$ cross-correlation is nearly featureless except where the smallest $r$ bins and highest $k$ bins show significant correlation. Relative to the Gaussian covariance matrix for $\LCF$, the $r$ bins with $r \lesssim 50 \, h^{-1} \, \Mpc$ are more strongly correlated due to non-linear growth.
Finally, we have computed the correlation matrices between the bispectrum and its proxies. These enable us to identify which bispectrum configurations contribute most to individual bins of $\beta^R$, $\ib$ or $\LCF$. We find that the first two $\beta^R$ modes are strongly correlated with the bispectrum over a large range of triangles, while the remainder are generally more correlated with triangles on the largest scales (that is, lower triangle index). This structure is similar to the $P+\beta^R$ correlation matrix.
We find that $B$ and $\ib$ are very weakly correlated, which we attribute to $\ib$ being dominated by more strongly squeezed triangles than any we include in the 95 measured configurations of $B$. Finally, the line correlation function is correlated with a majority of bispectrum configurations when $r \lesssim 40 \, h^{-1} \, \Mpc$. This indicates that the line correlation function is sensitive to many different shapes of Fourier triangle. We do not find particularly strong correlations for $\LCF \times \ib$, but $\LCF \times \beta^R$ shows that the line correlation function at small $r$ is highly correlated with the first two $\beta^R$ modes. This is consistent with the observation that both are sensitive to a wide range of Fourier configurations.
Cumulative signal-to-noise of the bispectrum proxies {#sec:covariance}
====================================================
Before discussing the constraining power of each proxy we first compute the available signal-to-noise. This is an intermediate step that characterizes the significance with which measurements of each proxy can be extracted from a data set. Negligible signal-to-noise would normally imply poor prospects for parameter constraints. For example, @Chan:2016ehg and @Kayo:2012nm studied the signal-to-noise as a proxy for the information content of the Fourier bispectrum in the context of large-scale structure and weak lensing, respectively.
The cumulative signal-to-noise $\mathcal{S} / \mathcal{N}$ up to a maximum wavenumber $\kmax$ is defined by $$\left( \SignalToNoise \right)^2
\equiv
\sum_{k_i, k_j \leq \kmax} S_i \CovMatrix^{-1}_{ij} S_j ,
\label{eq:signal-noise-def}$$ where $S$ is the vector of typical values for either a single proxy or a combination of proxies, defined below equation \[eq:covariance-def\]. In this and subsequent sections we drop the use of a hat to denote an estimated value, and an overbar to denote a mean. The sum in equation \[eq:signal-noise-def\] runs over all bins containing wavenumbers that satisfy the condition $k \leq \kmax$. For the Fourier bispectrum a bin corresponds to a triplet of wavenumbers $(k_1, k_2, k_3)$, all of which are required to be smaller than $\kmax$.
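The cumulative signal-to-noise defined above is a quadratic form restricted to bins below $\kmax$. A minimal sketch with a diagonal toy covariance (the signal shape and error level are our own choices):

```python
import numpy as np

k = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
S = 1.0 / k                           # toy signal vector
cov = np.diag((0.1 * S)**2)           # 10% errors per bin, uncorrelated

def cumulative_s2n(S, cov, k, kmax):
    """sqrt( S^T C^{-1} S ) restricted to bins with k <= kmax."""
    sel = k <= kmax
    Csub = cov[np.ix_(sel, sel)]
    return np.sqrt(S[sel] @ np.linalg.solve(Csub, S[sel]))

# each bin contributes (S_i / sigma_i)^2 = 100, so three bins give sqrt(300)
assert np.isclose(cumulative_s2n(S, cov, k, 0.15), np.sqrt(300))
assert cumulative_s2n(S, cov, k, 0.30) > cumulative_s2n(S, cov, k, 0.15)
```

With a non-diagonal covariance the same quadratic form automatically discounts bins that are correlated, which is why the non-Gaussian measurements below fall short of the Gaussian prediction.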
We use the non-Gaussian covariance matrix measured from simulations, described in Section \[sec:ngcovariance\], which we denote by $\CovMatrix_*$. Its inverse $\CovMatrix^{-1}_*$ is not an unbiased estimator of $\CovMatrix^{-1}$. A simple prescription to approximately correct for this bias is to rescale $\CovMatrix^{-1}_*$ by an Anderson–Hartlap factor [@Anderson2003; @Hartlap2007], which yields $$\CovMatrix^{-1}
\approx
\frac{\Nreal-\Nbin-2}{\Nreal-1}\,\CovMatrix^{-1}_*
,$$ where $\Nreal$ is the number of realizations used to estimate the covariance matrix and $\Nbin$ is its dimensionality. [^5] Care should be taken when computing the numerical inverse $\CovMatrix^{-1}_*$, especially for combinations of measurements with signals of widely disparate magnitude. To avoid issues associated with ill-conditioning we first compute the correlation matrix $\CorrMatrix_{*,ij} = \CovMatrix_{*,ij}/\sqrt{\CovMatrix_{*,ii}\,\CovMatrix_{*,jj}}$, whose entries lie between $-1$ and $+1$. We determine the inverse $\CorrMatrix^{-1}_{ij}$ using a singular value decomposition and check that all singular values are above the noise. Finally, we compute the inverse covariance using $$\CovMatrix^{-1}_{*,ij}
=
\frac{\CorrMatrix^{-1}_{*,ij}}{\sqrt{\CovMatrix_{*,ii} \CovMatrix_{*,jj}}}
.$$
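A compact implementation of this inversion route: rescale to the correlation matrix, invert by SVD, undo the rescaling, and apply the Anderson–Hartlap factor. The toy covariance below is well conditioned, so the SVD step is exact; in practice one would also inspect the singular values as described above:

```python
import numpy as np

def inv_covariance(cov, n_real):
    """Invert an estimated covariance via its correlation matrix and apply
    the Anderson-Hartlap debiasing factor (N - p - 2) / (N - 1)."""
    n_bin = cov.shape[0]
    sig = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sig, sig)          # entries in [-1, 1]
    U, s, Vt = np.linalg.svd(corr)           # SVD-based inverse
    corr_inv = Vt.T @ np.diag(1.0 / s) @ U.T
    cov_inv = corr_inv / np.outer(sig, sig)
    return (n_real - n_bin - 2) / (n_real - 1) * cov_inv

cov = np.array([[4.0, 0.6],
                [0.6, 0.25]])
inv = inv_covariance(cov, n_real=200)

# up to the Hartlap factor this reproduces the exact inverse
factor = (200 - 2 - 2) / (200 - 1)
assert np.allclose(inv, factor * np.linalg.inv(cov))
```

Working through the correlation matrix keeps the condition number independent of the (possibly very different) magnitudes of the combined measurements.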
![Cumulative signal-to-noise at redshift $z=0$ as a function of the maximal mode $\kmax$ for the measure $X$—equal to the Fourier bispectrum, phase bispectrum, line correlation function or integrated bispectrum (clockwise, starting from the top left panel). In each panel, blue circles refer to the measured signal-to-noise for $X$, while black crosses represent the signal-to-noise for the power spectrum. We plot the signal-to-noise for the combination $P+X$, including cross-covariance, as red stars. The blue, black and red lines give the theoretical prediction using the Gaussian approximation and tree-level SPT. The percentage quoted in the bottom right corner gives the increase in signal-to-noise relative to the power spectrum alone at $\kmax = 0.3 \, h \, \Mpc^{-1}$.[]{data-label="fig:s2n"}](plots/s2n)
In Fig. \[fig:s2n\] we plot the resulting signal-to-noise measurements for the Fourier bispectrum, integrated bispectrum, line correlation function and the quantity $\Bphase$ used in the construction of the line correlation function and the modal bispectrum. (The signal-to-noise from $\Bphase$ and the reconstructed modal bispectrum give almost identical results.) We estimate $\Bphase$ using the prescription $$B_{\epsilon}(k_1,\,k_2,\,k_3) = \frac{B(k_1, k_2, k_3)}{\sqrt{P(k_1) P(k_2) P(k_3)}} .$$ Each panel of Fig. \[fig:s2n\] shows the cumulative signal-to-noise of the Fourier bispectrum or a proxy (blue circles), together with the power spectrum (black crosses) and their combination including the cross-covariance matrix (red stars). The first four data points in the $B$ and $\Bphase$ panels use a bin size $\Delta k = 2\kf$ in order to probe the low-$k$ regime. The remainder derive from the measurements presented in Section \[sec:comparison\] and use $\Delta k = 8 \kf$. Our measurements of the integrated bispectrum and line correlation function carry forward the binning procedure used in Section \[sec:comparison\]. The step-like structure that occurs for $P+\LCF$ is due to a mismatch of scales between the power spectrum and the bins of the line correlation function. In each panel, for comparative purposes, we plot lines of matching colour to show the signal-to-noise computed using a Gaussian approximation to the covariance matrix and tree-level SPT to evaluate any correlation measures it contains.
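The prescription for $\Bphase$ is a pointwise rescaling of the bispectrum by the power-spectrum amplitude. As a sketch (the power-law spectrum and the function interface are toy choices of ours):

```python
import numpy as np

def phase_bispectrum(B, P, k1, k2, k3):
    """Divide out the power-spectrum amplitude, leaving phase information."""
    return B / np.sqrt(P(k1) * P(k2) * P(k3))

P = lambda k: k**-2.0                 # toy power-law power spectrum
val = phase_bispectrum(10.0, P, 0.1, 0.1, 0.2)

# sqrt(P P P) = 1 / (0.1 * 0.1 * 0.2) = 500, so B_eps = 10 / 500 = 0.02
assert np.isclose(val, 0.02)
```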
First, we note that the Gaussian approximation overpredicts the signal-to-noise for each proxy $X$ and its combination $P+X$ with the power spectrum. This is consistent with the results reported by @Chan:2016ehg. The overprediction occurs because bins become coupled by non-linear evolution, and therefore do not provide independent information as the Gaussian approximation assumes. The effect can be quite severe: while the power spectrum signal-to-noise at $\kmax = 0.3 \, h \, \Mpc^{-1}$ is overpredicted by a factor of three, the impact on the Fourier bispectrum and its proxies is much larger. In these cases the overprediction ranges from a factor of $\sim 5$ or $8$ for $\ib$ and $\LCF$ up to more than an order of magnitude for the Fourier bispectrum. At smaller $\kmax$ the overprediction is milder, becoming significant only for $\kmax \gtrsim 0.1 \, h \, \Mpc^{-1}$.
The Fourier bispectrum, phase bispectrum, and line correlation function *individually* contribute $\sim 30\%$ of the signal-to-noise of $P(k)$ at $\kmax = 0.3 \, h \, \Mpc^{-1}$, while the integrated bispectrum achieves only $5\%$ of the $P(k)$ signal-to-noise. For the Fourier bispectrum, this result is consistent with @Chan:2016ehg.
However, for estimating parameter constraints from the joint combination of $P$ and $B$, or one of its proxies, the individual signal-to-noise contributed by one of these measurements is less important than whether it contains information that is not already present in the power spectrum. This is determined by the signal-to-noise of the combination $P+X$ compared to $P$ alone. The different proxies show significant variation in the improvement from use of $P+X$, which we indicate as a percentage in the bottom-right corner of each panel. Although $B$, $\Bphase$ and $\LCF$ individually carry roughly the same signal-to-noise, the uplift in $P+X$ varies from $\sim 91\%$ to $\sim 11\%$. Note that the signal-to-noise of $P+B$ receives a large improvement from the cross-covariance, which was ignored in @Chan:2016ehg.
The discrepancy in uplift between $B$ and $\Bphase$ is striking. If this discrepancy were to carry over to parameter constraints it would imply that the Fourier bispectrum carries *significantly* more constraining power than $\Bphase$, even though both statistics are equivalent in the approximation of Gaussian covariance. If true, this would be very surprising. We return to this question in Section \[sec:signoiseProxy\] after we have obtained forecast parameter uncertainties for $B$ and its proxies, which enable us to precisely quantify the constraining power of each statistic.
Parameter uncertainty forecasts {#sec:paramEstim}
===============================
In this section we collect our major results, which are Fisher forecasts of the error bars achievable on the parameter set $\theta_\alpha = ( \Omega_m, \Omega_b, w_0, w_a, \sigma_8, n_s, h )$ of Table \[tab:parameters\], based on a fiducial flat cosmology. We perform these forecasts with and without inclusion of the bias parameters $(b_1, b_2)$.
In Section \[sec:forecasting-method\] we summarize our implementation of the Fisher forecasting method, and in Section \[sec:information\_content\] we present and compare the forecasts from each proxy. By comparing forecasts with and without non-Gaussian covariances, and using different theoretical models to describe the dark matter density, we are able to characterize their influence on the final parameter constraints. These discussions appear in Sections \[sec:ng-cov\] and \[sec:theory-dep\], respectively. Finally, we return to the discussion of Section \[sec:covariance\] and examine to what extent the signal-to-noise provides a reliable metric by which to estimate improvements in parameter constraints (Section \[sec:signoiseProxy\]).
Forecasting method {#sec:forecasting-method}
------------------
The Fisher formalism can be used to forecast the precision with which cosmological parameters could be measured in a future survey. Consider a data vector $\bx$ containing measurements of any combination of statistical quantities. The likelihood function $\Likelihood(\B{\theta} \mid \bx)$ is defined to be the probability of the data given the parameters $\B{\theta}$, so $\Likelihood(\B{\theta} \mid \bx) = P(\bx \mid \B{\theta})$. Then the Fisher matrix $\FisherMatrix_{\alpha\beta}$ satisfies $$\FisherMatrix_{\alpha\beta}
\equiv
-
\left\langle
\frac{\partial^2 \ln \Likelihood(\B{\theta} \mid \bx)}
{\partial\theta_{\alpha}\,\partial\theta_{\beta}}
\right\rangle
.$$ The expected $1\sigma$ error on each parameter $\theta_\alpha$, marginalized over all other parameters, can be obtained from the diagonal elements of the inverse Fisher matrix using $\sigma^2(\theta_\alpha) = (\FisherMatrix^{-1})_{\alpha\alpha}$. To simplify the computation of $\FisherMatrix_{\alpha\beta}$ we make the assumption that the likelihood function is a multivariate Gaussian, $$\label{eq:paramEstim.likelihood}
\Likelihood
=
\frac{1}{\sqrt{(2\pi)^n|\CovMatrix|}}
\exp
\left[
-\frac{1}{2}(\bx-\B{\mu})^\transpose \CovMatrix^{-1} (\bx-\B{\mu})
\right]
,$$ where $\transpose$ denotes a matrix transpose and $|\CovMatrix| = \det \CovMatrix$ is the determinant of $\CovMatrix$. We have written the mean of the data vector as $\B{\mu} = \langle\bx\rangle$, and its covariance matrix is $\CovMatrix_{ij} = \langle x_i\,x_j\rangle - \mu_i\,\mu_j$. With these assumptions it can be shown that [@Tegmark:1996bz], $$\label{eq:paramEstim.fisherG}
\FisherMatrix_{\alpha\beta}
=
\frac{1}{2}
\Tr
\left[
\CovMatrix^{-1}
\frac{\partial\CovMatrix}{\partial\theta_{\alpha}}
\CovMatrix^{-1}
\frac{\partial\CovMatrix}{\partial\theta_{\beta}}
\right]
+
\frac{\partial \B{\mu}^\transpose}{\partial \theta_{\alpha}}
\CovMatrix^{-1}
\frac{\partial \B{\mu}}{\partial \theta_{\beta}}
.$$ The first term measures variation of the covariance matrix with respect to the parameters, which is often a smaller effect than the variation of the means represented by the second term. In the approximation that this first term may be neglected the Fisher matrix can be computed in terms of the inverse covariance matrix for the fiducial model. Our procedure to obtain this matrix from the simulation suite has already been described in Sections \[sec:comparison\] and \[sec:covariance\].
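When the covariance term is neglected, the Fisher matrix reduces to the quadratic form in the mean derivatives. A toy two-parameter sketch (amplitude $A$ and tilt $n$ of a power-law mean; all numbers are illustrative, not our survey configuration):

```python
import numpy as np

k = np.array([0.05, 0.1, 0.2, 0.3])
A, n = 2.0, 1.5
mu = A * k**n                              # toy mean data vector

dmu = np.stack([k**n,                      # d mu / d A
                A * k**n * np.log(k)])     # d mu / d n
cov = np.diag((0.05 * mu)**2)              # 5% errors, diagonal for the sketch

cinv = np.linalg.inv(cov)
F = dmu @ cinv @ dmu.T                     # F_ab = dmu_a^T C^{-1} dmu_b

# marginalized 1-sigma errors from the inverse Fisher matrix
sigma = np.sqrt(np.diag(np.linalg.inv(F)))

assert F.shape == (2, 2) and np.allclose(F, F.T)
assert np.all(sigma > 0)
```

Combining probes or redshift slices then amounts to summing their Fisher matrices before inversion, as done for the redshift slices below.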
The Fisher formalism depends explicitly on details of the survey under discussion, both through the specification of the data vector $\bx$—such as how many redshift bins are used and which Fourier configurations are included—and the properties of the covariance matrix $\CovMatrix$. In the following we adopt the parameters of an idealized survey of large-scale structure consisting of three independent redshift slices at $z=0$, $z=0.52$ and $z=1$. Each slice has volume $V = 3.375 \,h^{-3}\,\Gpc^3$ and a mode cutoff at $\kmax = 0.3 \, h \, \Mpc^{-1}$. The total Fisher matrix can be written as a sum of the Fisher matrix in each slice, $$\FisherMatrix^{\text{LSS}}_{\alpha\beta}
=
\FisherMatrix_{\alpha\beta}(z=0)
+ \FisherMatrix_{\alpha\beta}(z=0.52)
+ \FisherMatrix_{\alpha\beta}(z=1)
.$$ We assume that, in each redshift bin, the number density of galaxies is sufficiently high that the effect of shot noise is small. We do not include redshift-space distortions or the effect of complex survey geometry. In general, all of these effects will be significant for a realistic survey and cannot be neglected. However, in this paper our intention is to address the question of whether the proxies described in Section \[sec:estimators\] can be competitive with measurements of the Fourier bispectrum *in principle*. Survey-specific effects will generally reduce the number of configurations that can be measured, or increase the noise on those for which measurements are possible. This will typically weaken the performance of the proxies, meaning that their neglect gives us an estimate of the best-case scenario. While we do not anticipate that astrophysical or observational systematics will affect any one proxy more than the others, this is an interesting question to explore in future.
Each of the constraints we present includes a prior from the cosmic microwave background power spectrum. We implement this prior by adding a fourth Fisher matrix, $$\FisherMatrix^{\text{tot}}
=
\FisherMatrix^{\text{LSS}}
+ \FisherMatrix^{\text{CMB}}
.$$ Details of the computation of $\FisherMatrix^{\text{CMB}}$ for our choice of fiducial parameters were given by @Smith:2012uz.
Constraining power of the bispectrum and its proxies {#sec:information_content}
----------------------------------------------------
In this section we present our forecasts. To minimize modelling errors we construct the Fisher matrix for each proxy using quantities measured from simulation, except for derivatives with respect to the bias parameters which cannot be obtained in this way. For the Fourier bispectrum we compute these derivatives analytically by differentiating the one-loop power spectrum and the tree-level bispectrum. Once the derivatives have been obtained we replace occurrences of the dark matter power spectrum and bispectrum with their measured values. Our prescription for the proxies is similar, using the one-loop power spectrum to estimate derivatives of $P(k)$ and tree-level formulae together with the formulae of Section \[sec:modelling\] to estimate derivatives of the proxy.
We plot the forecast $1\sigma$ confidence contours in Fig. \[fig:fisher\_forecast\]. Each panel shows predicted joint constraints for a pair of parameters after marginalizing over all the others. The grey shaded region marks the constraint predicted from measurements of the power spectrum only, except for inclusion of the CMB prior that we apply to all estimates. The solid dark-blue line marks the constraint predicted from $P+\ib$; the long-dashed red line marks the constraint predicted from $P+\LCF$; the short-dashed light-blue line marks the constraint predicted from $P+\beta$; and the solid black line marks the constraint predicted from $P+B$. We summarize the marginalized $1\sigma$ error bars in Table \[tab:fisher\]. The value in parentheses following each uncertainty indicates the percentage improvement compared to use of $P(k)$ alone.
![Comparison of marginalized $1\sigma$ likelihood contours forecast from a combination of the power spectrum and one of the following 3-point measures: integrated bispectrum (dark blue, solid), line correlation function (red, long-dashed), modal decomposition (light blue, short-dashed) and Fourier bispectrum (black, solid). The grey shaded regions show the error ellipses for the power spectrum alone. All forecasts include priors from a Planck-like CMB experiment and use a cut-off scale $\kmax = 0.3 \,h\,\Mpc^{-1}$. The covariance matrices and parameter derivatives for the Fisher forecasts shown here are all derived from our simulation results in Section \[sec:comparison\].[]{data-label="fig:fisher_forecast"}](plots/fisher_forecast)
First consider the joint constraints from $P+B$ (solid black lines in Fig. \[fig:fisher\_forecast\]). These demonstrate that substantial improvements can be achieved compared to measurement of the power spectrum only. This is especially evident for $\sigma_8$ and the two bias parameters, for which the improvement is roughly $70\%$–$80\%$; compare the second column of Table \[tab:fisher\]. This is perhaps unsurprising: the bispectrum constrains a different combination of $\sigma_8$ and $b_1$ than the power spectrum, and therefore assists in breaking their degeneracy [@Fry1994; @Matarrese:1997sk]. Nevertheless, other parameters that do not participate in this degeneracy also experience improvements in the range $13\%$–$22\%$, with the exception of $\Omega_b$, which is already very well measured by the CMB prior, so large-scale structure measurements can add little new information. These conclusions are similar to those reported by @Sefusatti:2006pa, who suggested that inclusion of Fourier bispectrum measurements could reduce uncertainties on $\Omega_m$ and $\sigma_8$ by a factor in the range $1.5$ to $2$.
Next, the forecast for the integrated bispectrum (solid dark-blue lines) shows that it offers negligible improvement, of order $\sim 2\%$, in comparison to $P$ alone. This is consistent with the very small dependence on cosmological parameters discussed in Section \[sec:derivs\], and the low signal-to-noise obtained in Section \[sec:covariance\]. On the other hand, the line correlation function offers comparable constraints to the Fourier bispectrum for $\sigma_8$ and $b_1$, which receive improvements of $53\%$ and $68\%$, respectively. @Eggemeier:2016asq demonstrated that this occurs because the line correlation function is nearly independent of $b_1$ and therefore probes a different direction in parameter space than $P$ or $B$. Also, inclusion of $\LCF$ measurements increases sensitivity to the dark energy parameters $w_0$ and $w_a$ by $\sim 9\%$. These improvements are only marginally degraded compared to those from $P+B$, which are of order $15\%$.
Finally, Fig. \[fig:fisher\_forecast\] demonstrates that the modal bispectrum with $\nmax = 50$ (short-dashed light-blue lines) is predicted to yield error bars nearly equivalent to the Fourier bispectrum with $95$ triangles. Note especially that there is no sign of the significant difference in constraining power between $B$ and $\Bphase$—which is the quantity implicitly measured by $\beta$ with our choice of basis—that was suggested by our analysis of signal-to-noise in Section \[sec:covariance\]. We return to this apparent discrepancy in Section \[sec:signoiseProxy\] below. Just as important, the differences between the cases $\nmax = 10$ and $\nmax = 50$ are mostly negligible. Therefore, even with as few as $\nmax = 10$ modes, the modal decomposition retains nearly the full constraining power of the bispectrum. However, it should be remembered that Fig. \[fig:bspecrecon\] suggests the Fourier bispectrum reconstructed with so few modes will introduce more significant scatter. In a realistic analysis, these reconstruction errors could manifest themselves as a bias on the best-fit cosmological parameters. Unfortunately we cannot account for this bias in our Fisher analysis, but it deserves further investigation.
[cllXlXlXlXlX]{} & $P$ only & \multicolumn{2}{c}{$P+B$} & \multicolumn{2}{c}{$P+\beta\;(\nmax{=}50)$} & \multicolumn{2}{c}{$P+\beta\;(\nmax{=}10)$} & \multicolumn{2}{c}{$P+\LCF$} & \multicolumn{2}{c}{$P+\ib$}\
$\Omega_m$ & $0.00179$ & $0.00140$ & $(22\%)$ & $0.00141$ & $(21\%)$ & $0.00144$ & $(19\%)$ & $0.00172$ & $(4\%)$ & $0.00167$ & $(7\%)$\
$\Omega_b$ & $0.00015$ & $0.00014$ & $(5\%)$ & $0.00014$ & $(5\%)$ & $0.00014$ & $(4\%)$ & $0.00015$ & $(2\%)$ & $0.00015$ & $(1\%)$\
$w_0$ & $0.084$ & $0.070$ & $(16\%)$ & $0.068$ & $(19\%)$ & $0.069$ & $(17\%)$ & $0.076$ & $(9\%)$ & $0.082$ & $(2\%)$\
$w_a$ & $0.370$ & $0.315$ & $(15\%)$ & $0.306$ & $(17\%)$ & $0.310$ & $(16\%)$ & $0.338$ & $(9\%)$ & $0.360$ & $(3\%)$\
$\sigma_8$ & $0.0092$ & $0.0023$ & $(75\%)$ & $0.0024$ & $(74\%)$ & $0.0025$ & $(73\%)$ & $0.0043$ & $(53\%)$ & $0.0090$ & $(2\%)$\
$n_s$ & $0.00327$ & $0.00284$ & $(13\%)$ & $0.00281$ & $(14\%)$ & $0.00284$ & $(13\%)$ & $0.00303$ & $(7\%)$ & $0.00323$ & $(1\%)$\
$h$ & $0.00103$ & $0.00087$ & $(15\%)$ & $0.00086$ & $(16\%)$ & $0.00087$ & $(15\%)$ & $0.00095$ & $(7\%)$ & $0.00101$ & $(2\%)$\
$b_1$ & $0.0103$ & $0.0020$ & $(81\%)$ & $0.0021$ & $(79\%)$ & $0.0022$ & $(79\%)$ & $0.0032$ & $(68\%)$ & $0.0100$ & $(3\%)$\
$b_2$ & $0.0100$ & $0.0031$ & $(69\%)$ & $0.0031$ & $(69\%)$ & $0.0031$ & $(69\%)$ & $0.0085$ & $(15\%)$ & $0.0100$ & $(1\%)$\
\[tab:fisher\]
The strong degeneracy between $\sigma_8$ and $b_1$ can be broken by other means. For example, it is possible to use weak lensing measurements that probe the matter power spectrum directly. Given that inclusion of 3-point correlation data yields the largest improvements for $\sigma_8$ and the bias, it is worthwhile considering what improvements should be expected were the bias to be fixed by other cosmological observations.
In a scenario of this kind the power spectrum constraints would not be weakened by marginalization over the bias parameters, and therefore inclusion of 3-point correlation data would no longer yield such a dramatic improvement for $\sigma_8$. However, we still find encouraging improvements for many parameters. For example, inclusion of either Fourier or modal bispectrum measurements would decrease uncertainty on $\sigma_8$ by $\sim 25\%$ and all other parameters except $\Omega_b$ by $10\%$–$15\%$. Inclusion of $\LCF$ measurements would decrease uncertainty on $\sigma_8$ by $20\%$, on the dark energy parameters by $\sim 10\%$, and for all other parameters by $\lesssim 5\%$. We conclude that, even in the extreme case that $b_1$ and $b_2$ can somehow be determined exactly, inclusion of 3-point correlation data still provides valuable additional information.
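In the Fisher formalism the two scenarios differ only in how the bias block is treated: marginalizing means inverting the full matrix before reading off the diagonal, while fixing the bias means deleting the corresponding rows and columns before inversion. A minimal sketch, with a purely illustrative $3\times3$ Fisher matrix for $(\sigma_8, b_1, b_2)$:

```python
import numpy as np

# Toy 3x3 Fisher matrix for (sigma8, b1, b2); the values are illustrative only.
F = np.array([[5.0, 3.0, 1.0],
              [3.0, 4.0, 0.5],
              [1.0, 0.5, 2.0]])

# Marginalized error on sigma8: invert the full matrix, then take the diagonal.
sigma_marg = np.sqrt(np.linalg.inv(F)[0, 0])

# Error with b1, b2 fixed externally: drop their rows/columns before inverting.
sigma_fixed = np.sqrt(np.linalg.inv(F[:1, :1])[0, 0])  # = 1/sqrt(F[0,0])

print(sigma_marg, sigma_fixed)  # fixing nuisance parameters can only tighten
```

This is why fixing $b_1$ and $b_2$ strengthens the $P$-only baseline and thereby reduces the apparent gain from adding 3-point data.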
These Fisher forecasts should be interpreted with some care. As explained above, we do not include a number of astrophysical and observational effects that complicate the analysis of realistic galaxy survey data. These include redshift uncertainties, redshift-space distortions, irregular survey geometries and shot noise. In particular, for the forecasts presented here the effective shot noise is set by the number density $\bar{n} = 0.125 \, h^3 \, \Mpc^{-3}$ of particles in our simulation suite. This is substantially larger than the galaxy number densities that will be achieved by upcoming surveys. We return to this issue in Section \[sec:shotnoise\], where we discuss how our predictions would be modified by a more realistic number density.
Effect of non-Gaussian covariance and cross-covariance {#sec:ng-cov}
------------------------------------------------------
The non-Gaussian covariance measured in simulations differs from the Gaussian approximation in two ways: (1) it includes additional contributions to the variance of each bin from higher-order correlations, and (2) it adds or enhances coupling between different bins of a single proxy, and between bins of different proxies. These non-Gaussian corrections generally lead to weaker parameter constraints than forecasts constructed using the Gaussian approximation, which assumes that every bin contributes independent information. In this section we compare the relative impact of non-Gaussian covariance for the different proxies by contrasting Fisher forecasts made with and without its inclusion. We give results for the combinations $P+\ib$, $P+\LCF$, $P+\beta$ and $P+B$ and each choice of theoretical model—tree-level SPT, 1-loop SPT, or the halo model.
Fig. \[fig:fisher-ngcov\] shows the relative increase $\sigma_{NG}/\sigma_G -1$ in predicted uncertainty for each parameter when non-Gaussian contributions are included. To estimate $\sigma_G$ we use the expressions for Gaussian covariance given in Section \[sec:modelling\] with each quantity replaced by its value measured from our simulations. For example, to construct the Gaussian covariance for $\ib$ we use equation with $\sigma_L^2$ replaced by its measured value. We could equally well have constructed similar estimates using one of the theoretical models to calculate such values, but the result is not very different. The discussion in this section would continue to apply if we were to reproduce Fig. \[fig:fisher-ngcov\] using estimates generated by any of these prescriptions.
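The construction of $\sigma_{NG}/\sigma_G - 1$ can be sketched schematically: compute marginalized uncertainties once with the full covariance and once with a Gaussian-like version. In the toy numpy sketch below the Gaussian-like covariance simply keeps the diagonal, whereas in our analysis it is built from the analytic expressions of Section \[sec:modelling\] with measured ingredients; all numbers here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

nbin, npar = 12, 3
D = rng.normal(size=(nbin, npar))          # toy derivatives dS_i/dtheta_a

A = rng.normal(size=(nbin, nbin))
C_ng = A @ A.T + nbin * np.eye(nbin)       # toy "non-Gaussian" covariance (SPD)
C_g = np.diag(np.diag(C_ng))               # Gaussian-like: bin couplings dropped

def marg_sigma(cov):
    """Marginalized 1-sigma uncertainties for all parameters."""
    F = D.T @ np.linalg.inv(cov) @ D
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Analogue of the sigma_NG / sigma_G - 1 statistic plotted in the figure.
increase = marg_sigma(C_ng) / marg_sigma(C_g) - 1.0
print(increase)
```

With random stand-ins the sign of each entry is not meaningful; the point is only the mechanics of the comparison.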
The increase in uncertainty induced by inclusion of non-Gaussian effects depends on the measure of 3-point correlations used to generate constraints, the method used to estimate the Gaussian covariance matrix, and the parameter in question. In general we find that the Gaussian approximation underpredicts the uncertainty for the Fourier bispectrum more strongly than for its proxies. Note also that—although $P+\beta$ and $P+B$ yield nearly identical constraints when the non-Gaussian covariance is used, as described in Section \[sec:information\_content\]—the importance of the non-Gaussian covariance for these combinations is not the same. Since the quantity $\Bphase$ measured by $\beta$ is not the same as $B$, neglecting cross-covariance with $P$ (as the Gaussian covariance does) will leave out different information for $P+\beta$ compared to $P+B$.
Inclusion of non-Gaussian covariance impacts uncertainties for $w_0$, $w_a$ and $\sigma_8$ more significantly than the other parameters. This non-uniformity means that it is not obvious how inclusion of non-Gaussian covariance might impact constraints from 3-point correlations on further parameters not considered here. For instance, a number of authors have used Gaussian covariances to forecast future constraints on a primordial bispectrum generated by inflation; see @Scoccimarro:2003wn, @Sefusatti:2007ih, @Sefusatti:2011gt, @Baldauf:2016sjb, @Welling:2016dng and @Tellarini:2016sgp. It is not yet clear how these forecasts will change when more realistic non-Gaussian covariances are used.
![Increase in parameter uncertainties from non-Gaussian covariances, measured using $\sigma_{NG}/\sigma_{G}-1$, where $\sigma_{NG}$ ($\sigma_{G}$) is the predicted error bar using the non-Gaussian (Gaussian-like) covariance from simulations. Predictions that include (do not include) a marginalization over the bias parameters are in blue-green (purple).[]{data-label="fig:fisher-ngcov"}](plots/Fig9_Gaussian_vs_non-Gaussian){width="\textwidth"}
![Improvement in parameter uncertainties from the inclusion of cross-covariance, measured using $\sigma_{NG-\text{no}-CC}/\sigma_{NG-\text{with}-CC}-1$, where $\sigma_{NG-\text{with}-CC}$ is the error bar predicted using non-Gaussian covariance measured from simulations and $\sigma_{NG-\text{no}-CC}$ is the error bar predicted from the same covariance matrix, except with cross-covariances between $P$ and each 3-point statistic set to zero. Predictions that include (do not include) a marginalization over the bias parameters are in blue-green (purple).[]{data-label="fig:fisher-ngcov-cc"}](plots/Fig10_cross-covariance){width="\textwidth"}
In Fig. \[fig:fisher-ngcov-cc\] we summarize the influence of cross-covariance between $P$ and the 3-point measures by comparing constraints using the full non-Gaussian covariance to constraints where the cross-covariance has been set to zero. We find that inclusion of cross-covariances *reduces* the predicted uncertainties for nearly all parameters and choices of combination $P+X$, whether or not we marginalize over galaxy bias. In the few cases where inclusion of cross-covariance does not reduce the uncertainties (e.g. constraints on $\Omega_m$ from $P+B$ and $P+\beta$), the predicted error bar is weakened by less than $12\%$ relative to the error bar without cross-covariance. Overall, we find that ignoring cross-covariances can overestimate uncertainties by up to $\sim 40\%$ when we do not marginalize over the bias, and by $40\%$–$70\%$ for the special case of bispectrum constraints on the bias parameters themselves.
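Operationally, removing the cross-covariance amounts to zeroing the off-diagonal blocks that link the $P$ bins to the 3-point bins before inverting the joint matrix. A toy sketch of this comparison (random stand-in derivatives and covariance, not our measured ones):

```python
import numpy as np

rng = np.random.default_rng(2)

n_p, n_x, npar = 8, 8, 3                   # P bins, 3-point bins, parameters
n = n_p + n_x
D = rng.normal(size=(n, npar))             # toy derivatives

A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)                # toy full covariance (SPD)

C_nocc = C.copy()
C_nocc[:n_p, n_p:] = 0.0                   # zero the P x (3-point) block
C_nocc[n_p:, :n_p] = 0.0                   # ...and its transpose

def marg_sigma(cov):
    F = D.T @ np.linalg.inv(cov) @ D
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Analogue of sigma(no CC)/sigma(with CC) - 1: positive entries mean the
# cross-covariance tightened the constraint.
improvement = marg_sigma(C_nocc) / marg_sigma(C) - 1.0
print(improvement)
```

Note that zeroing the off-diagonal blocks of a symmetric positive-definite matrix leaves a block-diagonal matrix that is still positive definite, so both inversions are well defined.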
This reduction of uncertainties due to inclusion of cross-covariances may be surprising. While we have not explicitly identified the source of the improved constraining power, this is not a new feature of Fisher forecasts using non-Gaussian covariances. For example, a number of authors using cross-correlations between cluster counts, weak lensing power spectra and the weak lensing bispectrum have found that parameter constraints can improve when cross-covariances between strongly-coupled measurements are included [@Takada:2007fq; @Sato:2013mq; @Kayo:2012nm]. But it is also possible that our improvements are partly due to the galaxy biasing model we have chosen. A simulation of halos, rather than dark matter alone, could be used to verify the effect when simultaneously constraining both cosmological parameters and galaxy bias.
The conclusion of this discussion is that an accurate estimate for the covariance matrix, including non-Gaussian contributions and off-diagonal terms, is important if we wish to obtain reliable constraints. Unfortunately, this is especially true for the Fourier bispectrum for which the Gaussian approximation most significantly underestimates the true parameter uncertainties. This implies that surveys aiming to generate constraints from inclusion of $B$ measurements cannot evade the computational difficulties associated with estimating their covariance matrix.
To mitigate these difficulties we could consider use of $P+\beta$ rather than $P+B$. As we have seen in Section \[sec:information\_content\], these combinations yield nearly equivalent constraints using $95$ Fourier configurations and $50$ modal coefficients respectively, and therefore the modal decomposition makes the information content of the bispectrum more accessible by reducing the size of the covariance matrix needed to obtain it. We consider the efficiency with which each proxy can compress the information carried by $B$ in Section \[sec:sufficient-statistic\].
Theory-dependence of the forecasts {#sec:theory-dep}
----------------------------------
![Improvements in parameter uncertainties from the addition of 3-point statistics are shown as bars with height $\sigma(P)/\sigma(P+X)$, the ratio of parameter errors from $P$ only and the combination $P+X$. The labels at the top of each column indicate whether bias parameters are included or excluded, and whether Gaussian or non-Gaussian covariances are used. Each group of four bars corresponds to a different choice of theoretical model, and the colour of each bar indicates the $P+X$ combination. We note that, since the tree-level power spectrum does not depend on $b_2$, for the two tree-level bar groups in the last row, the bar heights measure $\sigmamax/\sigma(P+X)$, where $\sigmamax$ is the maximum error on $b_2$ among the four $\sigma(P+X)$ values. []{data-label="fig:fisher-biggrid"}](plots/Fig11_big_grid){width="80.00000%"}
In Section \[sec:information\_content\] we have presented our Fisher forecasts based on simulated data, and in Section \[sec:ng-cov\] we have discussed the influence of non-Gaussian covariance and cross-covariances. These results enable us to assess the information content carried by the Fourier bispectrum and its proxies, but the question of how easily these statistics can be deployed remains open. In particular, we would like to know whether the use of simulated data is essential, or whether any of the models described in Section \[sec:modelling\] are sufficient. In this section we study the dependence of our forecasts on the choice of theoretical model used to estimate the derivatives $\partial \B{\mu}/\partial \theta_\alpha$ in equation .
First, we consider whether there is a model that provides a clear best-match to the forecast using simulated data. Fig. \[fig:fisher-biggrid\] compares the forecasts for each parameter using different prescriptions for the covariance matrix and for different choices of theoretical model, with marginalization over the bias included or excluded. The bar heights represent the reduction in the predicted uncertainty provided by a given combination, relative to the base model of power spectrum data only combined with a CMB prior. The results of Section \[sec:information\_content\] are labelled ‘sim’. Unfortunately, for each combination $P+X$ there is no single choice of theoretical model yielding forecasts that provide the best match to the ‘sim’ outcome for all parameters—with or without marginalization over bias.
For example, consider the combination $P+B$ in the first column of Fig. \[fig:fisher-biggrid\]. This summarizes forecasts generated by including non-Gaussian covariance and marginalization over the bias. For $\sigma_8$ it is 1-loop SPT that gives the best match to the ‘sim’ result, but for the linear bias parameter $b_1$ the best match comes from tree-level SPT.
Alternatively, one could ask whether any one model provides uniformly conservative or uniformly optimistic forecasts. If so, that model could be used to estimate upper or lower limits on the uncertainty for any chosen parameter. But Fig. \[fig:fisher-biggrid\] demonstrates that there are no models with such properties. For example, focusing again on the first column, there is no single choice of theoretical model for $P+B$ that forecasts the largest or smallest improvement for all parameters.
![Sensitivity factors, defined as the ratio between the largest and smallest forecast parameter uncertainty among the three theoretical models, for each $P+X$ combination. The forecasts compared here include bias parameters and use non-Gaussian covariances. []{data-label="fig:fisher-theory-sens"}](plots/Fig13_sensitivity_to_modelling){width="55.00000%"}
![Fractional difference in predicted uncertainties induced by theoretical modelling of derivatives (orange) or by using a Gaussian approximation to the covariance (blue). (See text for details of how the fractional differences are defined.)[]{data-label="fig:fisher-ngcov-vs-theory"}](plots/Fig12_error_from_models_vs_Gaussian_cov){width="\textwidth"}
Next, we study the variation in forecasts for the Fourier bispectrum and its proxies when we change the model used to compute $\partial \B{\mu} / \partial \theta_\alpha$. To judge the level of sophistication required of these models, we need to know which of these statistics (if any) are especially sensitive, or especially immune, to theoretical mis-modelling. We measure this dependence by a *sensitivity factor*, which we define to be the ratio between the largest and smallest forecast uncertainties taken over the models of Section \[sec:modelling\]. A sensitivity factor close to unity indicates that a forecast uncertainty depends only weakly on the choice of theoretical model, while a large value indicates that the model has a strong influence on the final outcome.
We plot these sensitivity factors in Fig. \[fig:fisher-theory-sens\], computed with inclusion of all bias parameters and using non-Gaussian covariances. Therefore the sensitivity factor solely reflects the variation in uncertainty produced by different choices for theoretical model. We conclude that there is no single measure of 3-point correlations that consistently yields the largest or smallest sensitivity to variations in modelling. Therefore, there is apparently no single combination $P+X$ that should be preferred to minimize the effect of theory errors on inferred parameter constraints.
Neither of these criteria provides a rationale to prefer a particular choice of theoretical model. Nevertheless, we do find some general trends. Irrespective of theoretical model, we find the largest reductions in parameter uncertainties when the bias is constrained simultaneously with the cosmological parameters. Also, the Fourier bispectrum and modal bispectrum consistently offer the most significant improvements compared to $P$-only measurements, with very similar predicted uncertainties. The line correlation function achieves moderate improvement compared to $P$-only, while the integrated bispectrum has very weak constraining power—at least for the parameter set we consider. We conclude that $P+B$ or $P+\beta$ should be preferred for constraints on parameters, with $P+\beta$ offering similar information at reduced computational cost as discussed at the end of Section \[sec:ng-cov\].
Finally, we consider the relative importance of non-Gaussian covariance and theoretical modelling for obtaining quantitatively accurate forecasts. In Fig. \[fig:fisher-ngcov-vs-theory\] we show the fractional difference in Fisher forecasts induced by variation of theoretical model (orange bars) and use of the Gaussian approximation (blue bars). To quantify the significance of theoretical modelling we plot $\max(|\sigma_{NG,i}/\sigma_{NG}(\text{sim}) -1|)$, where $i \in \{ \text{tree}, \text{1-loop}, \text{halo} \}$. Therefore larger orange bars reflect more significant deviation from the simulated forecast due to theoretical uncertainty. Meanwhile we quantify the role of the covariance matrix by plotting $|\sigma_G(\text{sim})/\sigma_{NG}(\text{sim})-1|$, so increasing blue bars show that the Gaussian approximation generates more significant errors in the forecast.
Fig. \[fig:fisher-ngcov-vs-theory\] shows that the impact of theoretical uncertainty for $P+\beta$ and $P+B$ is generally less significant than neglect of non-Gaussian covariance, whether or not we marginalize over the bias. In contrast, for $P+\LCF$ the effect of modelling nearly always dominates because of the difficulties with the halo model discussed in Section \[sec:derivs\]. For $P+\ib$ the non-Gaussian covariance plays an important role if the bias parameters are not included, but theoretical modelling dominates when they are.
On balance, these results indicate that our forecasts are slightly less sensitive to theory error than to the approximation of Gaussian covariance. This could be because the inverse covariance weighting suppresses contributions from the non-linear regime where the theoretical predictions are most discrepant. But the difference is not large: the average variation in our predicted uncertainties from $P+B$ and $P+\beta$ due to theory modelling is $36\%$, whereas the variation due to Gaussian covariances is $49\%$. Therefore, we conclude that both issues must be addressed in order to obtain quantitatively accurate results.
Signal-to-noise as a proxy for the information content {#sec:signoiseProxy}
------------------------------------------------------
It is now necessary to address the question of why the large discrepancy between the signal-to-noise uplifts of $B$ and $\Bphase$ (equivalently $\beta$) observed in Section \[sec:covariance\] did not translate into significant differences between the forecast parameter uncertainties of Section \[sec:information\_content\].
Consider a vector of values $S$ combining measures $P$ and $X$ of the 2- and 3-point correlation data, respectively, as defined below equation . For a given parameter $\theta$ the reduction in uncertainty compared to measurements from $P$ alone can be estimated in the Fisher framework by $$\label{eq:fisherratio}
\frac{\FisherMatrix_\theta(S)}{\FisherMatrix_\theta(P)}
=
\sum_{i,j}
\frac{\partial S_i}{\partial \theta}
\CovMatrix_{ij}^{-1}
\frac{\partial S_j}{\partial \theta}
\Big/
\sum_{i,j}
\frac{\partial P_i}{\partial \theta}
(\CovMatrix^\text{P})_{ij}^{-1}
\frac{\partial P_j}{\partial \theta}
.$$ To avoid ambiguity we use the notation $\CovMatrix^\text{P}$ to denote the covariance matrix of the power spectrum *only*. Meanwhile, the increase in signal-to-noise in the same scenario is given by $$\label{eq:signoiseratio}
\frac{(\mathcal{S}/\mathcal{N})^2_{S}}{(\mathcal{S}/\mathcal{N})^2_P}
=
\sum_{i,j}
S_i
\CovMatrix_{ij}^{-1}
S_j
\Big/
\sum_{i,j}
P_i
(\CovMatrix^\text{P})_{ij}^{-1}
P_j
.$$ The uplift in signal-to-noise is often taken as an approximation to the reduction in parameter uncertainty, which avoids the need to compute $\partial S_i / \partial \theta$. As we have seen in Section \[sec:derivs\], these derivatives can be rather fragile and are susceptible to significant errors caused by theory mis-modelling. Unfortunately, when applied to $S = P+B$ and $S = P+\beta$ our analysis demonstrates that the ratios $\FisherMatrix_\theta(P+B) / \FisherMatrix_\theta(P)$ and $\FisherMatrix_\theta(P+\Bphase) / \FisherMatrix_\theta(P)$ are nearly equal, whereas the same ratios constructed using $\mathcal{S}/\mathcal{N}$ are very discrepant. Therefore we must conclude that improvements in signal-to-noise cannot always be interpreted as a predictor of the improvement in Fisher information.
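The two ratios above can be evaluated side by side once the stacked data vector, its parameter derivative, and the joint covariance are in hand. In the toy numpy sketch below (random stand-ins for $P$, $B$, and their covariance, not our measurements) both ratios exceed unity, but they are built from different vectors and need not track each other:

```python
import numpy as np

rng = np.random.default_rng(3)

n_p, n_b = 6, 10                           # toy P bins and bispectrum bins
n = n_p + n_b
S = np.abs(rng.normal(size=n)) + 1.0       # toy stacked data vector (P, B)
dS = rng.normal(size=n)                    # toy dS_i/dtheta for one parameter

A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)                # toy joint covariance (SPD)
C_p = C[:n_p, :n_p]                        # power-spectrum-only block

iC, iC_p = np.linalg.inv(C), np.linalg.inv(C_p)

# Analogues of eqs. (fisherratio) and (signoiseratio).
fisher_ratio = (dS @ iC @ dS) / (dS[:n_p] @ iC_p @ dS[:n_p])
sn_ratio = (S @ iC @ S) / (S[:n_p] @ iC_p @ S[:n_p])

# The two ratios are built from different vectors (derivatives versus the
# signal itself), so there is no reason for them to agree in general.
print(fisher_ratio, sn_ratio)
```

Both ratios are guaranteed to be at least unity, because adding data can never decrease either quadratic form, but their magnitudes are independent.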
First consider the Fisher matrix. Suppose we perform a redefinition so that $S_i \rightarrow S'_i = S'_i(S_j)$, where $S'_i$ may be an arbitrary nonlinear function of the original measurements. For example, the transformation from $B$ to $\Bphase$ is of this type. The derivative $\partial S_i / \partial \theta_\alpha$ transforms ‘contravariantly’ on its index $i$, in the sense $\partial S'_i / \partial \theta_\alpha =
\sum_m (\partial S'_i / \partial S_m) (\partial S_m / \partial \theta_\alpha)$. Meanwhile, the covariance matrix becomes $$\CovMatrix^S_{ij} \rightarrow \CovMatrix^{S'}_{ij}
=
\langle (S'_i - \bar{S}'_i) (S'_j - \bar{S}'_j) \rangle
=
\sum_{m,n}
\frac{\partial S'_i}{\partial S_m}
\frac{\partial S'_j}{\partial S_n}
\CovMatrix^S_{mn}
+
\cdots ,
\label{eq:covmatrix-tensor}$$ where ‘$\cdots$’ denotes terms involving higher order correlations that we have not written explicitly. Provided these are small compared to the $\CovMatrix^S_{mn}$ term, equation shows that the covariance matrix also transforms ‘contravariantly’, and therefore that its inverse transforms ‘covariantly’. Subject to these approximations we conclude that the Fisher matrix should be roughly *invariant*. This agrees with our observation that $\FisherMatrix_\theta(P+B)$ and $\FisherMatrix_\theta(P+\Bphase)$ are nearly equal, demonstrated numerically in Table \[tab:fisher\].
Now consider the signal-to-noise. Since $S_i$ has neither a covariant nor a contravariant transformation law, the combination $\sum_{i,j} S_i \CovMatrix^{-1}_{ij} S_j$ appearing in the signal-to-noise will typically *not* be invariant. Therefore different choices $S_i$ and $S'_i$ may yield inequivalent results for $\mathcal{S}/\mathcal{N}$. For example, we have verified that using $P + \ln B$ predicts a significant increase in the signal-to-noise compared to $P+B$, whereas their Fisher matrices continue to agree. In Table \[tab:unmargin\_improve\] we summarize the improvement in unmarginalized constraints from the addition of $B$ or $\Bphase$. This demonstrates that, empirically, the increase in signal-to-noise from $\Bphase$ provides a more accurate estimate of the gain in Fisher information than that from $B$. This property holds for both proxies of $\Bphase$, namely the modal bispectrum and the line correlation function.
This outcome is not inconsistent with the result that $B$ and $\Bphase$ show an equivalent uplift in signal-to-noise in the Gaussian approximation. In this case the covariance matrix for $\Bphase$ is $\CovMatrix_{ij}^{B_\epsilon}=\BispectrumDegeneracy \Kronecker_{i j}$, where the constant $\BispectrumDegeneracy$ takes the values $1$, $2$ or $6$ for scalene, isosceles and equilateral configurations, respectively, as described in Section \[sec:modelling\]. In the same approximation the covariance matrix for the Fourier bispectrum is $\CovMatrix_{ij}^{B}=\BispectrumDegeneracy P(k_{i_1})P(k_{i_2})P(k_{i_3}) \Kronecker_{i j}$. Therefore the signal-to-noise contributions of $B$ and $\Bphase$ are identically equal: $$B_i (\CovMatrix^B)_{ij}^{-1} B_j
=
B_{\epsilon i} (\CovMatrix^{\Bphase})_{ij}^{-1} B_{\epsilon j}
=
\frac{1}{\BispectrumDegeneracy}
\frac{B_i^2 \Kronecker_{ij}}{P(k_{i_1})P(k_{i_2})P(k_{i_3})}
.$$ In the Gaussian approximation the power spectrum is an independent source of information, which explains the agreement. However, once off-diagonal contributions in the covariance matrix are included, $B$ and $P$ are no longer independent and non-linear combinations may give very different results for the signal-to-noise.
Our signal-to-noise for $P+B$ differs from that reported by @Chan:2016ehg because we include cross-covariance (Section \[sec:covariance\]). Since empirically the signal-to-noise of $P+\Bphase$ gives a more accurate estimate of the information gain from 3-point correlation data, the $\sim 26\%$ expected improvement from the 3-point information in $\Bphase$ is in good agreement with the $\sim 30\%$ improvement suggested by @Chan:2016ehg. However, the details of these calculations are rather different. The unmarginalized constraints in Table \[tab:unmargin\_improve\] and most of the marginalized constraints in Table \[tab:fisher\] support this conclusion. For $\sigma_8$, $b_1$ and $b_2$, for which the effect in Table \[tab:fisher\] is substantially larger than $\sim 30\%$, we ascribe the improvement to degeneracies of $P$ that are broken by 3-point correlation data.
  $\Omega_m$   $\Omega_b$    $w_0$      $w_a$      $\sigma_8$     $n_s$      $h$        $b_1$      $b_2$
  ------------ ------------- ---------- ---------- -------------- ---------- ---------- ---------- ----------
  $12.9\%$     $19.4\%$      $26.0\%$   $27.0\%$   $26.4\%$       $15.1\%$   $15.6\%$   $42.4\%$   $43.4\%$
\[tab:unmargin\_improve\]
Discussion {#sec:discussion}
==========
Compression and efficiency of the Fourier bispectrum proxies {#sec:sufficient-statistic}
------------------------------------------------------------
In an ideal survey aiming to measure the Fourier bispectrum we should clearly choose a bin width $\Delta k$ that is sufficiently small to reproduce all small-scale features of interest. However, because the number of Fourier configurations in a volume with mode cut-off $\kmax$ scales as $\sim (\kmax / \Delta k)^3$ this task will quickly become computationally expensive. And, as we have emphasized several times, a more serious problem is that we must estimate and invert the covariance matrix for all these measurements. This requires us to perform at least as many simulations as the number of configurations that we retain.
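The counting behind this scaling can be made explicit. The sketch below uses one common binning convention, bin centres at $k_i = (i - \tfrac{1}{2})\Delta k$ with the triangle condition imposed on the centres; this is an assumption about the convention, though with it nine bins reproduce the $95$ configurations used in this work and six bins give the $34$ configurations of the broader binning considered in Section \[sec:sufficient-statistic\]:

```python
def n_configs(nbin):
    """Count closed bispectrum triangles (i >= j >= k) from nbin k-bins.

    Assumes bin centres at (i - 0.5) * dk and closure on the centres; this
    convention is an assumption, not necessarily the paper's exact binning.
    """
    count = 0
    for i in range(1, nbin + 1):
        for j in range(1, i + 1):
            for k in range(1, j + 1):
                if (j - 0.5) + (k - 0.5) >= (i - 0.5):  # triangle closure
                    count += 1
    return count

# Doubling the number of bins multiplies the count by roughly 2^3 = 8.
print(n_configs(9), n_configs(6))  # 95 and 34 configurations
```

The count grows as $\sim (\kmax/\Delta k)^3$, which is what makes a finely binned Fourier bispectrum covariance so expensive to estimate.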
In this section we consider how well this large number of Fourier configurations can be compressed by the proxies described in Section \[sec:estimators\]. Suppose that available resources limit the number of simulations that can be performed in such a way that we can estimate an accurate covariance matrix for $\sim 30$ bins of the Fourier bispectrum or one of its proxies, in combination with another $30$ measurements of the power spectrum $P(k)$. Among the measures of 3-point correlations that we consider, is there a preferred choice that provides optimal constraints on our set of cosmological parameters? If so, this measure would provide the most successful compression of the full Fourier bispectrum into a manageable number of measurements.
To this end we combine the power spectrum bins with a single additional configuration from the Fourier bispectrum or one of its proxies, and compute the corresponding Fisher matrix (as in Section \[sec:forecasting-method\]) using values for $\partial \B{\mu} / \partial \theta_\alpha$ estimated from our simulation suite. The four left panels of Fig. \[fig:data\_compression\] show the reduction in predicted uncertainty—defined as the shrinkage of the error bar, $1-\sigma_{P+X} / \sigma_P$—for the representative parameters $\sigma_8$ (solid lines) and $w_0$ (dotted lines) for each of the possible bins. Using these reductions as a measure of the information stored in each bin we conclude that most of the information carried by the Fourier bispectrum $B$ is contained in small-scale triangles (towards larger triangle index). A similar conclusion applies for the line correlation function, for which significant reductions occur only for the first $\sim 12$ bins, corresponding to the range of scales $10\,h^{-1}\,\Mpc$ – $50\,h^{-1}\,\Mpc$. This is reasonable, because the line correlation is constructed to give a negligible signal on large scales. Finally, while the modal decomposition exhibits some variability, smaller mode numbers typically provide larger gains. The integrated bispectrum shows consistently weak improvements over all bins.
![*Left panels*: decrease in forecast parameter uncertainty (improvement) from combining the power spectrum with a single bin of a 3-point correlation measure, compared to the power spectrum alone. The Fisher matrix was computed from the non-Gaussian covariance matrix and the measured parameter derivatives $\partial\B{\mu} / \partial \theta_\alpha$. Solid (dotted) lines show $\sigma_8$ ($w_0$) with all other parameters (including bias) marginalized. *Right panel*: cumulative improvement from adding the $30$ best bins. Arrows indicate the maximal improvement obtained from the Fourier bispectrum with $\Delta k = 8 \kf$, while stars show the uncertainty for $\sigma_8$ using Fourier bispectrum measurements with the larger bin width $\Delta k = 12 \kf$. []{data-label="fig:data_compression"}](plots/data_compression)
Second, for each combination $P+X$ we identify a set of $30$ bins for $X$ that provide the largest improvements. Adding them cumulatively to the power spectrum, starting from the bin carrying most information, we obtain the plot on the right-hand side of Fig. \[fig:data\_compression\]. Both the line correlation function and the modal bispectrum converge rapidly to the maximal improvement available from the entire set of bins that we measure (this is $30$ bins for $\ell$ and $50$ modes for $\beta$—see Table \[tab:binning\]). For example, the line correlation is already within $2\%$ of the maximum after we have added $\sim 2$ bins, while only $\sim 5$ modes of $\beta$ are required to arrive at a similar value for the modal bispectrum. In comparison the Fourier bispectrum converges much more slowly to the maximum provided by the $95$ bins that we measure. This is especially evident for $\sigma_8$, for which the improvement from the Fourier bispectrum has not yet converged to its maximum value after the $30^{\mathrm{th}}$ bin. (For guidance, we mark this maximum value with black arrows on the plot.) However, it should be noted that our procedure to select the set of $30$ bins is not optimal because it does not account for covariances between them. By analysing random subsets of the $95$ possible bispectrum bins we find that faster convergence is possible, giving up to $\sim 90\%$ of the maximum reduction after $30$ bins.
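The greedy selection used here, ranking bins by their single-bin improvement and then adding them cumulatively, can be sketched as follows (random stand-in derivatives and covariance; the analysis in the text uses the measured ones). As noted above, this ranking ignores covariances between the selected bins and so is not optimal:

```python
import numpy as np

rng = np.random.default_rng(4)

n_p, n_x = 10, 30                          # power spectrum bins, 3-point bins
n = n_p + n_x
D = rng.normal(size=(n, 4))                # toy derivatives for 4 parameters
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)                # toy joint covariance (SPD)

def sigma(bins, par=0):
    """Marginalized uncertainty on parameter `par` from the listed bins."""
    idx = np.asarray(bins)
    F = D[idx].T @ np.linalg.inv(C[np.ix_(idx, idx)]) @ D[idx]
    return np.sqrt(np.linalg.inv(F)[par, par])

p_bins = list(range(n_p))
sig_p = sigma(p_bins)

# Rank each 3-point bin by the improvement it yields on its own...
gain = [1.0 - sigma(p_bins + [b]) / sig_p for b in range(n_p, n)]
order = np.arange(n_p, n)[np.argsort(gain)[::-1]]

# ...then add the ranked bins cumulatively, one at a time.
sigmas = [sigma(p_bins + list(order[: m + 1])) for m in range(n_x)]
print(sig_p, sigmas[-1])                   # information only accumulates
```

Because the Fisher information of a nested data vector can only grow, the cumulative uncertainties are non-increasing; what the ranking controls is how quickly they converge.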
Rather than reducing the number of configurations by restriction to a subset, we might alternatively increase the width of each bin. The same volume of data would then be compressed into fewer measurements. To compare the performance of this strategy we repeat the analysis described above for the Fourier bispectrum with a broader bin width $\Delta k = 12 \kf$, which gives $34$ rather than $95$ Fourier configurations with $\kmax = 0.3 \, h \, \Mpc^{-1}$. We plot the corresponding cumulative reduction in uncertainty for $\sigma_8$ as star-shaped symbols in the right-hand panel of Fig. \[fig:data\_compression\]. After $30$ bins the improvement is similar to that obtained from the modal bispectrum, with the same caution about rate of convergence due to correlation between bins. Therefore—rather surprisingly—in this case we find no clear preference for the bin width $\Delta k = 8\kf$ or $\Delta k = 12 \kf$, except that $\Delta k = 8\kf$ is more computationally expensive, and it is more difficult to find an optimal subset of configurations. However, it is not clear whether this conclusion would survive in a more realistic analysis, where the signal can be noisy and demands finer binning. To explore these issues in detail would require a more comprehensive analysis.
This analysis agrees with the conclusions of Sections \[sec:ng-cov\] and \[sec:theory-dep\], and supports the modal bispectrum as a good choice of proxy for 3-point correlation data. In addition to the advantages discussed in previous sections, it requires the fewest bins and loses almost no information.
These results could be modified in cases where it is possible to compute a covariance matrix for $\gg 30$ configurations of the Fourier bispectrum, as done (for example) by @Gil-Marin:2016wya. However, the mock catalogues used to produce such covariance matrices are often generated using perturbation theory and therefore are likely to be inaccurate on small scales. We expect that it is a better strategy to use fewer bins and obtain high-quality measurements of the covariance matrix from catalogues generated using full simulations. The significant benefit of the modal decomposition is that it facilitates construction of the smallest set of bins that still carry a majority of the information.
Finally, although the line correlation function provides weaker improvements than either the Fourier bispectrum or modal bispectrum, it has the advantage that it clearly separates the scales carrying useful information from those that do not—all bins with $r \gtrsim 50 \, h^{-1} \, \Mpc$ have negligible impact. It is also possible that the performance of the line correlation function could be improved by relaxing the condition of strict collinearity, which would increase the range of Fourier configurations it is able to aggregate.
Shot Noise {#sec:shotnoise}
----------
![Comparison of the Fisher forecasts with shot noise corresponding to $\bar{n}_1 =10^{-2}\,h^3\,\Mpc^{-3}$ (orange) and $\bar{n}_2 = 10^{-4}\,h^3\,\Mpc^{-3}$ (blue). The pale ellipses correspond to uncertainties using the power spectrum only, while the dark ellipses show the predicted uncertainty when 3-point correlation information is included. []{data-label="fig:fisher-shotnoise"}](plots/fisher_forecast_shot_noise){width="80.00000%"}
Galaxies are discrete, point-like tracers of the underlying matter fluctuations, and therefore samples of their abundance are affected by shot noise. This noise is expected to impact higher-order statistics more significantly than the power spectrum [@Sefusatti:2004xz; @Chan:2016ehg]. Up to this point our analysis has implicitly used the low effective shot noise provided by our simulations, and therefore there is some concern that our forecasts will degrade with larger, more realistic noise. In this section we perform an approximate analysis of this degradation and quantify its effect on our predicted parameter uncertainties.
Assuming Poisson statistics, we may correct for shot-noise contributions to the observed discrete power spectrum $\hat{P}^{\text{disc}}$ and bispectrum $\hat{B}^{\text{disc}}$ by subtraction [@Peebles1980; @Matarrese:1997sk],
$$\begin{aligned}
\hat{P}(k) & = \hat{P}^{\text{disc}}(k) - \frac{1}{\bar{n}} , \label{eq:Pshot}\\
\hat{B}(k_1,k_2,k_3) & = \hat{B}^{\text{disc}}(k_1,k_2,k_3) - \frac{1}{\bar{n}}\Big[\hat{P}(k_1)+\hat{P}(k_2)+\hat{P}(k_3)\Big] - \frac{1}{\bar{n}^2}
.
\label{eq:Bshot}\end{aligned}$$
Here, $\bar{n}$ is the average number density of the discrete tracers. We use the upper and lower limits $\bar{n}_1=10^{-2} \, h^3 \, \Mpc^{-3}$ and $\bar{n}_2 = 10^{-4} \, h^{3}\,\Mpc^{-3}$ to represent optimistic and pessimistic levels of shot noise for upcoming galaxy surveys. To measure $\hat{P}^{\text{disc}}$ and $\hat{B}^{\text{disc}}$ we downsample the number of particles in our simulation suite by selecting random subsets matching the desired average density $\bar{n}$, and use these to compute corrected estimators $\hat{P}$ and $\hat{B}$ from equations \[eq:Pshot\] and \[eq:Bshot\]. Although this downsampling procedure will not introduce exactly Poisson shot noise, we have checked that it is nearly Poisson by verifying that the corrected quantities agree with measurements made using the full set of particles to within a few percent. Strictly speaking, the covariance matrix of $\hat{P}$ and $\hat{B}$ obtained in this way is the matter covariance with Poisson shot noise, but for our fiducial biasing model we may interpret it as the covariance of the galaxy power spectrum and bispectrum with Poisson shot noise. We use this covariance, leaving the parameter derivatives unchanged from Section \[sec:information\_content\], to compute the Fisher matrices.
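The corrections in equations \[eq:Pshot\] and \[eq:Bshot\] are simple subtractions; a minimal sketch (hypothetical function names, with scalar values standing in for binned measurements):

```python
def correct_power(p_disc, nbar):
    """Subtract the Poisson shot-noise term 1/nbar from the measured
    discrete power spectrum (eq. eq:Pshot)."""
    return p_disc - 1.0 / nbar

def correct_bispectrum(b_disc, p1, p2, p3, nbar):
    """Subtract the Poisson shot-noise terms from the measured discrete
    bispectrum (eq. eq:Bshot); p1..p3 are the already-corrected power
    spectra at the triangle's three wavenumbers."""
    return b_disc - (p1 + p2 + p3) / nbar - 1.0 / nbar ** 2
```

Note the ordering: the power spectrum must be corrected first, since the corrected $\hat{P}(k_i)$ enter the bispectrum correction.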
We plot forecasts using the fiducial number densities $\bar{n}_1$ and $\bar{n}_2$ in Fig. \[fig:fisher-shotnoise\], with orange ellipses corresponding to the lower noise level (higher number density) and blue ellipses corresponding to the higher noise level (lower number density). The orange ellipses show good agreement with the forecasts for the idealized scenario of Section \[sec:information\_content\], indicating that relatively little degradation occurs. However, it is unlikely that such high number densities will be attained in the near future. By contrast the blue ellipses represent a conservative view of what should be possible.
If shot noise degrades the signal from 3-point correlations more strongly than for 2-point correlations then the fractional improvement from its inclusion should be smaller for low $\bar{n}$. In terms of Fig. \[fig:fisher-shotnoise\] this means that the difference between the light and dark blue ellipses should be smaller than the difference between the light and dark orange ellipses. This effect is visible for some parameters, such as $\sigma_8$. However, in the case of $\Omega_m$, $w_0$ and $w_a$ the fractional improvement from inclusion of 3-point correlation data is *larger* at lower $\bar{n}$. The effect for $w_0$ and $w_a$ is particularly striking. Using all particles in our simulations, the addition of $B$ data decreased measurement uncertainties by $16\%$ and $15\%$, respectively (see Table \[tab:fisher\]). With $\bar{n} = 10^{-4} \, h^3 \, \Mpc^{-3}$ we find improvements of $41\%$ and $36\%$. We interpret this to mean that recovery of cosmological information in the presence of shot noise depends significantly on cross-covariances between measurements. These cross-covariances themselves depend on the shot noise and can partially subtract its effect.
Conclusions {#sec:conclusions}
===========
As large-scale structure surveys grow in size and sophistication, the rapidly approaching cosmic variance limit on 2-point statistics encourages us to look to higher-order correlations, such as the 3-point function, as a new source of information. Previously, @Sefusatti:2006pa suggested that considerable additional constraining power could be achieved by combining the power spectrum and bispectrum. On the other hand, the signal-to-noise analysis given by @Chan:2016ehg pointed to no more than modest improvements. Our results show that there is a significant benefit from the inclusion of 3-point correlation data, but this benefit must be balanced against the challenges it brings.
In this paper, we focus on two particular challenges: (1) The number of measurable configurations of the Fourier bispectrum is generally very large unless one coarse-grains the data. We have investigated whether the *modal bispectrum*, *line correlation function* and *integrated bispectrum* can act as ‘proxies’ for the Fourier bispectrum, compressing its information into fewer configurations without unacceptable information loss. (2) Bispectrum observations are difficult to model to the same accuracy as the power spectrum. Errors in clustering predictions from theoretical models, in addition to assumptions about covariances and noise properties, generally propagate into inaccurate error bars or a bias on inferred parameters. We have quantified how our forecasts are influenced by both the assumption of Gaussian covariance and theoretical errors.
To do so we have measured the power spectrum, Fourier bispectrum and each of its proxies from a suite of 200 dark matter simulations at redshifts $z=0$, $z=0.52$ and $z=1$ to obtain fully non-Gaussian covariances and cross-covariances. We measure the dependence of each measurement on the cosmological parameters $\{\Omega_m,\Omega_b, w_0, w_a, \sigma_8, n_s, h\}$ using additional simulations displaced from our fiducial model. We assume a local Lagrangian biasing scheme that includes two bias parameters, $\{b_1,b_2\}$. Using all these components, in combination with theoretical predictions for each proxy from tree-level and 1-loop SPT and the halo model, we have conducted a signal-to-noise analysis and implemented the Fisher forecasting method for an idealized survey scenario. Our main results on the constraining power and future viability of each measure of 3-point correlations are as follows.
Section \[sec:information\_content\] presented our main results. Our forecasts show that inclusion of the Fourier bispectrum offers significant improvements over the power spectrum alone, with $\BigO(10\%-30\%)$ improvement on cosmological parameter constraints, and up to $\BigO(80\%)$ improvement when it is used to break degeneracies with the bias parameters. The *modal bispectrum* offers an attractive alternative, achieving equivalent constraints with as few as 10 modes. However, up to 50 modes may be necessary to reconstruct the Fourier bispectrum to within $\lesssim 10\%$ accuracy on individual triangle configurations. The *line correlation function* appears to be somewhat less effective, although a future extension to sample more Fourier configurations by relaxing the requirement of strict collinearity may improve its performance. The *integrated bispectrum* offers little constraining power for our set of cosmological parameters. It is sensitive to highly squeezed triangles, whereas the gravitational bispectrum peaks on equilateral triangles. This property of $\ib$ is a disadvantage for our purposes, but may be an advantage if one is interested in studying squeezed-mode primordial non-Gaussianity with minimal degeneracies.
In Section \[sec:sufficient-statistic\], we explored how the total constraining power of each measure is distributed over the total number of data bins. While the Fourier bispectrum and modal bispectrum give nearly equivalent parameter constraints when $\sim 30$ bins are used, the modal method converges to its full constraining power with a smaller subset of bins. We conclude that the modal bispectrum provides more efficient access to the information carried by 3-point correlations.
We note that more realistic survey scenarios—for example, accounting for noisy data—may require finer binning. Increasing the binning resolution of the Fourier bispectrum by a factor of $n$ in each $k$-dimension corresponds to a factor $\BigO(n^3)$ increase in configurations. The number of simulations required to accurately capture their covariance would increase similarly. If the number of modal coefficients required to capture fine features of the bispectrum does not grow so dramatically, it is possible that the modal bispectrum could accumulate an even larger advantage compared to the Fourier bispectrum.
In Sections \[sec:covariance\], \[sec:information\_content\] and \[sec:signoiseProxy\] we argue that use of the signal-to-noise ratio to predict the constraining power of 3-point correlation data can be misleading. We show that the bispectrum and phase bispectrum—which is probed by the modal bispectrum—give significantly different signal-to-noise ratios, but still yield nearly identical forecasts. As we describe in Section \[sec:signoiseProxy\], for the scenarios considered in this paper, the improvement shown by these forecasts is empirically better predicted by the signal-to-noise ratio of the phase bispectrum $\Bphase$ than the Fourier bispectrum $B$. The $\sim \BigO(30\%)$ uplift in signal-to-noise from the phase bispectrum translates to the same improvement in cosmological parameter constraints, except for those where degeneracies play a significant role. As we explain in Section \[sec:signoiseProxy\], while this improvement is numerically consistent with @Chan:2016ehg, our procedure is rather different. For a general parameter set and a given measure of the 3-point correlations, the signal-to-noise will not typically give an accurate estimate of its constraining power.
Accounting for non-Gaussian covariance is essential for optimally constraining cosmological parameters. In Section \[sec:ng-cov\] we showed that the Fourier bispectrum estimator is particularly sensitive to the covariance: our predicted uncertainties may be nearly a factor of 4 too small if the Gaussian approximation is used. At the same time, we find that the non-Gaussian cross-covariance between the power spectrum and the Fourier bispectrum or its proxies generally results in parameter errors that are $\BigO(10\%)$ *smaller* than if cross-covariances are ignored.
Our results in Section \[sec:theory-dep\] indicate that the impact of theory errors on our predicted uncertainties is smaller than the impact of assuming Gaussian covariance, although both approximations change the forecasts by $\sim 30\%$ to $50\%$ on average. In this paper we measure the effect of theoretical uncertainty by comparing forecasts using SPT and the halo model to forecasts derived purely from measurements. Our approach differs from that of @Baldauf:2016sjb and @Welling:2016dng, who incorporated estimates of the theory error into their Fisher forecasts by taking the error in each data bin to be the sum of statistical and theoretical errors.
To assess the impact of shot noise, in Section \[sec:shotnoise\] we down-sample our simulation suite to averaged number densities of $\bar{n}=10^{-2}\,h^3\,\Mpc^{-3}$ and $10^{-4}\,h^3\,\Mpc^{-3}$, and compute forecasts using non-Gaussian covariance matrices that include low and high levels of Poisson shot noise. Contrary to naïve expectations, we find that the addition of 3-point correlation information can become *more* significant at high levels of shot noise owing to the non-trivial dependence of the cross-covariance on $\bar{n}$. This appears most significant for the dark energy parameters $w_0$ and $w_a$, and suggests that 3-point correlation information may be crucial to distinguish between dark energy models. More generally, our result implies that 3-point correlation measurements may yield significant additional constraining power even when shot noise levels are high.
To make robust inferences with 3-point correlation information, future surveys will require refinement of the methods we have considered here. For example, while we have demonstrated that the modal decomposition provides efficient data compression of the matter bispectrum in an idealized survey, it will be important to verify that this remains true when halo distributions, redshift-space distortions and the complex noise properties of realistic surveys are introduced. We have emphasized the importance of including non-Gaussian covariances and theory uncertainties in our forecasts. Realistic analyses will likely require more efficient ways to obtain covariances, and a consistent approach to inclusion of theory errors in software pipelines. Achieving each of these aims will be an important milestone ahead of upcoming surveys of large-scale structure.
Acknowledgements {#sec:acknow .unnumbered}
================
The work reported in this paper has been supported by the European Research Council under the European Union’s Seventh Framework Programme (FP/2007–2013) and ERC Grant Agreement No. 308082 (JB, DR, DS). This work was supported by the Science and Technology Facilities Council \[grant numbers ST/L000652/1, ST/P000525/1\] (DS, RES). AE acknowledges support from the UK Science and Technology Facilities Council via Research Training Grant \[grant number ST/M503836/1\], and thanks Roman Scoccimarro and the Physics Department of New York University for hospitality during the final phases of this project. DR acknowledges useful conversations on the normalization of the modal decomposition with Hemant Shukla. JB would like to thank Benjamin Joachimi for useful discussions.
To assist those wishing to replicate or extend our results, we have made available measurements of the power spectrum, bispectrum, integrated bispectrum, line correlation function, and modal bispectrum coefficients that have been extracted from our simulation suite.\
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/)\
$\copyright$ University of Sussex 2017. Contributed by Donough Regan & Alexander Eggemeier.\
Please cite the `zenodo.org` DOI (<https://zenodo.org/record/438187>) and this paper.
Construction of the modal decomposition {#app:modal}
=======================================
Construction of the $Q$-basis {#app:poly}
-----------------------------
The goal of the modal decomposition is to write the estimated bispectrum in the form $$\label{eq:weighting}
w(k_1,k_2,k_3) \hat{B}(k_1,k_2,k_3)
=
\sum_{n =0}^{\nmax-1}
\hat{\beta}^Q_n Q_n(k_1,k_2,k_3)
,$$ where $w(k_1,k_2,k_3)$ is an arbitrary weighting function and the $Q_n$ represent basis modes with coefficients $\beta_n^Q$. The $Q_n$ then contain all the information about the bispectrum. They should span the possible functions of the wavenumbers $k_i$ that satisfy the triangle condition, $\sum_i k_i \geq 2 \max\{k_1,k_2,k_3\}$ (denoted by $\TriangleRegion$ in the main text), but are otherwise arbitrary. For our concrete numerical results we choose a basis built out of one-dimensional polynomials $q_p(x)$ which are orthonormal within $\TriangleRegion$ [@Fergusson:2009nv]. More precisely, in a unit box, we define the integral $\mathcal{T}[f]=\int_{\TriangleRegion} f(x) \, \D{x}\, \D{y}\, \D{z}$, where $x,y,z$ satisfy the triangle condition within the box $x,y,z\in
[0,1]$. Evaluating the $y$ and $z$ integrals, one finds that $\mathcal{T}[f]= 0.5 \int_0^1 f(x) \, x(4-3 x) \, \D{x}$. This allows one to define an inner product, $\langle f , g\rangle \equiv \mathcal{T}[f g]$ (which differs from the inner product defined in the main text), and to set up a generating function for the one-dimensional polynomials $q_n$, using the moments $w_n=\mathcal{T}[x^n]$, in the form of a secular determinant $$q_n(x)
=
\frac{1}{\mathcal{N}}
\begin{vmatrix}
1/2 & 7/24 &\dots & w_n \\
7/24 & 1/5&\dots & w_{n+1} \\
\dots & \dots &\dots & \dots\\
w_{n-1}& w_{n}& \dots & w_{2n-1}\\
1& x& \dots & x^n\\
\end{vmatrix}$$ where $\mathcal{N}$ is chosen such that $\langle q_n , q_m\rangle =
\Kronecker_{n m}$. The basis functions $Q_n(x,y,z)$ are defined as symmetric combinations of these one-dimensional polynomials, in the form $$\label{eq:basis_expan}
Q_n(x,y,z)
=
\frac{1}{6}
\Big[
q_r(x) q_s(y) q_t(z) +q_r(x) q_t(y) q_s(z)+\dots+q_t(x) q_s(y) q_r(z)
\Big]
\equiv
q_{\{ r}(x) q_s(y) q_{t\}}(z)
,$$ with $n$ representing the triple of indices $\{ r,s,t \}$. After choosing an ordering of these triples we can exchange $n$ for a simpler integer label. For a particular realization with wavenumbers in the range $\kmin$ and $\kmax$ we use the notation $Q_n(k_1,k_2,k_3)$ to represent $Q_n(x_1,x_2,x_3)$, where $x_i = (k_i
- \kmin)/(\kmax - \kmin)\in [0,1]$.
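The moments evaluate to $w_n = \tfrac{1}{2}\left(\tfrac{4}{n+2} - \tfrac{3}{n+3}\right)$, reproducing the entries $1/2$, $7/24$, $1/5$, … of the determinant. As a cross-check, the same orthonormal polynomials can be constructed by Gram–Schmidt with respect to $\langle f, g\rangle$; the sketch below is our own illustration, not the code used in this work:

```python
import math

def moment(m):
    # w_m = T[x^m] = (1/2) Int_0^1 x^m * x(4 - 3x) dx = (1/2)(4/(m+2) - 3/(m+3))
    return 0.5 * (4.0 / (m + 2) - 3.0 / (m + 3))

def inner(f, g):
    # <f, g> = T[f g] for polynomials stored as coefficient lists (index = power)
    return sum(fi * gj * moment(i + j)
               for i, fi in enumerate(f) for j, gj in enumerate(g))

def q_polys(nmax):
    """Build the orthonormal 1-d polynomials q_0, ..., q_{nmax-1} by
    Gram-Schmidt on the monomials 1, x, x^2, ... under <.,.>."""
    basis = []
    for n in range(nmax):
        q = [0.0] * n + [1.0]  # start from x^n
        for p in basis:        # subtract projections onto earlier q_m
            c = inner(q, p)
            q = [qi - c * (p[i] if i < len(p) else 0.0) for i, qi in enumerate(q)]
        norm = math.sqrt(inner(q, q))
        basis.append([qi / norm for qi in q])
    return basis
```

The resulting polynomials satisfy $\langle q_n, q_m \rangle = \Kronecker_{nm}$ by construction, matching the normalization $\mathcal{N}$ chosen in the determinant form.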
Calculation of the modal coefficients using the voxel method {#app:voxel}
------------------------------------------------------------
In Section \[sec:modalEst\] we explained how estimation of the modal coefficients from a simulation or from data reduces to a single 3-dimensional integral over a product of three Fourier transforms $\Mfactor{n}(\bx)$. If the bispectrum is given analytically, however, we may instead compute the inner product directly as a sum over the volumes of all ‘voxels’ within a cubic grid with linear spacing along each axis $(k_1, k_2, k_3)$.
To calculate the volume of each voxel we relabel the coordinates as $(x, y, z)$, rescaled so that $0 \leq x, y, z \leq 1$. We associate each of the 8 possible vertices of the voxel with a value $p_1,\dots, p_8$, given by the product of $Q_m$ and $wB$ (or $Q_m$ and $Q_n$ in the case of $\llangle Q_m | Q_n \rrangle$) at that vertex. Finally, we define an interpolation function $f$ by writing $$f(x,y,z)=a_1+a_2 x + a_3 y + a_4 z + a_5 x y + a_6 x z + a_7 y z + a_8 x y z .$$ The coefficients $a_i$ may be obtained analytically in terms of the $p_i$. We assign the volume of the voxel to be zero if fewer than four of its vertices satisfy the triangle condition, while if all $8$ vertices satisfy the triangle condition, its volume is $$\int_{0 \leq x,y,z \leq 1} f(x,y,z) \, \D{x} \, \D{y} \, \D{z}
=
\frac{1}{8} \sum_{i=1}^8 p_i
,$$ as expected. For intermediate cases we write the volume in the form $$\int_{\mathcal{C}} f(x,y,z) \, \D{x} \, \D{y} \, \D{z}
,$$ where $\mathcal{C}$ indicates that only those points satisfying the triangle condition and forming a closed volume within the voxel should be included. In the case of $4$ points there are 3 possible volumes given by $$\mathcal{C}_a^{(4)}=\{x,y,z\mid x+1 \leq y+z\}\,,\quad\mathcal{C}_b^{(4)}
=\{x,y,z \mid y+1 \leq x+z\}\,,\quad\mathcal{C}_c^{(4)}=\{x,y,z \mid z+1 \leq x+y\}\,.$$ For $5$ points the only possibility is that $x+y+z \geq 2 \max\{x,y,z\}$, while for $6$ and $7$ points there are again $3$ possibilities, given respectively by, $$\begin{split}
\mathcal{C}_a^{(6)} & =\{x,y,z \mid x \leq y+z , y\leq x+z\} ,
&
\mathcal{C}_b^{(6)} & =\{x,y,z \mid x \leq y+z , z\leq x+y\} ,
&
\mathcal{C}_c^{(6)} & =\{x,y,z \mid y \leq x+z ,z \leq x+y \} ,
\\
\mathcal{C}_a^{(7)} & =\{x,y,z \mid x \leq y+z\} ,
&
\mathcal{C}_b^{(7)} & =\{x,y,z \mid y \leq x+z\} ,
&
\mathcal{C}_c^{(7)} & =\{x,y,z \mid z \leq x+y\} .
\end{split}$$ In each case the analytic form of the integral in terms of the vertex values $p_i$ can be calculated easily. Computation of each integral using this voxel method is highly accurate and efficient.
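The full-voxel result can be verified numerically: a tensor-product midpoint rule integrates the trilinear interpolant $f$ exactly, so the integral over the unit voxel reduces to the mean of the eight vertex values. The following self-contained check is our own illustration, with arbitrary vertex values:

```python
def trilinear(p, x, y, z):
    """Evaluate the trilinear interpolant through the 8 vertex values
    p[(i, j, k)], i, j, k in {0, 1}, at a point inside the unit voxel."""
    return sum(p[(i, j, k)]
               * (x if i else 1 - x) * (y if j else 1 - y) * (z if k else 1 - z)
               for i in (0, 1) for j in (0, 1) for k in (0, 1))

def voxel_integral(p, n=20):
    """Midpoint-rule integral of the interpolant over the full unit voxel.
    Since f is multilinear this is exact and equals (1/8) * sum_i p_i."""
    h = 1.0 / n
    return sum(trilinear(p, (a + 0.5) * h, (b + 0.5) * h, (c + 0.5) * h)
               for a in range(n) for b in range(n) for c in range(n)) * h ** 3
```

The partial-voxel regions $\mathcal{C}^{(4)}$–$\mathcal{C}^{(7)}$ above would simply restrict the integration domain, with the same interpolant.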
[^1]: In field theory this is the ‘operator product expansion’.
[^2]: Here, ‘most relevant’ is defined by the features of the bispectrum for which we wish to search. For example, inspection of the formulae appearing in Sections \[sec:predict-ib\]–\[sec:predict-lcf\] below shows that both the integrated bispectrum and line correlation function can be regarded as instances of this type of expansion, with $Q_n$ adjusted to prioritize specific groups of Fourier configurations. For these cases, however, the resulting $Q$-basis is not complete. In this paper we distinguish the modal decomposition, for which the $Q$-basis is intended to be complete, from proxies such as $\ib$ and $\LCF$ which are intended to be projections.
[^3]: In the remainder of this paper we assume it is understood that we are dealing with the discrete density field whenever we refer to measured quantities, and drop the label ‘disc’.
[^4]: This strategy is less successful for the line correlation function. In this case the fiducial value could be very close to zero on some scales. In turn, this produces large errors in the logarithmic derivative. Therefore, for the line correlation function, we estimate the linear derivative $\D{\LCF} / \D{\theta_\alpha}$ instead.
[^5]: Although the Anderson–Hartlap prescription is simple to apply, it has been pointed out by @Sellentin2016 that this rescaling simply broadens the Gaussian likelihood of the data. These authors argued that the distribution of the data is more accurately modelled by a $t$-distribution.
---
abstract: 'In this paper, we are interested in exploiting textual and acoustic data of an utterance for the speech emotion classification task. The baseline approach models the information from audio and text independently using two deep neural networks (DNNs). The outputs from both the DNNs are then fused for classification. As opposed to using knowledge from both the modalities separately, we propose a framework to exploit acoustic information in tandem with lexical data. The proposed framework uses two bi-directional long short-term memory (BLSTM) networks for obtaining hidden representations of the utterance. Furthermore, we propose an attention mechanism, referred to as multi-hop attention, which is trained to automatically infer the correlation between the modalities. The multi-hop attention first computes the relevant segments of the textual data corresponding to the audio signal. The relevant textual data is then applied to attend to parts of the audio signal. To evaluate the performance of the proposed system, experiments are performed on the IEMOCAP dataset. Experimental results show that the proposed technique outperforms the state-of-the-art system by 6.5% relative improvement in terms of weighted accuracy.'
address: |
$^{1}$Department of Electrical and Computer Engineering,\
Seoul National University, Seoul, Korea\
$^{2}$Idiap Research Institute, Martigny, Switzerland\
$^{3}$Adobe Research, San Jose, USA\
[ mysmilesh@snu.ac.kr, byuns9334@snu.ac.kr, subhadeep.dey@idiap.ch, kjung@snu.ac.kr ]{}
bibliography:
- 'ICASSP19.bib'
title: 'Speech Emotion Recognition Using Multi-Hop Attention Mechanism'
---
speech emotion recognition, computational paralinguistics, deep learning, natural language processing
Introduction {#sec:intro}
============
In this era of high-performance computing, human-computer interaction (HCI) has become pervasive. To enrich the user experience, the system is often required to detect human emotion and produce a response with the proper emotional context [@picard2003affective; @busso2013toward]. The first step in such an HCI system involves building a component that recognizes emotion from the speech utterance. A speech emotion recognition system aims to identify an audio recording as belonging to one of a set of categories, such as happy, sad, angry or neutral. Besides HCI, the output of an emotion recognition engine is beneficial in the paralinguistic area as well [@kolakowska2014emotion]. In this paper, we build a speech emotion recognition system that uses acoustic and textual information in tandem.
Various approaches to address emotion recognition have been investigated in the literature. Most of the techniques involve extracting low-level or high-level acoustic features for this task [@han2014speech]. In emotion recognition, the lexical content of the audio recording is an important source of information that is usually ignored. For example, the presence of words such as “gorgeous” and “stunning” in the utterance would indicate that the person is happy. Recently, researchers have also explored the application of the textual content of the speech signal for this task. In [@schuller2004speech], frame and supra-segmental level features (such as pitch and spectral contours) are derived from the speech signal. Textual information is used by spotting keywords that emphasize the emotional state of the speaker. The work in [@jin2015speech] also presents an approach to exploit the acoustic and lexical content. In particular, they explored conventional acoustic features from the speech signal, while the textual information is derived from a bag-of-words representation.
Recently, deep neural networks (DNNs) have been shown to provide good results for modeling acoustic and textual information for emotion identification. In [@schuller2004speech], textual and acoustic information of the utterance are used by a DNN to obtain hidden feature representations for both modalities. These features are then concatenated to represent the utterance and subsequently used to classify the emotion of the speaker. Experimental evidence shows the potential of the approach. In our previous work [@yoon2018multimodal], we applied a dual RNN in order to obtain a richer representation by blending the content and acoustic knowledge.
In this paper, we improve upon our earlier work by incorporating an attention mechanism in the emotion recognition framework. The proposed attention mechanism is trained to exploit both textual and acoustic information in tandem. We refer to this attention method as the multi-hop. The multi-hop attention is designed to select relevant parts of the textual data, which is subsequently applied to attend to the segments of the audio signal for classification. We hypothesize that this approach would automatically detect the segments that contain information relevant for the task. The emotion recognition experiments are performed on the standard IEMOCAP dataset [@busso2008iemocap]. Experimental results indicate that the proposed approach outperforms the state-of-the-art system published in the literature on this database by 6.5% relative improvement in terms of weighted accuracy.
This paper is organized as follows. Section \[sec:related\] provides a brief literature review on speech emotion recognition. In Section \[sec:model\], we describe the baseline bidirectional recurrent encoder model considered in this paper and then introduce the proposed technique in detail. The experimental setup and a discussion of the results achieved by the various systems are presented in Section \[sec:experiments\]. Finally, the paper is concluded in Section \[sec:conclusions\].
Related work {#sec:related}
============
Along with models based on classical algorithms such as the support vector machine (SVM), hidden Markov model (HMM) and decision tree [@seehapoch2013speech; @schuller2003hidden; @lee2011emotion], various neural network architectures have recently been introduced for the speech emotion recognition task. For example, convolutional neural network (CNN)-based models were trained on spectrograms or audio features such as mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs) [@bertero2017first; @badshah2017speech; @aldeneh2017using]. More complex models such as [@satt2017efficient] were designed to better learn the nonlinear decision boundaries of emotional speech and achieved the best recorded performance among audio-only models on the IEMOCAP dataset [@busso2008iemocap]. Several neural network models with attention mechanisms have been proposed to efficiently focus on prominent parts of the speech and learn temporal dependencies within the whole utterance [@li2018attention; @mirsamadi2017automatic].
Multi-modal approaches using acoustic features and textual information have also been investigated. [@schuller2004speech] identified emotional key phrases and the salience of verbal cues from both phoneme sequences and words. Recently, [@yoon2018multimodal; @cho2018deep] combined acoustic information and conversation transcripts using neural network-based models to improve emotion classification accuracy. However, none of these studies applied an attention mechanism over the audio and text modalities in tandem for a contextual understanding of the emotion in the audio recording.
Model {#sec:model}
=====
This section describes the methodologies that are applied to the speech emotion recognition task. We start by introducing a baseline model, the bidirectional recurrent encoder, for encoding the audio and text modalities individually. We then propose an approach to exploit both audio and text data in tandem. In this technique, multi-hop attention is proposed to obtain relevant parts of audio and text data automatically.
Bidirectional Recurrent Encoder
-------------------------------
Motivated by the architectures used in [@yoon2018multimodal; @mirsamadi2017automatic; @wang2016audio], we train a recurrent encoder to predict the categorical class of a given audio signal. To model the sequential nature of the speech signal, we use a bidirectional recurrent encoder (**BRE**), as shown in Figure \[fig\_BRE\]. We also add a residual connection to the model to promote convergence during training [@wang2016recurrent]. A sequence of feature vectors is fed as input to the **BRE**, leading to the hidden states of the model given by the following equations:
$$\begin{aligned}
& \overrightarrow{\textbf{h}}_{t} = f_{\theta}( \overrightarrow{\textbf{h}}_{t-1}, \overrightarrow{\textbf{x}}_{t}) + \overrightarrow{\textbf{x}}_{t},\\
& \overleftarrow{\textbf{h}}_{t} = f'_{\theta}( \overleftarrow{\textbf{h}}_{t+1}, \overleftarrow{\textbf{x}}_{t}) + \overleftarrow{\textbf{x}}_{t},\\
& \textbf{o}_{t} = [\overrightarrow{\textbf{h}}_{t} ;\overleftarrow{\textbf{h}}_{t}],\\
&\textbf{o}^{A}_{t}=[\textbf{o}_{t} ;\textbf{p}],
\end{aligned}
\label{eq_BRE}$$
where $f_{\theta}$, $f'_{\theta}$ are the forward and backward long short-term memory (LSTM) networks with weight parameters $\theta$, $\textbf{h}_t$ represents the hidden state at the *t*-th time step, and $\textbf{x}_t$ represents the *t*-th MFCC feature vector of the audio signal. The hidden representations ($\overrightarrow{\textbf{h}}_{t}$, $\overleftarrow{\textbf{h}}_{t}$) from the forward/backward LSTMs are concatenated to produce the feature $\mathbf{o}_t$. Following previous research [@yoon2018multimodal], we also concatenate a prosodic feature vector, $\textbf{p}$, with each $\textbf{o}_{t}$ to generate a more informative vector representation of the signal, $\textbf{o}^{A}_{t}$. Finally, an emotion class is predicted from the acoustic signal by applying a softmax function to the final hidden representation at the last time step, $\textbf{o}^{A}_{\text{last}}$. We refer to this model as **audio-BRE**, with the objective function as follows: $$\begin{aligned}
\hat{y}_{c} = \text{softmax}(~({\textbf{o}^{A}_{\text{last}})}^\intercal~\textbf{W}+\textbf{b}~), \\
\mathcal{L} = -\sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \log (\hat{y}_{i,c}),
\end{aligned}
\label{eq_BRE_loss}$$ where $y_{i,c}$ is the true label vector, and $\hat{y}_{i,c}$ is the predicted probability distribution from the softmax layer. The $\textbf{W}$ and the bias $\textbf{b}$ are learned model parameters. $C$ is the total number of classes, and $N$ is the total number of samples used in training.
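As a concrete illustration, the softmax prediction and the cross-entropy objective above can be sketched in NumPy. This is a minimal sketch with random stand-in values: the names `o_last`, `W`, and `b` mirror the symbols in the text, but the shapes and inputs are illustrative, not the model's actual dimensions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(o_last, W, b, y_onehot):
    """Predict class probabilities from the final hidden representation
    and compute the summed cross-entropy loss over N samples."""
    y_hat = softmax(o_last @ W + b)            # (N, C) probabilities
    return -np.sum(y_onehot * np.log(y_hat))   # scalar loss

rng = np.random.default_rng(0)
N, H, C = 4, 8, 4                     # samples, hidden size, classes
o_last = rng.standard_normal((N, H))  # stand-in for o^A_last
W = rng.standard_normal((H, C))
b = np.zeros(C)
y = np.eye(C)[[0, 1, 2, 3]]           # one-hot true labels
loss = cross_entropy(o_last, W, b, y)
```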
Next, we use the processed textual information as another modality for predicting the emotion class of a given signal. To obtain the textual hidden representation, $\textbf{o}_{t}^{T}$, we tokenize the transcript and feed it into the **BRE** in the same way that the acoustic signals are encoded by equation (\[eq\_BRE\]). We refer to this model as **text-BRE**. The training objective for the **text-BRE** is the same as that of the **audio-BRE** in equation (\[eq\_BRE\_loss\]).
Proposed Multi-Hop Attention
----------------------------
We propose a novel multi-hop attention (**MHA**) method to predict the importance of the audio and text modalities. Figure \[fig\_MHA\] shows the architecture of the proposed **MHA** model. Previous research treated multi-modal information independently within a neural network model, concatenating the features from each modality [@yoon2018multimodal; @tripathi2018multi]. In contrast, we propose a neural network architecture that exploits the information in each modality by extracting the relevant segments of the speech data using information from the lexical content (and vice versa).
First, the acoustic and textual data are encoded with the **audio-BRE** and **text-BRE**, respectively, using equation (\[eq\_BRE\]). We then consider the final hidden representation of the **audio-BRE**, $\textbf{o}^{A}_{\text{last}}$, as a context vector and apply an attention mechanism to the textual sequence, $\textbf{o}_{t}^{T}$. As this model uses a single attention step, we refer to it as **MHA-1**. The final hidden representation of the **MHA-1** model, $\textbf{H}$, is calculated as follows: $$\begin{aligned}
&a_i=\dfrac{\text{exp}({~(\textbf{o}^{A}_{\text{last}})}^\intercal~\textbf{o}^{T}_{i}~)}{\sum_{i} \text{exp}({~(\textbf{o}^{A}_{\text{last}})}^\intercal~\textbf{o}^{T}_{i}~)},~(i=1,...,t)
\\
&~\textbf{H}^1={\sum_{i}} a_{i}~{\textbf{o}^{T}_{i}},
~~\textbf{H}=[\textbf{H}^1 ;\textbf{o}^{A}_{\text{last}}].
\end{aligned}
\label{eq_hot_1}$$ The representation $\textbf{H}^1$ (equation \[eq\_hot\_1\]) is a new hidden representation of the textual information that takes the audio modality into account. With this information, we apply a second-hop attention step, referred to as **MHA-2**, to the audio sequence. The final hidden representation of the **MHA-2** model, **H**, is calculated as follows: $$\begin{aligned}
&a_{i}=\dfrac{\text{exp}(~(\textbf{H}^1)^\intercal~\textbf{o}^{A}_{i}~)}{\sum_i \text{exp}(~(\textbf{H}^1)^\intercal~\textbf{o}^{A}_{i}~)},~(i=1,...,t)
\\
& ~\textbf{H}^2=\sum_{i} a_{i}~{\textbf{o}^{A}_{i}},
~~\textbf{H}=[\textbf{H}^1 ;\textbf{H}^2],
\end{aligned}
\label{eq_MHA_2}$$ where $\textbf{H}^2$ is a new hidden representation of the audio information that takes into account the textual representation obtained from **MHA-1**.
Similarly to **MHA-1**, we further apply a third-hop attention step to the textual sequence, referred to as **MHA-3**, using the new audio hidden representation $\textbf{H}^2$ (equation \[eq\_MHA\_2\]). The final hidden representation of the **MHA-3** model, **H**, is calculated as follows: $$\begin{aligned}
&a_{i}=\dfrac{\text{exp}(~(\textbf{H}^2)^\intercal~\textbf{o}^{T}_{i})}{\sum_{i} \text{exp}(~(\textbf{H}^2)^\intercal~\textbf{o}^{T}_{i})},~(i=1,...,t)
\\
& ~\textbf{H}^3=\sum_{i} a_i~{\textbf{o}^{T}_{i}},
~~\textbf{H}=[\textbf{H}^3 ;\textbf{H}^2],
\end{aligned}
\label{eq_MHA_3}$$ where $\textbf{H}^3$ is the updated representation of the textual information, taking the audio modality into account once more.
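Each hop above is the same dot-product attention step applied with a changing context vector. The procedure can be sketched in NumPy as follows; the random encodings and the helper `attend` are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def attend(context, seq):
    """Dot-product attention: weight each time step of `seq` (T, H) by
    its softmax-normalized similarity to `context` (H,), then sum."""
    scores = seq @ context              # (T,) dot-product scores
    a = np.exp(scores - scores.max())   # stable softmax weights
    a = a / a.sum()
    return a @ seq                      # (H,) weighted sum

rng = np.random.default_rng(0)
T_a, T_t, H = 10, 6, 8                    # audio steps, text steps, hidden size
o_audio = rng.standard_normal((T_a, H))   # stand-in for o^A_t
o_text = rng.standard_normal((T_t, H))    # stand-in for o^T_t
audio_last = o_audio[-1]                  # stand-in for o^A_last

h1 = attend(audio_last, o_text)     # MHA-1: attend over text with audio context
h2 = attend(h1, o_audio)            # MHA-2: attend over audio with H^1
h3 = attend(h2, o_text)             # MHA-3: attend over text with H^2
H_final = np.concatenate([h3, h2])  # final representation for MHA-3
```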
In each case, the final hidden representation, $\textbf{H}$, is passed through the softmax function to predict the four-category emotion class. We use the same training objective as the **BRE** model in equation (\[eq\_BRE\_loss\]), and the predicted probability distribution over the target classes, $\hat{y}_{c}$, is computed as follows: $$\begin{aligned}
&\hat{y}_{c} = \text{softmax}((\textbf{H})^\intercal~\textbf{W}+\textbf{b}~),
\end{aligned}
\label{eq_MHA_loss}$$ where the projection matrix $\textbf{W}$ and bias $\textbf{b}$ are learned model parameters.
Experiments {#sec:experiments}
===========
Dataset and Experimental Setup
------------------------------
To train and evaluate our model, we use the Interactive Emotional Dyadic Motion Capture (IEMOCAP) [@busso2008iemocap] dataset, which includes five sessions of utterances between two speakers (one male and one female), for a total of 10 unique speakers. For consistent comparison with previous works [@yoon2018multimodal; @cho2018deep], all utterances labeled “excitement" are merged with those labeled “happiness". We assign a single categorical emotion to each utterance on which the majority of annotators agreed on the emotion label. The final dataset contains 5,531 utterances in total (1,636 *happy*, 1,084 *sad*, 1,103 *angry* and 1,708 *neutral*). In the training process, we perform 10-fold cross-validation, where 8, 1, and 1 folds are used as the training, development, and test sets, respectively.
Feature extraction and Implementation details
---------------------------------------------
As this research extends our previous work [@yoon2018multimodal], we use the same feature extraction method. We extract 40-dimensional Mel-frequency cepstral coefficient (MFCC) features (with a frame size of 25 ms at a rate of 10 ms, using a Hamming window) with Kaldi [@povey2011kaldi], then concatenate them with their first- and second-order derivatives, making the feature dimension 120. We also extract prosodic features using the OpenSMILE toolkit [@eyben2013recent] and append them to the audio feature vector.
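The first- and second-order derivative (delta) features can be computed with the standard regression formula over a small frame window. The sketch below is illustrative only: the window size `K=2` and the random MFCC values are assumptions, not the exact Kaldi configuration used in the paper.

```python
import numpy as np

def deltas(feat, K=2):
    """Delta (derivative) features by linear regression over a +/-K
    frame window; `feat` is (T, D). Edges are padded by repetition."""
    padded = np.pad(feat, ((K, K), (0, 0)), mode="edge")
    T = len(feat)
    num = sum(k * (padded[K + k:T + K + k] - padded[K - k:T + K - k])
              for k in range(1, K + 1))
    return num / (2 * sum(k * k for k in range(1, K + 1)))

T, D = 100, 40                        # frames, MFCC dimension
mfcc = np.random.default_rng(0).standard_normal((T, D))
d1 = deltas(mfcc)                     # first-order derivatives
d2 = deltas(d1)                       # second-order derivatives
feats = np.hstack([mfcc, d1, d2])     # (T, 120), as in the text
```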
In preparing the textual dataset, we first use the ground-truth transcripts of the IEMOCAP dataset. For the practical scenario in which we may not have access to transcripts of the audio, we also obtain all of the transcripts from the speech signal using a commercial ASR system [@GoogleCloudSpeechAPI] (the ASR system achieves a word error rate (WER) of 5.53%). We apply a word tokenizer to the transcripts to obtain the sequential textual input.
The maximum length of an audio segment is set to 750 frames, based on the implementation choices in [@neumann2017attentive], and to 128 tokens for the textual input, which covers the maximum length of the tokenized transcripts. We minimize the cross-entropy loss in equation (\[eq\_BRE\_loss\]) using the Adam optimizer [@kingma2014adam] with a learning rate of 1e-3 and gradient clipping with a norm value of 1. For regularization, we apply dropout with a rate of 30%. The number of hidden units and the number of RNN layers in each model (**BRE** and **MHA**) are optimized on the development set.
Performance evaluation
----------------------
To measure the performance of systems, we report the weighted accuracy (WA) and unweighted accuracy (UA) averaging over the 10-fold cross-validation experiments. We use the same dataset and features as other researchers [@yoon2018multimodal; @cho2018deep].
Table \[table\_performance\] presents the performance of the proposed approaches for recognizing speech emotion in comparison with various models. To compare our results with previous approaches, we first use the ground-truth transcripts included in the dataset for training the textual modality. Among the previous models, **E\_vec-MCNN-LSTM** encodes the acoustic signal and the textual information using neural networks (an RNN and a CNN, respectively) and then fuses the results by concatenating them and feeding them into an SVM to predict emotion labels. In contrast, the **MDRE** model uses dual RNNs to encode both modalities and merges the results using an additional fully-connected neural network layer. This **MDRE** approach applies end-to-end learning and outperforms **E\_vec-MCNN-LSTM** by 10.6% relative (0.649 to 0.718 absolute) in terms of WA.
Among our proposed systems, the **audio-BRE** model, which uses the acoustic signal with a bidirectional-RNN architecture, achieves a WA of 0.646. Interestingly, the **text-BRE** model, which uses the textual information, outperforms **audio-BRE** by 8% relative (0.646 to 0.698) in WA. The multi-hop attention model, **MHA-N** ($N$ = 1, 2, 3), shows a substantial performance gain. In particular, the **MHA-2** model (the best-performing system among **MHA-N**) outperforms the best baseline model, **MDRE**, by 6.5% relative (0.718 to 0.765) in WA. Although we observe performance degradation in the **MHA-3** model, we believe this could be due to the limited training data.
In a practical scenario, we may not have access to the audio transcripts, so we examine the effect of using ASR-processed transcripts on the proposed system. From Table \[table\_performance\], we observe performance degradation in **text-BRE-ASR** and **MHA-2-ASR** (our best system) compared to **text-BRE** and **MHA-2**, by 6.6% (0.698 to 0.652) and 4.6% (0.765 to 0.730) relative in WA, respectively. Even with the erroneous transcripts (WER = 5.53%), however, the proposed approach (**MHA-2-ASR**) outperforms the best baseline system (**MDRE**) by 1.6% relative (0.718 to 0.730) in terms of WA.
Error analysis
--------------
Figure \[fig\_confusion\] shows the confusion matrices of the proposed systems. In **audio-BRE** (Fig. \[fig\_confusion\_audio\]), most of the emotion labels are frequently misclassified as the *neutral* class, supporting the claims of [@yoon2018multimodal; @neumann2017attentive]. The **text-BRE** shows improvement in classifying most of the labels in Fig. \[fig\_confusion\_text\]. In particular, the *angry* and *happy* classes are classified correctly 32% (57.14 to 75.41) and 63% (40.21 to 65.56) relatively more often than with **audio-BRE**, respectively. However, it incorrectly predicts instances of the *happy* class as the *sad* class about 10% of the time, even though these emotional states are opposites of one another.
The **MHA-2** model (our best system, Fig. \[fig\_confusion\_MHA\]) compensates for the weaknesses of the single-modality models and benefits from their strengths. It shows significant performance gains for the *angry*, *happy*, *sad* and *neutral* classes of 6%, 20%, 15% and 13% relative in accuracy with respect to **text-BRE**. It also classifies the *neutral* class nearly as accurately as **audio-BRE** (81.63 and 78.00 for **audio-BRE** and **MHA-2**, respectively). Interestingly, although **MHA-2** shows superior ability to discriminate among emotion classes, most of its incorrect predictions still fall into the *neutral* class. We consider this observation a direction for future research.
Conclusions {#sec:conclusions}
===========
In this paper, we propose a multi-hop attention model that combines acoustic and textual data for the speech emotion recognition task. The proposed attention method is designed to select relevant parts of the textual data, which are subsequently used to attend to segments of the audio signal for classification. Extensive experiments show that the proposed **MHA-2** outperforms the best baseline system in classifying the four emotion categories by 6.5% (0.718 to 0.765 absolute) in terms of WA when applied to the IEMOCAP dataset. We further test our model with ASR-processed transcripts and achieve a WA of 0.73, which demonstrates the reliability of the proposed system (**MHA-2-ASR**) in the practical scenario where ground-truth transcripts are not available.
Acknowledgments {#acknowledgments .unnumbered}
===============
We sincerely thank Trung H. Bui at Adobe Research for his in-depth feedback, which also helped us consider the technology from an industry point of view. K. Jung and S. Yoon are with the Automation and Systems Research Institute (ASRI), Seoul National University, Seoul, Korea. This work was supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No. 10073144) and by the National Research Foundation of Korea (NRF) funded by the Korea government (MSIT) (No. 2016M3C4A7952632).
---
abstract: |
We prove the following quantitative hardness results for the Shortest Vector Problem in the $\ell_p$ norm (${{\ensuremath{\mathrm{SVP}} }}_p$), where $n$ is the rank of the input lattice.
1. For “almost all” $p > p_0 \approx 2.1397$, there is no $2^{n/C_p}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_p$ for some explicit constant $C_p > 0$ unless the (randomized) Strong Exponential Time Hypothesis (SETH) is false.
2. For any $p > 2$, there is no $2^{o(n)}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_p$ unless the (randomized) Gap-Exponential Time Hypothesis (Gap-ETH) is false. Furthermore, for each $p > 2$, there exists a constant $\gamma_p > 1$ such that the same result holds even for $\gamma_p$-approximate ${{\ensuremath{\mathrm{SVP}} }}_p$.
3. There is no $2^{o(n)}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_p$ for any $1 \leq p \leq 2$ unless either (1) (non-uniform) Gap-ETH is false; or (2) there is no family of lattices with exponential kissing number in the $\ell_2$ norm. Furthermore, for each $1 \leq p \leq 2$, there exists a constant $\gamma_p > 1$ such that the same result holds even for $\gamma_p$-approximate ${{\ensuremath{\mathrm{SVP}} }}_p$.
author:
- |
Divesh Aggarwal[^1]\
Centre for Quantum Technologies, NUS\
`divesh.aggarwal@gmail.com`
- |
Noah Stephens-Davidowitz[^2]\
Princeton University\
`noahsd@gmail.com`
title: '(Gap/S)ETH Hardness of SVP'
---
Introduction {#sec:intro}
============
A lattice ${\mathcal{L}}$ is the set of all integer combinations of linearly independent basis vectors ${\ensuremath{\boldsymbol{b}}}_1,\dots,{\ensuremath{\boldsymbol{b}}}_n \in {\ensuremath{\mathbb{R}}}^d$, $${\mathcal{L}}= {\mathcal{L}}({\ensuremath{\boldsymbol{b}}}_1,\ldots, {\ensuremath{\boldsymbol{b}}}_n) := \Big\{ \sum_{i=1}^n z_i {\ensuremath{\boldsymbol{b}}}_i \ : \ z_i \in {\ensuremath{\mathbb{Z}}}\Big\}
\; .$$ We call $n$ the *rank* of the lattice ${\mathcal{L}}$ and $d$ the *dimension* or the *ambient dimension* of the lattice ${{\mathcal L}}$.
The Shortest Vector Problem (${{\ensuremath{\mathrm{SVP}} }}$) takes as input a basis for a lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$ and $r > 0$ and asks us to decide whether the shortest non-zero vector in ${\mathcal{L}}$ has length at most $r$. Typically, we define length in terms of the $\ell_p$ norm for some $1 \leq p \leq \infty$, defined as $$\|{\ensuremath{\boldsymbol{x}}}\|_p := (|x_1|^p + |x_2|^p + \cdots + |x_d|^p)^{1/p}$$ for finite $p$ and $$\|{\ensuremath{\boldsymbol{x}}}\|_\infty := \max |x_i|
\; .$$ In particular, the $\ell_2$ norm is the familiar Euclidean norm, and it is the most interesting case from our perspective. We write ${{\ensuremath{\mathrm{SVP}} }}_p$ for ${{\ensuremath{\mathrm{SVP}} }}$ in the $\ell_p$ norm (and just ${{\ensuremath{\mathrm{SVP}} }}$ when we do not wish to specify a norm).
Starting with the breakthrough work of Lenstra, Lenstra, and Lov[á]{}sz in 1982 [@LLL82], algorithms for solving ${{\ensuremath{\mathrm{SVP}} }}$ in both its exact and approximate forms have found innumerable applications, including factoring polynomials over the rationals [@LLL82], integer programming [@Lenstra83; @Kannan87; @DPV11], cryptanalysis [@Shamir84; @Odl90; @JS98; @NS01], etc. More recently, many cryptographic primitives have been constructed whose security is based on the (worst-case) hardness of ${{\ensuremath{\mathrm{SVP}} }}$ or closely related lattice problems [@Ajtai96; @oded05; @GPV08; @Pei10; @chris_survey]. Such lattice-based cryptographic constructions are likely to be used on massive scales (e.g., as part of the TLS protocol) in the not-too-distant future [@new_hope; @frodo; @NIST_quantum].
Most of the above applications rely on approximate variants of ${{\ensuremath{\mathrm{SVP}} }}$ with rather large approximation factors (e.g., the relevant approximation factors are polynomial in $n$ for most cryptographic constructions). However, the best known algorithms for the approximate variant of ${{\ensuremath{\mathrm{SVP}} }}$ use an algorithm for exact ${{\ensuremath{\mathrm{SVP}} }}_2$ over lower-rank lattices as a subroutine [@Schnorr87; @GN08; @MW16]. So, the complexity of the exact problem is of particular interest. We briefly discuss some of what is known below.
#### Algorithms for ${{\ensuremath{\mathrm{SVP}} }}$.
Most of the asymptotically fastest known algorithms for ${{\ensuremath{\mathrm{SVP}} }}$ are variants of the celebrated randomized sieving algorithm due to Ajtai, Kumar, and Sivakumar [@AKS01], which solved ${{\ensuremath{\mathrm{SVP}} }}_p$ in $2^{O(d)}$ time for $p = 2$ and $p=\infty$. This was extended to all $\ell_p$ norms [@BN09], then to ${{\ensuremath{\mathrm{SVP}} }}$ in all norms [@AJ08], and then even to “norms” whose unit balls are not necessarily symmetric [@DPV11]. These $2^{O(d)}$-time algorithms that work in all norms in particular imply $2^{O(n)} \cdot {\mathrm{poly}}(d)$-time algorithms for ${{\ensuremath{\mathrm{SVP}} }}_p$, by taking the ambient space to be the span of the lattice. We are therefore primarily interested in the running time of these algorithms as a function of the rank $n$. (Notice that, in the $\ell_2$ norm, we can always assume that $n=d$.) In the special case of $p = 2$, quite a bit of work has gone into improving the constant in the exponent in these $2^{O(n)}$-time algorithms [@NguyenVidick08; @PS09; @MV10; @LWXZ11]. The current fastest known algorithm for ${{\ensuremath{\mathrm{SVP}} }}_2$ runs in $2^{n + o(n)}$ time [@ADRS15; @AS17]. But, this is unlikely to be the end of the story. Indeed, there is also a $2^{n/2 + o(n)}$-time algorithm that approximates ${{\ensuremath{\mathrm{SVP}} }}_2$ up to a small constant factor,[^3] and there is some reason to believe that this algorithm can be modified to solve the exact problem [@ADRS15; @AS17]. Further complicating the situation, there exist even faster “heuristic algorithms,” whose correctness has not been proven but can be shown under certain heuristic assumptions [@NguyenVidick08; @WLTB11; @Laarhoven2015]. The fastest such heuristic algorithm runs in time $(3/2)^{n/2 + o(n)} \approx 2^{0.29 n}$ [@BDGL16].
#### Hardness of ${{\ensuremath{\mathrm{SVP}} }}$.
Van Emde Boas first asked whether ${{\ensuremath{\mathrm{SVP}} }}_p$ was NP-hard in 1981, and he proved NP-hardness in the special case when $p = \infty$ [@Boas81]. Despite much work, his question went unanswered until 1998, when Ajtai showed NP-hardness of ${{\ensuremath{\mathrm{SVP}} }}_p$ for all $p$ [@Ajtai-SVP-hard]. A series of works by Cai and Nerurkar [@CN98], Micciancio [@Mic01svp], Khot [@Khot05svp], and Haviv and Regev [@HRsvp] simplified the reduction and showed hardness of the approximate version of ${{\ensuremath{\mathrm{SVP}} }}_p$. We now know that ${{\ensuremath{\mathrm{SVP}} }}_p$ is NP-hard to approximate to within any constant factor and hard to approximate to within approximation factors as large as $n^{c/\log \log n}$ for some constant $c > 0$ under reasonable complexity-theoretic assumptions.[^4]
However, such hardness proofs tell us very little about the *quantitative* or *fine-grained* complexity of ${{\ensuremath{\mathrm{SVP}} }}_p$. E.g., does the fastest possible algorithm for ${{\ensuremath{\mathrm{SVP}} }}_2$ still run in time at least, say, $2^{n/5}$, or is there an algorithm that runs in time $2^{n/20}$ or even $2^{\sqrt{n}}$? The above hardness results cannot distinguish between these cases, but we certainly need to be confident in our answers to such questions if we plan to base the security of widespread cryptosystems on these answers. Indeed, most proposed instantiations of lattice-based cryptosystems (i.e., proposed cryptosystems that specify a key size) can essentially be broken by solving ${{\ensuremath{\mathrm{SVP}} }}_2$ with, say, $n \ll 600$ or ${{\ensuremath{\mathrm{SVP}} }}_p$ for any $p$ with $n \ll 1500$. So, if we discovered an algorithm running in time, say, $2^{n/20}$-time for ${{\ensuremath{\mathrm{SVP}} }}_2$ or $2^{n/50}$ or $2^{n/\log^2 n}$ for ${{\ensuremath{\mathrm{SVP}} }}_p$, then these schemes would be broken in practice. And, given the large number of recent algorithmic advances, one might (reasonably?) worry that such algorithms will be found. We would therefore very much like to rule out this possibility!
To rule out such algorithms, we typically rely on a fine-grained complexity-theoretic hypothesis—such as the Strong Exponential Time Hypothesis (SETH[, see Section \[sec:fine-grained\_prelims\]]{}) or the Exponential Time Hypothesis (ETH). To that end, Bennett, Golovnev, and Stephens-Davidowitz recently showed quantitative hardness results for the Closest Vector Problem in $\ell_p$ norms (${{\ensuremath{\mathrm{CVP}} }}_p$) [@BGS17], which is a close relative of ${{\ensuremath{\mathrm{SVP}} }}_p$ that is known to be at least as hard (so that this was a necessary first step towards proving similar results for ${{\ensuremath{\mathrm{SVP}} }}_p$). In particular, assuming SETH, [@BGS17] showed that there is no $2^{(1-{\varepsilon}) n}$-time algorithm for ${{\ensuremath{\mathrm{CVP}} }}_p$ or ${{\ensuremath{\mathrm{SVP}} }}_\infty$ for any ${\varepsilon}> 0$ and “almost all” $1 \leq p \leq \infty$ (*not* including $p = 2$). Under ETH, [@BGS17] showed that there is no $2^{o(n)}$-time algorithm for ${{\ensuremath{\mathrm{CVP}} }}_p$ for any $1 \leq p \leq \infty$. We prove similar results for ${{\ensuremath{\mathrm{SVP}} }}_p$ for $p > 2$ (and a conditional result for $1 \leq p\le 2$ that holds if there exists a family of lattices satisfying certain geometric conditions).
Our results
-----------
We now present our results, which are also summarized in Table \[tab:complexity\_summary\].
|  | Upper Bound | SETH | Gap-ETH | Gap-ETH + Kissing Number | Notes |
|---|---|---|---|---|---|
| $p_0 < p < \infty$ | $2^{O(n)}$ | $2^{n/C_p}$ | $\mathbf{2^{\Omega(n)}}$ | $\mathbf{2^{\Omega(n)}}$ | See Fig. \[fig:Cp\]. |
| $2 < p \leq p_0$ | $2^{O(n)}$ | – | $\mathbf{2^{\Omega(n)}}$ | $\mathbf{2^{\Omega(n)}}$ | $p_0 \approx 2.1397$ |
| $p = 2$ | $2^{n}$ ($2^{0.3 n}$) | – | – | $\mathbf{2^{\Omega(n)}}$ |  |
| $1 \leq p < 2$ | $2^{O(n)}$ | – | – | $\mathbf{2^{\Omega(n)}}$ |  |
| $p = \infty$ | $2^{O(n)}$ | $\mathbf{2^n}$ | $\mathbf{2^{\Omega(n)}}$ | $\mathbf{2^{\Omega(n)}}$ | [@BGS17] |

: \[tab:complexity\_summary\] Summary of known fine-grained upper and lower bounds for ${{\ensuremath{\mathrm{SVP}} }}_p$ for various $p$ under various assumptions, with new results in blue. Lower bounds in **bold** also apply for some constant approximation factor strictly greater than one. The one upper bound in parentheses is due to a heuristic algorithm. The SETH-based lower bound only applies for “almost all” $p > p_0$ (as defined in the full version). We have suppressed low-order terms for simplicity.
#### SETH-hardness.
Our first main result essentially gives an explicit constant $C_p > 1$ for each $p > p_0 \approx 2.1397$ such that, under (randomized) SETH, there is no algorithm for ${{\ensuremath{\mathrm{SVP}} }}_p$ that runs in time better than $2^{n/C_p}$. The constants $p_0$ and $C_p$ do not have a closed form, but they are easily computable to high precision in practice. (E.g., $p_0 = 2.13972134795007\ldots$, $C_3 =
3.01717780317660\ldots$, and $C_5 = 1.3018669052709\ldots$.) We plot $C_p$ over a wide range of $p$ in Figure \[fig:Cp\]. Notice that $C_p$ is unbounded as $p$ approaches $p_0$, but it is a relatively small constant for, say, $p \gtrsim 3$.
We present this result informally here, as the actual statement is rather technical. In particular, because we use the theorem from [@BGS17] that only applies to “almost all” $p$, our result also has this property. See the full version for the formal statement.
\[thm:SETH\_intro\] For “almost all” $p > p_0 \approx 2.1397$ (including all odd integers $p \geq 3$), there is no $2^{n/C_p}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_p$ unless (randomized) SETH is false, where $C_p > 1$ is as in Figure \[fig:Cp\]. Furthermore, $C_p \to 1$ as $p \to \infty$.
To prove this theorem, we give a (randomized) reduction from the ${{\ensuremath{\mathrm{CVP}} }}_p$ instances created by the reduction of [@BGS17] to ${{\ensuremath{\mathrm{SVP}} }}_p$ that only increases the rank of the lattice by a constant factor. As we describe in Section \[sec:techniques\], our reduction is surprisingly simple. In particular, the key step in Khot’s reduction [@Khot05svp] uses a certain “gadget” consisting of a lattice ${\mathcal{L}}^\dagger$, vector ${\ensuremath{\boldsymbol{t}}}^\dagger$, and distance $r^\dagger > 0$ to convert a provably hard ${{\ensuremath{\mathrm{CVP}} }}_p$ instance into an ${{\ensuremath{\mathrm{SVP}} }}_p$ instance. Our reduction is similar to Khot’s reduction with the simple gadget given by ${\mathcal{L}}^\dagger := {\ensuremath{\mathbb{Z}}}^{n^\dagger}$, ${\ensuremath{\boldsymbol{t}}}^\dagger := (1/2,\ldots, 1/2) \in {\ensuremath{\mathbb{R}}}^{n^\dagger}$, and $r^\dagger := (n^\dagger)^{1/p}/2$.
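A quick numerical sanity check of this gadget: every point of ${\ensuremath{\mathbb{Z}}}^{n^\dagger}$ closest to ${\ensuremath{\boldsymbol{t}}}^\dagger = (1/2,\ldots,1/2)$ lies at $\ell_p$ distance exactly $(n^\dagger)^{1/p}/2$, since each coordinate contributes $1/2$ regardless of which way it rounds. The small Python check below uses the illustrative values $n^\dagger = 4$ and $p = 3$.

```python
import itertools
import numpy as np

n, p = 4, 3
t = np.full(n, 0.5)
# The closest integer points to t have coordinates in {0, 1}, since
# every coordinate of t is 1/2; enumerate all of them.
dists = [np.linalg.norm(np.array(z) - t, ord=p)
         for z in itertools.product([0, 1], repeat=n)]
closest = min(dists)
expected = n ** (1 / p) / 2   # the claimed distance (n^dagger)^{1/p} / 2
```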
We note in passing that we actually do not need the full strength of SETH. We can instead rely on the analogous assumption for Max-$k$-SAT, which is potentially weaker. (We inherit this property directly from [@BGS17]. See the full version.)
#### Gap-ETH-hardness.
Our second main result is the Gap-ETH-hardness of ${{\ensuremath{\mathrm{SVP}} }}_p$ for all $p > 2$.[^5] In fact, we prove this even for the problem of approximating ${{\ensuremath{\mathrm{SVP}} }}_p$ up to some fixed constant $\gamma_p > 1$ depending only on $p$ (and the approximation factor implicit in the Gap-ETH assumption). See Corollary \[cor:Gap3SAT\_to\_SVP\].
\[thm:ETH\_intro\] For any $p > 2$, there is no $2^{o(n)}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_p$ unless (randomized) Gap-ETH is false. Furthermore, for each such $p$ there is a constant $\gamma_p > 1$ such that the same result holds even for $\gamma_p$-approximate ${{\ensuremath{\mathrm{SVP}} }}_p$.
Our reduction is again quite simple (though the proof of correctness is not). We follow Khot’s reduction from approximate Exact Set Cover, and we again use the integer lattice as our gadget (with a different target).[^6]
We note in passing that for this result (as well as Theorem \[thm:kissing\_intro\] and Corollary \[cor:GapETH\_kissing\_intro\]), we actually rule out even $2^{o(d)}$-time algorithms. However, we focus on the rank $n$ instead of the dimension $d$ for simplicity.
#### Towards $p = 2$.
We are unable to extend either Theorem \[thm:SETH\_intro\] or Theorem \[thm:ETH\_intro\] to the important case when $p = 2$. Indeed, we cannot use the integer lattice as a gadget in the Euclidean norm. However, we do show that the existence of a certain type of lattice that is believed to exist would be sufficient to show (possibly non-uniform) Gap-ETH-hardness of ${{\ensuremath{\mathrm{SVP}} }}_2$. In particular, it would suffice to show the existence of any family of lattices with exponentially large kissing number. See Theorem \[thm:kissing\_gives\_hardness\] for the precise statement, which requires the existence of a structure that might be easier to construct (and see, e.g., [@Alon97; @ConwaySloaneBook98] for discussion of the lattice kissing number).
\[thm:kissing\_intro\] There is no $2^{o(n)}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_2$ unless either (1) (non-uniform) Gap-ETH is false; or (2) the lattice kissing number is $2^{o(n)}$. Furthermore, there exists a constant $\gamma > 1$ such that the same result holds even for $\gamma$-approximate ${{\ensuremath{\mathrm{SVP}} }}_2$.
In fact, Regev and Rosen show that $\ell_2$ is in some sense the “easiest norm” [@RR06]. (See Theorem \[thm:embedding\].) In particular, to show that ${{\ensuremath{\mathrm{SVP}} }}_p$ is Gap-ETH-hard for all $1 \leq p \leq 2$, it suffices to show it for $p = 2$. From this, we derive the following. (See the full version of this paper for the formal statement.)
\[cor:GapETH\_kissing\_intro\] There is no $2^{o(n)}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_p$ for any $1 \leq p \leq 2$ unless either (1) (non-uniform) Gap-ETH is false; or (2) the lattice kissing number is $2^{o(n)}$ (in the $\ell_2$ norm). Furthermore, for each $1 \leq p \leq 2$, there exists a constant $\gamma_p > 1$ such that the same result holds even for $\gamma_p$-approximate ${{\ensuremath{\mathrm{SVP}} }}_p$.
Khot’s reduction {#sec:Khot}
----------------
Before we describe our own contribution, it will be useful to review Khot’s elegant reduction from ${{\ensuremath{\mathrm{CVP}} }}_p$ to ${{\ensuremath{\mathrm{SVP}} }}_p$ [@Khot05svp]. We do our best throughout this description to hide technicalities in an effort to focus on the high-level simplicity of Khot’s reduction.[^7] (Since the hardness of ${{\ensuremath{\mathrm{SVP}} }}_p$ went unproven for many years, this simplicity is truly remarkable.)
First, some basic definitions and notation. For a lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$ and $1 \leq p \leq \infty$, we write $$\lambda_1^{(p)}({\mathcal{L}}) := \min_{{\ensuremath{\boldsymbol{y}}} \in {\mathcal{L}}\setminus \{{\ensuremath{\boldsymbol{0}}}\}} \|{\ensuremath{\boldsymbol{y}}}\|_p$$ for the length of the shortest non-zero vector in ${\mathcal{L}}$ in the $\ell_p$ norm. For a target vector ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^d$, we write $$\operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) := \min_{{\ensuremath{\boldsymbol{y}}} \in {\mathcal{L}}} \|{\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}}\|_p$$ for the distance between ${\ensuremath{\boldsymbol{t}}}$ and ${\mathcal{L}}$. For any radius $r > 0$, we write $$N_p({\mathcal{L}}, r, {\ensuremath{\boldsymbol{t}}}) := |\{ {\ensuremath{\boldsymbol{y}}} \in {\mathcal{L}}\ : \ \|{\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}}\|_p \leq r \}|$$ for the number of lattice vectors within distance $r$ of ${\ensuremath{\boldsymbol{t}}}$.
Recall that ${{\ensuremath{\mathrm{CVP}} }}_p$ is the problem that takes as input a lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$, target vector ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^d$, and distance $r > 0$ and asks us to distinguish the YES case when $\operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) \leq r$ from the NO case when $\operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) > r$. When talking about a particular ${{\ensuremath{\mathrm{CVP}} }}_p$ instance, we naturally call a lattice vector ${\ensuremath{\boldsymbol{y}}} \in {\mathcal{L}}$ with $\|{\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}}\|_p \leq r$ a *close vector*, and we notice that the number of close vectors is $N_p({\mathcal{L}}, r, {\ensuremath{\boldsymbol{t}}})$.
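For concreteness, these quantities can be computed by brute force on a tiny example. The following sketch (our illustration, not part of any reduction) enumerates lattice vectors with bounded coefficients and counts $N_p$ for ${\mathcal{L}}= {\ensuremath{\mathbb{Z}}}^2$:

```python
import itertools

def lattice_points(B, coeff_range):
    """Enumerate lattice vectors B*z over integer coefficient vectors z with
    entries in coeff_range (a brute-force stand-in for a real enumerator)."""
    d, n = len(B), len(B[0])
    for z in itertools.product(coeff_range, repeat=n):
        yield tuple(sum(B[i][j] * z[j] for j in range(n)) for i in range(d))

def norm_p(v, p):
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def count_N_p(B, r, t, p, coeff_range=range(-3, 4)):
    """N_p(L, r, t): the number of lattice vectors within l_p distance r of t."""
    return sum(1 for y in lattice_points(B, coeff_range)
               if norm_p([yi - ti for yi, ti in zip(y, t)], p) <= r + 1e-9)

# L = Z^2, t = (1/2, 1/2), r = sqrt(2)/2: the close vectors are exactly {0,1}^2.
B = [[1, 0], [0, 1]]
r = 2 ** 0.5 / 2
print(count_N_p(B, r, (0.5, 0.5), 2))       # 4 close vectors
print(count_N_p(B, r, (0.0, 0.0), 2) - 1)   # 0 short non-zero vectors
```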
#### The naive reduction and sparsification.
The “naive reduction” from ${{\ensuremath{\mathrm{CVP}} }}_p$ to ${{\ensuremath{\mathrm{SVP}} }}_p$ simply takes a ${{\ensuremath{\mathrm{CVP}} }}_p$ instance consisting of a lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$ with basis ${\ensuremath{\mathbf{B}}}\in {\ensuremath{\mathbb{R}}}^{d \times n}$, target ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^d$, and distance $r > 0$ and constructs the ${{\ensuremath{\mathrm{SVP}} }}_p$ instance given by the basis of a lattice ${\mathcal{L}}'$ of the form $${\ensuremath{\mathbf{B}}}' :=
\begin{pmatrix}
{\ensuremath{\mathbf{B}}}&-{\ensuremath{\boldsymbol{t}}}\\
0 &s
\end{pmatrix}
\; ,$$ where $s > 0$ is some parameter depending on the ${{\ensuremath{\mathrm{CVP}} }}_p$ instance. Notice that, if ${\ensuremath{\boldsymbol{y}}} \in {\mathcal{L}}$ is a close vector (i.e., $\|{\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}}\|_p \leq r$), then $\|({\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}}, s) \|_p^p \leq r^p + s^p$. Therefore, in the YES case when there exists a vector close to ${\ensuremath{\boldsymbol{t}}}$, we will have $\lambda_1^{(p)}({\mathcal{L}}') \leq r' := (r^p + s^p)^{1/p}$.
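The construction is easy to carry out explicitly. The following sketch builds $\mathbf{B}'$ for a hypothetical toy instance (with the parameter $s$ chosen arbitrarily) and verifies the YES-case bound:

```python
def build_B_prime(B, t, s):
    """Append the column (-t, s) to B, i.e., build the basis B' of the
    naive CVP-to-SVP reduction (bottom row is (0, ..., 0, s))."""
    d, n = len(B), len(B[0])
    top = [B[i] + [-t[i]] for i in range(d)]
    return top + [[0] * n + [s]]

# Toy CVP_2 instance: L = Z^2, t = (1/2, 1/2), r = sqrt(2)/2; s = 1/2 is an
# arbitrary choice of the parameter.
B, t = [[1, 0], [0, 1]], [0.5, 0.5]
p, r, s = 2, 2 ** 0.5 / 2, 0.5
Bp = build_B_prime(B, t, s)

# The close vector y = (0, 0) has coefficients z = (0, 0, 1) in B', giving the
# lattice vector (y - t, s), whose l_p norm is at most r' = (r^p + s^p)^(1/p).
z = [0, 0, 1]
v = [sum(Bp[i][j] * z[j] for j in range(len(z))) for i in range(len(Bp))]
r_prime = (r ** p + s ** p) ** (1 / p)
print(v, sum(abs(x) ** p for x in v) ** (1 / p), r_prime)
```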
However, in the NO case there might still be non-zero vectors ${\ensuremath{\boldsymbol{y}}}' \in {\mathcal{L}}' \setminus \{{\ensuremath{\boldsymbol{0}}}\}$ whose length is less than $r'$. These vectors must be of the form ${\ensuremath{\boldsymbol{y}}}' = ({\ensuremath{\boldsymbol{y}}} - z{\ensuremath{\boldsymbol{t}}}, zs)$ for some integer $z \neq 1$. Let us for now only consider the case $z = 0$, in which case these vectors are in one-to-one correspondence with the non-zero vectors in ${\mathcal{L}}$ of length less than $r'$. We naturally call these *short vectors*.
Khot showed that a (randomized) reduction exists if we just assume that the number of close vectors in any YES case is significantly larger than the number of short vectors in any NO case. In particular, Khot showed that we can randomly “sparsify” the lattice ${\mathcal{L}}'$ to obtain a sublattice ${\mathcal{L}}''$ such that each of the short non-zero vectors in ${\mathcal{L}}'$ stays in ${\mathcal{L}}''$ with probability $1/q$ where $q \geq 2$ is some parameter that we can choose. So, if we take $q$ to be significantly smaller than the number of close vectors in the YES case but significantly larger than the number of short vectors in the NO case, we can show that the resulting lattice will have $\lambda_1^{(p)}({\mathcal{L}}'') \leq r'$ in the YES case but $\lambda_1^{(p)}({\mathcal{L}}'') > r'$ in the NO case with high probability.
Unfortunately, the ${{\ensuremath{\mathrm{CVP}} }}_p$ instances produced by most hardness reductions typically have $2^{\Omega(n)}$ short vectors, and they might only have one close vector in the YES case. So, if we want this reduction to work, we will need some way to increase this ratio by an exponential factor.
#### Adding the gadget.
To increase the ratio of close vectors to short vectors, Khot uses a certain gadget that is itself a ${{\ensuremath{\mathrm{CVP}} }}_p$ instance $({\mathcal{L}}^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger, r^\dagger)$, where ${\mathcal{L}}^\dagger \subset {\ensuremath{\mathbb{R}}}^{d^\dagger}$ is a lattice with basis ${\ensuremath{\mathbf{B}}}^\dagger$, ${\ensuremath{\boldsymbol{t}}}^\dagger \in {\ensuremath{\mathbb{R}}}^{d^\dagger}$ is a target vector, and $r^\dagger > 0$ is some distance. He then takes the direct sum of the two instances. I.e., Khot considers the lattice $$\widehat{{\mathcal{L}}} := {\mathcal{L}}\oplus {\mathcal{L}}^\dagger = \{ ({\ensuremath{\boldsymbol{y}}}, {\ensuremath{\boldsymbol{y}}}^\dagger) \ : \ {\ensuremath{\boldsymbol{y}}} \in {\mathcal{L}}, {\ensuremath{\boldsymbol{y}}}^\dagger \in {\mathcal{L}}^\dagger\} \subset {\ensuremath{\mathbb{R}}}^{d + d^\dagger}$$ with basis $$\widehat{{\ensuremath{\mathbf{B}}}} := \begin{pmatrix}
{\ensuremath{\mathbf{B}}}&0 \\
0 & {\ensuremath{\mathbf{B}}}^\dagger
\end{pmatrix}
\; ,$$ the target $\widehat{{\ensuremath{\boldsymbol{t}}}} := ({\ensuremath{\boldsymbol{t}}}, {\ensuremath{\boldsymbol{t}}}^\dagger) \in {\ensuremath{\mathbb{R}}}^{d + d^\dagger}$, and the distance $\widehat{r} := (r^p + (r^\dagger)^{p})^{1/p}$. We wish to apply the sparsification-based reduction described above to this new lattice. We now deduce some properties that ${\mathcal{L}}^\dagger$ must have in order for this reduction to yield our hardness results.
First, we simply notice that the rank of $\widehat{{\mathcal{L}}} = {\mathcal{L}}\oplus {\mathcal{L}}^\dagger$ is the sum of the ranks of ${\mathcal{L}}$ and ${\mathcal{L}}^\dagger$. To prove the kind of fine-grained hardness results that we are after, we are only willing to increase the rank by a constant factor, so *the rank of ${\mathcal{L}}^\dagger$ must be at most $O(n)$*. (Of course, prior work did not have this restriction.)
Next, we notice that any $\widehat{{\ensuremath{\boldsymbol{y}}}} = ({\ensuremath{\boldsymbol{y}}}, {\ensuremath{\boldsymbol{y}}}^\dagger) \in \widehat{{\mathcal{L}}}$ with $\|{\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}}\|_p \leq r$ and $\|{\ensuremath{\boldsymbol{y}}}^\dagger - {\ensuremath{\boldsymbol{t}}}^\dagger\|_p \leq r^\dagger$ satisfies $\|\widehat{{\ensuremath{\boldsymbol{y}}}} - \widehat{{\ensuremath{\boldsymbol{t}}}} \|_p \leq \widehat{r}$. We call these *good vectors*, and we notice that there are at least $N_p({\mathcal{L}}^\dagger, r^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger)$ good vectors in the YES case.
Now, we worry about short vectors in $\widehat{{\mathcal{L}}}$ in the NO case, i.e., non-zero $\widehat{{\ensuremath{\boldsymbol{y}}}} = ({\ensuremath{\boldsymbol{y}}}, {\ensuremath{\boldsymbol{y}}}^\dagger)$ with $\|\widehat{{\ensuremath{\boldsymbol{y}}}}\|_p \leq \widehat{r}$. Clearly, $\widehat{{\ensuremath{\boldsymbol{y}}}}$ will be short if $\|{\ensuremath{\boldsymbol{y}}}\|_p \leq r$ and $\|{\ensuremath{\boldsymbol{y}}}^\dagger\|_p \leq r^\dagger$. Therefore, the number of short vectors is at least $$N_p({\mathcal{L}}, r, {\ensuremath{\boldsymbol{0}}}) \cdot N_p({\mathcal{L}}^\dagger, r^\dagger, {\ensuremath{\boldsymbol{0}}}) \geq 2^{\Omega(n)} \cdot N_p({\mathcal{L}}^\dagger, r^\dagger, {\ensuremath{\boldsymbol{0}}}) \geq 2^{\Omega(n^\dagger)} \cdot N_p({\mathcal{L}}^\dagger, r^\dagger, {\ensuremath{\boldsymbol{0}}})
\; ,$$ where we have used the fact that $n^\dagger = O(n)$ and the fact that the input ${{\ensuremath{\mathrm{CVP}} }}_p$ instances that interest us have $2^{\Omega(n)}$ short vectors. (This is not true in general, but it is true of most ${{\ensuremath{\mathrm{CVP}} }}_p$ instances resulting from hardness proofs.) Since the number of good vectors in the YES case is potentially only $N_p({\mathcal{L}}^\dagger, r^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger)$, *our gadget lattice must satisfy* $$\label{eq:more_close_than_short_intro}
N_p({\mathcal{L}}^\dagger, r^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger) \geq 2^{\Omega(n^\dagger)} \cdot N_p({\mathcal{L}}^\dagger, r^\dagger, {\ensuremath{\boldsymbol{0}}})
\; .$$ Though this in itself is not sufficient to make our reduction work, it is the most important feature that a gadget lattice must have. Indeed, we show in Corollary \[cor:gadget\_to\_hardness\] that a slightly stronger condition is sufficient to prove Gap-ETH hardness. (This property and various variants are sometimes called *local density*, and they play a key role in many hardness proofs for ${{\ensuremath{\mathrm{SVP}} }}_p$.)
However, short vectors are no longer our only concern. We also have to worry about close vectors that are not good vectors, i.e., vectors $\widehat{{\ensuremath{\boldsymbol{y}}}} = ({\ensuremath{\boldsymbol{y}}}, {\ensuremath{\boldsymbol{y}}}^\dagger)$ in the NO case such that $\|\widehat{{\ensuremath{\boldsymbol{y}}}} - \widehat{{\ensuremath{\boldsymbol{t}}}}\|_p \leq \widehat{r}$ but $\|{\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}}\|_p > r$. We call such vectors *impostors*. Impostors certainly can exist in general, but our sparsification procedure will work on them just like any other vector. So, as long as our gadget lattice is chosen such that the number of impostors in the NO case is significantly lower than the number of good vectors in the YES case, they will not trouble us.
Our techniques {#sec:techniques}
--------------
We learned in the previous section that, in order to make our reduction work, it is necessary (though not always sufficient) that our gadget $({\mathcal{L}}^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger, r^\dagger)$ has exponentially more close vectors than short vectors. I.e., we need to find a family of gadgets that satisfies Eq. (\[eq:more\_close\_than\_short\_intro\]). Furthermore, we must somehow ensure that the number of impostors in the NO case is exponentially lower than the number of good vectors in the YES case.
#### The integer lattice, $\Theta_p$, and SETH-hardness.
To prove Theorem \[thm:SETH\_intro\], we take ${\mathcal{L}}^\dagger := {\ensuremath{\mathbb{Z}}}^{n^\dagger}$, ${\ensuremath{\boldsymbol{t}}}^\dagger := (1/2,\ldots, 1/2) \in {\ensuremath{\mathbb{R}}}^{n^\dagger}$, and $r^\dagger := \operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}^\dagger, {\ensuremath{\mathbb{Z}}}^{n^\dagger}) = (n^\dagger)^{1/p}/2$. Notice that, by taking $r^\dagger = \operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}^\dagger, {\mathcal{L}}^\dagger)$, we ensure that there simply are no impostors in the NO instance (i.e., when $\|{\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}} \|_p > r$, we can never have $\|({\ensuremath{\boldsymbol{y}}}, {\ensuremath{\boldsymbol{y}}}^\dagger)-({\ensuremath{\boldsymbol{t}}}, {\ensuremath{\boldsymbol{t}}}^\dagger)\|_p^p \leq r^p + (r^\dagger)^p$).[^8]
To prove that our reduction works, we wish to show that the ratio $$\frac{N_p({\ensuremath{\mathbb{Z}}}^{n^\dagger}, r^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger)}{N_p({\ensuremath{\mathbb{Z}}}^{n^\dagger}, r^\dagger, {\ensuremath{\boldsymbol{0}}})}$$ is (exponentially) large. Of course, the numerator is easy to calculate. It is $|\{ 0,1\}^{n^\dagger}| = 2^{n^\dagger}$. So, we wish to prove that $$\label{eq:fewer_than_2n}
N_p({\ensuremath{\mathbb{Z}}}^{n^\dagger}, r^\dagger, {\ensuremath{\boldsymbol{0}}}) \ll 2^{n^\dagger}
\; .$$
Unfortunately, Eq. (\[eq:fewer\_than\_2n\]) does not hold for all $\ell_p$ norms. For example, for $p = 2$, consider the points in $\{-1,0,1\}^{n^\dagger}$ with $n^\dagger/4$ non-zero coordinates, which have $\ell_2$ norm $r^\dagger$. There are $$2^{n^\dagger/4} \cdot \binom{n^\dagger}{n^\dagger/4} \approx 2^{n^\dagger/4} \cdot 4^{n^\dagger/4} \cdot (4/3)^{3n^\dagger/4} \approx 2.0867^{n^\dagger}$$ such points. (In fact, this is a reasonable estimate for the exact value of $N_2({\ensuremath{\mathbb{Z}}}^{n^\dagger}, r^\dagger ,{\ensuremath{\boldsymbol{0}}})$, which is $C^{n^\dagger + O(\sqrt{n^\dagger})}$ for $C = 2.0891\ldots$, as we show in the full version of this paper.) However, $N_p({\ensuremath{\mathbb{Z}}}^{n^\dagger}, (n^\dagger)^{1/p}/2, {\ensuremath{\boldsymbol{0}}})$ is decreasing in $p$. So, one might hope that Eq. (\[eq:fewer\_than\_2n\]) holds for slightly larger $p$.
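The binomial estimate above is easy to check numerically. The following sketch computes the per-coordinate growth rate of $2^{n/4} \binom{n}{n/4}$ (via log-gamma, to avoid huge integers) and compares it to the limit $2^{1/4} \cdot 4^{1/4} \cdot (4/3)^{3/4} \approx 2.0867$:

```python
import math

def rate(n):
    """(2^(n/4) * binom(n, n/4))^(1/n): per-coordinate count of the points of
    {-1,0,1}^n with exactly n/4 non-zero coordinates (all of l_2 norm sqrt(n)/2)."""
    k = n // 4
    log_count = (k * math.log(2) + math.lgamma(n + 1)
                 - math.lgamma(k + 1) - math.lgamma(n - k + 1))
    return math.exp(log_count / n)

limit = 2 ** 0.25 * 4 ** 0.25 * (4 / 3) ** 0.75
print(rate(10 ** 5), limit)  # rate(n) approaches the limit ~2.0868 as n grows
```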
To prove this, we wish to find a good upper bound on the number of integer points in a centered $\ell_p$ ball, $N_p({\ensuremath{\mathbb{Z}}}^{n^\dagger}, r^\dagger, {\ensuremath{\boldsymbol{0}}})$. A very nice way to do this uses the function $$\Theta_p(\tau) := \sum_{z \in {\ensuremath{\mathbb{Z}}}} \exp(- \tau |z|^p)
\;$$ for $\tau > 0$ [@MO90; @EOR91].[^9] Notice that $$\Theta_p(\tau)^{n^\dagger} = \sum_{z_1, z_2,\ldots , z_{n^\dagger} \in {\ensuremath{\mathbb{Z}}}} \exp(-\tau (|z_1|^p + \cdots + |z_{n^\dagger}|^p)) = \sum_{{\ensuremath{\boldsymbol{z}}} \in {\ensuremath{\mathbb{Z}}}^{n^\dagger}} \exp(-\tau \|{\ensuremath{\boldsymbol{z}}}\|_p^p)
\; .$$ In particular, $$\Theta_p(\tau)^{n^\dagger} \geq \sum_{\stackrel{{\ensuremath{\boldsymbol{z}}} \in {\ensuremath{\mathbb{Z}}}^{n^\dagger}}{\|{\ensuremath{\boldsymbol{z}}}\|_p \leq r^\dagger}} \exp(-\tau \|{\ensuremath{\boldsymbol{z}}}\|_p^p) \geq \exp(-\tau (r^\dagger)^p) \cdot N_p({\ensuremath{\mathbb{Z}}}^{n^\dagger}, r^\dagger, {\ensuremath{\boldsymbol{0}}})
\; .$$ Rearranging and taking the infimum over $\tau$, we see that $$\label{eq:Theta_upper_bound}
N_p({\ensuremath{\mathbb{Z}}}^{n^\dagger}, r^\dagger, {\ensuremath{\boldsymbol{0}}}) \leq \inf_{\tau > 0} \exp(\tau (r^\dagger)^p) \Theta_p(\tau)^{n^\dagger}
\; .$$ We can relatively easily compute this value numerically and see that it is less than $2^{n^\dagger}$ for $p > p_0 \approx 2.1397$. (Indeed, we will see below that there is a nearly matching lower bound in a more general context. So, Eq. (\[eq:Theta\_upper\_bound\]) is quite tight.)
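This numerical computation is easy to reproduce. Since $(r^\dagger)^p = n^\dagger/2^p$, the bound factors through the per-coordinate base $\inf_{\tau > 0} \exp(\tau/2^p)\, \Theta_p(\tau)$, which the sketch below evaluates on a grid of $\tau$ (with the sum defining $\Theta_p$ truncated):

```python
import math

def theta_p(tau, p, K=60):
    """Theta_p(tau) = sum over integer z of exp(-tau * |z|^p), truncated at |z| <= K."""
    return 1.0 + 2.0 * sum(math.exp(-tau * z ** p) for z in range(1, K + 1))

def bound_base(p):
    """inf over tau > 0 of exp(tau / 2^p) * Theta_p(tau), approximated on a grid.
    This is the per-coordinate base of the upper bound, since (r^dagger)^p = n / 2^p."""
    return min(math.exp(tau / 2 ** p) * theta_p(tau, p)
               for tau in (i / 200 for i in range(1, 4000)))

print(bound_base(2.0))  # ~2.0891: the constant C above
print(bound_base(2.1))  # still above 2
print(bound_base(2.2))  # below 2: the bound beats 2^n once p > p_0 ~ 2.1397
```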
To prove Theorem \[thm:SETH\_intro\], we can plug this very simple gadget into Khot’s reduction described in Section \[sec:Khot\] to reduce the SETH-hard instances of ${{\ensuremath{\mathrm{CVP}} }}_p$ from [@BGS17] to ${{\ensuremath{\mathrm{SVP}} }}_p$. To make the constant $C_p$ as tight as we can, we exploit the structure of these SETH-hard ${{\ensuremath{\mathrm{CVP}} }}_p$ instances. In particular, we observe that these instances themselves actually look quite a bit like our gadget, in that they are in some sense “small perturbations” of the integer lattice with the all-one-halves vector as the target. (See Section \[sec:gadget\_Zn\_all\_halves\]. This is in fact quite common for the ${{\ensuremath{\mathrm{CVP}} }}_p$ instances resulting from hardness proofs.) This allows us to analyze the direct sum resulting from Khot’s reduction very accurately in this case.
#### More $\Theta_p$ for $p > 2$, and Gap-ETH hardness.
To extend our hardness results to all $p >2$, we need to construct a gadget with exponentially more close vectors than short vectors for such $p$. We again choose our gadget lattice as ${\ensuremath{\mathbb{Z}}}^{n^\dagger}$, but we now take ${\ensuremath{\boldsymbol{t}}}^\dagger = (t,t,\ldots, t) \in {\ensuremath{\mathbb{R}}}^{n^\dagger}$ for some $t \in (0,1/2]$, and we take $r^\dagger = C (n^\dagger)^{1/p}$ for some constant $C > 0$.
Our previous gadget was quite convenient in that it was very easy to count the number of close vectors, but for arbitrary $t$ and $r^\dagger$, it is no longer clear how to do this. Fortunately, $\Theta_p$ can be used for this purpose. In particular, we define $$\Theta_p(\tau; t) := \sum_{z \in {\ensuremath{\mathbb{Z}}}} \exp(-\tau |z-t|^p)
\; .$$ By the same argument as before, we see that $$N_p({\ensuremath{\mathbb{Z}}}^{n^\dagger}, r^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger) \leq \inf_{\tau > 0} \exp(\tau (r^\dagger)^p) \Theta_p(\tau; t)^{n^\dagger} = \big( \inf_{\tau > 0} \exp(\tau C^p) \Theta_p(\tau; t) \big)^{n^\dagger}
\; .$$ But, we need a *lower bound* on $N_p({\ensuremath{\mathbb{Z}}}^{n^\dagger}, r^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger)$. To that end, we show that the above is actually quite tight. In particular, $$\label{eq:Theta_approx_intro}
N_p({\ensuremath{\mathbb{Z}}}^{n^\dagger}, r^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger) = \big( \inf_{\tau > 0} \exp(\tau C^p) \Theta_p(\tau; t) \big)^{n^\dagger} \cdot 2^{-O(\sqrt{n^\dagger})}
\; .$$ I.e., $\Theta_p$ tells us the number of integer points in an $\ell_p$ ball up to lower-order terms. (Eq. (\[eq:Theta\_approx\_intro\]) was already proven for $p=2$ by Mazo and Odlyzko [@MO90] and for all $p$ by Elkies, Odlyzko, and Rush [@EOR91]. See Section \[sec:integer\_points\] for the proof.)
It follows that there exists a ${\ensuremath{\boldsymbol{t}}}^\dagger$ and $r^\dagger$ with exponentially more close integer vectors than short integer vectors in the $\ell_p$ norm if and only if there exists a $\tau > 0$ and $t \in (0,1/2]$ such that $\Theta_p(\tau; t) > \Theta_p(\tau; 0)$. Furthermore, this holds if and only if $p > 2$. See Section \[sec:integer\_points\] for the proof.
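This characterization is easy to explore numerically. The sketch below exhibits a pair $(\tau, t) = (2, 1/2)$ witnessing $\Theta_p(\tau; t) > \Theta_p(\tau; 0)$ for $p = 3$, and checks on a small grid (not a proof, of course) that no such pair shows up for $p = 2$:

```python
import math

def theta_p_t(tau, p, t, K=60):
    """Theta_p(tau; t) = sum over integer z of exp(-tau * |z - t|^p), truncated."""
    return sum(math.exp(-tau * abs(z - t) ** p) for z in range(-K, K + 1))

# p = 3: the shifted sum beats the centered one at tau = 2, t = 1/2 ...
print(theta_p_t(2.0, 3, 0.5), theta_p_t(2.0, 3, 0.0))  # ~1.560 > ~1.271

# ... while for p = 2 the centered sum is maximal at every grid point checked.
assert all(theta_p_t(tau / 10, 2, t / 20) <= theta_p_t(tau / 10, 2, 0.0) + 1e-9
           for tau in range(1, 80) for t in range(0, 11))
```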
So, to prove Theorem \[thm:ETH\_intro\], we start with the observation that approximating the Exact Set Cover problem is Gap-ETH-hard for some constant approximation factor $\eta < 1$. We then plug our gadget into Khot’s reduction from constant-factor-approximate Exact Set Cover to ${{\ensuremath{\mathrm{SVP}} }}_p$. (This reduction uses ${{\ensuremath{\mathrm{CVP}} }}_p$ as an intermediate problem.) The above discussion explains why Eq. (\[eq:more\_close\_than\_short\_intro\]) is satisfied. And, like Khot, we exploit the approximation factor $\eta$ to show that the number of impostors in a NO instance is much smaller than the number of good vectors in a YES instance.
#### Building gadgets in $\ell_2$ from lattices with high kissing number.
While we are not able to construct a gadget that satisfies Eq. (\[eq:more\_close\_than\_short\_intro\]) in the $\ell_2$ norm, we show the existence of such a gadget under the reasonable assumption that for any $n^\dagger$, there exists a lattice ${{\mathcal L}}^\dagger$ of rank $n^\dagger$ with exponentially many non-zero vectors of minimal $\ell_2$ norm. I.e., we show that such a gadget exists if there is a family of lattices with exponentially large kissing number. (We actually show that something potentially weaker suffices. See Theorem \[thm:kissing\_gives\_hardness\] in the full version of this paper.)
To prove this, we show how to choose a ${\ensuremath{\boldsymbol{t}}}^\dagger$ and $r^\dagger < \lambda_1^{(2)}({\mathcal{L}}^\dagger)$ such that $N_2({\mathcal{L}}^\dagger, r^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger) \geq 2^{\Omega(n^\dagger)}$. Indeed, we show that if we choose the vector ${\ensuremath{\boldsymbol{t}}}^\dagger$ uniformly at random from vectors of an appropriate length, then the expected number of lattice vectors within distance $r^\dagger$ from ${\ensuremath{\boldsymbol{t}}}^\dagger$ is exponential in $n^\dagger$. And, we again exploit the fact that there is a constant-factor gap between the YES and the NO instances to show that the number of impostors in the NO instances is exponentially smaller than the number of good vectors in the YES instances.
Direction for future work {#sec:future}
-------------------------
Our dream result would be an explicit $2^{C n}$-time lower bound on approximate ${{\ensuremath{\mathrm{SVP}} }}_2$ for the approximation factors most relevant to cryptography (e.g., ${\mathrm{poly}}(n)$) for some not-too-small explicit constant $C > 0$, under a reasonable complexity-theoretic assumption. This seems very far out of reach. There are even complexity-theoretic barriers towards achieving this result, since ${{\ensuremath{\mathrm{SVP}} }}$ with these approximation factors cannot be NP-hard unless the polynomial-time hierarchy collapses [@AharonovR04; @Peikert08]. So, any proof of something this strong would presumably have to use a non-standard reduction (e.g., a non-deterministic reduction). Nevertheless, we can still dream of such a result and take more modest steps to at least get results closer to this dream.
One obvious such step would be to extend our hardness results to the $p = 2$ case, i.e., to show that there is no $2^{o(n)}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_2$ under reasonable purely complexity-theoretic assumptions (as opposed to our geometric assumption). We provide one potential route towards proving this in Theorem \[thm:kissing\_intro\] (or its more general version, Theorem \[thm:kissing\_gives\_hardness\], in the full version of this paper), but this would require resolving a long-standing open problem in the geometry of numbers. Perhaps a different approach will prove to be more fruitful?
Alternatively, one could try to improve the approximation factor given by Theorem \[thm:ETH\_intro\]. The currently known hardness of approximation proofs for ${{\ensuremath{\mathrm{SVP}} }}_p$ with large approximation factor (e.g., a large constant or superconstant) work by “boosting” the approximation factor via repeatedly taking the tensor product [@Khot05svp; @HRsvp]. I.e., given a family of lattices ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$ for which we know that ${{\ensuremath{\mathrm{SVP}} }}_p$ is hard to approximate to within some small constant factor $\gamma > 1$, we argue that it is hard to approximate ${{\ensuremath{\mathrm{SVP}} }}_p$ to within a factor of $\gamma^k$ on the tensor product ${\mathcal{L}}^{\otimes k}$ for some $k \geq 2$. Unfortunately, even a single tensor product increases the rank of the lattice quadratically. So, we cannot afford to use this technique to prove reasonable fine-grained hardness of approximation results. We therefore need a new technique.
Yet another direction would be to try to improve the constant $C_p$ in Theorem \[thm:SETH\_intro\]. Perhaps the simple gadget that we use is not the best possible.
Finally, in a completely different direction, we note that Theorem \[thm:SETH\_intro\] provides some additional incentive to study algorithms for ${{\ensuremath{\mathrm{SVP}} }}_p$ for $p \neq 2$ to improve the hidden (very large) constant in the $2^{O(n)}$ running time of existing algorithms. In particular, it would be interesting to see how close we can get to the lower bound given by Theorem \[thm:SETH\_intro\].
Acknowledgments {#acknowledgments .unnumbered}
---------------
The authors thank Huck Bennett, Vishwas Bhargav, Noam Elkies, Sasha Golovnev, Pasin Manurangsi, Priyanka Mukhopadhyay, and Oded Regev for helpful discussions. In particular, we thank Noam Elkies for pointing us to [@EOR91] and Oded Regev for observing that the gadgets that we need are related to lattices with high kissing number.
Preliminaries {#sec:prelims}
=============
We denote column vectors ${\ensuremath{\boldsymbol{x}}} \in {\ensuremath{\mathbb{R}}}^d$ by bold lower-case letters. Matrices ${\ensuremath{\mathbf{B}}}\in {\ensuremath{\mathbb{R}}}^{d \times n}$ are denoted by bold upper-case letters, and we often think of a matrix as a list of column vectors. For ${\ensuremath{\boldsymbol{x}}} \in {\ensuremath{\mathbb{R}}}^{d_1}, {\ensuremath{\boldsymbol{y}}} \in {\ensuremath{\mathbb{R}}}^{d_2}$, we abuse notation a bit and write $({\ensuremath{\boldsymbol{x}}}, {\ensuremath{\boldsymbol{y}}}) \in {\ensuremath{\mathbb{R}}}^{d_1+d_2}$ when we should technically write $({\ensuremath{\boldsymbol{x}}}^T, {\ensuremath{\boldsymbol{y}}}^T)^T$. For $x \in {\ensuremath{\mathbb{R}}}$, we write $$\exp(x) := e^x = 1 + x + x^2/2 + x^3/6 + \cdots
\; .$$ Logarithms are base $e$.
Throughout this paper, we consider computational problems over ${\ensuremath{\mathbb{R}}}^d$. Formally, we should specify a method of representing arbitrary real numbers, and our running times should depend in some way on the bit length of these representations and the cost of doing arithmetic in this representation. For convenience, we ignore these issues (in particular assuming that basic arithmetic operations always have unit cost), and we simply note that all of our reductions remain efficient when instantiated with any reasonable representation of ${\ensuremath{\mathbb{R}}}$. When we say that something is efficiently computable as a function of a dimension $d$, rank $n$, or cardinalities $m$, we mean that it is computable in time ${\mathrm{poly}}(d)$, ${\mathrm{poly}}(n)$, or ${\mathrm{poly}}(m)$, respectively (as opposed to polynomial in the logarithm of these numbers).
Lattice problems
----------------
For any $1 \leq p \leq \infty$ and any $\gamma \geq 1$, *the $\gamma$-approximate Shortest Vector Problem in the $\ell_p$ norm* (${{\ensuremath{\mathrm{SVP}} }}_{p, \gamma}$) is the promise problem defined as follows. The input is a (basis for a) lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$ and a length $r > 0$. It is a YES instance if $\lambda_1^{(p)}({\mathcal{L}}) \leq r$ and a NO instance if $\lambda_1^{(p)}({\mathcal{L}}) > \gamma r$.
For any $1 \leq p \leq \infty$ and any $\gamma \geq 1$, *the $\gamma$-approximate Closest Vector Problem in the $\ell_p$ norm* (${{\ensuremath{\mathrm{CVP}} }}_{p, \gamma}$) is the promise problem defined as follows. The input is a (basis for a) lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$, a target ${\ensuremath{\boldsymbol{t}}} \in{\ensuremath{\mathbb{R}}}^d$, and a distance $r > 0$. It is a YES instance if $\operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) \leq r$ and a NO instance if $\operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) > \gamma r$.
When $\gamma = 1$, we simply write ${{\ensuremath{\mathrm{SVP}} }}_p$ and ${{\ensuremath{\mathrm{CVP}} }}_p$. We will need the following (simplified version of a) celebrated result, due to Figiel, Lindenstrauss, and Milman [@FLM76].
\[thm:embedding\] For any ${\varepsilon}\in (0,1)$, $1 \leq p \leq 2$, and any positive integers $n$ and $m$ with $m \geq n/{\varepsilon}^2$, there exists a linear map $f : {\ensuremath{\mathbb{R}}}^n \to {\ensuremath{\mathbb{R}}}^{m}$ such that for any ${\ensuremath{\boldsymbol{x}}} \in{\ensuremath{\mathbb{R}}}^n$, $$(1-{\varepsilon})\|{\ensuremath{\boldsymbol{x}}}\|_2 \leq \|f({\ensuremath{\boldsymbol{x}}})\|_p \leq (1+{\varepsilon}) \|{\ensuremath{\boldsymbol{x}}}\|_2
\; .$$
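The theorem is existential, but for $p = 1$ a scaled random Gaussian matrix already gives the flavor of such an embedding (this specific construction is our illustration, not the one from [@FLM76]): since $\mathrm{E}|g| = \sqrt{2/\pi}$ for $g \sim N(0,1)$, the map $f(x) = \frac{\sqrt{\pi/2}}{m} G x$ satisfies $\mathrm{E}\|f(x)\|_1 = \|x\|_2$, and concentration keeps the distortion small:

```python
import math, random

random.seed(0)
n, m = 20, 2000
# G is an m x n Gaussian matrix; f(x) = sqrt(pi/2)/m * G x maps l_2 into l_1
# with expected norm E||f(x)||_1 = ||x||_2 (since E|g| = sqrt(2/pi) for g ~ N(0,1)).
G = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
c = math.sqrt(math.pi / 2) / m

worst = 0.0
for _ in range(100):
    x = [random.gauss(0, 1) for _ in range(n)]
    l2 = math.sqrt(sum(xi * xi for xi in x))
    l1 = c * sum(abs(sum(row[j] * x[j] for j in range(n))) for row in G)
    worst = max(worst, abs(l1 / l2 - 1.0))
print(worst)  # small empirical distortion (roughly on the 1/sqrt(m) scale)
```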
Regev and Rosen showed how theorems like this can be applied to obtain reductions between lattice problems in different norms [@RR06]. Here, we only need the following immediate consequence of the above theorem. (The non-uniform reduction can be converted into an efficient randomized reduction and a similar result holds for $p > 2$, but we do not need this for our use case.)
\[cor:embedding\_reduction\] For any constants $\gamma_1 > \gamma_2 > 1$ and $1 \leq p \leq 2$, there is an efficient rank-preserving non-uniform reduction from ${{\ensuremath{\mathrm{SVP}} }}_{2,\gamma_1}$ in dimension $d$ to ${{\ensuremath{\mathrm{SVP}} }}_{p,\gamma_2}$ in dimension $O(d)$.
Sparsification {#sec:sparsification}
--------------
A lattice vector ${\ensuremath{\boldsymbol{y}}} \in {\mathcal{L}}$ is *non-primitive* if ${\ensuremath{\boldsymbol{y}}} = k {\ensuremath{\boldsymbol{x}}}$ for some integer $k > 1$ and lattice vector ${\ensuremath{\boldsymbol{x}}} \in {\mathcal{L}}$. Otherwise, ${\ensuremath{\boldsymbol{y}}}$ is *primitive*. (Notice that ${\ensuremath{\boldsymbol{0}}}$ is non-primitive.) For a radius $r > 0$, we write $$\xi_p({\mathcal{L}}, r) := |\{ {\ensuremath{\boldsymbol{y}}} \in {\mathcal{L}}\ : \ \text{${\ensuremath{\boldsymbol{y}}}$ is primitive and } \|{\ensuremath{\boldsymbol{y}}}\|_p \leq r\}|/2$$ for the number of primitive lattice vectors of length at most $r$ in the $\ell_p$ norm (counting $\pm {\ensuremath{\boldsymbol{y}}}$ only once). We will use the following generalization of a sparsification theorem from [@DGStoSVP] to all $\ell_p$ norms.
\[thm:sparsify\] There is an efficient algorithm that takes as input (a basis for) a lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$ of rank $n$ and a prime $q \geq 101$ and outputs a sublattice ${\mathcal{L}}' \subseteq {\mathcal{L}}$ of rank $n$ such that for any radius $r < q \cdot \lambda_1^{(p)}({\mathcal{L}})$ and any $1 \leq p \leq \infty$, $$\frac{N}{q} - \frac{N^2}{q^2} \leq \Pr[\lambda_1^{(p)}({\mathcal{L}}') \leq r] \leq \frac{N}{q}
\; ,$$ as long as $N \leq q/(20 \log q)$, where $N := \xi_p({\mathcal{L}}, r)$ is the number of primitive lattice vectors of length at most $r$ in the $\ell_p$ norm (up to the sign). Furthermore, if $r \geq q \lambda_1^{(p)}({\mathcal{L}})$, then $\lambda_1^{(p)}({\mathcal{L}}') \leq r$ always.
We note in passing that the algorithm works by taking a random linear equation $\inner{{\ensuremath{\boldsymbol{z}}}, {\ensuremath{\boldsymbol{a}}}} \equiv 0 \bmod q$ for uniformly random ${\ensuremath{\boldsymbol{z}}} \in {\ensuremath{\mathbb{Z}}}_q^n$ and setting ${\mathcal{L}}'$ to be the set of lattice vectors whose coordinates in some arbitrary fixed basis satisfy this linear equation. (This idea was originally introduced by Khot.)
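A quick Monte Carlo check of the survival probability, for a hypothetical coordinate vector (not tied to any particular lattice): a fixed vector whose coordinates are non-zero modulo $q$ satisfies a uniformly random linear equation with probability exactly $1/q$.

```python
import random

random.seed(1)
q = 101  # prime, as in the theorem
a = (3, 7, 0, 1)  # hypothetical coordinates of a short vector, non-zero mod q

# For uniformly random z in Z_q^n, Pr[<z, a> = 0 (mod q)] is exactly 1/q.
trials = 200000
hits = sum(1 for _ in range(trials)
           if sum(random.randrange(q) * ai for ai in a) % q == 0)
print(hits / trials)  # ~1/101 ~ 0.0099
```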
Fine-grained assumptions {#sec:fine-grained_prelims}
------------------------
Recall that, for integer $k \geq 2$, a $k$-SAT formula is the conjunction of clauses, where each clause is the disjunction of $k$ literals. I.e., $k$-SAT formulas have the form $\bigwedge_{i=1}^m \bigvee_{j=1}^k b_{i,j}$, where $b_{i,j} = x_\ell$ or $b_{i,j} = \neg x_\ell$ for some boolean variable $x_\ell$.
For any $k \geq 2$, the decision problem $k$-SAT is defined as follows. The input is a $k$-SAT formula. It is a YES instance if there exists an assignment to the variables that makes the formula evaluate to true and a NO instance otherwise.
For any $k \geq 2$, the decision problem Max-$k$-SAT is defined as follows. The input is a $k$-SAT formula and an integer $S \geq 1$. It is a YES instance if there exists an assignment to the variables such that at least $S$ of the clauses evaluate to true and a NO instance otherwise.
Notice that $k$-SAT is a special case of Max-$k$-SAT.
Impagliazzo and Paturi introduced the following celebrated and well-studied hypothesis concerning the fine-grained complexity of $k$-SAT [@IP1999].
The (randomized) *Strong Exponential Time Hypothesis* ((randomized) SETH) asserts that, for every constant ${\varepsilon}> 0$, there exists a constant $k \geq 3$ such that there is no $2^{(1-{\varepsilon})n}$-time (randomized) algorithm for $k$-SAT formulas with $n$ variables.
For $\eta \in (0,1)$ and $k \geq 2$, the promise problem Gap-$k$-${{\ensuremath{\mathrm{SAT}} }}_\eta$ is defined as follows. The input is a $k$-SAT formula with $m$ clauses. It is a YES instance if the formula is satisfiable, and it is a NO instance if the maximal number of simultaneously satisfiable clauses is strictly less than $\eta m$.
Dinur [@journals/eccc/Dinur16] and Manurangsi and Raghavendra [@MR17] recently introduced the following natural assumption, called Gap-ETH. We also consider a non-uniform variant.
The (randomized) Gap-Exponential Time Hypothesis ((randomized) Gap-ETH) asserts that there exists a constant $\eta \in (0,1)$ such that there is no (randomized) $2^{o(n)}$-time algorithm for Gap-$3$-${{\ensuremath{\mathrm{SAT}} }}_\eta$ over $n$ variables.
Non-uniform Gap-ETH asserts that there is no circuit family of size $2^{o(n)}$ for Gap-$3$-${{\ensuremath{\mathrm{SAT}} }}_\eta$ over $n$ variables.
For $\eta \in (0,1)$, $k \geq 2$, and $C \geq 2$, the promise problem Gap-$k$-${{\ensuremath{\mathrm{SAT}} }}_\eta^{\le C}$ is defined as follows. The input is a $k$-SAT formula with $m$ clauses in which each variable appears in at most $C$ clauses. It is a YES instance if the formula is satisfiable, and it is a NO instance if the maximal number of simultaneously satisfiable clauses is at most $\eta m$.
We will need the following result due to Manurangsi and Raghavendra [@MR17].
Unless Gap-ETH is false, there exist constants $\eta \in (0,1)$ and $C \geq 2$ such that there is no $2^{o(n)}$-time algorithm for $\text{Gap}$-$3$-${{\ensuremath{\mathrm{SAT}} }}_\eta^{\le C}$.
For $\eta \in (0,1)$, the promise problem ${{\ensuremath{\mathrm{ExactSetCover}} }}_\eta$ is defined as follows. The input consists of sets $S_1, \cdots, S_m \subseteq U$ with $|U| = k$ and a positive integer “size bound” $d \leq m$. It is a YES instance if there exist disjoint sets $S_{i_1}, \cdots, S_{i_\ell}$ such that $\bigcup_j S_{i_j} = U$ for some $\ell \le \eta d$. It is a NO instance if for every collection of (not necessarily disjoint) sets $S_{i_1}, \cdots, S_{i_d}$, $\bigcup_j S_{i_j} \neq U$.
The following reduction is due to [@PasinPrivate].
\[thm:SAT\_to\_ESC\] For any constant $C'>0$, and $\eta' \in (0,1)$, there is a polynomial-time Karp reduction from Gap-$3$-${{\ensuremath{\mathrm{SAT}} }}_{\eta'}^{\le C'}$ on $n$ variables to ${{\ensuremath{\mathrm{ExactSetCover}} }}_\eta$ with $d := n/\eta$ and $m, k \in [n, Cn]$ for some constants $C > 1$ and $\eta \in (0,1)$ depending only on $C'$ and $\eta'$.
The reduction takes as input a set of clauses ${{\mathcal C}}_1, {{\mathcal C}}_2, \cdots, {{\mathcal C}}_t$ over a set of variables $x_1, \ldots, x_n$, where each variable is in at most $C'$ clauses. We assume without loss of generality that each variable or its negation is in at least one clause, and so $n/3 \le t \le C'n$. Define $U$ to be the set $\{{{\mathcal C}}_1, \ldots, {{\mathcal C}}_t, x_1, \ldots, x_n\}$. For each literal $b = x_i$ or $b = \neg x_i$ and for each subset $S$ of the clauses containing $b$, we create a set $S \cup \{x_i\}$ in our instance. I.e., a literal that is contained in exactly $r$ clauses gives rise to exactly $2^r$ sets. The reduction outputs the ${{\ensuremath{\mathrm{ExactSetCover}} }}_\eta$ instance consisting of these sets over the universe $U$, together with the size bound $d := n/\eta$.
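A minimal sketch of the set-construction step described above; the encoding (universe elements `('c', i)` for clause ${{\mathcal C}}_i$ and `('x', j)` for variable $x_j$, DIMACS-style literals) and the function name are ours:

```python
from itertools import combinations

def gap_sat_to_exact_set_cover(n, clauses):
    """For each literal b and each subset S of the clauses containing b,
    emit the set S U {x_j} for the underlying variable x_j, as in the
    reduction above. Clauses are tuples of DIMACS-style literals."""
    universe = {('c', i) for i in range(len(clauses))} | \
               {('x', j) for j in range(1, n + 1)}
    sets = []
    for b in (lit for j in range(1, n + 1) for lit in (j, -j)):
        occurrences = [i for i, cl in enumerate(clauses) if b in cl]
        for size in range(len(occurrences) + 1):
            for S in combinations(occurrences, size):
                sets.append(frozenset(('c', i) for i in S) | {('x', abs(b))})
    return universe, sets

# A literal contained in exactly r clauses contributes exactly 2^r sets:
U, sets = gap_sat_to_exact_set_cover(2, [(1, 2)])
print(len(U), len(sets))  # 3 universe elements, 2 + 1 + 2 + 1 = 6 sets
```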
It is easy to see that the reduction is efficient and that $n \le k \le (C'+1)n$ and $n \le m \le 2^{C' + 1}n$. We now argue correctness.
Suppose the Gap-$3$-${{\ensuremath{\mathrm{SAT}} }}_{\eta'}^{\le C'}$ instance is a YES instance, i.e., the formula is satisfiable. Then there exist literals $b_1, \ldots, b_n$, where each $b_i$ is either $x_i$ or $\neg x_i$, such that setting $b_1 = b_2 = \cdots = b_n = 1$ yields a satisfying assignment. For all $i = 1, 2, \ldots, n$, let $S_i$ be the set of clauses containing $b_i$ but not containing any of $b_1, \ldots, b_{i-1}$. These sets are pairwise disjoint, and $\cup_i S_i = \{{{\mathcal C}}_1, \ldots, {{\mathcal C}}_t\}$, since $b_1, \ldots, b_n$ is a satisfying assignment. Thus, the sets $S_i \cup \{x_i\}$ form an exact set cover of $U$ of size $n$.
Suppose, on the other hand, that the Gap-$3$-${{\ensuremath{\mathrm{SAT}} }}_{\eta'}^{\le C'}$ instance is a NO instance, i.e. any assignment satisfies at most $\eta' t$ clauses. Let $S_1, \ldots, S_{\ell}$ be a set cover of $U$, where the sets are not necessarily disjoint. We wish to show that $\ell \geq d = n/\eta$ for some constant $\eta \in (0,1)$.
Let $S(b)$ be the set of all clauses containing a literal $b$. Without loss of generality, we can assume that each set $S_i$ equals either $S(x_j) \cup \{x_j\}$ or $S(\neg x_j) \cup \{x_j\}$ for some $j$. The total number of variables for which $S(x_i) \cup \{x_i\}$ and $S(\neg x_i) \cup \{x_i\}$ are both in the set cover is at most $\ell - n$. Thus, the total number of clauses covered by $S_1, \ldots, S_\ell$ is at most $\eta' t + C' (\ell - n)$, so we must have $\eta' t + C'(\ell - n) \ge t$. This implies that $$\ell \ge \frac{t}{C'}(1 - \eta') + n \geq
\left(1 + \frac{1 - \eta'}{3C'}\right) \cdot n
\;,$$ as needed.
A reduction from a variant of CVP to SVP
========================================
As we discussed in Section \[sec:Khot\], the “naive reduction” from ${{\ensuremath{\mathrm{CVP}} }}_{p, \gamma'}$ to ${{\ensuremath{\mathrm{SVP}} }}_{p, \gamma}$ simply takes a ${{\ensuremath{\mathrm{CVP}} }}$ instance consisting of a basis ${\ensuremath{\mathbf{B}}}\in {\ensuremath{\mathbb{R}}}^{d \times n}$ for a lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$, target ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^d$, and distance $r > 0$, and constructs the ${{\ensuremath{\mathrm{SVP}} }}$ instance given by the basis for ${\mathcal{L}}'$ of the form $${\ensuremath{\mathbf{B}}}' :=
\begin{pmatrix}
{\ensuremath{\mathbf{B}}}&-{\ensuremath{\boldsymbol{t}}}\\
0 &s
\end{pmatrix}$$ and length $r' := (r^p + s^p)^{1/p}$, where $s > 0$. Notice that, if the input is a YES instance (i.e., $\operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) \leq r$), then $\lambda_1^{(p)}({\mathcal{L}}') \leq r'$.
If the input instance is a NO instance (i.e., if $\operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) > \gamma' r$), then we call a non-zero vector ${\ensuremath{\boldsymbol{y}}}' = ({\ensuremath{\boldsymbol{y}}} - z {\ensuremath{\boldsymbol{t}}}, zs) \in {\mathcal{L}}'$ *annoying* if $\|{\ensuremath{\boldsymbol{y}}}'\|_p \leq \gamma r'$. As Khot showed, we can sparsify (as in Theorem \[thm:sparsify\]), to make this naive reduction work as long as there are significantly fewer annoying vectors than close vectors. We therefore define a rather unnatural quantity below that exactly counts the number of annoying vectors in a NO instance.
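The naive embedding itself is easy to write down; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def embed_cvp_in_svp(B, t, r, s, p):
    """Build the basis B' = [[B, -t], [0, s]] and the SVP length bound
    r' = (r^p + s^p)^(1/p), as in the naive reduction above."""
    d, n = B.shape
    top = np.hstack([B, -t.reshape(-1, 1)])
    bottom = np.hstack([np.zeros((1, n)), [[s]]])
    return np.vstack([top, bottom]), (r**p + s**p) ** (1 / p)

# YES direction: if dist_p(t, L) <= r is witnessed by Bx, then the
# coefficient vector (x, 1) yields the lattice vector (Bx - t, s),
# which has length at most r'.
B = np.eye(2)                        # L = Z^2
t = np.array([0.5, 0.5])             # dist_2(t, L) = 1/sqrt(2) <= r
B_prime, r_prime = embed_cvp_in_svp(B, t, r=1.0, s=1.0, p=2)
v = B_prime @ np.array([0.0, 0.0, 1.0])   # the vector (-t, s)
print(np.linalg.norm(v) <= r_prime)       # True
```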
For $1 \leq p < \infty$ and $\gamma \ge 1$, a lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$, target ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^d$, and distances $r,s > 0$, we define $$A_{r, s, \gamma}^{(p)}({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) := \sum_{z = 0}^{\gamma (r^p/s^p + 1)^{1/p}} N_p({\mathcal{L}}, (\gamma^p r^p - (z^p - \gamma^p) s^p)^{1/p}, z {\ensuremath{\boldsymbol{t}}}) - 1
\; .$$ Notice that $A_{r,s,\gamma}^{(p)}$ does in fact count the number of annoying vectors resulting from the above reduction (up to sign). In particular, the summand $N_p({\mathcal{L}}, (\gamma^p r^p - (z^p - \gamma^p) s^p)^{1/p}, z {\ensuremath{\boldsymbol{t}}})$ is the number of vectors ${\ensuremath{\boldsymbol{y}}}' = ({\ensuremath{\boldsymbol{y}}} - z{\ensuremath{\boldsymbol{t}}}, zs) \in {\mathcal{L}}'$ of length at most $\gamma r'$ for fixed $z$.
We now define the class of ${{\ensuremath{\mathrm{CVP}} }}_p$ instances on which this sparsification-based reduction works.
For $1 \leq p < \infty$, $A = A(n) \geq 0$ (the number of annoying vectors), $G = G(n) \geq 1$ (the number of “good” or close vectors), and $\gamma = \gamma(n) \ge 1$ (the approximation factor), the promise problem $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_{p, \gamma}$ is defined as follows. The input is a (basis for a) lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$, target ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^d$, and distances $r,s > 0$. It is a YES instance if $N_p({\mathcal{L}}, r, {\ensuremath{\boldsymbol{t}}}) \geq G$. It is a NO instance if $A_{r, s, \gamma}^{(p)}({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) \le A$.
Notice that the YES and NO instances of $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_{p,\gamma}$ are disjoint when $A < G$, since $A_{r,s,\gamma}^{(p)}({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) \geq N_p({\mathcal{L}}, r, {\ensuremath{\boldsymbol{t}}}) $.[^10] We drop the subscript $\gamma$ from $A_{r, s, \gamma}^{(p)}({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}})$, $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_{p, \gamma}$ and ${{\ensuremath{\mathrm{SVP}} }}_{p,\gamma}$ if $\gamma = 1$.
Having defined $(A,G)$-${{\ensuremath{\mathrm{CVP}} }}_{p, \gamma}$ specifically so that we can reduce it to ${{\ensuremath{\mathrm{SVP}} }}_{p,\gamma}$, we now present the reduction from $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_{p,\gamma}$ to ${{\ensuremath{\mathrm{SVP}} }}_{p, \gamma}$. It essentially follows from the definition of $A_{r, s,\gamma}^{(p)}$ together with Theorem \[thm:sparsify\].
\[thm:sparsification\_reduction\] For $1 \leq p < \infty$ and efficiently computable $A = A(n) \geq 1$, $G = G(n) \geq 1000A(n)$, and $\gamma = \gamma(n)\ge 1$, there is a (randomized) reduction from $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_{p, \gamma}$ on a lattice with rank $n$ in $d$ dimensions to ${{\ensuremath{\mathrm{SVP}} }}_{p, \gamma}$ on a lattice with rank $n+1$ in $d+1$ dimensions that runs in time ${\mathrm{poly}}(d, \log A, \log G)$.
On input a basis ${\ensuremath{\mathbf{B}}}$ for a lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$, a target vector ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^d$, and distances $r,s > 0$, the reduction does the following. Let ${\mathcal{L}}'$ be the lattice generated by $${\ensuremath{\mathbf{B}}}' := \begin{pmatrix}
{\ensuremath{\mathbf{B}}}& -{\ensuremath{\boldsymbol{t}}}\\
0 & s
\end{pmatrix}
\; ,$$ as above. Let $M := 10\sqrt{AG}$. The reduction does the following $\ell := \ceil{100 d \log M}$ times. It finds a prime $q$ with $10 M \log M \leq q \leq 20 M \log M$ and calls the procedure from Theorem \[thm:sparsify\], receiving as output some new lattice ${\mathcal{L}}''$. It then calls its [[$\mathrm{SVP}$ ]{}]{}oracle with input ${\mathcal{L}}''$ and $r':=(r^p + s^p)^{1/p}$. Finally, it outputs YES if and only if the ${{\ensuremath{\mathrm{SVP}} }}$ oracle returned YES more than $\delta \ell$ times, where $$\delta := \frac{M}{20 q} - \frac{M^2}{200 q^2} \geq \frac{1}{100\log M}
\; .$$
The running time is clear, as is the fact that the reduction increases both the dimension and rank by exactly one.
If the input instance is a YES instance, then the number of vectors in ${\mathcal{L}}'$ of the form $({\ensuremath{\boldsymbol{v}}} - {\ensuremath{\boldsymbol{t}}}, s)$, where ${\ensuremath{\boldsymbol{v}}} \in {\mathcal{L}}$, is $N_p({\mathcal{L}}, r, {\ensuremath{\boldsymbol{t}}}) \geq G$. These are primitive vectors in ${\mathcal{L}}'$ and have length at most $r'$ (and there is no pair $\pm {\ensuremath{\boldsymbol{y}}}$ in this collection of vectors). I.e., there are at least $M/10$ primitive lattice vectors in ${\mathcal{L}}'$ of length at most $r'$, and it follows from Theorem \[thm:sparsify\] that $$\Pr[\lambda_1^{(p)}({\mathcal{L}}'') \leq r'] \geq 2\delta
\; .$$ Then, by the Chernoff-Hoeffding bound, the oracle will output YES except with probability $\exp(-\Omega(d))$, as needed.
If the input instance is a NO instance, then notice the number of primitive vectors in ${\mathcal{L}}'$ of length at most $\gamma r'$ is at most $A_{r, s, \gamma}^{(p)}({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}}) \le A$ (up to sign). Furthermore, the total number of vectors of length at most $\gamma r'$ (including non-primitive vectors) is at most $2A_{r,s,\gamma}^{(p)}({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}})+1 \leq 2A + 1$. In particular, this implies that $\lambda_1^{(p)}({\mathcal{L}}') > \gamma r'/(A+1) > \gamma r'/q$.[^11] So, we may apply Theorem \[thm:sparsify\], and we have that $$\Pr[\lambda_1^{(p)}({\mathcal{L}}'') \leq \gamma r'] \leq \frac{A}{q} \leq \frac{\delta}{2}
\; .$$ The result again follows by the Chernoff-Hoeffding bound.
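The amplification step at the heart of this argument (accept iff more than $\delta \ell$ of $\ell$ independent trials accept, where YES instances accept with probability at least $2\delta$ per trial and NO instances with probability at most $\delta/2$) can be simulated directly; a sketch with toy parameters of our choosing:

```python
import random

def majority_vote(p_accept, ell, delta):
    """Output YES iff more than delta * ell of ell independent trials,
    each accepting with probability p_accept, accept."""
    accepts = sum(random.random() < p_accept for _ in range(ell))
    return accepts > delta * ell

random.seed(0)
delta, ell = 0.02, 5000
print(majority_vote(2 * delta, ell, delta))   # YES case: True w.h.p.
print(majority_vote(delta / 2, ell, delta))   # NO case: False w.h.p.
```

By the Chernoff-Hoeffding bound, the gap between the two acceptance probabilities makes the failure probability of the vote exponentially small in $\ell$.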
SETH-hardness of this variant of CVP (and therefore SVP) {#sec:gadget_Zn_all_halves}
========================================================
We now show that $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_p$ is SETH-hard. We first observe that the SETH-hard ${{\ensuremath{\mathrm{CVP}} }}_p$ instances from [@BGS17] “have a copy of ${\ensuremath{\mathbb{Z}}}^n$ embedded in them.” This fact will allow us to compute $A_{r,s}^{(p)}$ quite accurately.
\[thm:CVPSETH\] For any constant $k \geq 2$, the following holds for all but finitely many values of $p \geq 1$. There is a Karp reduction from Max-$k$-SAT on $n$ variables to ${{\ensuremath{\mathrm{CVP}} }}_p$ on a rank $n$ lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^d$ such that the resulting ${{\ensuremath{\mathrm{CVP}} }}_p$ instance $({\ensuremath{\mathbf{B}}}, {\ensuremath{\boldsymbol{t}}}, r)$ has the form $${\ensuremath{\mathbf{B}}}=
\begin{pmatrix}
\Phi\\
I_n
\end{pmatrix}
\; ,$$ for some matrix $\Phi \in {\ensuremath{\mathbb{R}}}^{(d- n) \times n}$; $${\ensuremath{\boldsymbol{t}}} =
\begin{pmatrix}
t_1\\
\vdots\\
t_{d-n}\\
1/2\\
\vdots\\
1/2
\end{pmatrix}
\; ,$$ for some scalars $t_i \in {\ensuremath{\mathbb{R}}}$; and $r = (n+1)^{1/p}/2$. Moreover, when $k = 2$, this holds for all $p \geq 1$, and for any $k \geq 2$, this holds for all odd integers $p \geq 1$.
We note the following easy corollary, which we can think of as either an application of Khot’s gadget reduction (as described in Section \[sec:Khot\]) or simply as a “padded” variant of Theorem \[thm:CVPSETH\].
\[cor:kSAT\_to\_beta\_CVP\] For any constant integer $k \geq 2$, the following holds for all but finitely many values of $p \geq 1$. For any efficiently computable integer $n^\dagger = n^\dagger(n) \leq {\mathrm{poly}}(n)$, there is a Karp reduction from Max-$k$-SAT on $n$ variables to $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_p$ on a rank $n+n^\dagger(n)$ lattice with $$A := \sqrt{n+n^\dagger} \cdot N_p({\ensuremath{\mathbb{Z}}}^{n+n^\dagger},(r^p + 1)^{1/p}, {\ensuremath{\boldsymbol{0}}}) \qquad \text{ and } \qquad G := 2^{n^\dagger}
\; ,$$ where $r := (n+ n^\dagger + 1)^{1/p}/2$ and $\widehat{{\ensuremath{\boldsymbol{t}}}} := (1/2, \ldots, 1/2) \in {\ensuremath{\mathbb{R}}}^{n+n^\dagger}$. Moreover, when $k = 2$, this holds for all $p \geq 1$, and for any $k \geq 2$, this holds for all odd integers $p \geq 1$.
It suffices to show how to convert the ${{\ensuremath{\mathrm{CVP}} }}_p$ instance from Theorem \[thm:CVPSETH\] into an $(A,G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_p$ instance. To do this, we simply append the matrix $I_{n^\dagger}$ to the basis and ${\ensuremath{\boldsymbol{t}}}^\dagger := (1/2,\ldots, 1/2) \in {\ensuremath{\mathbb{R}}}^{n^\dagger}$ to the target. I.e., we construct $${\ensuremath{\mathbf{B}}}:=
\begin{pmatrix}
\Phi & 0 \\
I_n &0\\
0 & I_{n^\dagger}
\end{pmatrix}
\; ,$$ where $\Phi \in {\ensuremath{\mathbb{R}}}^{(d - n-n^\dagger) \times n}$ is as in Theorem \[thm:CVPSETH\], and $${\ensuremath{\boldsymbol{t}}} :=
\begin{pmatrix}
t_1\\
\vdots\\
t_{d-n-n^\dagger}\\
1/2\\
\vdots\\
1/2
\end{pmatrix} \in {\ensuremath{\mathbb{R}}}^{d}
\; ,$$ where $t_i \in {\ensuremath{\mathbb{R}}}$ are as in Theorem \[thm:CVPSETH\]. We simply take $s = 1$.
Let ${\mathcal{L}}:= {\mathcal{L}}({\ensuremath{\mathbf{B}}}) \subset {\ensuremath{\mathbb{R}}}^d$. Let ${\mathcal{L}}' \subset {\ensuremath{\mathbb{R}}}^{d - n^\dagger}$ be the lattice generated by the basis without the additional identity matrix, and let ${\ensuremath{\boldsymbol{t}}}' \in {\ensuremath{\mathbb{R}}}^{d - n^\dagger}$ be the target without the additional coordinates. Notice that vectors in ${\mathcal{L}}$ have the form ${\ensuremath{\boldsymbol{y}}} := ({\ensuremath{\boldsymbol{y}}}', {\ensuremath{\boldsymbol{z}}})$, where ${\ensuremath{\boldsymbol{y}}}' \in {\mathcal{L}}'$ and ${\ensuremath{\boldsymbol{z}}} \in {\ensuremath{\mathbb{Z}}}^{n^\dagger}$. In particular, $$\|{\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}} \|_p^p = \|{\ensuremath{\boldsymbol{y}}}'-{\ensuremath{\boldsymbol{t}}}'\|_p^p + \|{\ensuremath{\boldsymbol{z}}} - {\ensuremath{\boldsymbol{t}}}^\dagger\|_p^p \geq \|{\ensuremath{\boldsymbol{y}}}'-{\ensuremath{\boldsymbol{t}}}'\|_p^p + n^\dagger/2^p
\; .$$
So, if the input Max-$k$-SAT instance is a YES instance, then $\operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}', {\mathcal{L}}') \leq (n+1)^{1/p}/2$, and so there are at least $2^{n^\dagger}$ distinct vectors ${\ensuremath{\boldsymbol{y}}} \in {\mathcal{L}}$ such that $\|{\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}}\|_p \leq r$. (In particular, all vectors of the form $({\ensuremath{\boldsymbol{y}}}', {\ensuremath{\boldsymbol{z}}})$ with ${\ensuremath{\boldsymbol{z}}} \in \{0,1\}^{n^\dagger}$ and $\|{\ensuremath{\boldsymbol{y}}}' - {\ensuremath{\boldsymbol{t}}}'\|_p \leq (n+1)^{1/p}/2$ have this property.) Thus, the resulting $(A,G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_p$ instance is a YES instance.
On the other hand, if the input Max-$k$-SAT instance is a NO instance, then we have that $\operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}', {\mathcal{L}}') > (n+1)^{1/p}/2$, and therefore $N_p({\mathcal{L}}, r, {\ensuremath{\boldsymbol{t}}}) = 0$, since $\|{\ensuremath{\boldsymbol{y}}} - {\ensuremath{\boldsymbol{t}}}\|_p^p \geq \|{\ensuremath{\boldsymbol{y}}}' - {\ensuremath{\boldsymbol{t}}}'\|_p^p + n^\dagger/2^p > (n + 1 + n^\dagger)/2^p = r^p$ for every ${\ensuremath{\boldsymbol{y}}} \in {\mathcal{L}}$. Therefore, $$\begin{aligned}
A_{r, 1}^{(p)}({\ensuremath{\boldsymbol{t}}}, {\mathcal{L}})
&= \sum_{z = 0}^{(r^p + 1)^{1/p}} N_p({\mathcal{L}}, (r^p -z^p + 1)^{1/p}, z {\ensuremath{\boldsymbol{t}}}) -1 \\
&\leq N_p({\ensuremath{\mathbb{Z}}}^{n+n^\dagger},(r^p + 1)^{1/p}, {\ensuremath{\boldsymbol{0}}}) -1 + N_p({\mathcal{L}}, r, {\ensuremath{\boldsymbol{t}}}) + \sum_{z = 2}^{(r^p + 1)^{1/p}} N_p({\ensuremath{\mathbb{Z}}}^{n+n^\dagger},(r^p - z^p + 1)^{1/p}, z \widehat{{\ensuremath{\boldsymbol{t}}}})\\
&\leq r \cdot N_p({\ensuremath{\mathbb{Z}}}^{n+n^\dagger},(r^p + 1)^{1/p}, {\ensuremath{\boldsymbol{0}}}) \; ,
\end{aligned}$$ where we have used the fact that $N_p({\ensuremath{\mathbb{Z}}}^{n+n^\dagger},(r^p - z^p + 1)^{1/p}, z \widehat{{\ensuremath{\boldsymbol{t}}}}) = 0$ for odd $z \geq 3$ and $$N_p({\ensuremath{\mathbb{Z}}}^{n+n^\dagger},(r^p - z^p + 1)^{1/p}, z \widehat{{\ensuremath{\boldsymbol{t}}}}) = N_p({\ensuremath{\mathbb{Z}}}^{n+n^\dagger},(r^p - z^p + 1)^{1/p}, {\ensuremath{\boldsymbol{0}}}) \leq N_p({\ensuremath{\mathbb{Z}}}^{n+n^\dagger},(r^p + 1)^{1/p}, {\ensuremath{\boldsymbol{0}}})$$ for even $z$. Thus, the resulting $(A,G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_p$ instance is a NO instance.
In the next section, we show that $A \ll G$ if and only if $p > p_0 \approx 2.13972$.
Finishing the proof {#sec:integer_points_centered}
-------------------
It remains to bound the number of integer points in an $\ell_p$ ball centered at the origin. As in Section \[sec:techniques\], for $1 \leq p < \infty$ and $\tau > 0$, we define $$\Theta_p(\tau) := \sum_{z \in {\ensuremath{\mathbb{Z}}}} \exp(-\tau |z|^p)
\; .$$ Notice that we can write $\Theta_p(\tau)^n$ as a summation over ${\ensuremath{\mathbb{Z}}}^n$, $$\Theta_p(\tau)^n = \sum_{{\ensuremath{\boldsymbol{z}}} \in {\ensuremath{\mathbb{Z}}}^n} \exp(-\tau \|{\ensuremath{\boldsymbol{z}}}\|_p^p)
\; .$$ In particular, for any radius $r > 0$ and $\tau > 0$, we have $$\Theta_p(\tau)^n \geq \sum_{\stackrel{{\ensuremath{\boldsymbol{z}}} \in {\ensuremath{\mathbb{Z}}}^n}{\|{\ensuremath{\boldsymbol{z}}}\|_p \leq r}} \exp(-\tau \|{\ensuremath{\boldsymbol{z}}}\|_p^p) \geq \exp(-\tau r^p) N_p({\ensuremath{\mathbb{Z}}}^n, r, {\ensuremath{\boldsymbol{0}}})
\; .$$ Rearranging and taking the minimum over all $\tau > 0$, we see that $$\label{eq:tau_bound_on_integer_points_centered}
N_p({\ensuremath{\mathbb{Z}}}^n, r, {\ensuremath{\boldsymbol{0}}}) \leq \min_{\tau >0 } \exp(\tau r^p) \Theta_p(\tau)^n
\; .$$ (It is easy to see that the minimum is in fact achieved.) In Section \[sec:integer\_points\], we will show that this upper bound is actually quite tight (even in the more general setting of shifted balls). Here, we use this bound to prove the following theorem.
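As a quick numerical sanity check of this counting bound, one can compare it against an exact enumeration on a toy instance; the grid search over $\tau$, the series truncation, and the instance parameters below are ours:

```python
import math
from itertools import product

def theta_p(tau, p, terms=100):
    """Theta_p(tau) = sum over z in Z of exp(-tau * |z|^p), truncated."""
    return 1 + 2 * sum(math.exp(-tau * z**p) for z in range(1, terms))

def theta_bound(n, r, p):
    """min_{tau > 0} exp(tau * r^p) * Theta_p(tau)^n, by grid search."""
    return min(math.exp(tau * r**p) * theta_p(tau, p)**n
               for tau in (i / 100 for i in range(1, 2000)))

def exact_count(n, r, p):
    """N_p(Z^n, r, 0), by enumerating the box [-floor(r), floor(r)]^n."""
    R = int(r)
    return sum(1 for z in product(range(-R, R + 1), repeat=n)
               if sum(abs(c)**p for c in z) <= r**p)

n, r, p = 4, 2.0, 2
print(exact_count(n, r, p))          # 89 integer points
print(round(theta_bound(n, r, p)))   # upper bound from the theta function
```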
\[thm:SETH\_hardness\_centered\_theta\] For any constant integer $k \geq 2$, the following holds for all but finitely many constants $p > p_0$. There is an efficient randomized reduction from Max-$k$-SAT on $n$ variables to ${{\ensuremath{\mathrm{SVP}} }}_p$ on a lattice of rank $\ceil{C_p n + \log^2 n}$, where $$C_p := \frac{1}{1-\log_2W_p} \qquad \text{ and } \qquad W_p := \min_{\tau > 0} \exp(\tau/2^p) \Theta_p(\tau)
\; .$$ Here, $p_0 \approx 2.13972$ is the unique solution to the equation $W_{p_0} = 2$. Moreover, when $k = 2$, this holds for all $p > p_0$, and for any $k \geq 2$, this holds for all odd integers $p \geq 3$.
In particular, for every ${\varepsilon}> 0$, for all but finitely many $p > p_0$ (including all odd integers $p \geq 3$) there is no $2^{n/(C_p + {\varepsilon})}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_p$ unless SETH is false.
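The constants $W_p$ and $C_p$ and the threshold $p_0$ can be approximated numerically; a sketch using simple grid search and bisection (all numerical parameters below are ours):

```python
import math

def W(p, grid=2000, tau_max=20.0, terms=60):
    """W_p = min_{tau > 0} exp(tau / 2^p) * Theta_p(tau), approximated
    by a grid search over tau in (0, tau_max]."""
    def f(tau):
        theta = 1 + 2 * sum(math.exp(-tau * z**p) for z in range(1, terms))
        return math.exp(tau / 2**p) * theta
    return min(f(tau_max * i / grid) for i in range(1, grid + 1))

def C(p):
    return 1 / (1 - math.log2(W(p)))

# Bisect for the threshold p0 solving W(p0) = 2 (W is decreasing in p
# on this interval).
lo, hi = 2.0, 3.0
for _ in range(30):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if W(mid) > 2 else (lo, mid)
print((lo + hi) / 2)   # p0, approximately 2.13972
print(C(3))            # C_3, the rank blow-up for p = 3
```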
Let $n^\dagger := \ceil{C_p n + \log^2 n} - n - 1$. Then, by Corollary \[cor:kSAT\_to\_beta\_CVP\] together with Theorem \[thm:sparsification\_reduction\], it suffices to show that $$N_p({\ensuremath{\mathbb{Z}}}^{n+n^\dagger}, \widehat{r}, {\ensuremath{\boldsymbol{0}}}) \leq 2^{n^\dagger-10}/\sqrt{n+n^\dagger}$$ for sufficiently large $n$, where $\widehat{r} := (n+n^\dagger + 2^{p+1})^{1/p}/2$.
Let $\tau_p > 0$ be such that $W_p = \exp(\tau_p/2^p) \Theta_p(\tau_p)$. (One can check that such a $\tau_p$ exists, e.g., by differentiating $\exp(\tau/2^p) \Theta_p(\tau)$ with respect to $\tau$.) By Eq. \[eq:tau\_bound\_on\_integer\_points\_centered\], we have $$N_p({\ensuremath{\mathbb{Z}}}^{n+n^\dagger}, \widehat{r}, {\ensuremath{\boldsymbol{0}}}) \leq \exp(\tau_p \widehat{r}^p) \Theta_p(\tau_p)^{n + n^\dagger} \leq \exp(2\tau_p) \cdot W_p^{n + n^\dagger} \leq \exp(2\tau_p + 1) \cdot 2^{n^\dagger - \log^2 n} \cdot W_p^{\log^2 n}
\; .$$ The result follows by noting that $W_p < 2$ so that for sufficiently large $n$, $(2/W_p)^{\log^2 n} \geq \exp(2\tau_p + 20) \sqrt{n+n^\dagger}$.
Finally, we compute a simple bound on $C_p$. In particular, this implies the claim that $C_p \to 1$ as $p \to \infty$.
For any $p \geq 3$, we have $$C_p < \frac{1}{1-2^{-p}(p +\log_2(3 e))}
\; .$$
We have $$W_p := \min_{\tau > 0} \exp(\tau/2^p) \Theta_p(\tau) = \min_{x > 1} x^{2^{-p}} \cdot (1 + 2x^{-1} + 2x^{-2^p} + 2x^{-3^p} + \cdots)
\; ,$$ where we have substituted $x := e^{\tau}$. Fix $x := 3 \cdot 2^{p}$. Then, we have $$W_p \leq x^{2^{-p}} \cdot (1 + 2x^{-1} + 2x^{-2^p} + 2x^{-3^p} + \cdots ) < x^{2^{-p}} \cdot (1 + 3x^{-1}) = x^{2^{-p}} \cdot (1+ 2^{-p})
\; .$$ Therefore, $$\log_2 W_p < 2^{-p}\log_2 x + \log_2 (1+2^{-p}) < 2^{-p}(p +\log_2(3 e))
\; ,$$ so that $$C_p = \frac{1}{1-\log_2 W_p} < \frac{1}{1-2^{-p}(p +\log_2(3 e))}
\; ,$$ as needed.
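One can also check this bound numerically for small odd $p$, using the same grid-search approximation of $W_p$ as before (numerical parameters ours):

```python
import math

def W(p, grid=2000, tau_max=20.0, terms=60):
    """Grid-search approximation of W_p = min_tau exp(tau/2^p)*Theta_p(tau)."""
    def f(tau):
        theta = 1 + 2 * sum(math.exp(-tau * z**p) for z in range(1, terms))
        return math.exp(tau / 2**p) * theta
    return min(f(tau_max * i / grid) for i in range(1, grid + 1))

for p in (3, 5, 7, 9):
    C_p = 1 / (1 - math.log2(W(p)))
    bound = 1 / (1 - 2**(-p) * (p + math.log2(3 * math.e)))
    assert C_p < bound
    print(p, round(C_p, 4), round(bound, 4))  # both tend to 1 as p grows
```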
Gap-ETH-hardness via a gadget {#sec:Gap-ETH}
=============================
The following theorem shows how to use a certain gadget lattice ${\mathcal{L}}^\dagger \subset {\ensuremath{\mathbb{R}}}^{d^\dagger}$ and target ${\ensuremath{\boldsymbol{t}}}^\dagger \in {\ensuremath{\mathbb{R}}}^{d^\dagger}$ with certain properties to reduce ${{\ensuremath{\mathrm{ExactSetCover}} }}_\eta$ to $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_p$. In particular, the ratio of the number of “close points” in ${\mathcal{L}}^\dagger$ to ${\ensuremath{\boldsymbol{t}}}^\dagger$ compared to the number of “short points” in ${\mathcal{L}}^\dagger$ should be larger than the total number of short points in ${\ensuremath{\mathbb{Z}}}^m$. (For $p > 2$, we construct such a gadget in Section \[sec:integer\_points\] that will be sufficient to prove the Gap-ETH-hardness of ${{\ensuremath{\mathrm{SVP}} }}_p$. For $1 \leq p \leq 2$, we do not know of such a gadget, but in Section \[sec:gapeth\_l2\], we will show that one exists under plausible conjectures.)
For $1 \leq p \leq \infty$, a lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^n$, and radius $r > 0$, we define the maximal density at radius $r$ of ${\mathcal{L}}$ as $$D_p({\mathcal{L}}, r) := \max_{{\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^n} N_p({\mathcal{L}}, r, {\ensuremath{\boldsymbol{t}}})
\; .$$ We observe the trivial fact that $D_p({\mathcal{L}}, r)$ is non-decreasing in $r$.
\[thm:ESC\_to\_CVP\] For any $p \geq 1$, constant $\eta \in (0,1)$ and $\gamma \ge 1$, there is a Karp reduction from ${{\ensuremath{\mathrm{ExactSetCover}} }}_\eta$ on $m$ sets with size bound $d$ to $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_{p,\gamma}$ on a lattice of rank $m + n^\dagger$ that requires as auxiliary input a gadget consisting of a lattice ${\mathcal{L}}^\dagger \subset {\ensuremath{\mathbb{R}}}^{d^\dagger}$ of rank $n^\dagger$, target ${\ensuremath{\boldsymbol{t}}}^\dagger \in {\ensuremath{\mathbb{R}}}^{d^\dagger}$, and distances $r \geq d^{1/p}$ and $s > 0$ for any $$A \geq N_p({\ensuremath{\mathbb{Z}}}^m, r^*, {\ensuremath{\boldsymbol{0}}}) \cdot \big(N_p({{\mathcal L}}^\dagger, r^*, {\ensuremath{\boldsymbol{0}}})
+ (r^*/s) \cdot D_{p}({\mathcal{L}}^\dagger, ((r^*)^p - d)^{1/p}) \big)
\; ,$$ and $G \leq N_p({{\mathcal L}}^\dagger, (r^p - \eta d)^{1/p}, {\ensuremath{\boldsymbol{t}}}^\dagger)$, where $r^* := \gamma(r^p + s^p)^{1/p}$.
The reduction takes as input $S_1, \cdots, S_m \subseteq U= \{u_1, \ldots, u_k\}$ with $\bigcup S_i = U $, size bound $d \leq m$, a lattice ${\mathcal{L}}^\dagger \subset {\ensuremath{\mathbb{R}}}^{d^\dagger}$ with basis ${\ensuremath{\mathbf{B}}}^\dagger$, target ${\ensuremath{\boldsymbol{t}}}^\dagger \in {\ensuremath{\mathbb{R}}}^{d^\dagger}$, and distances $r, s > 0$ and behaves as follows. We first define the intermediate ${{\ensuremath{\mathrm{CVP}} }}$ instance consisting of the lattice $\widehat{{\mathcal{L}}} := {\mathcal{L}}(\widehat{{\ensuremath{\mathbf{B}}}}) \subset {\ensuremath{\mathbb{R}}}^{m + k}$ and target $\widehat{{\ensuremath{\boldsymbol{t}}}} \in {\ensuremath{\mathbb{R}}}^{m + k}$ given by $$\widehat{{\ensuremath{\mathbf{B}}}} := (\widehat{{\ensuremath{\boldsymbol{b}}}}_1, \ldots, \widehat{{\ensuremath{\boldsymbol{b}}}}_m) =
\begin{array}{c}
\begin{array}{cccccccccc}
S_1 \cdots S_j& \cdots & \cdots & S_m & & & & &
\end{array} \\
\begin{array}{c|cccc|cc|c|c}
\cline{2-5}\cline{8-8}
u_1& & \vdots & & & & & r^* & \\
\vdots & & \vdots & & & & & r^*& \\
u_i& \cdots & r^* &\text{ if }u_i \in S_j & & & & \vdots& \\
\vdots & & 0& \text{ otherwise}& & & & \vdots& \\
\vdots & & & & & & & \vdots& =\widehat{{\ensuremath{\boldsymbol{t}}}}\\
u_k & & & & & & & r^*& \\
\cline{2-5} \cline{8-8}
& 1& & & & & & & \\
& & \ddots& & & & & & \\
& & & \ddots& & & & & \\
& & & & 1& & & & \\
\cline{2-5}\cline{8-8}
\end{array}\\
\end{array}
\;.$$ The reduction then constructs the $(A,G)\text{-}{{\ensuremath{\mathrm{CVP}} }}$ instance consisting of a lattice ${\mathcal{L}}:= {\mathcal{L}}({\ensuremath{\mathbf{B}}}) \subset {\ensuremath{\mathbb{R}}}^{m + k + d^\dagger}$, ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^{m + k + d^\dagger}$, and the distances $r,s$, where $${\ensuremath{\mathbf{B}}}:=
\begin{pmatrix}
\widehat{{\ensuremath{\mathbf{B}}}} & 0 \\
0 & {\ensuremath{\mathbf{B}}}^\dagger
\end{pmatrix}
\; ,$$ and $${\ensuremath{\boldsymbol{t}}} :=
\begin{pmatrix}
\widehat{{\ensuremath{\boldsymbol{t}}}}\\
{\ensuremath{\boldsymbol{t}}}^\dagger
\end{pmatrix}
\; .$$ The reduction then outputs YES if the $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_p$ oracle on input $({\ensuremath{\mathbf{B}}}, {\ensuremath{\boldsymbol{t}}}, r, s)$ outputs YES, and NO, otherwise.
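A minimal numpy sketch of the intermediate basis $\widehat{{\ensuremath{\mathbf{B}}}}$ and target $\widehat{{\ensuremath{\boldsymbol{t}}}}$ (the encoding is ours, and the gadget block $({\ensuremath{\mathbf{B}}}^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger)$ is omitted):

```python
import numpy as np

def esc_to_cvp_basis(sets, k, r_star):
    """Build B_hat = [[r* Z], [I_m]] with Z[i][j] = 1 iff u_i in S_j,
    and the target t_hat = (r*, ..., r*, 0, ..., 0), as above."""
    m = len(sets)
    Z = np.array([[1.0 if i in S else 0.0 for S in sets] for i in range(k)])
    B_hat = np.vstack([r_star * Z, np.eye(m)])
    t_hat = np.concatenate([r_star * np.ones(k), np.zeros(m)])
    return B_hat, t_hat

# An exact cover ({0} and {1,2} cover U = {0,1,2}) matches t_hat exactly
# on the U-coordinates; the identity block then contributes one unit of
# l_p^p distance per set used, as in the YES direction of the proof.
B_hat, t_hat = esc_to_cvp_basis([{0}, {1, 2}, {0, 1}], k=3, r_star=10.0)
v = B_hat @ np.array([1.0, 1.0, 0.0])
print(np.sum(np.abs(v - t_hat)))  # l1 distance 2: one per set used
```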
Suppose the input is a YES instance, and let $i_1, \ldots, i_j$ with $j \leq \eta d$ be indices such that the $S_{i_\ell}$ are disjoint with $\bigcup S_{i_\ell} = U$. Let $\widehat{{\ensuremath{\boldsymbol{v}}}} := \widehat{{\ensuremath{\boldsymbol{b}}}}_{i_1} + \cdots + \widehat{{\ensuremath{\boldsymbol{b}}}}_{i_j}$ be the corresponding vector in $\widehat{{\mathcal{L}}}$. Notice that $\|\widehat{{\ensuremath{\boldsymbol{v}}}} - \widehat{{\ensuremath{\boldsymbol{t}}}}\|_p^p = j \leq \eta d$. Therefore, for any ${\ensuremath{\boldsymbol{v}}}^\dagger \in {\mathcal{L}}^\dagger$ with $\|{\ensuremath{\boldsymbol{v}}}^\dagger - {\ensuremath{\boldsymbol{t}}}^\dagger\|_p^p \leq r^p - \eta d$, the vector ${\ensuremath{\boldsymbol{v}}} := (\widehat{{\ensuremath{\boldsymbol{v}}}}, {\ensuremath{\boldsymbol{v}}}^\dagger)$ is in ${\mathcal{L}}$ with $\|{\ensuremath{\boldsymbol{v}}} - {\ensuremath{\boldsymbol{t}}}\|_p \leq r$. So, $$N_p({\mathcal{L}}, r, {\ensuremath{\boldsymbol{t}}}) \geq N_p({\mathcal{L}}^\dagger, (r^p - \eta d)^{1/p}, {\ensuremath{\boldsymbol{t}}}^\dagger) \geq G
\;,$$ as needed.
Now, suppose the input is a NO instance. Then, we wish to show that $$\sum_{\ell = 0}^{\gamma (r^p/s^p + 1)^{1/p}} N_p({{\mathcal L}}, (\gamma^p r^p - (\ell^p - \gamma^p) s^p)^{1/p}, \ell \cdot {\ensuremath{\boldsymbol{t}}}) \leq A
\; .$$ We first bound the $\ell = 0$ term as $$\begin{aligned}
N_p({{\mathcal L}}, r^*, {\ensuremath{\boldsymbol{0}}}) &\leq N_p( {\ensuremath{\mathbb{Z}}}^m \oplus {{\mathcal L}}^\dagger, r^*, {\ensuremath{\boldsymbol{0}}}) \leq N_p({\ensuremath{\mathbb{Z}}}^m, r^*, {\ensuremath{\boldsymbol{0}}}) \cdot N_p({\mathcal{L}}^\dagger, r^*, {\ensuremath{\boldsymbol{0}}})
\;.
\end{aligned}$$
Turning to the $\ell \geq 1$ terms, let ${\ensuremath{\boldsymbol{v}}} = (\widehat{{\ensuremath{\boldsymbol{v}}}}, {\ensuremath{\boldsymbol{v}}}^\dagger) \in {\mathcal{L}}$ with $\widehat{{\ensuremath{\boldsymbol{v}}}} \in \widehat{{\mathcal{L}}}$ and ${\ensuremath{\boldsymbol{v}}}^\dagger \in {\mathcal{L}}^\dagger$, and suppose that $\|{\ensuremath{\boldsymbol{v}}} - \ell \cdot {\ensuremath{\boldsymbol{t}}}\|_p^p \leq \gamma^p r^p - (\ell^p - \gamma^p)s^p$ for some $\ell \geq 1$. Let $\widehat{{\ensuremath{\boldsymbol{v}}}} = \sum_{i=1}^m z_i \widehat{{\ensuremath{\boldsymbol{b}}}}_i$. First, notice that $\|(z_1,\ldots, z_m)\|_p^p \leq \gamma^p r^p - (\ell^p - \gamma^p)s^p \leq (r^*)^p$ because of the “identity matrix gadget” at the bottom of $\widehat{{\ensuremath{\mathbf{B}}}}$. Furthermore, if there are at most $d$ non-zero $z_i$’s, say $z_{i_1}, \ldots, z_{i_{j}}$, then since the input is a NO instance, there must be an element $u \in U$ not contained in $S_{i_1} \cup \cdots \cup S_{i_j}$. Thus, the $u$th coordinate of $\widehat{{\ensuremath{\boldsymbol{v}}}} - \ell \cdot \widehat{{\ensuremath{\boldsymbol{t}}}}$ has absolute value at least $r^*$, and we cannot possibly have $\|{\ensuremath{\boldsymbol{v}}} - \ell \cdot {\ensuremath{\boldsymbol{t}}}\|_p^p \leq \gamma^p r^p - (\ell^p - \gamma^p) s^p$.
So, it must be the case that there are more than $d$ non-zero $z_i$. In particular, $\|(z_1,\ldots, z_m)\|_p^p \geq d$, so we must have $\|{\ensuremath{\boldsymbol{v}}}^\dagger - \ell \cdot {\ensuremath{\boldsymbol{t}}}^\dagger\|_p^p \leq \gamma^p r^p - d - (\ell^p - \gamma^p)s^p < (r^*)^p -d$. Therefore, $$\begin{aligned}
N_p({{\mathcal L}}, (\gamma^p r^p - (\ell^p - \gamma^p) s^p)^{1/p}, \ell \cdot {\ensuremath{\boldsymbol{t}}})
&\leq N_p({\ensuremath{\mathbb{Z}}}^m, r^*, {\ensuremath{\boldsymbol{0}}}) \cdot N_p({\mathcal{L}}^\dagger, ((r^*)^p -d)^{1/p}, \ell \cdot {\ensuremath{\boldsymbol{t}}}^\dagger)\\
&\leq N_p({\ensuremath{\mathbb{Z}}}^m, r^*, {\ensuremath{\boldsymbol{0}}}) \cdot D_p({\mathcal{L}}^\dagger, ((r^*)^p -d)^{1/p})
\; .
\end{aligned}$$ The result follows by noting that the total contribution of the terms with $\ell \geq 1$ is at most $r^*/s$ times this quantity (since there are at most $r^*/s$ such terms).
Our goal is now to construct a useful gadget $({\mathcal{L}}^\dagger, {\ensuremath{\boldsymbol{t}}}^\dagger, r, s)$ for Theorem \[thm:ESC\_to\_CVP\]. In particular, we wish to find a gadget with ${\mathrm{rank}}({\mathcal{L}}^\dagger) = O(n)$ and $G \gg A$. In the following rather technical lemma, we show that such a gadget exists if there exists any lattice with “an exponential gap between the number of close vectors and the number of short vectors.”
\[lem:exp\_gap\] Suppose that for some constants $p \ge 1$, ${\varepsilon}\in (0,1/2)$, and $\beta > 1$, the following holds. For every sufficiently large integer $n$, there exists a lattice ${\mathcal{L}}_n \subset {\ensuremath{\mathbb{R}}}^{d_n}$ with ${\mathrm{rank}}({\mathcal{L}}_n) = n$ and $d_n \le {\mathrm{poly}}(n)$, target ${\ensuremath{\boldsymbol{t}}}_n \in {\ensuremath{\mathbb{R}}}^{d_n}$, and radius $r_n > 0$ such that $$\label{eq:more_close_than_short}
N_p({\mathcal{L}}_n, (1-{\varepsilon})^{1/p} \cdot r_n, {\ensuremath{\boldsymbol{t}}}_n) \geq \beta^n \cdot N_p({\mathcal{L}}_n, r_n, {\ensuremath{\boldsymbol{0}}})
\; .$$ Then, for any constants $C \geq 1$ and $\eta \in (2{\varepsilon}^2,1)$, there exist constants $\gamma > 1$, $C^\dagger > 0$ such that the following holds.
1. \[item:non-uniform\] For any sufficiently large integers $m$ and $d$ satisfying $m/C \leq d \leq C m$, there exist distances $r, s > 0$ and ${\ensuremath{\boldsymbol{t}}}^\dagger \in {\ensuremath{\mathbb{R}}}^{d_{n^\dagger}}$ for $n^\dagger := \ceil{C^\dagger m}$ such that $$\label{eq:good_gadget}
N_p({\ensuremath{\mathbb{Z}}}^m, r^*, {\ensuremath{\boldsymbol{0}}}) \cdot \big(N_p({{\mathcal L}}^\dagger, r^*, {\ensuremath{\boldsymbol{0}}})
+ (r^*/s) \cdot D_{p}({\mathcal{L}}^\dagger, ((r^*)^p - d)^{1/p}) \big) < 2^{-m} \cdot N_p({{\mathcal L}}^\dagger, (r^p - \eta d)^{1/p}, {\ensuremath{\boldsymbol{t}}}^\dagger)
\; ,$$ where $r^* := \gamma (r^p + s^p)^{1/p}$ and ${\mathcal{L}}^\dagger := \alpha {\mathcal{L}}_{n^\dagger}$ for some $\alpha > 0$.
2. \[item:uniform\] If we also have $$\label{eq:target_not_dumb}
N_p({\mathcal{L}}_n, (1-{\varepsilon})^{1/p}r_n, {\ensuremath{\boldsymbol{t}}}_n) \geq \beta^n \cdot D_p({\mathcal{L}}_n, (1-{\varepsilon}/\sqrt{\eta})^{1/p}r_n)
\; ,$$ then we can take $r = (1-{\varepsilon}/2)^{1/p}\alpha r_{n^\dagger}$, ${\ensuremath{\boldsymbol{t}}}^\dagger = \alpha {\ensuremath{\boldsymbol{t}}}_{n^\dagger}$, $s = 1$, and $\alpha = (2\eta d/({\varepsilon}r_{n^\dagger}^p))^{1/p}$.
We prove Item \[item:uniform\] first. We take ${\mathcal{L}}^\dagger = \alpha {\mathcal{L}}_{n^\dagger}$, $r = (1-{\varepsilon}/2)^{1/p}\alpha r_{n^\dagger}$, ${\ensuremath{\boldsymbol{t}}}^\dagger = \alpha {\ensuremath{\boldsymbol{t}}}_{n^\dagger}$, $s = 1$, and $\alpha = (2\eta d/({\varepsilon}r_{n^\dagger}^p))^{1/p}$, as above. Notice that $r^p = 2(1-{\varepsilon}/2)\eta d/{\varepsilon}$. We choose $$ \gamma^p = 1+ \min \{ 1/100 , \ (1/\sqrt{\eta}-1)^2/2\}\cdot {\varepsilon}> 1
\; .$$ We assume without loss of generality that $\eta d \geq 10$ and that ${\varepsilon}d(1-\sqrt{\eta})^2 \geq 2\gamma^p$.
Notice that $r^* = O(m^{1/p})$. Thus, there is some constant ${\widetilde {C}}$ such that $$N_p({\ensuremath{\mathbb{Z}}}^m, r^*, {\ensuremath{\boldsymbol{0}}}) \leq {\widetilde {C}}^m
\; .$$ (This follows, e.g., from Eq. .) Furthermore, notice that $$(r^*)^p = \gamma^p (r^p + 1) \leq ((1-{\varepsilon}/2) \alpha^p r_{n^\dagger}^p + 1) \cdot (1+{\varepsilon}/100) \leq \alpha^p r_{n^\dagger}^p
\; ,$$ where the last inequality uses the assumption that ${\varepsilon}\alpha^p r_{n^\dagger}^p/2 = \eta d \geq 10$. Therefore, by Eq. , $$N_p({\mathcal{L}}^\dagger, r^*, {\ensuremath{\boldsymbol{0}}}) \leq \beta^{-C^\dagger m} N_p({\mathcal{L}}_{n^\dagger}, (1-{\varepsilon})^{1/p} \cdot r_{n^\dagger}, {\ensuremath{\boldsymbol{t}}}_{n^\dagger}) = \beta^{-C^\dagger m} N_p({{\mathcal L}}^\dagger, (r^p - \eta d)^{1/p}, {\ensuremath{\boldsymbol{t}}}^\dagger)
\; ,$$ where the last equality uses the fact that $
\alpha^p (1-{\varepsilon})r_{n^\dagger}^p = r^p \cdot (1-{\varepsilon})/(1-{\varepsilon}/2) = r^p - \eta d
$. Finally, we note that $$\begin{aligned}
(r^*)^p - d
&\leq 2\eta d \cdot \frac{(1-{\varepsilon}/2) \cdot (1+ (1/\sqrt{\eta}-1)^2\cdot {\varepsilon}/2)}{{\varepsilon}} - d + \gamma^p \\
&= 2\eta d \cdot \frac{(1-{\varepsilon}/\sqrt{\eta}) }{{\varepsilon}} - \frac{{\varepsilon}d}{2} \cdot (1-\sqrt{\eta})^2 + \gamma^p\\
&\leq 2\eta d \cdot \frac{(1-{\varepsilon}/\sqrt{\eta}) }{{\varepsilon}} \\
&= (1-{\varepsilon}/\sqrt{\eta}) \cdot \alpha^p r_{n^\dagger}^p
\; .
\end{aligned}$$ Therefore, applying Eq. , we have $$D_{p}({\mathcal{L}}^\dagger, ((r^*)^p - d)^{1/p}) \leq D_p({\mathcal{L}}_{n^\dagger}, (1-{\varepsilon}/\sqrt{\eta})^{1/p}r_{n^\dagger}) \leq \beta^{-C^\dagger m} N_p({\mathcal{L}}_{n^\dagger}, (1-{\varepsilon})r_{n^\dagger}, {\ensuremath{\boldsymbol{t}}}_{n^\dagger})
\; .$$ Putting everything together, we see that it suffices to take $C^\dagger > 0$ to be a large enough constant so that ${\widetilde {C}}^m\beta^{-C^\dagger m} < 2^{-m}/(1+r^*/s)$.
We now move to proving Item \[item:non-uniform\]. By Item \[item:uniform\], it suffices to find some new family of targets ${\ensuremath{\boldsymbol{t}}}_1',{\ensuremath{\boldsymbol{t}}}_2',\ldots,$ and radii $r_1',r_2',\ldots,$ satisfying Eqs. and , perhaps for some new constants ${\varepsilon}' \in (0,1)$ and $\beta' > 1$. We would of course like to simply take ${\ensuremath{\boldsymbol{t}}}_n' = {\ensuremath{\boldsymbol{t}}}_n$ and $r_n' = r_n$, but we have to worry about the possibility that $D_{p}({\mathcal{L}}_n, (1-{\varepsilon}/\sqrt{\eta})^{1/p} r_n)$ is not much smaller than $N_p({\mathcal{L}}_n, (1-{\varepsilon})^{1/p}r_n, {\ensuremath{\boldsymbol{t}}}_n)$. Intuitively, this can only happen if either (1) $D_{p}({\mathcal{L}}_n, (1-{\varepsilon})^{1/p}r_n) \gg N_p({\mathcal{L}}_n, (1-{\varepsilon})^{1/p}r_n, {\ensuremath{\boldsymbol{t}}}_n)$, in which case we should clearly replace ${\ensuremath{\boldsymbol{t}}}_n$ with ${\ensuremath{\boldsymbol{t}}}_n'$ such that $N_p({\mathcal{L}}_n, (1-{\varepsilon})^{1/p}r_n, {\ensuremath{\boldsymbol{t}}}_n') = D_p({\mathcal{L}}_n, (1-{\varepsilon})^{1/p}r_n)$; or (2) $D_p({\mathcal{L}}_n, (1-{\varepsilon})^{1/p}r_n') \approx D_p({\mathcal{L}}_n, (1-{\varepsilon})^{1/p}r_n)$, for some $r_n' < r_n$, in which case we should clearly replace $r_n$ with $r_n'$. So, intuitively, as long as ${\ensuremath{\boldsymbol{t}}}_n$ and $r_n$ are “reasonable,” we should be done.
To make this rigorous, let $N_n := N_p({\mathcal{L}}_n, r_n, {\ensuremath{\boldsymbol{0}}})$, ${\varepsilon}_0 := {\varepsilon}$, and $r_n^{(-1)} = r_n$. For $i = 1,\ldots, \ell+1$, let ${\varepsilon}_i := {\varepsilon}_{i-1}/\sqrt{\eta}$ and $r_n^{(i)} := (1-{\varepsilon}_{i-1}) \cdot r_n^{(i-1)} $. Here, $\ell$ is the largest integer such that $r_n^{(\ell)} > r_n/2$. In particular, $\ell$ is a constant. Let $N_n^{(i)} := D_p({{\mathcal L}}_n, r_n^{(i)})$. It follows that $N_{n}^{(\ell+1)} \le N_n$, since if there were $N_n+1$ distinct lattice vectors ${\ensuremath{\boldsymbol{v}}}_1, \ldots, {\ensuremath{\boldsymbol{v}}}_{N_n+1}$ within distance $r_n/2$ of some vector ${\ensuremath{\boldsymbol{t}}}$ then, by the triangle inequality, there would necessarily be $N_n+1$ distinct lattice vectors ${\ensuremath{\boldsymbol{v}}}_i - {\ensuremath{\boldsymbol{v}}}_1$ for $i = 1,\ldots, N_n +1$ of length at most $r_n$, contradicting the definition of $N_n$.
By Eq. , we have $N_n^{(0)} \ge \beta^{n} \cdot N_n$. Thus, there exists an $i \in \{0, \ldots, \ell\}$ such that $$\frac{N_n^{(i)}}{N_n^{(i+1)}} \ge (\beta')^{n} \;,$$ and $$\frac{N_n^{(i)}}{N_n} \ge (\beta')^{n} \;,$$ where $\beta' := \beta^{1/(\ell+1)} > 1$. Fix such an $i$. Then, taking ${\varepsilon}' := {\varepsilon}_{i-1}$, $r_n' := r_n^{(i-1)}$, and ${\ensuremath{\boldsymbol{t}}}_n'$ to be any vector satisfying $N_p({{\mathcal L}}_n, r_n^{(i)}, {\ensuremath{\boldsymbol{t}}}_n') = D_p({{\mathcal L}}_n, r_n^{(i)})$ gives the result.
Putting everything together, we get the following conditional result.
\[cor:gadget\_to\_hardness\] Suppose that for some constants $p \ge 1$, ${\varepsilon}\in (0,1/2)$, and $\beta > 1$, the following holds. For every sufficiently large integer $n$, there exists a lattice ${\mathcal{L}}_n \subset {\ensuremath{\mathbb{R}}}^{d_n}$ with ${\mathrm{rank}}({\mathcal{L}}_n) = n$ and $d_n \le {\mathrm{poly}}(n)$, target ${\ensuremath{\boldsymbol{t}}}_n \in {\ensuremath{\mathbb{R}}}^{d_n}$, and radius $r_n > 0$ such that $$\label{eq:more_close_than_short_corollary}
N_p({\mathcal{L}}_n, (1-{\varepsilon})^{1/p} \cdot r_n, {\ensuremath{\boldsymbol{t}}}_n) \geq \beta^n \cdot N_p({\mathcal{L}}_n, r_n, {\ensuremath{\boldsymbol{0}}})
\; .$$ Then for any constants $C \geq 1$ and $\eta \in (0,1)$, there is a constant $\gamma > 1$ such that there is an efficient (non-uniform) reduction from Gap-$3$-${{\ensuremath{\mathrm{SAT}} }}_{\eta}^{\le C}$ on $n$ variables to ${{\ensuremath{\mathrm{SVP}} }}_{p, \gamma}$ on a lattice of rank $O(n)$ and dimension $O(n)+d_{O(n)}$.
Furthermore, if ${\mathcal{L}}_n$, ${\ensuremath{\boldsymbol{t}}}_n$, and $r_n$ are computable in time ${\mathrm{poly}}(n)$ and, for any constant $\delta \in ({\varepsilon}, 1)$, there exists a $\beta' > 1$ such that $$\label{eq:target_not_dumb_corollary}
N_p({\mathcal{L}}_n, (1-{\varepsilon})^{1/p}r_n, {\ensuremath{\boldsymbol{t}}}_n) \geq (\beta')^n \cdot D_p({\mathcal{L}}_n, (1-{\varepsilon}/\delta)^{1/p}r_n)
\; ,$$ then we may replace the non-uniform reduction with a randomized reduction.
By Theorem \[thm:SAT\_to\_ESC\], it suffices to show a reduction from ${{\ensuremath{\mathrm{ExactSetCover}} }}_{\eta'}$ with $d = n/\eta'$ and $m \in [n, C'n]$ for some constants $C' \geq 1$ and $\eta' \in (0,1)$. By Theorem \[thm:sparsification\_reduction\], it suffices to reduce this to $(A, G)\text{-}{{\ensuremath{\mathrm{CVP}} }}_{p,\gamma}$ on a rank $O(n)$ lattice with $A \leq 2^{-m} G$. By Theorem \[thm:ESC\_to\_CVP\], such a reduction exists, but it requires as auxiliary input a gadget lattice of rank $O(n)$ satisfying Eq. .
By Item \[item:non-uniform\] of Lemma \[lem:exp\_gap\], such a gadget exists for sufficiently large $n$ if Eq. holds. Therefore, we get a non-uniform reduction that uses this gadget as advice.
To prove the “furthermore,” it suffices to show that the additional assumptions are sufficient to make this gadget efficiently computable. Indeed, by Item \[item:uniform\] of Lemma \[lem:exp\_gap\], if Eq. holds with $\delta = 1/\sqrt{\eta'}$, then we may take the gadget to be a scaling of ${\mathcal{L}}_{n^\dagger}$, ${\ensuremath{\boldsymbol{t}}}_{n^\dagger}$, and $r_{n^\dagger}$ for an appropriate choice of $n^\dagger = O(n)$. The scaling and $n^\dagger$ are clearly efficiently computable. So, if both Eqs. and hold and the family is efficiently computable, then the advice is indeed efficiently computable, as needed.
Gap-ETH Hardness of SVP\_p for p > 2 {#sec:gapeth_lp}
---------------------------------------
In order to prove Gap-ETH hardness of ${{\ensuremath{\mathrm{SVP}} }}_p$, we will need the following lemma. The proof can be found in Section \[sec:integer\_points\].
\[lem:integer\_lattice\_bound\] For any constants $p > 2$ and $\delta\in (0,1)$, there exist (efficiently computable) constants $\beta > 1$, $t \in (0,1/2]$, $C_r > 0$, and ${\varepsilon}\in (0,\delta)$, such that for any positive integer $n$, $$N_p({\ensuremath{\mathbb{Z}}}^n, (1-{\varepsilon})^{1/p} \cdot r, {\ensuremath{\boldsymbol{t}}} ) \geq \beta^n \cdot N_p({\ensuremath{\mathbb{Z}}}^n, r, {\ensuremath{\boldsymbol{0}}})
\; ,$$ where $r := C_r n^{1/p}$ and ${\ensuremath{\boldsymbol{t}}} = (t,t,\ldots, t) \in {\ensuremath{\mathbb{R}}}^n$, and $$N_p({\ensuremath{\mathbb{Z}}}^n, (1-{\varepsilon})^{1/p}r, {\ensuremath{\boldsymbol{t}}}) \geq \beta^n \cdot D_p({\ensuremath{\mathbb{Z}}}^n, (1-{\varepsilon}/\delta)^{1/p}r)
\; .$$
\[cor:Gap3SAT\_to\_SVP\] For any constants $p > 2$, $C' \geq 1$, and $\eta' \in (0,1)$, there exists a constant $\gamma > 1$ such that there is an efficient (randomized) reduction from Gap-$3$-${{\ensuremath{\mathrm{SAT}} }}_{\eta'}^{\le C'}$ on $n$ variables to ${{\ensuremath{\mathrm{SVP}} }}_{p, \gamma}$ on a lattice of rank and dimension $O(n)$.
In particular, for some constant $\gamma > 1$, there is no $2^{o(n)}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_{p, \gamma}$ unless (randomized) Gap-ETH is false.
Simply combine Corollary \[cor:gadget\_to\_hardness\] with Lemma \[lem:integer\_lattice\_bound\], with ${\mathcal{L}}_n := {\ensuremath{\mathbb{Z}}}^n$, $r_n := C_r n^{1/p}$, and ${\ensuremath{\boldsymbol{t}}}_n := (t,t,\ldots, t) \in {\ensuremath{\mathbb{R}}}^n$.
Gap-ETH-hardness of SVP\_2 under a certain assumption {#sec:gapeth_l2}
-----------------------------------------------------
We now show that, at least in the special case of the $\ell_2$ norm, we can simplify Corollary \[cor:gadget\_to\_hardness\] a bit further to get a relatively clean conditional hardness result. (See Theorem \[thm:kissing\_gives\_hardness\] below.) We focus on the $\ell_2$ norm because (1) we obtain the simplest statement in this case; and (2) hardness in the $\ell_2$ norm implies hardness in other norms. But, we mention in passing that qualitatively similar results hold for all $\ell_p$ norms.
\[lem:angle\] For any integer $n \ge 100$, let ${\ensuremath{\boldsymbol{u}}}\in {\ensuremath{\mathbb{R}}}^n$ be a fixed vector, and let ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^n$ be a uniformly random unit vector in the $\ell_2$ norm. Then for any $0 < \theta_1 < \theta_2 < \pi$, the probability that the angle between ${\ensuremath{\boldsymbol{u}}}$ and ${\ensuremath{\boldsymbol{t}}}$ is between $\theta_1$ and $\theta_2$ is at least $$\int_{\theta_1}^{\theta_2} \sin^{n-2} \theta \, {\rm d} \theta \;.$$
The probability density function of the angle $\theta$ is proportional to $\sin^{n-2} \theta$ [@CFJ13]. So, it suffices to show that the constant of proportionality is at least one. I.e., it suffices to show that the integral of this from $0$ to $\pi$ is at most one. Indeed, $$\begin{aligned}
\int_{0}^{\pi} \sin^{n-2} \theta \,{\rm d} \theta &= 2 \int_{0}^{\pi/2} \sin^{n-2} \theta \,{\rm d} \theta \\
&= 2 \int_{0}^{2\pi/5} \sin^{n-2} \theta \,{\rm d} \theta + 2 \int_{2\pi/5}^{\pi/2} \sin^{n-2} \theta \,{\rm d} \theta \\
&\le 2 \sin^{98}(2\pi/5) + 2 (\pi/2 - 2\pi/5) \\
&\le 1\;.
\end{aligned}$$ The result follows.
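As a quick numerical sanity check of the split used in the display above (illustrative only, not part of the proof; the step count in `sin_integral` is an arbitrary choice):

```python
# Check that the integral of sin^(n-2) over [0, pi] is at most one for n >= 100,
# via the explicit bound 2*sin^98(2*pi/5) + 2*(pi/2 - 2*pi/5) <= 1 from the proof.
import math

def sin_integral(n, steps=100_000):
    """Midpoint-rule approximation of the integral of sin^(n-2) over [0, pi]."""
    h = math.pi / steps
    return sum(math.sin((k + 0.5) * h) ** (n - 2) for k in range(steps)) * h

# The two terms of the explicit bound.
term1 = 2 * math.sin(2 * math.pi / 5) ** 98
term2 = 2 * (math.pi / 2 - 2 * math.pi / 5)
assert term1 + term2 <= 1

# The bound only improves as n grows, since sin^(n-2) shrinks pointwise on (0, pi).
for n in (100, 200, 500):
    assert sin_integral(n) <= 1
```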
\[cor:random\_vector\_close\] For any ${\varepsilon}, \delta \in (0,1/100)$ with ${\varepsilon}\leq \sqrt{\delta}/10$ and vector ${\ensuremath{\boldsymbol{v}}} \in {\ensuremath{\mathbb{R}}}^n$ with $n \geq 100$ and $1 \leq \|{\ensuremath{\boldsymbol{v}}}\|_2^2 \leq 1+\delta$, we have $$\Pr[\|{\ensuremath{\boldsymbol{v}}} - {\ensuremath{\boldsymbol{t}}}\|_2^2 \leq 1-{\varepsilon}] \geq \frac{{\varepsilon}}{2\sqrt{\delta (1 + \delta)}} \cdot \left(\frac{1 - 2{\varepsilon}- {\varepsilon}^2/\delta}{1+\delta}\right)^{n/2}
\; ,$$ where ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^n$ is a vector of $\ell_2$ norm $\sqrt{\delta}$ chosen uniformly at random.
Let $\theta$ be the angle between ${\ensuremath{\boldsymbol{v}}}$ and ${\ensuremath{\boldsymbol{t}}}$. We have $$\begin{aligned}
\Pr[\|{\ensuremath{\boldsymbol{v}}} - {\ensuremath{\boldsymbol{t}}} \|^2 \le 1 - {\varepsilon}] &= \Pr[\|{\ensuremath{\boldsymbol{v}}}\|_2^2 + \|{\ensuremath{\boldsymbol{t}}}\|_2^2 - 2 \|{\ensuremath{\boldsymbol{v}}}\|_2 \|{\ensuremath{\boldsymbol{t}}}\|_2 \cos \theta \le 1 - {\varepsilon}] \\
&\ge \Pr\Big{[} \cos \theta \ge \frac{ 1+\delta + \delta - 1 + {\varepsilon}}{2 \sqrt{\delta (1 + \delta)}}\Big{]} \\
&\geq \Pr\big{[} \cos \theta \ge (2\delta + {\varepsilon})/(2\sqrt{\delta(1 + \delta)})\big{]}\\
&\geq \int_{\arccos \left(\frac{\delta + {\varepsilon}}{\sqrt{\delta(1 + \delta)}}\right)}^{\arccos\left(\frac{2\delta + {\varepsilon}}{2\sqrt{\delta(1 + \delta)}}\right)} \sin^{n-2} \theta \,{\rm d} \theta\\
&\geq \left(\arccos\left(\frac{2\delta + {\varepsilon}}{2\sqrt{\delta(1 + \delta)}}\right)- \arccos \left(\frac{\delta + {\varepsilon}}{\sqrt{\delta(1 + \delta)}}\right) \right) \cdot \left(\frac{1 - 2{\varepsilon}- {\varepsilon}^2/\delta}{1+\delta}\right)^{n/2}\\
&\geq\frac{{\varepsilon}}{2\sqrt{\delta (1 + \delta)}} \cdot \left(\frac{1 - 2{\varepsilon}- {\varepsilon}^2/\delta}{1+\delta}\right)^{n/2}
\; ,
\end{aligned}$$ where the first inequality uses the fact that $x - a/x$ is an increasing function of $x$ if $a >0$, the second-to-last inequality uses the fact that $\sin(\arccos(x)) = \sqrt{1-x^2}$, and the last inequality uses the fact that $\frac{\partial}{\partial x} \arccos(x) = -(1-x^2)^{-1/2} \leq -1$.
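The two elementary facts used in the last steps can be verified numerically (illustrative only; the sample values of $\delta$ and ${\varepsilon}$ below are our own and merely satisfy the hypotheses):

```python
# Check: (i) the arccos difference is at least eps / (2 sqrt(delta(1+delta))),
# using |d/dx arccos(x)| >= 1; (ii) sin^2(arccos(x1)) = (1 - 2eps - eps^2/delta)/(1+delta).
import math

delta, eps = 0.005, 0.006  # sample values with eps <= sqrt(delta)/10
assert 0 < delta < 1 / 100 and 0 < eps < 1 / 100 and eps <= math.sqrt(delta) / 10

s = math.sqrt(delta * (1 + delta))
x1 = (delta + eps) / s            # cosine at the lower integration limit
x2 = (2 * delta + eps) / (2 * s)  # cosine at the upper integration limit
assert x2 < x1 < 1                # so arccos(x1) < arccos(x2)

# Length of the integration interval.
assert math.acos(x2) - math.acos(x1) >= eps / (2 * s)

# sin(arccos(x1))^2 equals (1 - 2 eps - eps^2/delta) / (1 + delta) exactly.
lhs = math.sin(math.acos(x1)) ** 2
rhs = (1 - 2 * eps - eps ** 2 / delta) / (1 + delta)
assert abs(lhs - rhs) < 1e-12
```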
\[cor:kissing\_gives\_local\_density\] For any lattice ${\mathcal{L}}\subset {\ensuremath{\mathbb{R}}}^n$ with $n \geq 100$, ${\varepsilon}, \delta \in (0,1/100)$ with ${\varepsilon}\leq \sqrt{\delta}/10$, radius $r > 0$, and target ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^n$, there exists a vector ${\ensuremath{\boldsymbol{t}}}' \in {\ensuremath{\mathbb{R}}}^n$ such that $$N_2({\mathcal{L}}, \sqrt{1-{\varepsilon}} \cdot r, {\ensuremath{\boldsymbol{t}}}') \geq \frac{{\varepsilon}}{2\sqrt{\delta (1 + \delta)}} \cdot \left(\frac{1 - 2{\varepsilon}- {\varepsilon}^2/\delta}{1+\delta}\right)^{n/2}
\cdot N_2({\mathcal{L}}, \sqrt{1+\delta} \cdot r, {\ensuremath{\boldsymbol{t}}})
\; .$$
Take ${\ensuremath{\boldsymbol{t}}}' \in {\ensuremath{\mathbb{R}}}^n$ to be a uniformly random vector that is at $\ell_2$ distance $\sqrt{\delta} \cdot r$ away from ${\ensuremath{\boldsymbol{t}}}$ as in Corollary \[cor:random\_vector\_close\]. Then, by the corollary, the expectation of $N_2({\mathcal{L}}, \sqrt{1-{\varepsilon}} \cdot r, {\ensuremath{\boldsymbol{t}}}')$ is at least the right-hand side of the above inequality, and the result follows.
We now show that a family of lattices with “surprisingly many” points in a ball is enough to instantiate Corollary \[cor:gadget\_to\_hardness\]. In more detail, notice that we expect $N_2({\mathcal{L}}, r, {\ensuremath{\boldsymbol{t}}})$ to “typically” be proportional to $r^n$, and in particular, we expect $N_2({\mathcal{L}}, r', {\ensuremath{\boldsymbol{t}}}) \lesssim (r'/r)^n \cdot N_2({\mathcal{L}}, r, {\ensuremath{\boldsymbol{0}}})$ for $r' > r$. We show that, in order to instantiate Corollary \[cor:gadget\_to\_hardness\], it suffices to find a family of lattices, radii, and targets such that $N_2({\mathcal{L}}, r', {\ensuremath{\boldsymbol{t}}}) \geq \beta^n \cdot (r'/r)^n \cdot N_2({\mathcal{L}}, r, {\ensuremath{\boldsymbol{0}}})$ for some constant $\beta > 1$ and $r' \leq O(r)$. For example, by taking $r = \lambda_1^{(2)}({\mathcal{L}}) - {\varepsilon}$, $r' = \lambda_1^{(2)}({\mathcal{L}})$, and ${\ensuremath{\boldsymbol{t}}} = {\ensuremath{\boldsymbol{0}}}$, we see that it would suffice to find a family of lattices with exponentially many non-zero vectors of minimal length in the $\ell_2$ norm—i.e., a family with exponentially large kissing number.
\[thm:kissing\_gives\_hardness\] Suppose that for some constants $\beta,\alpha > 1$, the following holds. For every sufficiently large integer $n$, there exists a lattice ${\mathcal{L}}_n \subset {\ensuremath{\mathbb{R}}}^n$, radii $0 < r_n \leq r_n' \leq \alpha r_n$, and a target ${\ensuremath{\boldsymbol{t}}}_n \in {\ensuremath{\mathbb{R}}}^n$ such that $$\label{eq:kissing}
N_2({\mathcal{L}}_n, r_n', {\ensuremath{\boldsymbol{t}}}_n) \geq \beta^n \cdot (r_n'/r_n)^n \cdot N_2({\mathcal{L}}_n, r_n, {\ensuremath{\boldsymbol{0}}})
\; .$$ Then for any constants $C \geq 1$ and $\eta \in (0,1)$, there is an efficient (non-uniform) reduction from Gap-$3$-${{\ensuremath{\mathrm{SAT}} }}_{\eta}^{\le C}$ on $n$ variables to ${{\ensuremath{\mathrm{SVP}} }}_{2, \gamma}$ on a lattice of rank and dimension $O(n)$, for some constant $\gamma > 1$.
In particular, if such a family of lattices and radii exists, then for some constant $\gamma > 1$, there is no $2^{o(n)}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_{2,\gamma}$ unless (non-uniform) Gap-ETH is false.
We first prove the theorem under the assumption that $r_n'/r_n < 1+1/400$. We may assume without loss of generality that $r_n'/r_n > \beta^{1/2}$. (Otherwise, we may replace $r_n'$ with $r_n \cdot \beta^{1/2}$ and $\beta$ with $\beta^{1/2}$. Then, clearly $r_n'/r_n > \beta^{1/2}$ and Eq. still holds.)
Let $s_n := r_n \cdot \sqrt{1-{\varepsilon}'}/\sqrt{1-{\varepsilon}}$ for some small constants $0 < {\varepsilon}' < {\varepsilon}< 1/100$ to be chosen later. By Corollary \[cor:gadget\_to\_hardness\], it suffices to find a ${\ensuremath{\boldsymbol{t}}}_n' \in {\ensuremath{\mathbb{R}}}^n$ such that $$\label{eq:more_close_than_short_one_more_time}
N_2({\mathcal{L}}_n, \sqrt{1-{\varepsilon}'} \cdot r_n, {\ensuremath{\boldsymbol{t}}}_n') = N_2({\mathcal{L}}_n, \sqrt{1-{\varepsilon}} \cdot s_n, {\ensuremath{\boldsymbol{t}}}_n') \geq (\beta')^n \cdot N_2({\mathcal{L}}_n, r_n, {\ensuremath{\boldsymbol{0}}})
\; ,$$ for some constant $\beta' > 1$ and sufficiently large $n$. Let $\delta := (r_n'/s_n)^2 - 1 \in (\beta^{1/2}-1,1/100)$. By Corollary \[cor:kissing\_gives\_local\_density\], as long as ${\varepsilon}< \sqrt{\beta^{1/2} - 1}/10$, we see that there exists a ${\ensuremath{\boldsymbol{t}}}_n' \in {\ensuremath{\mathbb{R}}}^n$ with $$\begin{aligned}
N_2({\mathcal{L}}_n, \sqrt{1-{\varepsilon}} \cdot s_n, {\ensuremath{\boldsymbol{t}}}_n')
&\geq \frac{{\varepsilon}}{2\sqrt{\delta (1 + \delta)}} \cdot \left(\frac{1 - 2{\varepsilon}- {\varepsilon}^2/\delta}{1+\delta}\right)^{n/2}
\cdot N_2({\mathcal{L}}_n, \sqrt{1+\delta} \cdot s_n, {\ensuremath{\boldsymbol{t}}}_n)\\
&= \frac{{\varepsilon}}{2\sqrt{\delta (1 + \delta)}} \cdot \left(\frac{1 - 2{\varepsilon}- {\varepsilon}^2/\delta}{1+\delta}\right)^{n/2}
\cdot N_2({\mathcal{L}}_n, r_n', {\ensuremath{\boldsymbol{t}}}_n)\\
&\geq \frac{{\varepsilon}}{2\sqrt{\delta (1 + \delta)}} \cdot \left(\frac{1 - 2{\varepsilon}- {\varepsilon}^2/\delta}{1+\delta}\right)^{n/2} \cdot (r_n'/r_n)^n \cdot \beta^n
\cdot N_2({\mathcal{L}}_n, r_n, {\ensuremath{\boldsymbol{0}}}) \\
&> \frac{{\varepsilon}}{2\sqrt{\delta (1 + \delta)}} \cdot (1 - 2{\varepsilon}- {\varepsilon}^2/\delta)^{n/2}\cdot \beta^n
\cdot N_2({\mathcal{L}}_n, r_n, {\ensuremath{\boldsymbol{0}}})
\; .
\end{aligned}$$ The result follows by taking ${\varepsilon}$ small enough and $\beta' > 1$ close enough to one that $(1 - 2{\varepsilon}- {\varepsilon}^2/\delta)^{1/2}\cdot \beta > \beta' > 1$.
Now, suppose $r_n'/r_n \geq 1+1/400$. We claim that there exists some $0 < s_n \leq s_n' < (1+1/400) \cdot s_n$ such that $$\label{eq:yet_another_more_close_than_short}
D_2({\mathcal{L}}_n, s_n') \geq (\beta')^n \cdot (s_n'/s_n)^n \cdot N_2({\mathcal{L}}_n, s_n, {\ensuremath{\boldsymbol{0}}})$$ for some constant $\beta' > 1$ depending only on $\beta$ and $\alpha$. Clearly, this suffices to prove the result.
To that end, for $i = 0,\ldots, \ell := \ceil{2000 \log(r_n'/r_n)}$, let $s_n^{(i)} := (r_n'/r_n)^{i/\ell} \cdot r_n$, $D_n^{(i)} := D_2({\mathcal{L}}_n, s_n^{(i)})$, and $N_n^{(i)} := N_2({\mathcal{L}}_n, s_n^{(i)}, {\ensuremath{\boldsymbol{0}}})$. We take $\beta'$ to be the constant $\beta' := \beta^{1/\ceil{2000 \beta \log \alpha}} \leq \beta^{1/\ell}$. We claim that it suffices to find an index $i$ such that $D_n^{(i+1)}/N_n^{(i)} \geq (r_n'/r_n)^{n/\ell} \beta^{n/\ell}$. Indeed, if such an index exists, then we can take $s_n' := s_n^{(i+1)}$ and $s_n := s_n^{(i)}$. Clearly, $s_n'/s_n = (r_n'/r_n)^{1/\ell} \leq 1+1/400$, and Eq. is indeed satisfied.
It remains to find such an index $i$. By assumption, we have $D_n^{(\ell)}/N_n^{(0)} \geq (r_n'/r_n)^n \cdot \beta^n$, and by definition, we have $D_n^{(i)} \geq N_n^{(i)}$. If there exists an index $i$ such that $D_n^{(i+1)}/D_n^{(i)} \geq (r_n'/r_n)^{n/\ell} \beta^{n/\ell}$, then we are done, since $N_n^{(i)} \leq D_n^{(i)}$. Otherwise, we have $$D_n^{(1)} \geq (r_n'/r_n)^{-(\ell - 1)n/\ell} \cdot \beta^{-(\ell -1) n/\ell} D_n^{(\ell)} \geq (r_n'/r_n)^{n/\ell} \cdot \beta^{n/\ell} N_n^{(0)}
\; ,$$ as needed.
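The telescoping argument in the last paragraph can be sketched as follows (illustrative only, with hypothetical counts; `find_index`, `D`, `N0`, and `T` are our own names, with `T` playing the role of $(r_n'/r_n)^{n/\ell} \beta^{n/\ell}$):

```python
# Given non-decreasing counts D[0..l] with D[l] >= T**l * N0, either some
# consecutive ratio D[i+1]/D[i] is at least T, or telescoping forces
# D[1] >= T * N0, so that index 0 works (since N0 plays the role of N^(0)).
def find_index(D, N0, T):
    l = len(D) - 1
    assert D[l] >= T ** l * N0  # the hypothesis D^(l)/N^(0) >= T^l
    for i in range(l):
        if D[i + 1] >= T * D[i]:
            return i
    # All consecutive ratios are < T, so D[l] < T**(l-1) * D[1],
    # hence D[1] > D[l] / T**(l-1) >= T * N0.
    assert D[1] >= T * N0
    return 0

# A big jump between D[2] and D[3] is found directly.
assert find_index([1, 2, 3, 1000], N0=1, T=10) == 2
# No single jump is large, so index 0 works by telescoping.
assert find_index([2000, 3000, 4000, 5000], N0=1, T=10) == 0
```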
\[cor:ellp\_hard\] Suppose that for some constants $\beta,\alpha > 1$, the following holds. For every sufficiently large integer $n$, there exists a lattice ${\mathcal{L}}_n \subset {\ensuremath{\mathbb{R}}}^n$, radii $0 < r_n \leq r_n' \leq \alpha r_n$, and a target ${\ensuremath{\boldsymbol{t}}}_n \in {\ensuremath{\mathbb{R}}}^n$ such that $$N_2({\mathcal{L}}_n, r_n', {\ensuremath{\boldsymbol{t}}}_n) \geq \beta^n \cdot (r_n'/r_n)^n \cdot N_2({\mathcal{L}}_n, r_n, {\ensuremath{\boldsymbol{0}}})
\; .$$ Then for any constants $C \geq 1$, $\eta \in (0,1)$, and $p \in [1,2]$, there is an efficient (non-uniform) reduction from Gap-$3$-${{\ensuremath{\mathrm{SAT}} }}_{\eta}^{\le C}$ on $n$ variables to ${{\ensuremath{\mathrm{SVP}} }}_{p, \gamma}$ on a lattice of rank and dimension $O(n)$, for some constant $\gamma > 1$.
In particular, if such a family of lattices and radii exists, then for each constant $p \in [1,2]$ there exists a constant $\gamma_p > 1$, such that there is no $2^{o(n)}$-time algorithm for ${{\ensuremath{\mathrm{SVP}} }}_{p,\gamma_p}$ unless (non-uniform) Gap-ETH is false.
Combine Theorem \[thm:kissing\_gives\_hardness\] with Corollary \[cor:embedding\_reduction\].
On the number of integer points in an ell p ball {#sec:integer_points}
================================================
In this section, we prove Lemma \[lem:integer\_lattice\_bound\] by studying the function $N_p({\ensuremath{\mathbb{Z}}}^n, r, {\ensuremath{\boldsymbol{t}}})$. The results in this section were originally proven by Mazo and Odlyzko [@MO90] for $p = 2$ and Elkies, Odlyzko, and Rush [@EOR91] for arbitrary $p$. In particular, the main theorem of this section, Theorem \[thm:theta\_gives\_good\_approx\], appeared in [@MO90] for $p = 2$ and in [@EOR91] for arbitrary $p$ (and even in a more general setting), and (a variant of) Lemma \[lem:integer\_lattice\_bound\] already appeared in [@EOR91]. Our proof mostly follows that of Mazo and Odlyzko.
Approximation by Theta
----------------------
We extend our definition of $\Theta_p(\tau)$ to “shifted sums” as follows. For $1 \leq p < \infty$, $\tau > 0$, and $t \in {\ensuremath{\mathbb{R}}}$, let $$\Theta_p(\tau; t) := \sum_{z \in {\ensuremath{\mathbb{Z}}}} \exp(-\tau |z - t|^p)
\; .$$ We then further extend this definition to vectors ${\ensuremath{\boldsymbol{t}}} = (t_1,\ldots, t_n) \in {\ensuremath{\mathbb{R}}}^n$ by $$\Theta_p(\tau; {\ensuremath{\boldsymbol{t}}}) := \prod_{i=1}^n \Theta_p(\tau; t_i) \; .$$ We will often assume without loss of generality that $t \in [0,1/2]$ and ${\ensuremath{\boldsymbol{t}}} \in [0,1/2]^n$.
We have $$\Theta_p(\tau; {\ensuremath{\boldsymbol{t}}}) = \sum_{{\ensuremath{\boldsymbol{z}}} \in {\ensuremath{\mathbb{Z}}}^n} \exp(-\tau \|{\ensuremath{\boldsymbol{z}}} - {\ensuremath{\boldsymbol{t}}}\|_p^p)
\; .$$ It follows that for any radius $r > 0$ and any $\tau > 0$, $$\label{eq:basic_upper_bound_theta}
N_p({\ensuremath{\mathbb{Z}}}^n, r, {\ensuremath{\boldsymbol{t}}}) \leq \exp(\tau r^p) \Theta_p(\tau; {\ensuremath{\boldsymbol{t}}})
\; .$$ We wish to show that this inequality is quite tight if $\tau$ satisfies $\mu_p(\tau; {\ensuremath{\boldsymbol{t}}}) = r^p$, where $$\mu_p(\tau; {\ensuremath{\boldsymbol{t}}}) := \sum_{i=1}^n \operatorname*{\mathbb{E}}_{X \sim D_p(\tau; t_i)}[|X|^p]
\; ,$$ and $D_p(\tau; t)$ is the probability distribution over ${\ensuremath{\mathbb{Z}}}- t$ that assigns to each $x \in {\ensuremath{\mathbb{Z}}}-t$ probability $\exp(-\tau |x|^p)/\Theta_p(\tau; t)$.[^12] Indeed, the main theorem that we prove in this section is the following (which, again, was originally proven in [@MO90; @EOR91]).
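As a small numerical illustration of the upper bound above (not part of the proof; the parameter choices and the truncation point `Z` are arbitrary), one can enumerate $N_p({\ensuremath{\mathbb{Z}}}^n, r, {\ensuremath{\boldsymbol{t}}})$ directly for small $n$ and compare it with $\exp(\tau r^p)\,\Theta_p(\tau; {\ensuremath{\boldsymbol{t}}})$ at the radius $r = \mu_p(\tau; {\ensuremath{\boldsymbol{t}}})^{1/p}$:

```python
# Brute-force check of N_p(Z^n, r, t) <= exp(tau * r^p) * Theta_p(tau; t)
# for Z^n with small n, truncating the one-dimensional theta series at |z| <= Z.
import itertools
import math

def theta1(p, tau, t, Z=50):
    """Truncated one-dimensional theta sum Theta_p(tau; t)."""
    return sum(math.exp(-tau * abs(z - t) ** p) for z in range(-Z, Z + 1))

def mu1(p, tau, t, Z=50):
    """E[|X|^p] under the distribution D_p(tau; t) on Z - t."""
    num = sum(abs(z - t) ** p * math.exp(-tau * abs(z - t) ** p)
              for z in range(-Z, Z + 1))
    return num / theta1(p, tau, t, Z)

p, tau, n, t = 2, 1.0, 4, 0.3
mu = n * mu1(p, tau, t)  # mu_p(tau; t) for the shift (t, ..., t)
r = mu ** (1 / p)

# Direct count of integer points z in Z^n with ||z - t||_p <= r.
count = sum(1 for z in itertools.product(range(-10, 11), repeat=n)
            if sum(abs(zi - t) ** p for zi in z) <= r ** p)
upper = math.exp(tau * r ** p) * theta1(p, tau, t) ** n
assert count <= upper
```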
\[thm:theta\_gives\_good\_approx\] For any constants $p \geq 1$ and $\tau > 0$, there is another constant $C^* > 0$ such that $$\exp(\tau \mu_p(\tau; {\ensuremath{\boldsymbol{t}}}) -C^*\sqrt{n}) \cdot \Theta_p(\tau; {\ensuremath{\boldsymbol{t}}}) \leq N_p({\ensuremath{\mathbb{Z}}}^n, \mu_p(\tau; {\ensuremath{\boldsymbol{t}}})^{1/p}, {\ensuremath{\boldsymbol{t}}}) \leq \exp(\tau \mu_p(\tau; {\ensuremath{\boldsymbol{t}}}) ) \cdot \Theta_p(\tau; {\ensuremath{\boldsymbol{t}}})
\; ,$$ for any ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^n$ and any positive integer $n$.
\[lem:theta\_derivatives\] For any $1 \leq p < \infty$, $\tau > 0$, and $t \in {\ensuremath{\mathbb{R}}}$, $$\frac{\partial }{\partial \tau } \log \Theta_p(\tau; t) = -\mu_p(\tau; t) < 0
\; ,$$ and $$\frac{\partial^2 }{\partial \tau^2} \log \Theta_p(\tau; t) = \operatorname*{\mathbb{E}}_{X \sim D_p(\tau; t) }[|X|^{2p}] - \mu_p(\tau; t)^2 > 0
\; .$$
We have $$\begin{aligned}
\frac{\partial }{\partial \tau } \log \Theta_p(\tau; t) &= \frac{1}{\Theta_p(\tau; t)} \cdot \sum_{z \in {\ensuremath{\mathbb{Z}}}} \frac{\partial }{\partial \tau} \exp(-\tau |z-t|^p)\\
&= -\frac{1}{\Theta_p(\tau; t)} \cdot \sum_{z \in {\ensuremath{\mathbb{Z}}}} |z-t|^p \exp(-\tau |z-t|^p)\\
&= -\mu_p(\tau; t)
\; .
\end{aligned}$$ The second derivative follows from a similar computation.
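The first identity can also be checked numerically with a truncated series and a central finite difference (illustrative only; the truncation point `Z` and the step `h` are arbitrary choices):

```python
# Check that d/dtau log Theta_p(tau; t) = -mu_p(tau; t) via finite differences.
import math

def theta1(p, tau, t, Z=100):
    return sum(math.exp(-tau * abs(z - t) ** p) for z in range(-Z, Z + 1))

def mu1(p, tau, t, Z=100):
    return sum(abs(z - t) ** p * math.exp(-tau * abs(z - t) ** p)
               for z in range(-Z, Z + 1)) / theta1(p, tau, t, Z)

p, tau, t, h = 3, 0.7, 0.25, 1e-6
deriv = (math.log(theta1(p, tau + h, t))
         - math.log(theta1(p, tau - h, t))) / (2 * h)
assert abs(deriv + mu1(p, tau, t)) < 1e-6  # deriv is approximately -mu_p
```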
To use $\Theta_p(\tau; {\ensuremath{\boldsymbol{t}}})$ to get a lower bound on $N_p({\ensuremath{\mathbb{Z}}}^n, r, {\ensuremath{\boldsymbol{t}}})$, we wish to “isolate” the contribution of vectors of length roughly $r$ to the sum $\Theta_p(\tau; {\ensuremath{\boldsymbol{t}}})$. To do so, we define for $\delta > 0$ the (rather unnatural) function $$H_p(\tau, \delta; {\ensuremath{\boldsymbol{t}}}) := \Theta_p(\tau + \delta; {\ensuremath{\boldsymbol{t}}}) - \exp(-\delta \mu_p(\tau; {\ensuremath{\boldsymbol{t}}})) \Theta_p(\tau; {\ensuremath{\boldsymbol{t}}}) - \exp(\delta \mu_p(\tau + 2\delta; {\ensuremath{\boldsymbol{t}}})) \Theta_p(\tau + 2\delta; {\ensuremath{\boldsymbol{t}}})
\; .$$ The following lemma shows why we are interested in $H_p$.
\[lem:H\_gives\_lower\_bound\] For any $1 \leq p < \infty$, $\tau > 0$, $\delta > 0$, and ${\ensuremath{\boldsymbol{t}}} \in [0,1/2]^n$, $$N_p({\ensuremath{\mathbb{Z}}}^n, \mu_p(\tau; {\ensuremath{\boldsymbol{t}}})^{1/p}, {\ensuremath{\boldsymbol{t}}}) \geq \exp(\tau \mu_p(\tau + 2\delta; {\ensuremath{\boldsymbol{t}}}) ) H_p(\tau, \delta; {\ensuremath{\boldsymbol{t}}})
\; .$$
We can write $$H_p(\tau, \delta; {\ensuremath{\boldsymbol{t}}}) = \sum_{{\ensuremath{\boldsymbol{z}}} \in {\ensuremath{\mathbb{Z}}}^n} \exp(-(\tau + \delta) \|{\ensuremath{\boldsymbol{z}}} - {\ensuremath{\boldsymbol{t}}}\|_p^p) \cdot \big(1 - f_1({\ensuremath{\boldsymbol{z}}}) - f_2({\ensuremath{\boldsymbol{z}}})\big)
\; ,$$ where $f_1({\ensuremath{\boldsymbol{z}}}) := \exp(\delta (\|{\ensuremath{\boldsymbol{z}}} - {\ensuremath{\boldsymbol{t}}}\|_p^p - \mu_p(\tau; {\ensuremath{\boldsymbol{t}}})))$, and $f_2({\ensuremath{\boldsymbol{z}}}) := \exp(\delta (\mu_p(\tau + 2\delta; {\ensuremath{\boldsymbol{t}}}) - \|{\ensuremath{\boldsymbol{z}}} - {\ensuremath{\boldsymbol{t}}}\|_p^p))$. In particular, the summand is negative if $\|{\ensuremath{\boldsymbol{z}}} - {\ensuremath{\boldsymbol{t}}}\|_p^p \geq \mu_p(\tau; {\ensuremath{\boldsymbol{t}}})$ or $\|{\ensuremath{\boldsymbol{z}}} - {\ensuremath{\boldsymbol{t}}}\|_p^p \leq \mu_p(\tau + 2\delta; {\ensuremath{\boldsymbol{t}}})$. Therefore, there are at most $N_p({\ensuremath{\mathbb{Z}}}^n, \mu_p(\tau; {\ensuremath{\boldsymbol{t}}})^{1/p}, {\ensuremath{\boldsymbol{t}}})$ positive summands, and each has magnitude at most $$\exp(-(\tau + \delta)\mu_p(\tau + 2\delta; {\ensuremath{\boldsymbol{t}}})) \leq \exp(-\tau \mu_p(\tau + 2\delta; {\ensuremath{\boldsymbol{t}}}))
\; .$$ The result follows.
We may assume without loss of generality that ${\ensuremath{\boldsymbol{t}}} \in [0,1/2]^n$. Let $\delta > 0$ with $\delta = O(1)$ to be chosen later. By taking the Taylor expansion of $\log \Theta_p(\tau; {\ensuremath{\boldsymbol{t}}})$ around $\tau$, we see by Lemma \[lem:theta\_derivatives\] that $$\log \Theta_p(\tau; {\ensuremath{\boldsymbol{t}}}) - \log \Theta_p(\tau + \delta; {\ensuremath{\boldsymbol{t}}}) \leq \delta \mu_p(\tau; {\ensuremath{\boldsymbol{t}}}) - \frac{\delta^2}{2} \inf_{\tau \leq \tau' \leq \tau + \delta} \sum_{i=1}^n V(\tau'; t_i)
\; ,$$ where $$V(\tau'; t_i) := \operatorname*{\mathbb{E}}_{X \sim D_p(\tau'; t_i) }[|X|^{2p}] - \mu_p(\tau'; t_i)^2
\; .$$ Notice that $V(\tau'; t_i)$ is a continuous positive function. Therefore, it has a positive lower bound on the compact set $\tau \leq \tau' \leq \tau + 2\delta$ and $0 \leq t_i \leq 1/2$. (We have deliberately taken the upper limit on $\tau'$ to be $\tau + 2\delta$ rather than just $\tau + \delta$.) Let $C_{\min} > 0$ be such a bound. We therefore have $$\log \Theta_p(\tau; {\ensuremath{\boldsymbol{t}}}) - \log \Theta_p(\tau + \delta; {\ensuremath{\boldsymbol{t}}}) \leq \delta \mu_p(\tau; {\ensuremath{\boldsymbol{t}}}) - C_{\min}n \delta^2 /2
\; .$$ By an essentially identical argument, we have, $$\log \Theta_p(\tau + 2\delta; {\ensuremath{\boldsymbol{t}}})-\log \Theta_p(\tau + \delta; {\ensuremath{\boldsymbol{t}}}) \leq -\delta \mu_p(\tau + 2\delta; {\ensuremath{\boldsymbol{t}}}) - C_{\min} n\delta^2 /2
\; .$$ It follows that $$H_p(\tau, \delta; {\ensuremath{\boldsymbol{t}}}) \geq \Theta_p(\tau + \delta; {\ensuremath{\boldsymbol{t}}}) \cdot (1-2\exp(-C_{\min}n\delta^2 /2))
\; .$$
Plugging this in to Lemma \[lem:H\_gives\_lower\_bound\], we see that $$\begin{aligned}
N_p({\ensuremath{\mathbb{Z}}}^n, \mu_p(\tau; {\ensuremath{\boldsymbol{t}}})^{1/p}, {\ensuremath{\boldsymbol{t}}}) \geq \exp(\tau \mu_p(\tau + 2\delta; {\ensuremath{\boldsymbol{t}}}) ) \Theta_p(\tau + \delta; {\ensuremath{\boldsymbol{t}}}) \cdot (1-2\exp(-C_{\min}n\delta^2 /2))
\; .
\end{aligned}$$ By a similar argument, we see that there is some constant $C_{\max} > 0$ such that $\mu_p(\tau; t_i) - \mu_p(\tau + 2\delta; t_i) \leq C_{\max} \delta$ *and* $\log \Theta_p(\tau; t_i) - \log \Theta_p(\tau + \delta; t_i) \leq C_{\max} \delta$. Therefore, $$N_p({\ensuremath{\mathbb{Z}}}^n, \mu_p(\tau; {\ensuremath{\boldsymbol{t}}})^{1/p}, {\ensuremath{\boldsymbol{t}}}) \geq \exp(\tau \mu_p(\tau; {\ensuremath{\boldsymbol{t}}}) ) \Theta_p(\tau; {\ensuremath{\boldsymbol{t}}}) \cdot \exp(-C_{\max}\delta n(\tau + 1)) (1-2\exp(-C_{\min}n\delta^2 /2))
\; .$$ The result follows by taking $\delta = C^\dagger/\sqrt{n}$ for a sufficiently large constant $C^\dagger > 0$.
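The quantities $\mu_p$ and $V$ in this argument are moments of a discrete one-dimensional distribution, and the positivity of $V$ on compact sets can be checked directly. The following sketch assumes the standard definitions, namely that $D_p(\tau; t)$ places mass proportional to $\exp(-\tau|z+t|^p)$ on each shifted integer $z+t$, that $\mu_p(\tau; t) = \operatorname{\mathbb{E}}[|X|^p]$, and that $V(\tau; t) = \operatorname{\mathbb{E}}[|X|^{2p}] - \mu_p(\tau; t)^2$; the truncation at $|z| \leq 60$ is a numerical convenience.

```python
import math

def moments(p, tau, t, zmax=60):
    # Assumed definition: D_p(tau; t) is supported on the shifted integers z + t,
    # with Pr[X = z + t] proportional to exp(-tau * |z + t|^p).
    xs = [z + t for z in range(-zmax, zmax + 1)]
    ws = [math.exp(-tau * abs(x) ** p) for x in xs]
    Z = sum(ws)                                   # normalizing constant
    mu = sum(w * abs(x) ** p for x, w in zip(xs, ws)) / Z        # mu_p(tau; t)
    second = sum(w * abs(x) ** (2 * p) for x, w in zip(xs, ws)) / Z
    return mu, second - mu * mu                   # (mu_p, V)
```

Evaluating this on a grid of $(\tau, t)$ gives strictly positive values of $V$, consistent with the existence of the lower bound $C_{\min}$ used in the proof.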
Dense shifted balls and the proof of Lemma \[lem:integer\_lattice\_bound\]
--------------------------------------------------------------------------
We now wish to show that for $p > 2$, there exist shifts ${\ensuremath{\boldsymbol{t}}} \in {\ensuremath{\mathbb{R}}}^n$ such that $N_p({\ensuremath{\mathbb{Z}}}^n, r, {\ensuremath{\boldsymbol{t}}})$ is exponentially larger than $N_p({\ensuremath{\mathbb{Z}}}^n, r, {\ensuremath{\boldsymbol{0}}})$. (Again, this result is not original to us, as it was already proven in [@EOR91].) By Theorem \[thm:theta\_gives\_good\_approx\], it suffices to show that there exists $\tau > 0$ and $t \in (0,1/2]$ such that $\Theta_p(\tau; t) > \Theta_p(\tau; 0)$.
\[lem:local\_minimum\_theta\] For any $p > 2$ and $\tau \geq 1-1/p$, there exists a $t \in (0,1/2]$ such that $$\Theta_p(\tau; t) > \Theta_p(\tau; 0)
\; .$$
We have $\frac{\partial }{\partial t} \Theta_p(\tau; t) |_{t = 0} = 0$, and $$\label{eq:second_deriv_Theta_t}
\frac{\partial^2 }{\partial t^2} \Theta_p(\tau; t) \big|_{t = 0}
= p\tau \sum_{z \in {\ensuremath{\mathbb{Z}}}} \exp(-\tau |z|^p) |z|^{p-2}(p \tau |z|^p-(p-1))
\; .$$ (Here, we have used the fact that $\exp(-\tau |t|^p)$ is twice differentiable at $t = 0$ for $p > 2$, with first and second derivative both zero. Notice that this is false for $p \leq 2$.) For $\tau \geq 1-1/p$, all of the summands are non-negative, so that $0$ is a local minimum of the function $t \mapsto \Theta_p(\tau; t)$. Therefore, for sufficiently small $t > 0$, $\Theta_p(\tau; t) > \Theta_p(\tau; 0)$, as needed.
Choose $t \in [0,1/2]$ to maximize $\Theta_p(1; t)$. (Since $\Theta_p(1;t)$ is a continuous function and $[0,1/2]$ is a compact set, such a maximizer is guaranteed to exist.) By Lemma \[lem:local\_minimum\_theta\], we must have $\Theta_p(1; t) > \Theta_p(1; 0)$, and in particular $t > 0$. Let ${\varepsilon}\in (0,\delta)$ be a constant to be chosen later. Let $C_r := \mu_p(1;t)^{1/p}/(1-{\varepsilon})^{1/p}$.
Let $r := C_r n^{1/p}$ and ${\ensuremath{\boldsymbol{t}}} := (t,t,\ldots, t) \in {\ensuremath{\mathbb{R}}}^n$. Notice that $(1-{\varepsilon}) r^p = \mu_p(1; {\ensuremath{\boldsymbol{t}}})$. By Theorem \[thm:theta\_gives\_good\_approx\], we have $$\label{eq:t_bound}
N_p({\ensuremath{\mathbb{Z}}}^{n}, (1-{\varepsilon})^{1/p} \cdot r, {\ensuremath{\boldsymbol{t}}}) \geq \exp(-C^*\sqrt{n}) \cdot \exp((1-{\varepsilon}) r^p) \cdot \Theta_p(1; t)^{n}
\;$$ for some constant $C^* > 0$.
We have $$N_p({\ensuremath{\mathbb{Z}}}^{n}, r, {\ensuremath{\boldsymbol{0}}}) \leq \exp(r^p) \Theta_p(1; 0)^{ n}
\; .$$ Let $\alpha := \Theta_p(1; t)/\Theta_p(1; 0) > 1$. Then, combining the above with Eq. , we see that $$\frac{N_p({\ensuremath{\mathbb{Z}}}^{n}, (1-{\varepsilon})^{1/p} r, {\ensuremath{\boldsymbol{t}}})}{N_p({\ensuremath{\mathbb{Z}}}^{n}, r, {\ensuremath{\boldsymbol{0}}})}
\geq \alpha^{n} \cdot \exp(-{\varepsilon}C_r^p n)
\; .$$ We therefore take ${\varepsilon}\in (0,\delta)$ to be any constant small enough to make $\alpha \exp(-{\varepsilon}C_r^p) > 1$.
Now, for any ${\ensuremath{\boldsymbol{t}}}' = (t_1',\ldots, t_{n}') \in {\ensuremath{\mathbb{R}}}^{n}$, we may repeat the above argument to show that $$\frac{N_p({\ensuremath{\mathbb{Z}}}^{n}, (1-{\varepsilon})^{1/p} r, {\ensuremath{\boldsymbol{t}}})}{N_p({\ensuremath{\mathbb{Z}}}^{n}, (1-{\varepsilon}/\delta) r, {\ensuremath{\boldsymbol{t}}}')} \geq \exp( {\varepsilon}C_r^p n \cdot (1/\delta - 1)) \cdot \Theta_p(1; t)^n/\Theta_p(1;{\ensuremath{\boldsymbol{t}}}')$$ Recall that, by definition, $\Theta_p(1; {\ensuremath{\boldsymbol{t}}}') = \prod_i \Theta_p(1; t_i') \leq \Theta_p(1; t)^{n}$, where the last inequality follows from our choice of $t$. Therefore, $$\frac{N_p({\ensuremath{\mathbb{Z}}}^{n}, (1-{\varepsilon})^{1/p} r, {\ensuremath{\boldsymbol{t}}})}{D_p({\ensuremath{\mathbb{Z}}}^n,(1-{\varepsilon}/\delta) r)} \geq \exp( {\varepsilon}C_r^p n \cdot (1/\delta - 1))
\; .$$ The result follows by taking $\beta := \min\{ \alpha \exp(-{\varepsilon}C_r^p),\ \exp({\varepsilon}C_r^p (1/\delta - 1))\} > 1$.
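To make the constant $\alpha$ concrete, one can approximate the maximizing shift $t$ by a grid search. The sketch below again assumes $\Theta_p(\tau; t) = \sum_{z \in \mathbb{Z}} \exp(-\tau|z+t|^p)$; for $p = 3$ the maximum over $[0, 1/2]$ is attained at a strictly positive shift (numerically, at $t = 1/2$), giving $\alpha = \Theta_3(1; t)/\Theta_3(1; 0) \approx 1.056$.

```python
import math

def theta_p(p, tau, t, zmax=60):
    # Assumed definition: Theta_p(tau; t) = sum over z in Z of exp(-tau*|z + t|^p).
    return sum(math.exp(-tau * abs(z + t) ** p) for z in range(-zmax, zmax + 1))

p = 3.0
# Grid search for the maximizing shift t in [0, 1/2].
best_t = max((i / 500.0 for i in range(251)), key=lambda t: theta_p(p, 1.0, t))
alpha = theta_p(p, 1.0, best_t) / theta_p(p, 1.0, 0.0)   # alpha > 1 for p > 2
```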
Erdem Alkim, L[é]{}o Ducas, Thomas P[ö]{}ppelmann, and Peter Schwabe. Post-quantum key exchange — [A]{} new hope. In [*USENIX Security Symposium*]{}, 2016.
Divesh Aggarwal, Daniel Dadush, Oded Regev, and Noah Stephens[-]{}Davidowitz. Solving the [S]{}hortest [V]{}ector [P]{}roblem in $2^n$ time via discrete [G]{}aussian sampling. In [*STOC*]{}, 2015.
Vikraman Arvind and Pushkar S Joglekar. Some sieving algorithms for lattice problems. In [*FSTTCS*]{}, pages 25–36, 2008.
Miklós Ajtai. The [S]{}hortest [V]{}ector [P]{}roblem in [L2]{} is [NP]{}-hard for randomized reductions. In [*STOC*]{}, 1998.
Mikl[ó]{}s Ajtai. Generating hard instances of lattice problems. In [*Complexity of computations and proofs*]{}, volume 13 of [ *Quad. Mat.*]{}, pages 1–32. Dept. Math., Seconda Univ. Napoli, Caserta, 2004. Preliminary version in STOC’96.
Miklós Ajtai, Ravi Kumar, and D. Sivakumar. A sieve algorithm for the shortest lattice vector problem. In [*STOC*]{}, pages 601–610, 2001.
Noga Alon. Packings with large minimum kissing numbers. , 175(1):249 – 251, 1997.
Dorit Aharonov and Oded Regev. Lattice problems in [NP]{} intersect [coNP]{}. , 52(5):749–765, 2005. Preliminary version in FOCS’04.
Divesh Aggarwal and Noah Stephens[-]{}Davidowitz. Just take the average! an embarrassingly simple $2^n$-time algorithm for [SVP]{} (and [CVP]{}), 2017. <http://arxiv.org/abs/1709.01535>.
Joppe W. Bos, Craig Costello, L[é]{}o Ducas, Ilya Mironov, Michael Naehrig, Valeria Nikolaenko, Ananth Raghunathan, and Douglas Stebila. Frodo: Take off the ring! [P]{}ractical, quantum-secure key exchange from [LWE]{}. In [*CCS*]{}, 2016.
Anja Becker, L[é]{}o Ducas, Nicolas Gama, and Thijs Laarhoven. New directions in nearest neighbor searching with applications to lattice sieving. In [*SODA*]{}, 2016.
Huck Bennett, Alexander Golovnev, and Noah Stephens[-]{}Davidowitz. On the quantitative hardness of [CVP]{}. In [*FOCS*]{}, 2017.
Johannes Bl[ö]{}mer and Stefanie Naewe. Sampling methods for shortest vectors, closest vectors and successive minima. , 410(18):1648–1665, 2009.
Tony Cai, Jianqing Fan, and Tiefeng Jiang. Distributions of angles in random packing on spheres. , 14(1):1837–1864, 2013.
J-Y Cai and Ajay Nerurkar. Approximating the [SVP]{} to within a factor $(1+1/\dim^{\varepsilon})$ is [NP]{}-hard under randomized conditions. In [*CCC*]{}. IEEE, 1998.
John Conway and Neil J.A. Sloane. [*Sphere Packings, Lattices and Groups*]{}. Springer New York, 1998.
Irit Dinur. Mildly exponential reduction from gap [3SAT]{} to polynomial-gap label-cover. , 23:128, 2016.
Daniel Dadush, Chris Peikert, and Santosh Vempala. Enumerative lattice algorithms in any norm via [M]{}-ellipsoid coverings. In [*FOCS*]{}, 2011.
N. D. Elkies, A. M. Odlyzko, and J. A. Rush. On the packing densities of superballs and other bodies. , 105(1):613–639, Dec 1991.
T. Figiel, J. Lindenstrauss, and V. D. Milman. The dimension of almost spherical sections of convex bodies. , 82(4):575–578, 07 1976.
Nicolas Gama and Phong Q. Nguyen. Finding short lattice vectors within [M]{}ordell’s inequality. In [*[STOC]{}*]{}, 2008.
Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan. Trapdoors for hard lattices and new cryptographic constructions. In [*STOC*]{}, 2008.
Ishay Haviv and Oded Regev. Tensor-based hardness of the [S]{}hortest [V]{}ector [P]{}roblem to within almost polynomial factors. , 8(23):513–531, 2012. Preliminary version in STOC’07.
Russell Impagliazzo and Ramamohan Paturi. On the complexity of $k$-[SAT]{}. In [*CCC*]{}, pages 237–240, 1999.
Antoine Joux and Jacques Stern. Lattice reduction: A toolbox for the cryptanalyst. , 11(3):161–185, 1998.
Ravi Kannan. Minkowski’s convex body theorem and integer programming. , 12(3):415–440, 1987.
Subhash Khot. Hardness of approximating the [S]{}hortest [V]{}ector [P]{}roblem in lattices. , 52(5):789–808, September 2005. Preliminary version in FOCS’04.
Thijs Laarhoven. Sieving for shortest vectors in lattices using angular locality-sensitive hashing. In [*CRYPTO*]{}, 2015.
H. W. Lenstra, Jr. Integer programming with a fixed number of variables. , 8(4):538–548, 1983.
A. K. Lenstra, H. W. Lenstra, Jr., and L. Lov[á]{}sz. Factoring polynomials with rational coefficients. , 261(4):515–534, 1982.
Mingjie Liu, Xiaoyun Wang, Guangwu Xu, and Xuexin Zheng. Shortest lattice vectors in the presence of gaps. , 2011:139, 2011.
Pasin Manurangsi, 2017. Personal communication.
Daniele Micciancio and Shafi Goldwasser. [*Complexity of Lattice Problems: A Cryptographic Perspective*]{}, volume 671 of [*The Kluwer International Series in Engineering and Computer Science*]{}. Kluwer Academic Publishers, Boston, Massachusetts, March 2002.
Daniele Micciancio. The [S]{}hortest [V]{}ector [P]{}roblem is [NP]{}-hard to approximate to within some constant. , 30(6):2008–2035, March 2001. Preliminary version in FOCS 1998.
Daniele Micciancio. Inapproximability of the [S]{}hortest [V]{}ector [P]{}roblem: Toward a deterministic reduction. , 8(22):487–512, 2012.
J. E. Mazo and A. M. Odlyzko. Lattice points in high-dimensional spheres. , 110(1):47–61, 1990.
Pasin Manurangsi and Prasad Raghavendra. A birthday repetition theorem and complexity of approximating dense [CSP]{}s. , 2016.
Daniele Micciancio and Panagiotis Voulgaris. Faster exponential time algorithms for the [S]{}hortest [V]{}ector [P]{}roblem. In [*SODA*]{}, 2010.
Daniele Micciancio and Michael Walter. Practical, predictable lattice basis reduction. In [*Eurocrypt*]{}, 2016.
NIST post-quantum standardization call for proposals. <http://csrc.nist.gov/groups/ST/post-quantum-crypto/cfp-announce-dec2016.html>, 2016. Accessed: 2017-04-02.
Phong Q Nguyen and Jacques Stern. The two faces of lattices in cryptology. In [*Cryptography and lattices*]{}, pages 146–180. Springer, 2001.
Phong Q. Nguyen and Thomas Vidick. Sieve algorithms for the [S]{}hortest [V]{}ector [P]{}roblem are practical. , 2(2):181–207, 2008.
Andrew M Odlyzko. The rise and fall of knapsack cryptosystems. , 42:75–88, 1990.
Chris Peikert. Limits on the hardness of lattice problems in $\ell_p$ norms. , 17(2):300–351, May 2008. Preliminary version in CCC 2007.
Chris Peikert. An efficient and parallel [G]{}aussian sampler for lattices. In [*[CRYPTO]{}*]{}. 2010.
Chris Peikert. A decade of lattice cryptography. , 10(4):283–424, 2016.
Xavier Pujol and Damien Stehl[é]{}. Solving the [S]{}hortest [L]{}attice [V]{}ector [P]{}roblem in time $2^{2.465 n}$. , 2009:605, 2009.
Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. , 56(6):Art. 34, 40, 2009.
Oded Regev and Ricky Rosen. Lattice problems and norm embeddings. In [*[STOC]{}*]{}, 2006.
C.P. Schnorr. A hierarchy of polynomial time lattice basis reduction algorithms. , 53(23):201 – 224, 1987.
Adi Shamir. A polynomial-time algorithm for breaking the basic [M]{}erkle-[H]{}ellman cryptosystem. , 30(5):699–704, 1984.
Noah Stephens[-]{}Davidowitz. Discrete [G]{}aussian sampling reduces to [CVP]{} and [SVP]{}. In [*SODA*]{}, 2016.
Peter [van Emde Boas]{}. Another [NP]{}-complete problem and the complexity of computing short vectors in a lattice. Technical report, University of Amsterdam, Department of Mathematics, Netherlands, 1981. Technical Report 8104.
Xiaoyun Wang, Mingjie Liu, Chengliang Tian, and Jingguo Bi. Improved [N]{}guyen-[V]{}idick heuristic sieve algorithm for shortest vector problem. In [*ASIACCS*]{}, 2011.
[^1]: Supported by the Singapore Ministry of Education and the National Research Foundation, also through the Tier 3 Grant “Random numbers from quantum processes” MOE2012-T3-1-009.
[^2]: Supported by the Simons Collaboration on Algorithms and Geometry.
[^3]: Unlike all other algorithms mentioned here, this $2^{n/2 + o(n)}$-time algorithm does not actually find a short vector; it only outputs a length. In the exact case, these two problems are known to be equivalent under an efficient rank-preserving reduction [@MicciancioBook], but this is not known to be true in the approximate case.
[^4]: All of these reductions for finite $p$ are randomized, as are ours. An unconditional deterministic reduction would be a major breakthrough. See [@Mic01svp; @Mic12] for more discussion and even a conditional deterministic reduction that relies on a certain number-theoretic assumption.
[^5]: Gap-ETH is the assumption that there is no $2^{o(n)}$-time algorithm that distinguishes a satisfiable $3$-SAT formula from one in which at most a constant fraction of the clauses are simultaneously satisfiable. [See Section \[sec:fine-grained\_prelims\].]{}
[^6]: We note that Khot claimed in Section 8 of [@Khot05svp] that he had discovered a linear reduction from $\gamma'$-approximate ${{\ensuremath{\mathrm{CVP}} }}_p$ to $2^{1-3/p}$-approximate ${{\ensuremath{\mathrm{SVP}} }}_p$ for $p \geq 4$ and some unspecified constant $\gamma'$. However, it is not clear whether $\gamma'$ is a small enough constant to yield an alternate proof of Theorem \[thm:ETH\_intro\] for $p \ge 4$. In particular, one would need to show Gap-ETH-hardness of $\gamma'$-approximate ${{\ensuremath{\mathrm{CVP}} }}_p$.
[^7]: Khot’s primary motivation for his reduction was to prove hardness of approximating ${{\ensuremath{\mathrm{SVP}} }}_p$ to within any constant factor, by showing a reduction that is well-behaved under a certain tensor product. We are not interested in taking tensor products (since they produce lattices of superlinear rank), so we ignore this issue entirely.
[^8]: We note that any gadget that allows us to use $r^\dagger = \operatorname{dist}_p({\ensuremath{\boldsymbol{t}}}^\dagger, {\mathcal{L}}^\dagger)$ must satisfy quite rigid requirements. We need exponentially many vectors that are all *exact* closest vectors, and we still must satisfy Eq. .
[^9]: One can think of this as a variant of Jacobi’s Theta function. In particular, with $p = 2$, this is Jacobi’s Theta function with a slightly different parametrization. Computer scientists might be more familiar with the closely related function $\rho_s({\ensuremath{\mathbb{Z}}}) := \Theta_2(\pi/s^2)$.
[^10]: We find it convenient to define the problem even for $A \geq G$ because it will not always be clear which of the two values is larger. Our results will always be vacuous when $A \geq G$. E.g., Theorem \[thm:sparsification\_reduction\] requires $G \gg A$.
[^11]: Suppose that $\lambda_1^{(p)}({\mathcal{L}}') \leq \gamma r'/(A+1)$, and let ${\ensuremath{\boldsymbol{v}}} \in {\mathcal{L}}'\setminus \{{\ensuremath{\boldsymbol{0}}}\}$ with $\|{\ensuremath{\boldsymbol{v}}}\|_p \leq \gamma r'/(A+1)$. Then, for every $z \in \{-A-1,-A,\ldots, A,A+1\}$, $z {\ensuremath{\boldsymbol{v}}}$ is a distinct lattice vector with $\|z {\ensuremath{\boldsymbol{v}}}\|_p \leq \gamma r'$, which contradicts the fact that there are at most $2A+1$ such vectors.
[^12]: It is easy to see that there exists a $\tau$ satisfying $\mu_p(\tau; {\ensuremath{\boldsymbol{t}}}) = r^p$ if and only if there is a lattice point inside the *open* $\ell_p$ ball of radius $r$ around ${\ensuremath{\boldsymbol{t}}}$. So, we do not lose much by assuming that $r^p = \mu_p(\tau; {\ensuremath{\boldsymbol{t}}})$ for some $\tau > 0$.
---
author:
- 'Daniel A. Spielman$^1$'
- 'Shang-Hua Teng$^2$'
- Alper Üngör$^3$
bibliography:
- 'parDelRef\_short.bib'
date: |
$^1$Department of Mathematics, Massachusetts Institute of Technology, spielman@math.mit.edu\
$^2$Department of Computer Science, Boston University and Akamai Technologies Inc., steng@cs.bu.edu\
$^3$ Department of Computer Science, Duke University, ungor@cs.duke.edu
---
Introduction
============
Delaunay refinement is a popular and practical technique for generating well-shaped unstructured meshes [@LiTeng01; @Ruppert93; @Shewchuk98]. The first step of a Delaunay refinement algorithm is the construction of a constrained or conforming Delaunay triangulation of the input domain. This initial Delaunay triangulation need not be well-shaped. Delaunay refinement then iteratively adds new points to the domain to improve the quality of the mesh and to make the mesh respect the boundary of the input domain. A sequential Delaunay refinement algorithm typically adds one new vertex per iteration, although sometimes one may prefer to insert more than one new vertex at each iteration. Each new point or set of points is chosen from a set of potential candidates — the circumcenters of poorly conditioned simplices (to improve mesh quality) and the diameter-centers of boundary simplices (to conform to the domain boundary). Ruppert [@Ruppert93] was the first to show that the proper application of Delaunay refinement produces well-shaped meshes in two dimensions whose size is within a small constant factor of the best possible. Ruppert’s result was then extended to three dimensions by Shewchuk [@Shewchuk98] and Li and Teng [@LiTeng01]. Efficient sequential Delaunay refinement software has been developed both in two [@Ruppert93; @Shewchuk96] and three dimensions [@Shewchuk98]. Chrysochoides and Nave [@ChrisN99] and Okusanya and Peraire [@OkusanyaP96] developed parallel software using Delaunay refinement, for which they have reported good performance. Recently, Nave [*et al.*]{} [@NaveCC02] presented a parallel Delaunay refinement algorithm and proved that it produces well-shaped meshes. The complexity of their algorithm as well as the size of the mesh it outputs remains unanalyzed.
In this paper, we study the parallel complexity of a natural parallelization of Delaunay refinement. One of the main ingredients of our parallel method is a notion of independence among potential candidates for Delaunay insertion at each iteration. Our parallel Delaunay method performs the following steps during each iteration.
1. Generate an independent set of points for parallel insertion;
2. Update the Delaunay triangulation in parallel.
Our independent sets have the following properties:
- Their insertion can be realized sequentially by Ruppert’s method in 2D and Shewchuk’s in 3D. Hence, an algorithm that inserts all their points in parallel will inherit the guarantees of Ruppert’s and Shewchuk’s methods that the output mesh be well-shaped and have size optimal up to a constant.
- The independent sets can be generated efficiently in parallel. In addition, they are “large enough” so that the number of parallel iterations needed is $O(\log^2(L/s))$, where $L$ and $s$ are the diameter of the domain and the smallest edge in the output mesh, respectively.
- When a quasi-uniform mesh is desired as in Chew’s method, the number of iterations can be reduced to $O(\log(L/s))$.
We should emphasize here that our analysis focuses on the number of parallel iterations of Delaunay refinement. The independence of the new points does not necessarily imply a straightforward parallel insertion scheme at each iteration. There are several existing parallel Delaunay triangulation algorithms that we can employ at each iteration. For example, in 2D we can use the divide-and-conquer parallel algorithm developed by Blelloch [*et al.*]{} [@BlellochHMT99] for Delaunay triangulation. Their algorithm uses $O(n \log n)$ work and $O(\log^3 n)$ parallel time. We can alternatively use the randomized parallel algorithms of Reif and Sen [@ReifS99], or of Amato [*et al.*]{} [@AmatoGR94], in both two and three dimensions. Both of these randomized parallel Delaunay triangulation algorithms have expected parallel running time $O (\log n)$. Using one of these adds a logarithmic factor to our worst-case total parallel time complexity analysis. To the best of our knowledge, these are the first provably polylog$(L/s)$ parallel time Delaunay meshing algorithms that generate well-shaped meshes of size optimal to within a constant.
Motivation and Related Work
---------------------------
This work is motivated by the observation that both sequential and parallel implementations of Delaunay refinement algorithms seem to produce the best meshes in practice. However, improvements in the speed of parallel numerical solvers are creating the need for comparable speedups in meshing software: [Löhner]{} and Cebral [@LohnerC99] have reported that improvements in parallel numerical solvers [@ToppingK95] have resulted in the simulation time of numerous practical systems being dominated by the meshing process.
Quadtree-based methods are an alternative to Delaunay refinement. They also generate well-shaped meshes whose size is within a constant factor of the best possible [@BernEG94; @MitchellV92]. In practice, however, they often generate meshes larger than Delaunay refinement on the same input. The parallel complexity of the quadtree-based methods is nevertheless better understood.
Several parallel mesh generation algorithms have been developed. On the theoretical extreme, Bern, Eppstein and Teng [@BernET99] gave a parallel $O(\log n)$ time algorithm using $K/\log n$ processors to compute a well-shaped quadtree mesh, where $K$ is the final mesh size. There is also a simple level-by-level quadtree-based method that is used in practice [@ShephardFBCOS97; @SpielmanT02]. One can easily show that this level-by-level based method takes $O (\log (L/s)+ K/p)$ parallel time, using $p$ processors [@SpielmanT02].
Building upon [@BernET99], Miller [*et al.*]{} [@MillerTTW95] developed a parallel sphere-packing based Delaunay meshing algorithm that generates well-shaped Delaunay meshes of optimal size in $O (\log n)$ parallel time using $K/\log n$ processors. Their method uses a parallel maximal independent set algorithm [@Luby86] to directly generate the set of final mesh points, and then constructs the Delaunay mesh using parallel Delaunay triangulation. As this algorithm has not been implemented, we do not know how the meshes it produces will compare.
Various parallel Delaunay refinement methods have been implemented and been seen to have good performance [@ChrisN99; @LiTeng98; @LohnerC99; @OkusanyaP96]. These methods address some important issues such as how to partition the domain so as to minimize the communication cost among the processors. Our new analysis on the number of parallel iterations of Delaunay refinement could potentially provide provable bounds on their parallel complexity.
Our work also helps explain the performance of some sequential implementations of Delaunay refinement, especially those which use a Delaunay triangulator as a black-box. In such situations, it is often desirable to minimize the number of calls to the black-box Delaunay triangulator by inserting multiple points at each iteration. Our bounds on the number of iterations provide a bound on the number of calls to the Delaunay triangulator.
We omit the proofs of Lemmas \[lem:rupertConservation\], \[lem:upgrade\], \[lem:preprocessing\], \[lem:techruppert\], and \[lem:lfs\_ratio\] and Theorems \[thm:sequential\_parallel\], \[thm:parallelchew\], \[thm:sequential\_parallel\_periodic\_3D\], and \[thm:parallelShewchuk\] in this version due to page limitation. A full version of the paper is available at <http://www.cs.duke.edu/~ungor/abstracts/parallelDelRef.html>.
Preliminaries {#sec:pre}
=============
Input Domain {#sec:input}
------------
In 2D, the input domain $\Omega $ is represented as a [*planar straight line graph*]{} (PSLG) [@Ruppert93] — a proper planar drawing in which each edge is mapped to a straight line segment between its two endpoints. The segments express the [*boundaries*]{} of $\Omega $ and the endpoints are the [*vertices*]{} of $\Omega $. The vertices and boundary segments of $\Omega $ will be referred to as [*input features*]{} of $\Omega$. A vertex is incident to a segment if it is one of the endpoints of the segment. Two segments are incident if they share a common vertex. In general, if the domain is given as a collection of vertices only, then the boundary of its convex hull is taken to be the boundary of the input.
Miller [*et al.*]{} [@MillerTTWW96] presented a natural extension of PSLGs, called [*piecewise linear complexes*]{} (PLCs), to describe domains in three and higher dimensions. In three dimensions, the domain $\Omega$ is a collection of vertices, segments, and facets where (i) all lower dimensional elements on the boundary of an element in $\Omega$ also belong to $\Omega$, and (ii) if any two elements intersect, then their intersection is a lower dimensional element in $\Omega$. In other words, a PLC in $d$ dimensions is a cell complex with polyhedral cells from $0$ to $d$ dimensions.
Delaunay Triangulation
----------------------
Let $P$ be a point set in $\RR^d$. A simplex $\tau$ formed by points of $P$ is a [*Delaunay simplex*]{} if there exists a circumsphere of $\tau $ whose interior does not contain any point of $P$. The [*Delaunay triangulation*]{} of $P$, denoted $Del(P)$, is a PLC that contains all Delaunay simplices. If the points are in general position, that is, if no $d+2$ points in $P$ are co-spherical, then $Del(P)$ is a simplicial complex.
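In the plane, the empty-circumcircle condition is usually tested with the classical incircle determinant rather than by constructing circumcircles explicitly. The following minimal Python sketch (ours, not from the paper) implements that predicate; exact arithmetic would be needed for robustness in degenerate cases.

```python
def orient2d(a, b, c):
    # Twice the signed area of triangle abc; > 0 iff a, b, c are counterclockwise.
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_circumcircle(a, b, c, d):
    # d lies strictly inside the circumcircle of triangle abc iff the 3x3
    # incircle determinant (with abc in CCW order) is positive.
    if orient2d(a, b, c) < 0:   # enforce CCW orientation
        b, c = c, b
    rows = [(p[0]-d[0], p[1]-d[1], (p[0]-d[0])**2 + (p[1]-d[1])**2)
            for p in (a, b, c)]
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    det = ax*(by*cw - bw*cy) - ay*(bx*cw - bw*cx) + aw*(bx*cy - by*cx)
    return det > 0
```

For the square $\{(0,0),(1,0),(0,1),(1,1)\}$, the point $(0.5,0.5)$ lies inside the circumcircle of the triangle $(0,0),(1,0),(0,1)$, while $(2,2)$ lies outside and $(1,1)$ is exactly cocircular (the determinant vanishes).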
The Delaunay triangulation of a point set can be constructed in $O (n\log n)$ time in 2D [@ClarksonS89; @GuibasKS90; @Fortune92] and in $O(n^{\lceil d/2 \rceil})$ time in $d$ dimensions [@ClarksonS89; @Seidel92]. A nice survey of these algorithms can be found in [@Fortune92].
One way to obtain a triangulation that conforms to the boundary of a PSLG domain is to use a [*Constrained Delaunay triangulation*]{}. Let $P$ be the set of vertices of a PSLG $\Omega$. Two points $p$ and $q$ in $P$ are said to be visible from each other if the line segment $pq$ does not intersect the interior of any segment in $\Omega$. Three points form a constrained Delaunay triangle if the interior of their circumcircle contains no point from $P$ that is visible from all three points. The union of all constrained Delaunay triangles forms a [*constrained Delaunay triangulation*]{} $CDT(\Omega)$. Chew developed an algorithm for computing constrained Delaunay triangulations [@Chew89b].
A Delaunay triangulation $T$ of input and Steiner points is a [*conforming Delaunay triangulation*]{} of a PLC $\Omega $ if every face of $\Omega$ is a union of faces of $T$. In 2D, Edelsbrunner and Tan proved that $O(n^3)$ additional points are sufficient to generate a conforming triangulation of a PSLG of complexity $n$ [@EdelsbrunnerT93]. A 2D solution proposed by Saalfeld [@Saalfeld91] is extended to 3D by Murphy [*et al.*]{} [@MurphyMG00] and Cohen-Steiner [*et al.*]{} [@CohenVY02]. However, it remains open whether the size of their output is polynomial in the input size or local feature size. The definition of local feature size will be given in Section \[sec:DelaunayRefinement\]. When the angle between the faces of a PLC is bounded from below, say for example by $\pi /2$, then one can apply Delaunay refinement to generate well-shaped conforming triangulations whose size is close to optimal both in two [@Chew89; @Ruppert93] and three dimensions [@Chew97; @LiTeng01; @Shewchuk98].
2D Sequential Delaunay Refinement {#sec:DelaunayRefinement}
=================================
In this section, we recall Ruppert’s and Chew’s algorithms for constructing Delaunay meshes of PSLGs in 2D. Following Ruppert [@Ruppert93], we assume that the angle between two adjacent input segments is at least $\pi /2$. Boundary treatments that relax this assumption are discussed in [@Ruppert93; @Shewchuk02].
In the process of Delaunay refinement, one could either maintain a constrained Delaunay triangulation, or simply keep track of the set of input segments that are not respected. The first approach does not extend to three dimensions because, in 3D, some PLCs do not have a constrained Delaunay triangulation. We therefore use the second approach.
At each iteration, we choose a new point for insertion from a set of candidate points. There are two kinds of candidate points: (1) the circumcenters of existing triangles, and (2) the midpoints of existing boundary segments.
Let the diametral circle of a segment be the circle whose diameter is the segment. A point is said to [*encroach*]{} a segment if it is inside the segment’s diametral circle. (See Figure \[fig:candidates\].)
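Encroachment reduces to a dot-product test: a point lies strictly inside the diametral circle of a segment exactly when the segment subtends an obtuse angle at the point. A minimal sketch (ours, not the paper's):

```python
def encroaches(p, s):
    # p lies strictly inside the diametral circle of segment s = (a, b)
    # iff the angle a-p-b is obtuse, i.e. (a - p) . (b - p) < 0.
    (ax, ay), (bx, by) = s
    px, py = p
    return (ax - px)*(bx - px) + (ay - py)*(by - py) < 0
```

For the segment from $(0,0)$ to $(2,0)$, whose diametral circle is centered at $(1,0)$ with radius $1$, the point $(1, 0.5)$ encroaches while $(1, 2)$ does not.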
At iteration $i$, the circumcenter of a triangle is a [*potential candidate*]{} for insertion if the triangle is [*poorly shaped*]{}. For example, in Ruppert’s algorithm, a triangle is considered poorly shaped if the ratio of its circumradius to the length of its shortest side is larger than a pre-specified constant $\beta_{R} \geq \sqrt{2}$. Let $\dot{\CC}^{(i)}$ denote the set of all potential candidate circumcenters that do not encroach any segment. Let $\CC^{(i)}$ denote their corresponding circumcircles. Similarly, let $\dot{\BB}^{(i)}$ denote the set of all potential candidate circumcenters that do encroach some segment. Let $\BB^{(i)} $ denote their corresponding circumcircles.
The midpoint of a boundary segment is a [*candidate*]{} for insertion if (1) the segment is not part of the current Delaunay triangulation, that is, its diametral circle is encroached by some existing mesh point, or (2) the segment is encroached by a circumcenter in $\dot{\BB} $. In the latter case, this potential circumcenter candidate is [*rejected*]{} from insertion. Let $\dot{\DD}^{(i)}_{T}$ be the set of all midpoint candidates of type (1) and let $\dot{\DD}^{(i)}_{\BB}$ be the set of all midpoint candidates of type (2).
[*Input:*]{} A PSLG domain $\Omega$ in $\RR^2$.

1. Let $T$ be the Delaunay triangulation of the vertices of $\Omega$. Let $i = 0$ and compute $\dot{\BB}^{(i)}$, $\dot{\CC}^{(i)}$, $\dot{\DD}^{(i)}_{T}$, and $\dot{\DD}^{(i)}_{\BB }$.

2. Choose a point $q$ from $\dot{\CC}^{(i)} \cup \dot{\DD}^{(i)}_{T} \cup \dot{\DD}^{(i)}_{\BB}$ and insert $q$ into the triangulation. If $q$ is the midpoint of a segment $s$, remove $s$ from the segment list and replace it with the two segments from $q$ to each endpoint of $s$.

3. Update the Delaunay triangulation $T$.

4. Set $i=i+1$, compute $\dot{\BB}^{(i)}$, $\dot{\CC}^{(i)}$, $\dot{\DD}^{(i)}_{T}$, and $\dot{\DD}^{(i)}_{\BB}$, and repeat from step 2 while candidates remain.
The points inserted by the Delaunay refinement are often called [*Steiner points*]{}.
If a quasi-uniform mesh, such as that produced by Chew’s method, is desired [@Chew89], then we use the following notion of poorly shaped triangle: A triangle is [*poorly-shaped*]{} if the ratio of its circumradius to the length of the shortest edge in the current Delaunay triangulation $T$ is more than a pre-specified constant $\beta_{C} \ge \sqrt{2}$.
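Both quality tests compare a circumradius against a shortest edge (the triangle's own shortest edge for Ruppert, the globally shortest mesh edge for Chew). A minimal sketch of the underlying ratio computation, using the identity $R = \ell_1\ell_2\ell_3/(4\,\mathrm{area})$:

```python
import math

def radius_edge_ratio(a, b, c):
    # Circumradius R = (|ab| * |bc| * |ca|) / (4 * area); quality = R / shortest edge.
    def d(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    e1, e2, e3 = d(a, b), d(b, c), d(c, a)
    area = abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])) / 2.0
    R = e1 * e2 * e3 / (4.0 * area)
    return R / min(e1, e2, e3)
```

A near-equilateral triangle has ratio about $1/\sqrt{3} < \sqrt{2}$, while a thin sliver's ratio is large, so the sliver would be marked poorly shaped under either criterion.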
Figure \[fig:ChewRuppert\] shows the output of the Delaunay refinement illustrating the difference between Chew’s and Ruppert’s refinement. We call these two variants of the Delaunay refinement algorithm [*Chew’s algorithm*]{} and [*Ruppert’s algorithm*]{}.
In their original papers [@Chew89; @Ruppert93], Chew and Ruppert presented their Delaunay refinement algorithms as particular variations of Algorithm \[alg:sequential\] —they specified how to choose the next point at each iteration from the set of candidates. In this paper, we consider the following variation of Algorithm \[alg:sequential\], which is more aggressive in adding boundary points — we choose this variation to parallelize because its analysis is simpler to present.
In this variation, $\BB^{(i)} $, $\CC^{(i)} $, and $\DD^{(i)}_{T}$ are the same as in Algorithm \[alg:sequential\]. The set $\DD^{(i)}_{\BB}$ is built incrementally. At iteration $i$, we compute $\BB^{(i)} $ first and let $\DD^{(i)}$ be the set of diametral circles that are encroached by some circumcenters of $\BB^{(i)} $. We then set $\DD^{(i)}_{\BB } = \DD^{(i-1)}_{\BB } \cup \DD^{(i)} $.
In other words, if a segment is encroached by a circumcenter of a poorly-shaped Delaunay triangle, its midpoint is added to the set of candidate midpoints and remains a candidate thereafter. This is in contrast with Algorithm \[alg:sequential\], in which an encroached midpoint is added to the set of candidate midpoints only for the next iteration. If instead some other candidate lying in a circumcircle whose center encroaches the segment is chosen, that circumcircle will no longer be in the Delaunay triangulation at the end of the iteration; hence the segment might not be encroached in the triangulation at the end of the iteration, and its midpoint might not be a candidate in future iterations.
Assuming that the angle between two adjacent input segments is at least $\pi /2$, Chew's algorithm terminates with a well-shaped quasi-uniform mesh, while Ruppert's algorithm [@Ruppert93] terminates with a well-shaped Delaunay mesh of the input domain whose elements adapt to the local geometry of the domain. The number of triangles in the mesh generated by Ruppert's algorithm is asymptotically optimal up to a constant. The proofs of Ruppert and Chew [@Chew89; @Ruppert93] that their algorithms terminate with a well-shaped mesh of size within a constant factor of optimal can be easily extended to our variation of Algorithm \[alg:sequential\] discussed above. We refer interested readers to [@Ruppert93] and [@Shewchuk02]. Here we give a high-level argument and introduce an important concept that will be used in Section \[sec:preprocessing\] for preprocessing an input domain in parallel.
Given a domain $\Omega $, the [*local feature size*]{} of each point $x$ in $\Omega $, denoted by $\lfs_{\Omega} (x)$, is the radius of the smallest disk centered at $x$ that touches two non-incident input features. Ruppert showed that every Delaunay triangle in the final mesh is well-shaped and that the length of the longest edge in each Delaunay triangle is within a constant factor of $\lfs_{\Omega} (x)$ for each $x$ in the interior of the triangle.
Suppose $M$ is a mesh generated by our variation of Algorithm \[alg:sequential\]. Let $\Omega '$ be the domain obtained from $\Omega $ by adding to $\Omega $ all mesh points in $M$ that are on the boundary segments of $\Omega $. Then we can show (i) for all $x$ in $\Omega$, $\lfs_{\Omega} (x)$ and $\lfs_{\Omega'} (x)$ are within a small constant factor of each other; and (ii) $M$ can be obtained by applying Ruppert’s (or Chew’s) variations of Algorithm \[alg:sequential\] to $\Omega '$. Therefore, the mesh produced by our variation of Algorithm \[alg:sequential\] has size within a small constant factor of the one generated by Ruppert’s (or Chew’s) refinement method.
Parallel 2D Delaunay Refinement {#sec:parallelDelaunayRefinement}
===============================
To better illustrate our analysis of parallel Delaunay refinement, we first focus on the case in which the input is a periodic point set (PPS) as introduced by Cheng [*et al.*]{} [@ChengDEFT99]. See also [@Edelsbrunner01]. We will then extend our results to produce boundary conforming meshes when the input domain is a PSLG.
Input Domain: Periodic Point Sets {#sec:periodic}
---------------------------------
If $P$ is a finite set of points in the half open unit square $[0,1)^2$ and $\ZZ^2$ is the two dimensional integer grid, then $S= P + \ZZ^2$ is a periodic point set [@Edelsbrunner01]. The periodic set $S$ contains all points $p+v$, where $p \in P$ and $v$ is an integer vector. The Delaunay triangulation of a periodic point set is also periodic.
As $P$ is contained in the unit square, the diameter of $P$ is $L \leq \sqrt{2}$. When we refer to the diameter of a periodic point set, we will mean the diameter of $P$.
### A generic parallel algorithm (PPS) {#sec:genericParallelDelaunayRefinement}
For a periodic point set, the only candidates for insertion are the circumcenters of poorly shaped triangles. We need a rule for choosing a large subset of the candidates with the property that a sequential Delaunay refinement algorithm would insert each of the points in the subset. Our rule is derived from the following notion of [*independence*]{} among candidates.
\[def:conflict\] Two circumcenters $\dot{c}$ and $\dot{c}'$ (and also the corresponding circles $c$ and $c'$) are [*conflicting*]{} if both $c$ and $c'$ contain each other’s center. Otherwise, $\dot{c}$ and $\dot{c}'$ (respectively $c$ and $c'$) are said to be [*independent*]{}.
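For concreteness, the conflict test of this definition can be sketched as follows. This is an illustrative Python sketch of our own (the function name and the center/radius representation are not part of the original algorithm): a pair of circumcircles conflicts exactly when the distance between the two centers is smaller than both radii.

```python
import math

def conflicting(c1, r1, c2, r2):
    """Two circumcircles conflict iff each circle contains the
    other's center, i.e. the center distance is below both radii."""
    d = math.dist(c1, c2)
    return d < r1 and d < r2
```

Note that a single containment is not enough: if only one circle contains the other's center, the pair is independent.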
If two candidates conflict, at most one of them can be inserted. Our rule is to insert a maximal independent set (MIS) of candidates at each iteration. We will show that if an algorithm follows this rule, then it will terminate after a polylogarithmic number of rounds.
A periodic point set $P$ in $\RR^2$.

1.  Let $T$ be the Delaunay triangulation of $P$.

2.  Compute $\dot{\CC}$, the set of all candidate circumcenters in $T$.

3.  Let $\II$ be an independent subset of $\dot{\CC}$.

4.  Insert all the points in $\II$ in parallel.

5.  Update $T$ and $\dot{\CC}$; repeat from step 3 until $\dot{\CC}$ is empty.
In the next few subsections, we will discuss how to generate the independent sets used by the algorithm. But first, we prove that regardless of how one chooses the independent set, our parallel algorithm can be sequentialized. This implies that the algorithm inherits the guarantee of its sequential counterpart that it generates a well-shaped mesh of size that is within a constant factor of optimal.
\[thm:sequential\_parallel\_periodic\] Suppose $M$ is a mesh produced by an execution of the Generic Parallel Delaunay Refinement algorithm. Then $M$ can be obtained by some execution of one of the sequential Delaunay refinement algorithms discussed in Section \[sec:DelaunayRefinement\].
Let $\II_1, \II_2, \hdots, \II_k$ be the sets of vertices inserted by the parallel algorithm above at iterations $1, \hdots, k$, respectively. We describe a sequential execution that inserts all the points in $\II_i$ before any point of $\II_j$ for $i<j$. Within each independent set $\II_i$, we insert the candidates in order of decreasing circumradius. For any two circumcenters $\dot{a},\dot{b} \in \II_i$, assume that the radius of $a$ is larger than the radius of $b$. Then $\dot{a}$ cannot lie in the circumcircle $b$: if it did, the distance between the two centers would be less than the radius of $b$, and hence less than the radius of $a$, so $\dot{b}$ would lie in $a$ and the two centers would conflict, contradicting their independence. Therefore, the insertion of $\dot{a}$ does not eliminate the triangle of $\dot{b}$.
Furthermore, observe that in any sequential execution, the insertion of a point $\dot{p} \in \II_i$ cannot eliminate the triangle corresponding to $\dot{q} \in \II_j$ for any $i < j$, for otherwise $\dot{q}$ would not exist in the $j^{th}$ iteration of the parallel execution.
Therefore, the parallel and sequential executions terminate with the same Delaunay mesh.
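The ordering used in this proof can be sketched directly. In this illustrative Python sketch (the function name and the `(center, radius)` pair representation are our own), the parallel rounds are flattened into one sequential insertion order, keeping rounds in order and sorting each round by decreasing circumradius:

```python
def sequential_order(rounds):
    """Flatten parallel rounds I_1, ..., I_k into one sequential
    insertion order: rounds are kept in order, and within each round
    candidates are inserted by decreasing circumradius, as in the
    proof above.  Each candidate is a (center, radius) pair."""
    order = []
    for batch in rounds:
        order.extend(sorted(batch, key=lambda cand: -cand[1]))
    return order
```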
To minimize the number of iterations, intuitively, we should choose a maximal independent set of candidates at each iteration. In Section \[sec:MIS\], we will give a geometric algorithm that computes a maximal independent set of candidates efficiently in parallel. Our algorithm makes use of the following observation.
\[lem:conflict\_reject\_periodic\] Suppose $c_{a}$ and $c_{b}$ are two conflicting circumcircles at iteration $i$, and let $r_{a}$ and $r_{b}$ be their circumradii. Then $r_b/2 < r_a < 2r_b$.
### Parallelizing Chew’s Refinement (PPS) {#sec:chew_periodic}
In this section, we show that our parallel implementation of Chew’s refinement only needs $O (\log (L/s))$ iterations. The basic argument is very simple — we will show that the radius of the largest Delaunay circle reduces by a factor of 3/4 after some constant number (e.g., 98) of iterations. Because the largest circumradius initially is $O (L)$ and the largest circumradius in the final mesh is $\Omega (s)$, the iteration bound of $O (\log (L/s))$ follows immediately.
\[lem:chew2d\_constant\_iterations\_periodic\] For all $i$, let $r_i$ be the largest circumradius of a triangle in the Delaunay triangulation at the end of iteration $i$. For all $k\geq 98$, $r_{k} \le 3r_{k-98}/4$.
We assume by way of contradiction that $r_{k} > 3 r_{k-98}/4$. Let $i = k-98$. Let $c_k$ be a circumcircle with radius $r_{k}$ after iteration $k$. Let $\dot{c}_k$ be the center of $c_{k}$.
For $j \leq k$, it is clear that $c_{k}$ is also an empty circle in iteration $j$, because the refinement process only adds new points. But, $c_{k}$ might not be a circumcircle at iteration $j$. We now show that for each iteration $j$, where $ i \leq j \le k$, there exists a circumcircle $c'_j$ with center $\dot{c}'_j$, and radius $r'_j$ such that (1) $ ||\dot{c}'_j -\dot{c}_{k}|| \le 3r_i/4$ and (2) $r'_j \geq 3r_i/4$.
Let $p_{k}$, $q_{k}$ and $t_{k}$ be the vertices of the Delaunay triangle at iteration $k$ that defines $c_{k}$. We will alter $c_{k}$ in three stages to produce a suitable $c'_{j}$ that is the circumcircle of three points that exist at stage $j$: $p_j$, $q_j$ and $t_j$.
1. Dilate $c_{k}$ until it touches a mesh point $p_j$. Note that $p_j$ might well be $p_{k}$, $q_{k}$, or $t_{k}$, so, $c_k$ might not actually expand at all during this step.
2. Grow the circle by moving its center away from $p_{j}$ along the ray $\overrightarrow{p_j\dot{c}_{k}}$, and maintaining the property that $p_{j}$ lies on the boundary of the circle, until it touches a mesh point $q_{j}$.
3. Continue to grow the circle, maintaining its contact with $p_{j}$ and $q_{j}$, moving its center away from the chord $\overline{p_j q_j}$, until it touches a vertex $t_j$.
The resulting circle $c'_j$ is a circumcircle of a Delaunay triangle $p_j q_j t_j$ at iteration $j$. Moreover, $p_j q_j t_j$ is a poorly-shaped triangle because its circumradius $r'_j$ is at least $r_{k}$. Thus, its center $\dot{c}'_j$ is a candidate at iteration $j$. Note also that $r'_j \geq r_{k} \geq 3r_i/4$.
Consider the triangle $p_j\dot{c}_{k}\dot{c}'_j$, which is non-acute at vertex $\dot{c}_{k}$. Let $x=|\dot{c}_{k}p_j|$ and $y=|\dot{c}_{k}\dot{c}'_j|$. Since $|\angle{\dot{c}'_j\dot{c}_{k}p_j}|$ is non-acute, $(r'_j)^2 \ge x^2 + y^2$. As $r'_{j}$ is the radius of a Delaunay triangle and $j \geq i$, we have $r'_{j} \leq r_{i}$. Combining this with $x \ge r_{k} > 3r_i/4$, we find $r'_j \leq r_i < x + r_i/4$. So we can write $(x+r_i/4)^2 \ge x^2 + y^2$. Simplifying this inequality to $x r_i/2+ r_i^2/16 \ge y^2$ and substituting $x \le r_i$, we derive $9r_i^2/16 \ge y^2$. Hence, $y =||\dot{c}'_j -\dot{c}_{k}|| \le 3r_i/4$.
Because $c'_{j}$ is empty at the end of iteration $j$, we know $\dot{c}'_{j}$ was not chosen during iteration $j$. Because the independent set of candidates that we select is maximal, there must be another circumcircle $c''_{j}$ chosen in iteration $j$ that conflicts with $c'_{j}$. By Lemma \[lem:conflict\_reject\_periodic\], the radius of $c''_j$ is at least one half of the radius of $c'_j$, and so is at least $r'_j/2 \geq 3r_i/8$. Moreover, because the two circles conflict, $\dot{c}''_j$ lies inside $c'_j$, so $||\dot{c}''_j - \dot{c}'_j|| \leq r'_j \leq r_{i}$. Hence, $$||\dot{c}''_j -\dot{c}_{k}|| \leq
||\dot{c}''_j -\dot{c}'_j|| + ||\dot{c}'_j - \dot{c}_{k}||
\leq r_i + 3r_i/4 \leq 7r_i/4.$$
Let $\dot{C}''=\{ \dot{c}''_{i+1}, \dot{c}''_{i+2},
\hdots, \dot{c}''_{j}, \hdots, \dot{c}''_{k} \}$, and let $C''$ be the corresponding set of circumcircles. As $\dot{c}''_{l}$ is inserted during round $l$ for each $l$, each circle $c''_{j} \in C''$ is empty of all the centers $\dot{c}''_{l}$ for $l<j$. So the centers in $\dot{C}''$ are pairwise at least $3r_i/8$ away from each other, and one can draw disjoint circles of radius $3 r_{i}/16$ around each of these points. Moreover, each center in $\dot{C}''$ is a mesh point, so it lies outside the empty circle $c_{k}$; its distance from $\dot{c}_{k}$ is therefore more than $r_{k} > 3r_i/4$, and we showed above that it is at most $7r_i/4$. Hence these disjoint circles lie in an annulus of area at most $\pi(7/4 + 3/16)^2r_i^2 - \pi(3/4 - 3/16)^2 r_i^2$. So, one can pack at most $$\Floor{\frac{\pi[(7/4 + 3/16)^2 - (3/4 - 3/16)^2]r_i^2}
{\pi(3/16)^2r_i^2}} = 97$$ disjoint circles of radius $3 r_{i} / 16$ in this region. This implies $|C''| = k-i \leq 97$, a contradiction.
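The constant in this packing bound is a small arithmetic check; the $\pi r_i^2$ factor cancels, so it can be verified directly:

```python
import math

# Area-based packing bound: disjoint disks of radius (3/16) r_i inside
# the annulus with outer radius (7/4 + 3/16) r_i and inner radius
# (3/4 - 3/16) r_i.  The common factor pi * r_i**2 cancels.
bound = ((7 / 4 + 3 / 16) ** 2 - (3 / 4 - 3 / 16) ** 2) / (3 / 16) ** 2
packed = math.floor(bound)  # at most this many disks fit by area
```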
\[thm:parallelchew\_periodic\] Our parallel implementation of Chew’s refinement algorithm takes at most $\Ceil{98\log_{4/3}(L/s)}$ iterations.
### Parallel Computation of MIS {#sec:MIS}
One can use the parallel maximal independent set algorithm of Luby [@Luby86] to compute a maximal independent set of candidates for each iteration in $O (\log^{2} n)$ parallel time. In this section, we explain how to exploit the geometric structure of the independence relation to compute a maximal independent set in constant parallel time.
We will make extensive use of the result of Lemma \[lem:conflict\_reject\_periodic\] that two circumcircles are conflicting at iteration $j$ only if their radii are within a factor of 2 of each other.
\[lem:constant\] At iteration $j$, if there are $n_{j}$ circumcircles, then a maximal independent set of candidates for Delaunay refinement can be computed in constant parallel time using $n_{j}$ processors.
Let $C^{j}_{h}$ be the set of circumcircles of radius more than $L/2^{h+1}$ and less than or equal to $L/2^{h}$, where $h$ ranges from $0$ to $\log (L/s_{j})$ and $s_{j}$ is the smallest circumradius at iteration $j$. Note that a circumcircle in $C^{j}_{h}$ does not conflict with any circumcircle in $C^{j}_{l}$ if $l > h + 1$.
To compute a maximal set of non-conflicting candidates, we first find, in parallel and independently for all even $h$, a maximal independent set of circumcircles in $C^{j}_{h}$. We will show below that a maximal independent set of circumcircles in $C_{h}^{j}$ can be computed in constant parallel time. Let $I^{j}_{even}$ be the set of independent circumcircles computed. Then, in one parallel step, we can eliminate all conflicting circumcircles in $\cup_{h:odd} C^{j}_{h}$. We then compute a maximal independent set of the remaining circumcircles in $C_{h}^{j}$ for all odd $h$. Let this set be $I^{j}_{odd}$. Then $I^{j}_{even} \cup I^{j}_{odd}$ is a maximal independent set of circumcircles for iteration $j$.
Note that all circumcircles in $C^{j}_{h}$ have radius between $L/2^{h+1}$ and $L/2^{h}$. If we divide the square containing all circumcenters into a $2^{h}$-by-$2^{h}$ grid, then any circumcenter that conflicts with a circumcenter in grid box $(x,y)$ must lie either in grid box $(x,y)$ or in one of its eight grid neighbors.
We color grid boxes $(x,y)$ with color $(x\mod 3, y\mod 3)$. We then cycle through the 9 color classes and, by a method we will explain momentarily, find a maximal independent set of the candidates in each grid-box of the current color in parallel. We then eliminate in parallel the conflicting circumcenters that are in the color classes that have not yet been processed.
Finally, we explain how to compute a maximal independent set among the candidates that lie in a given grid-box. First notice that any maximal independent set of candidates in a grid-box can have at most a constant number of members, and hence a maximal independent set can be found by a constant number of parallel selection-elimination operations: choose a center that has not been eliminated, and in parallel eliminate any centers with which it conflicts.
In a parallel system that supports primitives such as fetch\_and\_add, test\_and\_set, or parallel scan, we can use such a primitive to select, in constant time, a candidate in a grid box. The processor that holds this candidate becomes a “leader” in that round and broadcasts its candidate so that the conflicting candidates can be eliminated. With these primitives, our algorithm can be implemented in constant parallel time. However, if the parallel system does not support these primitives, then for each grid cell we can emulate a parallel scan to select a leader in $O (\log n)$ time, where $n$ is the number of candidate centers in the cell.
In general, many grid cells are empty, and there is no need to generate them at all. We can use hashing to select the grid cells that are not empty. The idea is simple: each candidate can compute the coordinates of its grid cell from the coordinates of its center and its radius. We hash grid cells by their coordinates, so all candidates belonging to a grid cell can independently generate the hash identity of that cell. We can then use the parallel primitives discussed above to compute a maximal independent set of candidates for all non-empty grid cells.
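The grid-and-color scheme for a single size class can be sketched as follows. This is a sequential emulation of our own (the function names and the `(center, radius)` representation are illustrative, not part of the algorithm as stated): the dictionary of non-empty cells plays the role of the hashing just described, and the nine color rounds model the parallel phases.

```python
import math
from collections import defaultdict

def conflicting(a, b):
    """Conflict test of Definition [def:conflict]: each circle
    contains the other's center."""
    (ca, ra), (cb, rb) = a, b
    d = math.dist(ca, cb)
    return d < ra and d < rb

def mis_one_class(candidates, L, h):
    """Maximal independent set for one radius class C_h (radii in
    (L/2**(h+1), L/2**h]), emulated sequentially on a 2**h-by-2**h
    grid.  Only non-empty grid cells are materialized (the dict
    stands in for the hashing); boxes are processed in nine color
    rounds so a chosen leader only conflicts with candidates in
    neighboring boxes of colors not yet processed."""
    cell = L / 2 ** h
    boxes = defaultdict(list)            # grid cell -> candidate indices
    for i, (c, r) in enumerate(candidates):
        boxes[(int(c[0] // cell), int(c[1] // cell))].append(i)

    dead, mis = set(), []
    for color in [(u, v) for u in range(3) for v in range(3)]:
        for (x, y), members in boxes.items():
            if (x % 3, y % 3) != color:
                continue
            for i in members:            # leader selection within the box
                if i in dead:
                    continue
                mis.append(i)
                for dx in (-1, 0, 1):    # eliminate conflicting survivors
                    for dy in (-1, 0, 1):
                        for j in boxes.get((x + dx, y + dy), []):
                            if j != i and conflicting(candidates[i],
                                                      candidates[j]):
                                dead.add(j)
    return [candidates[i] for i in mis]
```

Every candidate ends up either in the returned set or eliminated by a conflicting leader, so the returned set is maximal within the class.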
### Parallelizing Ruppert’s Refinement (PPS) {#sec:parallelruppert}
In this section, we show that our parallelization of Ruppert's method for periodic point sets in 2D takes $O (\log^{2} (L/s))$ iterations. For simplicity, we give an analysis for the case $\beta_R = \sqrt{2}$, although our analysis can be easily extended to the case when $\beta _R = 1 + \epsilon$, for any $\epsilon > 0$. We recall that $\beta_{R}$ is the threshold on the ratio of the circumradius to the shortest edge length that defines a poorly shaped triangle. Thus, for $\beta_{R} = \sqrt{2}$, inserting the circumcenter of a poorly shaped triangle whose shortest edge is $h$ introduces new Delaunay edges of length at least $\sqrt{2}h$.
A periodic point set $P$ in $\RR^2$.

1.  Let $T$ be the Delaunay triangulation of $P$.

2.  For $i = 1, 2, \hdots$ (the outer loop over the classes $\EE_i$), repeat the following inner loop until no poorly-shaped triangle remains in class $\EE_i$:

    a.  Let $\dot{\CC}$ be the set of all circumcenters of poorly-shaped triangles that are in class $\EE_i$.

    b.  Let $\II$ be a maximal independent subset of $\dot{\CC}$.

    c.  Insert all the points in $\II$ in parallel.

    d.  Update the Delaunay triangulation and $\dot{\CC}$.
Let $s$ be the length of the shortest edge in the initial Delaunay triangulation. At each iteration, we assign an edge to class $\EE_{i}$ if its length is in $\IntervalCO{ \sqrt{2}^{i-1}s, \sqrt{2}^{i}s }$. Similarly, we assign a Delaunay triangle to $\EE_{i}$ if its shortest edge has length in $\IntervalCO{ \sqrt{2}^{i-1}s, \sqrt{2}^{i}s }$. There are at most $\lceil \log_{\sqrt{2}}(L/s) \rceil$ of such classes. Using this definition, we can state and analyze the Parallel Ruppert’s Refinement Algorithm.
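The class index of an edge is determined by its length alone. As a small illustrative helper (the function name is ours), the index $i$ with length in $[\sqrt{2}^{i-1}s, \sqrt{2}^{i}s)$ can be computed with a logarithm:

```python
import math

def edge_class(length, s):
    """Class index i such that length lies in
    [sqrt(2)**(i-1) * s, sqrt(2)**i * s), matching the classes E_i
    defined above; s is the shortest initial edge length."""
    return math.floor(math.log(length / s, math.sqrt(2))) + 1
```

(Lengths that fall exactly on a class boundary are sensitive to floating-point rounding; in exact arithmetic the classes partition $[s, L]$.)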
\[thm:parallelruppert\] Given a periodic point set in 2D of diameter $L$, the Parallel Ruppert’s Refinement Algorithm takes $O(\log^{2}(L/s))$ iterations.
Lemmas \[lem:rupertConservation\] and \[lem:upgrade\] prove that after the $i$th iteration of the outer loop, each Delaunay triangle touching an edge in class $\EE _i$ will be well-shaped, and successive iterations cannot degrade the shape of the Delaunay triangles touching that edge. Lemma \[lem:techruppert\] implies that during each iteration of the outer loop, the inner loop of the algorithm will execute at most $O(\log(L/s))$ times. As the outer loop is executed $O(\log(L/s))$ times, the whole algorithm takes at most $O(\log^{2}(L/s))$ iterations.
\[lem:rupertConservation\] During the $i$th iteration of the outer loop of the Parallel Ruppert’s Refinement Algorithm, no Delaunay edges are added to or removed from class $\EE _i$.
\[lem:upgrade\] Suppose $e$ is an edge in $\EE_{j}$ where $j\leq i$. Then during the $i$th outer loop, the radius-edge ratio of triangles containing $e$ does not increase.
\[lem:techruppert\] Let $e \in \EE_{i}$, and let $r_{l}$ be the radius of the larger of the two circumcircles containing $e$ at the end of the $l^{th}$ iteration of the inner loop during the $i$th iteration of the outer loop. Then, at the end of iteration $k = l+81$ of the inner loop, either (1) both Delaunay triangles containing $e$ are well-shaped, or (2) $r_{k} \le 3r_{l}/4$ where $r_{k}$ is the radius of the larger of the two circumcircles containing $e$ after iteration $k$.
Input Domain: [PSLG]{} {#sec:pslg}
----------------------
In this subsection, we extend our parallel algorithm from domains given by periodic point sets to domains defined by planar straight-line graphs. Following Ruppert, we assume that the angle between two adjacent input segments is at least $\pi /2$. A key step in Delaunay refinement for a domain specified by a PSLG is to properly add points to the boundary segments so that the Delaunay mesh conforms to the boundary. In our parallel algorithm, we make the mesh conform to the boundary in two steps. First, we give, in Section \[sec:preprocessing\], an $O (\log (L/s))$-time parallel preprocessing algorithm that inserts points into input segments so that the initial Delaunay mesh conforms to the boundary and no diametral circle intersects any other non-incident input feature. Second, when a segment is encroached during parallel Delaunay refinement, we include its midpoint as a candidate for insertion.
The preprocessing step might not be strictly necessary for our parallel algorithm: one could instead add points to the boundary as needed. However, the preprocessing step simplifies our analysis in this section by greatly reducing the number of cases.
### A generic parallel algorithm (PSLG)
After applying the preprocessing step, the initial Delaunay triangulation conforms to the boundary and no diametral circle contains any point of the triangulation. We will maintain this invariant in our algorithm.
In order to perform parallel refinement, as in Section \[sec:genericParallelDelaunayRefinement\], we need a rule of [*independence*]{} among candidates for refining boundary segments and poorly shaped triangles. We first recall the set of candidates for insertion defined in Section \[sec:DelaunayRefinement\].
Let $\BB$ be the set of circumcircles of poorly shaped triangles whose centers $\dot{\BB}$ encroach some boundary segment. Let $\CC$ be the set of circumcircles of poorly shaped triangles whose centers $\dot{\CC}$ do not encroach any boundary segment. Let $\DD$ be the set of diametral circles that are encroached by some centers in $\dot{\BB }$. Then $\dot{\CC} \cup \dot{\DD}$ is the set of candidate points for insertion.
We still apply Definition \[def:conflict\] to determine whether two circumcenters from $\dot{\CC} $ are independent. Because the angle between two adjacent input segments is at least $\pi /2$, after preprocessing no two diametral circles in $\DD $ overlap. Hence, any two diametral centers from $\dot{\DD }$ are independent.
We will use the following definition of independence between a diametral center in $\dot{\DD }$ and a circumcenter in $\dot{\CC }$. Note that because a circumcenter in $\dot{\CC}$ does not encroach any boundary segment, a diametral circle of $\DD$ does not contain any center in $\dot{\CC }$.
\[def:conflict\_diametral\] A circumcenter $\dot{c} \in \dot{\CC}$ and a diametral center $\dot{d} \in \dot{\DD}$ are [*conflicting*]{} if (i) $\dot{d}$ is inside $c$; and (ii) the radius of $c$ is smaller than $\sqrt{2}$ times the radius of $d$. Otherwise, $\dot{c}$ and $\dot{d}$ (also $c$ and $d$) are [*independent*]{}.
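Definition \[def:conflict\_diametral\] is likewise a direct test on centers and radii. In this illustrative Python sketch (the function name is ours), a circumcenter/diametral-center pair conflicts only when the diametral center lies inside the circumcircle and the circumradius is below $\sqrt{2}$ times the diametral radius:

```python
import math

def conflicting_cc_dc(c_center, c_radius, d_center, d_radius):
    """Conflict test of Definition [def:conflict_diametral]: the
    diametral center lies inside the circumcircle AND the circumradius
    is smaller than sqrt(2) times the diametral radius."""
    inside = math.dist(c_center, d_center) < c_radius
    return inside and c_radius < math.sqrt(2) * d_radius
```

By Lemma \[lem:encroach-ratio\], a circumcircle large relative to the diametral circle cannot be encroached by it, which is why the $\sqrt{2}$ factor appears in the test.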
This definition of independence is motivated by the following lemma first proved by Ruppert [@Ruppert93].
\[lem:encroach-ratio\] If a circumcircle $c$ of radius $r_{c}$ encroaches a diametral circle $d$ of radius $r_{d}$, then $r_{d} \geq r_{c}/\sqrt{2}$.
A domain $\Omega$ given by a PSLG in $\RR^2$.

1.  Apply the parallel preprocessing algorithm of Section \[sec:preprocessing\].

2.  Let $T$ be the initial Delaunay triangulation.

3.  Compute $\dot{\BB \CC}$, an independent subset of $\dot{\BB} \cup \dot{\CC}$.

4.  Let $\dot{\DD}$ be the set of centers of diametral circles encroached by the centers in $\dot{\BB \CC}$.

5.  Let $\II$ be an independent subset of $(\dot{\BB \CC} \cap \dot{\CC}) \cup \dot{\DD}$.

6.  Insert all the points in $\II$ in parallel.

7.  Update the Delaunay triangulation; update $\dot{\BB}$, $\dot{\CC}$, $\dot{\BB \CC}$, and $\dot{\DD}$; repeat from step 3 until no candidate points remain.
The following theorem extends Theorem \[thm:sequential\_parallel\_periodic\] for domains given by PSLGs.
\[thm:sequential\_parallel\] For a domain $\Omega $ specified by a PSLG, suppose $M$ is a mesh produced by an execution of the parallel algorithm above. Then $M$ can be obtained by some execution of one of the sequential Delaunay refinement algorithms discussed in Section \[sec:DelaunayRefinement\].
### Parallelizing Chew’s Refinement (PSLG) {#sec:chew}
To parallelize Chew’s algorithm for domain defined by a PSLG, we apply Algorithm \[alg:genericParallelDelaunayRefinementPSLG\] and use a maximal independent set of the candidates at each iteration. In addition, because each pair of diametral centers in $\dot{\DD }$ is independent, we include all centers $\dot{\DD }$ in the independent set. The parallel algorithm of Section \[sec:MIS\] can be used to construct the maximal independent set.
\[thm:parallelchew\] Our parallel implementation of Chew's refinement algorithm takes $O(\log (L/s))$ iterations for a domain given by a PSLG, where $L$ is the diameter of the domain and $s$ is the smallest local feature size.
### Parallel Preprocessing {#sec:preprocessing}
In the algorithm and proof presented in the last subsection, we assume that the boundary of the domain has been preprocessed to satisfy the following property.
\[def:weakly\] A domain $\Omega$ specified by a PSLG is [*strongly conforming*]{} if no diametral circle contains any vertex or intersects any other non-incident input features.
Clearly, if $\Omega $ is strongly conforming, then the Delaunay triangulation of the vertices of $\Omega $ is conforming to $\Omega $.
We will use the following parallel method to preprocess a domain $\Omega $ to make it strongly conforming. This method repeatedly adds midpoints to boundary segments whose diametral circles intersect non-incident input features.
A PSLG domain $\Omega$ in $\RR^2$.

1.  Let $\GG$ be the set of segments in $\Omega$ whose diametral circles intersect non-incident input features.

2.  Split all the segments in $\GG$ in parallel by midpoint insertion and update $\GG$; repeat until $\GG$ is empty.
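The splitting loop can be sketched as follows. This is a sequential emulation of our own; `encroaches(seg, segments)` is an assumed predicate standing in for the diametral-circle intersection test (it could be answered with the quadtree discussed below), and a segment is represented as a pair of endpoints.

```python
def preprocess(segments, encroaches):
    """Sketch of Parallel Boundary Preprocessing, run sequentially:
    repeatedly split, by midpoint insertion, every segment whose
    diametral circle intersects a non-incident input feature.
    `encroaches` is a hypothetical predicate supplied by the caller."""
    while True:
        bad = [seg for seg in segments if encroaches(seg, segments)]
        if not bad:
            return segments
        for a, b in bad:                 # one parallel round of splits
            m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            segments.remove((a, b))
            segments.extend([(a, m), (m, b)])
```

With a toy predicate that flags any segment of squared length greater than 1, a single segment of length 2 is split exactly once into two unit segments.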
\[lem:preprocessing\] Parallel Boundary Preprocessing terminates in $O(\log(L/s))$ iterations.
In the scheme above, we can grow a quadtree level by level to support the query of whether the diametral circle of a segment intersects another non-incident feature. The number of levels of the quadtree that we need to grow is at most $\log (L/s)$. As shown in [@BernEG94; @BernET99], one can use a balanced quadtree to approximate the local feature size function of $\Omega $ to within a constant factor. Therefore, using a balanced quadtree as in [@BernEG94; @BernET99], we can preprocess the domain in $O(\log (L/s))$ parallel time so that the preprocessed domain is [*strongly feature conforming*]{} as defined below.
\[def:strongly\] Let $\alpha >2 $ be a constant. A domain $\Omega$ specified by a PSLG is [*strongly feature conforming*]{} with parameter $\alpha $ if it is strongly conforming, and in addition, the length of each segment is no more than $\alpha
$ times the local feature size of its midpoint.
In the next subsection, we will present a parallel implementation of Ruppert’s algorithm for domains that are strongly feature conforming and show that it terminates in $O (\log^{2} (L/s))$ iterations.
We use the following lemma to show that the size optimality of our results is not affected much by the preprocessing.
\[lem:lfs\_ratio\] Let $\Omega$ and $\Omega'$ denote the input before and after preprocessing, respectively. Then, for any point $x$ in these domains, $\lfs_{\Omega}(x)/3 \le \lfs_{\Omega'}(x) \le \lfs_{\Omega}(x)$.
### Parallelizing Ruppert’s Refinement (PSLG)
In this section, we show that our parallelization of Ruppert’s method for a domain given by a PSLG takes $O (\log^{2} (L/s))$ iterations. Again, for simplicity, we will only give an analysis for the case when $\beta_R = \sqrt{2}$.
The parallel algorithm follows the basic steps of the parallel Ruppert refinement presented in Section \[sec:parallelruppert\]. First, however, we apply the parallel preprocessing algorithm of Section \[sec:preprocessing\] so that the preprocessed domain is strongly feature conforming. Below, we can therefore assume that $\Omega $ is strongly feature conforming.
Let $s$ be the smallest local feature size of $\Omega $. At each iteration, we assign an edge to class $\EE_{i}$ if its length is in $\IntervalCO{ \sqrt{2}^{i-1}s, \sqrt{2}^{i}s }$. Similarly, we assign a Delaunay triangle to $\EE_{i}$ if its shortest edge has length in $\IntervalCO{ \sqrt{2}^{i-1}s, \sqrt{2}^{i}s }$. There are at most $\lceil \log_{\sqrt{2}}(L/s) \rceil$ of such classes.
A domain $\Omega$ given by a PSLG that is strongly feature conforming.

1.  Let $T$ be the initial Delaunay triangulation.

2.  For each class $\EE_i$ in turn (the outer loop), repeat the following inner loop until no poorly-shaped triangle remains in class $\EE_i$:

    a.  Let $\dot{\BB}$ be the encroaching candidate circumcenters and $\dot{\CC}$ the non-encroaching candidate circumcenters whose triangles are in class $\EE_i$.

    b.  Compute $\dot{\BB \CC}$, an independent subset of $\dot{\BB} \cup \dot{\CC}$.

    c.  Let $\dot{\DD}$ be the set of centers of diametral circles encroached by the centers in $\dot{\BB \CC}$.

    d.  Let $\II$ be a maximal independent subset of $(\dot{\BB \CC} \cap \dot{\CC}) \cup \dot{\DD}$.

    e.  Insert all the points in $\II$ in parallel.

    f.  Update the Delaunay triangulation; update $\dot{\BB}$, $\dot{\CC}$, $\dot{\BB \CC}$, and $\dot{\DD}$.
\[thm:parallelruppertPSLG\] Given a domain specified by a PSLG, the Parallel Ruppert’s Refinement Algorithm takes $O(\log^{2}(L/s))$ iterations.
The proof of Theorem \[thm:parallelruppertPSLG\] is essentially the same as the proof of Theorem \[thm:parallelruppert\] where we need to address the following two issues.
1. The center of a circumcircle could potentially encroach a boundary segment whose length is much larger than the circumradius.
2. The insertion of a midpoint on the boundary could potentially introduce smaller edges.
To address the first issue, we apply the parallel preprocessing algorithm of Section \[sec:preprocessing\] and hence assume $\Omega $ is strongly feature conforming. Hence, if a circumcenter encroaches a boundary segment, the circumradius and the length of the segment are within a constant factor of each other. To address the second, note that each boundary segment can be split at most a constant number of times during the refinement, so it cannot introduce smaller edges too many times.
3D Delaunay Refinement {#sec:3d}
======================
A 3D domain is specified by a PLC (see Section \[sec:input\]). In this section, we assume that the angle between any two intersecting elements, when one is not contained in the other, is at least $90^\circ$. There are three kinds of spheres associated with a 3D Delaunay mesh in which we are interested: the circumspheres, the diametral spheres, and the equatorial spheres defined below.
The [*equatorial sphere*]{} of a triangle in 3D is the smallest sphere that passes through its vertices. A triangular subfacet of a PLC is [*encroached*]{} if its equatorial sphere is not empty.
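The equatorial sphere is centered at the triangle's circumcenter, which can be found by solving a small linear system: writing $u = B-A$, $v = C-A$ and $c - A = \alpha u + \beta v$, the conditions $(c-A)\cdot u = |u|^2/2$ and $(c-A)\cdot v = |v|^2/2$ determine $\alpha$ and $\beta$. The following is an illustrative Python helper of our own (not part of the refinement algorithms):

```python
def equatorial_sphere(A, B, C):
    """Smallest sphere through the vertices of a 3D triangle: its
    center is the circumcenter, obtained by solving the 2x2 system
    for c - A = alpha*u + beta*v with u = B-A, v = C-A."""
    sub = lambda p, q: tuple(a - b for a, b in zip(p, q))
    dot = lambda p, q: sum(a * b for a, b in zip(p, q))
    u, v = sub(B, A), sub(C, A)
    uu, vv, uv = dot(u, u), dot(v, v), dot(u, v)
    det = uu * vv - uv * uv          # nonzero for non-degenerate triangles
    alpha = (vv * uu / 2 - uv * vv / 2) / det
    beta = (uu * vv / 2 - uv * uu / 2) / det
    center = tuple(a + alpha * ui + beta * vi for a, ui, vi in zip(A, u, v))
    radius = dot(sub(center, A), sub(center, A)) ** 0.5
    return center, radius
```

For a right triangle the circumcenter is the midpoint of the hypotenuse, which gives a quick sanity check.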
Chew’s algorithm extends naturally to 3D. In [@Shewchuk98], Shewchuk developed a 3D extension of Ruppert’s algorithm. In Shewchuk’s refinement, given below, a tetrahedron is [*bad*]{} if the ratio of its circumradius to its shortest edge, referred as the radius-edge ratio, is more than a pre-specified constant $\beta_S \ge 2$.
A PLC domain $\Omega$ in $\RR^3$.

1.  Compute $T$, the Delaunay triangulation of the points of $\Omega$.

2.  Let $\dot{C}$ be the set of non-encroaching circumcenters of the bad tetrahedra, $\dot{D}$ the set of non-encroaching equatorial centers of the encroached triangular subfacets, and $\dot{E}$ the set of diametral centers of the encroached subsegments.

3.  Choose a point $a$ from $\dot{C} \cup \dot{D} \cup \dot{E}$.

4.  Insert $a$ and update the Delaunay triangulation.

5.  Update $\dot{C}$, $\dot{D}$, and $\dot{E}$; repeat from step 3 until all three sets are empty.
Parallel 3D Delaunay Refinement {#sec:parallel3D}
-------------------------------
In this subsection, we show that our results for a domain given by a periodic point set can be extended from two dimensions to three dimensions to parallelize both Chew’s and Shewchuk’s algorithm. So far, we have not completed the analysis for domains specified by PLCs, although we think similar results can be obtained.
The following is a parallel Delaunay refinement algorithm for domains specified by 3D periodic point sets.
A periodic point set $P$ in $\RR^3$.

1.  Let $T$ be the Delaunay triangulation of $P$.

2.  Compute $\dot{\CC}$, the set of circumcenters of bad tetrahedra in $T$.

3.  Let $\II$ be an independent subset of $\dot{\CC}$.

4.  Insert all the points in $\II$ in parallel.

5.  Update $T$ and $\dot{\CC}$; repeat from step 3 until $\dot{\CC}$ is empty.
To parallelize Chew’s 3D refinement, we use a [*maximal*]{} independent set of candidate centers in Algorithm \[alg:genericParallelDelaunayRefinement3D\]. With almost the same proof as we have presented in Section \[sec:chew\_periodic\], we can show that the number of iterations needed is $1076\log (L/s)$.
\[thm:sequential\_parallel\_periodic\_3D\] Suppose $M$ is a mesh produced by an execution of the 3D Generic Parallel Delaunay Refinement algorithm. Then $M$ can be obtained by some execution of the sequential Delaunay refinement algorithm.
### Parallelizing Shewchuk’s Refinement
We will present our analysis for the case $\beta_R = \sqrt{2}$, although it can easily be extended to the case $\beta_R = 1 + \epsilon$, for any $\epsilon > 0$. Thus, for $\beta_{R} = \sqrt{2}$, inserting the circumcenter of a poorly shaped triangle whose shortest edge is $h$ introduces new Delaunay edges of length at least $\sqrt{2}h$.
Let $s$ be the length of the shortest edge in the initial Delaunay triangulation. At each iteration, we assign an edge to class $\EE_{i}$ if its length is in $\IntervalCO{ \sqrt{2}^{i-1}s, \sqrt{2}^{i}s }$. Similarly, we assign a Delaunay tetrahedron to $\EE_{i}$ if its shortest edge has length in $\IntervalCO{ \sqrt{2}^{i-1}s, \sqrt{2}^{i}s }$. There are at most $\lceil \log_{\sqrt{2}}(L/s) \rceil$ such classes.
Our parallel implementation of Shewchuk’s algorithm is analogous to our parallel implementation of Ruppert’s algorithm. In addition, our proof in 3D is also analogous to the proof in 2D.
\[thm:parallelShewchuk\] For a given periodic point set $P$ in $\RR^3$ of diameter at most $L$, if the length of the shortest edge in the mesh generated by Shewchuk’s refinement is $s$, then parallel Shewchuk refinement takes $O(\log^{2} (L/s))$ iterations to generate a bounded radius-edge ratio mesh.
Discussion {#sec:discussion}
==========
Polylogarithmic upper bounds on the number of parallel iterations, presented in Sections \[sec:parallelDelaunayRefinement\] and \[sec:parallel3D\], constitute the main component of the analyses of our parallel algorithms. At each iteration, our algorithms perform two main operations: i) compute a maximal independent set of points for parallel insertion; ii) update the Delaunay triangulation by inserting all these points. For the first, we proposed a new constant-time parallel algorithm. For the second, we suggested using an existing logarithmic-time parallel Delaunay triangulation algorithm. These immediately imply polylogarithmic total time complexity for our parallel Delaunay refinement algorithms.
We opted for simplicity in our analyses. Hence, the constants 98 and 81 in Lemmas \[lem:chew2d\_constant\_iterations\_periodic\] and \[lem:techruppert\] are probably not optimal and are likely to be much smaller in practice.
The 3D extensions of Chew’s and Shewchuk’s algorithms do not always guarantee that the resulting mesh has an aspect ratio bounded by a constant. However, they both guarantee a constant bound on the ratio of the circumradius to the length of the shortest edge (the radius-edge ratio) of every tetrahedron in the final mesh. So the meshes these two algorithms generate may contain slivers, which are elements with near-zero aspect ratio but a constant radius-edge ratio. Several quality enhancing and guaranteeing meshing algorithms [@ChengDEFT99; @Chew97; @EdelsbrunnerLMSTTUW00; @LiTeng01] have been developed recently. Cheng [*et al.*]{} [@ChengDEFT99] and Edelsbrunner [*et al.*]{} [@EdelsbrunnerLMSTTUW00] have already given the parallel complexity of their sliver removal algorithms. Our framework can be used to analyze the parallel complexity of the other two algorithms, by Chew [@Chew97] and by Li and Teng [@LiTeng01].
We conclude the paper with two conjectures.
- There is a parallel implementation of Ruppert’s [@Ruppert93] and Shewchuk’s [@Shewchuk98] algorithms that runs in $O(\log (L/s))$ iterations.
- There is a parallel Ruppert’s [@Ruppert93] and Shewchuk’s [@Shewchuk98] algorithm that runs in $O(\log n)$ time where $n$ is the input complexity. Notice that Bern [*et al.*]{} [@BernET99] showed that the quadtree algorithm can be implemented in $O(\log n)$ time with $K$ processors.
We would also like to see results that establish the parallel complexity of other mesh generation algorithms such as sink insertion [@EdelsbrunnerG01].
Acknowledgments {#sec:ack}
===============
We thank Jeff Erickson, Sariel Har-Peled and Dafna Talmor for helpful conversations and comments on the paper. We also thank Jonathan Shewchuk for helpful emails about boundary assumptions and boundary preprocessing for Ruppert’s algorithm.
---
abstract: |
The problem of supervised classification (or discrimination) with functional data is considered, with a special interest on the popular $k$-nearest neighbors ($k$-NN) classifier.
First, relying on a recent result by Cérou and Guyader (2006), we prove the consistency of the $k$-NN classifier for functional data whose distribution belongs to a broad family of Gaussian processes with triangular covariance functions.
Second, on a more practical side, we check the behavior of the $k$-NN method when compared with a few other functional classifiers. This is carried out through a small simulation study and the analysis of several real functional data sets. While no global “uniform” winner emerges from such comparisons, the overall performance of the $k$-NN method, together with its sound intuitive motivation and relative simplicity, suggests that it could represent a reasonable benchmark for the classification problem with functional data.
*Key words and phrases. Supervised classification, functional data, projections method, nearest neighbors, discriminant analysis.*
*AMS 2000 subject classification. Primary 62G07; secondary 62G20.*
author:
- |
Amparo Baíllo[^1] and Antonio Cuevas[^2]\
Departamento de Análisis Económico: Economía Cuantitativa, Univ. Autónoma de Madrid, Spain\
Departamento de Matemáticas, Univ. Autónoma de Madrid, Spain
title: 'Supervised functional classification: A theoretical remark and some comparisons'
---
**1. Introduction**
*1.1 Some background on supervised classification*
Supervised classification is the modern name for one of the oldest statistical problems in experimental science: to decide whether an individual, from which just a random measurement $X$ (with values in a “feature space” ${\cal F}$ endowed with a metric $D$) is known, either belongs to the population $P_0$ or to $P_1$. For example, in a medical problem $P_0$ and $P_1$ could correspond to the group of “healthy” and “ill” individuals, respectively. The decision must be taken from the information provided by a “training sample” $\mathcal X_n = \{ (X_i,Y_i), 1\leq i\leq n \}$, where $X_i$, $i=1,\ldots,n$, are independent replications of $X$, measured on $n$ randomly chosen individuals, and $Y_i$ are the corresponding values of an indicator variable which takes values 0 or 1 according to the membership of the $i$-th individual to $P_0$ or $P_1$. Thus the mathematical problem is to find a “classifier” $g_n(x)=g_n(x;\mathcal X_n)$, with $g_n:{\cal F}\rightarrow
\{0,1\}$, that minimizes the classification error $P\{g_n(X)\neq Y\}$.
The term “supervised” refers to the fact that the individuals in the training sample are supposed to be correctly classified, typically using “external” non-statistical procedures, so that they provide a reliable basis for the assignation of the new observation. This problem, also known as “statistical discrimination” or “pattern recognition”, is at least 70 years old. The origin goes back to the classical work by Fisher (1936) where, in the $d$-variate case ${\cal F}={\mathbb R}^d$, a simple “linear classifier” $g_n(x)={\mathbbm 1}_{\{x: w^\prime x+w_0>0\}}$ was introduced (${\mathbbm 1}_A$ stands for the indicator function of a set $A\subset {\cal F}$).
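For concreteness, Fisher’s linear classifier can be fitted in a few lines. The numpy sketch below assumes equal priors and a pooled covariance estimate; the function names are ours:

```python
import numpy as np

def fisher_classifier(X0, X1):
    """Fit w, w0 of Fisher's linear rule from samples of P0 and P1."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    pooled = np.vstack([X0 - m0, X1 - m1])
    S = np.cov(pooled.T)                       # pooled covariance (up to scale)
    w = np.linalg.solve(S, m1 - m0)
    w0 = -w @ (m0 + m1) / 2                    # threshold at the midpoint
    return lambda x: int(w @ x + w0 > 0)       # the rule 1{w'x + w0 > 0}
```

Rescaling $S$ rescales $w$ and $w_0$ together, so the sign of $w'x+w_0$, and hence the classifier, is unaffected by the covariance normalization chosen.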
A deep and insightful perspective on the supervised classification problem can be found in the book by Devroye et al. (1996). Other useful textbooks are Hand (1997) and Hastie et al. (2001). All of them focus on the standard multivariate case ${\cal F}={\mathbb R}^d$.
It is not difficult to prove (e.g., Devroye et al., 1996, p. 11) that the optimal classification rule (often called “Bayes rule”) is $$\label{opt}
g^*(x)={\mathbbm 1}_{\{\eta(x)>1/2\}},$$ where $\eta(x)=E(Y|X=x)$. Of course, since $\eta$ is unknown the exact expression of this rule is usually unknown, and thus different procedures have been proposed in order to approximate it. In particular, it can be seen that Fisher’s linear rule is optimal provided that the conditional distributions of $X|Y=0$ and $X|Y=1$ are both normal with identical covariance matrix. While these conditions look quite restrictive, and it is straightforward to construct problems where any linear rule has a poor performance, Fisher’s classifier is still by far the most popular choice among users.
A simple non-parametric alternative is given by the $k$-nearest neighbors ($k$-NN) method which is obtained by replacing the unknown regression function $\eta(x)$ in (\[opt\]) with the regression estimator $$\label{RegEstkNN}
\eta_n(x) = \frac{1}{k} \sum_{i=1}^n {\mathbbm 1}_{\{ X_i\in k(x) \}} Y_i$$ where $k=k_n$ is a given (integer) smoothing parameter and “$X_i\in k(x)$” means that $X_i$ is one of the $k$ nearest neighbors of $x$. More concretely, if the pairs $(X_i,Y_i)_{1\leq i \leq n}$ are re-indexed as $(X_{(i)},Y_{(i)})_{1\leq i \leq n}$ so that the $X_{(i)}$’s are arranged in increasing distance from $x$, $D(x,X_{(1)}) \leq D(x,X_{(2)}) \leq \ldots \leq D(x,X_{(n)})$, then $k(x) = \{ X_{(i)},1\leq i \leq k \}$. This leads to the $k$-NN classifier $g_n(x) = {\mathbbm 1}_{ \{ \eta_n(x) > 1/2 \}}$.
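A direct sketch of this classifier for discretized curves (our own minimal implementation; the metric $D$ is passed in as a function):

```python
import numpy as np

def knn_classifier(x, X_train, Y_train, k, metric):
    """k-NN rule: 1 iff the k-NN regression estimate of eta exceeds 1/2."""
    d = np.array([metric(x, xi) for xi in X_train])
    nearest = np.argsort(d)[:k]            # indices of the k nearest curves
    eta_n = Y_train[nearest].mean()        # (1/k) * sum of their labels
    return int(eta_n > 0.5)

# distance in (C[a,b], ||.||_inf) between curves sampled on a common grid
sup_norm = lambda f, g: np.abs(f - g).max()
```

Replacing `sup_norm` with a discretized $L^2$ norm gives the other variant compared in Section 3.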
It is well-known that, in addition to this simple classifier, several other alternative methods (kernel classifiers, neural networks, support vector machines,...) have been developed and extensively analyzed in recent years. However, when used in practice with real data sets, the performance of Fisher’s rule is often found to be very close to that of the best one among all the main alternative procedures. On these grounds, Hand (2006) has argued in a provocative paper about the “illusion of progress” in supervised classification techniques. The central idea would be that the study of new classification rules often fails to take into account the structure of real data sets and it tends to overlook the fact that, in spite of its theoretical limitations, Fisher’s rule is quite satisfactory in many practical applications. This, together with its conceptual simplicity, explains its popularity over the years.
*1.2 The purpose and structure of this paper*
We are concerned here with the problem of (binary) supervised classification with functional data. That is, we consider the general framework indicated above but we will assume throughout that the space $({\cal F},D)$ where the random elements $X_i$ take values is a separable metric space of functions. For some theoretical results (Theorem 2) we will impose a more specific assumption by taking ${\cal F}$ as the space $C[a,b]$ of real continuous functions defined in a closed finite interval $[a,b]$, with the usual supremum norm $\Vert \; \Vert_\infty$.
The study of discrimination techniques with functional data is not as developed as the corresponding finite-dimensional theory but, clearly, is one of the most active research topics in the booming field of functional data analysis (FDA). Two well-known books including broad overviews of FDA with interesting examples are Ferraty and Vieu (2006) and Ramsay and Silverman (2005). Other recent more specific references will be mentioned below.
There are of course several important differences between the theory and practice of supervised classification for functional data and the classical development of this topic in the finite-dimensional case, where typically the data dimension $d$ is much smaller than the sample size $n$ (the “high-dimensional” case where $d$ is “large”, and usually $d>n$, requires a separate treatment). A first important practical difference is the role of Fisher’s linear discriminant method as a “default” choice and a benchmark for comparisons. As we have mentioned, this holds for the finite dimensional cases with “small” values of $d$ but it is no longer true if functional (or high-dimensional) data are involved. To begin with, there is no obvious way to apply in practice Fisher’s idea in the infinite-dimensional case, as it requires inverting a linear operator, which is in general not a straightforward task in functional spaces; see, however, James and Hastie (2001) for an interesting adaptation of linear discrimination ideas to a functional setting. Then, the question is whether there exists any functional discriminant method, based on simple ideas, which could play a reference role similar to that of Fisher’s method in the finite dimensional case. The results in this paper suggest (as a partial, not definitive, answer) that the $k$-NN method could represent a “default standard” in functional settings.
Another difference, particularly important from the theoretical point of view, concerns the universal consistency of the $k$-NN classifier. A classical result by Stone (1977) establishes that in the finite-dimensional case (with $X_i\in{\mathbb R}^d$) the conditional error of the $k$-NN classifier $$\label{CondProbErr}
L_n=P \{ g_n(X)\neq Y |\mathcal X_n\},$$ converges in probability (and also in mean) to that of the Bayes (optimal) rule $g^*$, that is, $E(L_n)\rightarrow L^*=P \{ g^*(X)\neq Y \}$, provided that $k_n\to\infty$ and $k_n/n\to 0$ as $n\to\infty$. This result holds universally, that is, irrespective of the distribution of the variable $(X,Y)$. The interesting point here is that this universal consistency result is no longer valid in the infinite-dimensional setting. As recently proved by Cérou and Guyader (2006), if the space ${\cal F}$ where $X$ takes values is a general separable metric space, a non-trivial condition must be imposed on the distribution of $(X,Y)$ in order to ensure the consistency of the $k$-NN classifier.
The aim of this paper is twofold, with a common focus on the $k$-NN classifier and in close relation with the above mentioned two differences between the classification problem in finite and infinite settings. First, on the theoretical side, we have a further look at the consistency theorem in Cérou and Guyader (2006) by giving concrete non-trivial examples where their consistency condition is fulfilled. Second, from a more practical viewpoint, we will carry out numerical comparisons (based both on Monte Carlo studies and real data examples) to assess the performance of different functional classifiers, including $k$-NN.
This paper is organized as follows. In Section 2 the consistency of the functional $k$-NN classifier is established, as a consequence of Theorem 2 in Cérou and Guyader (2006), for a broad class of Gaussian processes. In Section 3 other functional classifiers recently considered in the literature are introduced and briefly commented on. They are all compared through a simulation study (based on two different models) as well as six real data examples, very much in the spirit of Hand’s (2006) paper, where the performance of the classical Fisher’s rule was assessed in terms of its discrimination capacity in several randomly chosen data sets.
**2. On the consistency of the functional $k$-NN classifier**
In the functional classification problem several auxiliary devices have been used to overcome the extra difficulty posed by the infinite dimensional nature of the feature space. They include dimension reduction techniques (e.g., James and Hastie 2001, Preda [*et al.*]{} 2007), random projections combined with the use of data-depth measures (Cuevas [*et al.*]{} 2007) and different adaptations to the functional framework of several non-parametric and regression-based methods, including kernel classifiers (Abraham et al. 2006, Biau et al. 2005, Ferraty and Vieu 2003), reproducing kernel procedures (Preda 2007), logistic regression (Müller and Stadtmüller 2005) and multilayer perceptron techniques with functional inputs (Ferré and Villa 2006).
*2.1 On the consistency of the functional $k$-NN classifier*
The functional $k$-NN classifier also belongs to the class of procedures adapted from the usual non-parametric multivariate setup. Nevertheless, unlike most of the above-mentioned functional methodologies, the $k$-NN procedure works according to exactly the same principles in the finite and infinite-dimensional cases. It is defined by $g_n(x) = {\mathbbm 1}_{ \{ \eta_n(x) > 1/2 \}}$, where $\eta_n$ is the $k$-NN regression estimator (\[RegEstkNN\]), whose definition is formally identical to that of the finite-dimensional case. The intuitive interpretation is also the same in both cases. No previous data manipulation, projection or dimension reduction technique is required in principle, apart from the discretization process necessarily involved in the practical handling of functional data. In the present section we offer some concrete examples where the $k$-NN functional classifier is weakly consistent. As we have mentioned in the previous section, this is a non-trivial point since the $k$-NN classifier is no longer universally consistent in the case of infinite-dimensional inputs $X$.
Throughout this section the feature space where the variable $X$ takes values is a separable metric space $({\cal
F},D)$. We will denote by $P_X$ the distribution of $X$ defined by $P_X (B) = P \{ X\in B \} \quad \mbox{for } B\in\mathcal B_{\mathcal
F}$, where $\mathcal B_{\mathcal F}$ are the Borel sets of $\mathcal
F$.
Let us now consider the following regularity assumption on the regression function $\eta(x)=E(Y|X=x)$
(BC) Besicovitch condition:
: $$\lim_{\delta\to 0} \frac{1}{P_X(B_{X,\delta})} \int_{B_{X,\delta}} \eta(z) dP_X(z) = \eta(X)
\quad \mbox{in probability},$$ where $B_{x,\delta} := \{ z\in \mathcal F: D(x,z)\leq \delta \}$ is the closed ball with center $x$ and radius $\delta$.
Under [**(BC)**]{}, Cérou and Guyader (2006, Th. 2) obtain the following consistency result.
*Denote by $L_n$ and $L^*$, respectively, the conditional error associated with the above defined $k$-NN classifier and the Bayes (optimal) error for the problem at hand. If $({\cal F},D)$ is separable and condition [**(BC)**]{} is fulfilled, then the $k$-NN classifier is weakly consistent, that is, $E(L_n)\rightarrow L^*$ as $n\to\infty$, provided that $k\to\infty$ and $k/n\to 0$.*
The Besicovitch condition also plays an important role in the consistency of kernel rules (see Abraham et al. 2006).
Cérou and Guyader (2006) have also considered the following more convenient condition (called $P_X$-continuity) that ensures [**(BC)**]{}: For every $\epsilon>0$ and for $P_X$-a.e. $x\in \mathcal F$ $$\lim_{\delta\to 0} P_X \{ z\in \mathcal F: |\eta(z)-\eta(x)|>\epsilon \mid D(x,z)<\delta \} = 0.$$ However, for our purposes, it will be sufficient to observe that the continuity ($P_X$-a.e.) of $\eta(x)$ also implies [**(BC)**]{}. We are interested in finding families of distributions of $(X,Y)$ under which the regression function $\eta(x)$ is continuous ($P_X$-a.e.) and hence [**(BC)**]{} holds.
From now on we will use the following notation. Let $\mu_i$ be the distribution of $X$ conditional on $Y=i$, that is, $\mu_i(B) = P \{ X\in B|Y=i \}$, for $B\in \mathcal B_{\mathcal F}$ and $i=0,1$. We denote by $S_i \subset \mathcal F$ the support of $\mu_i$, for $i=0,1$, and $S=S_0\cap S_1$. The expression $\mu_0 << \mu_1$ will denote that $\mu_0$ is absolutely continuous with respect to $\mu_1$. Also we will assume that $p=P\{Y=0\}$ fulfills $p\in(0,1)$.
The following theorem shows that the property of continuity (resp. $P_X$-continuity) of $\eta(x)$, and hence the weak consistency of the $k$-NN classifier, follows from the continuity (resp. $P_X$-continuity) of the Radon-Nikodym derivative of $\mu_0$ with respect to $\mu_1$, provided it exists.
[Theorem 1:]{}
*Assume that $P_X(\partial S)=0$ and that $\mu_0 << \mu_1$ and $\mu_1 << \mu_0$ on $S$. Then the following inequality holds for $P_X$-a.e. $x,z\in{\cal F}$. $$|\eta(z)-\eta(x)| \leq \frac{p}{1-p} \left|\frac{d\mu_0}{d\mu_1}(x) -
\frac{d\mu_0}{d\mu_1}(z)\right|,$$ where $d\mu_0/d\mu_1$ denotes the Radon-Nikodym derivative of $\mu_0$ with respect to $\mu_1$. When $S_0=S_1=S$ the assumption $P_X(\partial S)=0$ may be dropped.*
In particular, $\eta$ is continuous $P_X$-a.e. (resp. $P_X$-continuous) whenever $d\mu_0/d\mu_1$ is continuous $P_X$-a.e. (resp. $P_X$-continuous). Of course, a similar result holds by interchanging the sub-indices 0 and 1 and replacing $p$ by $1-p$.
[Proof:]{} Define $\mu=\mu_0+\mu_1$. Then $\mu_i << \mu$, for $i=0,1$, and we can define the Radon-Nikodym derivatives $f_i = d\mu_i/d\mu$, for $i=0,1$. From the definition of the conditional expectation we know that $\eta(x)=E(Y|X=x)=P(Y=1|X=x)$ can be expressed by $$\label{etaBayes}
\eta(x) = \frac{f_1(x)(1-p)}{f_0(x) p + f_1(x)(1-p)}.$$ Observe that $\mu \lvert_{S^c\cap S_i} = \mu_i\lvert_{S^c\cap S_i}$ and thus $f_i \lvert_{S^c\cap S_i} = \mathbbm{1}_{S^c\cap S_i}$, for $i=0,1$. Since $\mu_0 << \mu_1$ and $\mu_1 << \mu_0$ on $S$ then, on this set, we can define the Radon-Nikodym derivatives $d\mu_0/d\mu_1$ and $d\mu_1/d\mu_0$. In this case, it also holds that $\mu\lvert_S << \mu_i\lvert_S$, for both $i=0,1$ and $$\frac{d\mu}{d\mu_i}(x) = 1 + \frac{d\mu_{1-i}}{d\mu_i} (x)
\qquad \mbox{for any } x\in S.$$ Then (see, e.g., Folland 1999), for $i=0,1$ and for $P_X$-a.e. $x\in S$, $$\label{DRN}
f_i(x) = \frac{d\mu_i}{d\mu}(x) = \left( \frac{d\mu}{d\mu_i}(x) \right)^{-1}
= \frac{1}{1 + \frac{d\mu_{1-i}}{d\mu_i} (x) }$$ Substituting (\[DRN\]) into expression (\[etaBayes\]) we get $$\begin{aligned}
\eta(x) & = & \left\{ \begin{array}{l}
0 \quad \mbox{if } x\in S_0\cap S^c \\
1 \quad \mbox{if } x\in S_1\cap S^c \\
\displaystyle \frac{1-p}{p \frac{d\mu_0}{d\mu_1}(x) + 1-p} \quad \mbox{if } x\in S .
\end{array} \right.\label{etax}\end{aligned}$$ Using this last expression we can see that if $P_X(\partial S)=0$ and if $d\mu_0/d\mu_1$ is continuous $P_X$-a.e. (resp. $P_X$-continuous) on $S$ then $\eta$ is also continuous $P_X$-a.e. (resp. $P_X$-continuous) on $S$. To see this it suffices to observe that, for $P_X$-a.e. $x,z\in \mbox{int}(S)$, $$\begin{aligned}
|\eta(z)-\eta(x)| & & = \left| \frac{1-p}{p \frac{d\mu_0}{d\mu_1}(z) + 1-p} -
\frac{1-p}{p \frac{d\mu_0}{d\mu_1}(x) + 1-p} \right| \\
& & \leq \frac{p}{1-p} \left|\frac{d\mu_0}{d\mu_1}(x) - \frac{d\mu_0}{d\mu_1}(z)\right| .\end{aligned}$$ To derive the last inequality we have used that, as $\mu_i$, $i=0,1$, are positive measures, the Radon-Nikodym derivative $d\mu_0/d\mu_1$ is also non-negative.
In order to be able to combine Theorem 1 and the consistency result in Cérou and Guyader (2006, Th. 2), we are interested in finding distributions $\mu_0,\mu_1$ of an infinite-dimensional random element $X$ such that $\mu_0 << \mu_1$ and $\mu_1 << \mu_0$ with continuous Radon-Nikodym derivatives. Measures $\mu_0$ and $\mu_1$ satisfying that $\mu_0 << \mu_1$ and $\mu_1 << \mu_0$ on $S$ are said to be [*equivalent*]{} on $S$.
Let us denote by $(C[a,b],\|\;\|_\infty)$ the metric space of continuous real-valued functions $x$ defined on the interval $[a,b]$, endowed with the supremum norm, $\| x\|_\infty=\sup\{|x(t)|:t\in [a,b]\}$. Also let $C^{2}[a,b]$ be the space of twice continuously differentiable functions defined on $[a,b]$.
In the next theorem we show a broad class of Gaussian processes fulfilling the conditions of Theorem 2 in Cérou and Guyader (2006). Thus the consistency of the $k$-NN classifier is guaranteed for them. Key elements in the proof are the results by Varberg (1961) and Jørsboe (1968) providing explicit expressions for the Radon-Nikodym derivative of a Gaussian measure with respect to another one. Under the Gaussianity assumption, the model is completely determined by its mean and covariance functions. For the sake of a clearer and more systematic presentation the statement is divided into three parts: The first one applies to the case where the mean function in both functional populations, with distributions $\mu_0$ and $\mu_1$ (corresponding to $X|Y=0$ and $X|Y=1$), is common and the difference between both processes lies in the covariance functions (which however keep a common structure). The second part considers the dual case where the difference lies in the mean functions and the covariance structure is common. Finally, the third part of the theorem generalizes the previous two statements by including the case of different mean and covariance functions.
[Theorem 2:]{}
*Let $(\mathcal F,D) = (C[a,b],\| \; \|_\infty)$ with $0\leq a<b<\infty$.*
1. Assume that $X|Y=i$, for $i=0,1$, are Gaussian processes on $[a,b]$, whose mean function is zero and with covariance functions $\Gamma_i(s,t) = u_i(\min(s,t)) \, v_i(\max(s,t))$, for $s,t\in[a,b]$, where $u_i,v_i$, for $i=0,1$, are positive functions in $C^{2}[a,b]$. Assume also that $v_i$, for $i=0,1$, and $v_1u_1'-u_1v_1'$ are bounded away from zero on $[a,b]$, that $u_1v_1'-u_1'v_1 = u_0v_0'-u_0'v_0$ and that $u_1(a)=0$ if and only if $u_0(a)=0$. Then $d\mu_0/d\mu_1$ is continuous on $\mathcal F$.
2. Assume that $X|Y=i$, for $i=0,1$, are Gaussian processes on $[a,b]$, with equal covariance function $\Gamma(s,t) = u(\min(s,t)) \, v(\max(s,t))$, for $s,t\in[a,b]$, where $u,v\in C^{2}[a,b]$ are positive functions and $v$ and $vu'-uv'$ are bounded away from zero on $[a,b]$. Assume also that the mean function of $X|Y=1$ is 0 and that of $X|Y=0$ is a function $m\in C^2[a,b]$, such that $m(a)=0$ whenever $u(a)=0$. Then $d\mu_0/d\mu_1$ is continuous on $\mathcal F$.
3. Assume that $X|Y=i$, for $i=0,1$, are Gaussian processes on $[a,b]$, with mean functions $m_i\in C^{2}[a,b]$ and covariance functions $\Gamma_i(s,t) = u_i(\min(s,t)) \, v_i(\max(s,t))$, for $s,t\in[a,b]$, where $u_i,v_i$, for $i=0,1$, are positive functions in $C^{2}[a,b]$ which fulfill the same conditions imposed in (a). Assume also that $m_i(a)=0$ whenever $u_i(a)=0$. Then $d\mu_0/d\mu_1$ is continuous on $\mathcal F$.
Therefore, under the assumptions in either (a), (b) or (c), the $k$-NN classifier discriminating between $\mu_0$ and $\mu_1$ is weakly consistent when $k\to\infty$ and $k/n\to 0$.
[Proof:]{}
1. Varberg (1961, Th. 1) shows that, under the assumptions of (a), $\mu_0$ and $\mu_1$ are equivalent measures and the Radon-Nikodym derivative of $\mu_0$ with respect to $\mu_1$ is given by $$\label{J1}
\frac{d\mu_0}{d\mu_1}(x) = C_1 \, \exp\left\{ \frac{1}{2} \left[ C_2 x^2(a) +
\int_a^b f(t) d\left( \frac{x^2(t)}{v_0(t)v_1(t)} \right) \right] \right\}$$ where $$C_1 = \left\{ \begin{array}{l}
\left( \frac{v_0(a)v_1(b)}{v_0(b)v_1(a)} \right)^{1/2} \quad \mbox{if } u_0(a)=0 \\
\left( \frac{u_1(a)v_1(b)}{v_0(b)u_0(a)} \right)^{1/2} \quad \mbox{if } u_0(a)\ne 0
\end{array} \right.
\qquad
C_2 = \left\{ \begin{array}{l}
0 \quad \mbox{if } u_0(a)=0 \\
\left( \frac{v_0(a)u_0(a)-u_1(a)v_1(a)}{v_1(a)v_0(a)u_0(a)u_1(a)} \right)^{1/2} \quad \mbox{if } u_0(a)\ne 0
\end{array} \right.$$ and $$f(s) = \frac{v_1(s)v_0'(s)-v_0(s)v_1'(s)}{v_1(s)u_1'(s)-u_1(s)v_1'(s)} \quad \mbox{for } s\in [a,b] .$$ Observe that, by the assumptions of the theorem, this function $f$ is differentiable with bounded derivative. Thus $f$ is of bounded variation and it may be expressed as the difference of two bounded positive increasing functions. Therefore the stochastic integral (\[J1\]) is well defined and it can be evaluated by integrating by parts, $$\frac{d\mu_0}{d\mu_1}(x) = C_1 \exp \left[ \frac{1}{2} \left( C_3x^2(a) + C_4 x^2(b)
- \int_a^b \frac{x^2(t)}{v_0(t)v_1(t)} df(t)\right) \right]$$ with $ C_3=C_2-f(a)/v_0(a)v_1(a) $ and $ C_4 = f(b)/v_0(b)v_1(b) $. It is clear that this derivative is a continuous functional of $x$ with respect to the supremum norm.
Now, Theorem 1 implies that $\eta(x)$ is continuous and, therefore, the Besicovitch condition [**(BC)**]{} holds and, from Theorem 2 in Cérou and Guyader (2006), the $k$-NN classifier is weakly consistent. Note that the equivalence of $\mu_0$ and $\mu_1$ implies the coincidence of both supports $S_0=S_1=S$.
2. In Jørsboe (1968), p. 61, it is proved that, under the indicated assumptions, $\mu_0$ and $\mu_1$ are equivalent measures with the following Radon-Nikodym derivative $$\frac{d\mu_0}{d\mu_1}(x) = \exp \left\{ D_1 + D_2 \, x(a) + \frac{1}{2} \int_a^b g(t)
d\left( \frac{2x(t)-m(t)}{v(t)} \right) \right\}$$ where $$D_1 = -\frac{m^2(a)}{2 \, u(a) \, v(a)} \mathbbm 1_{\{ u(a)>0 \}} \; , \qquad
D_2 = \frac{m(a)}{u(a) \, v(a)} \mathbbm 1_{\{ u(a)>0 \}}$$ and $$g(t) = \frac{v(t)m'(t)-m(t)v'(t)}{v(t)u'(t)-u(t)v'(t)} \; .$$ Again, the integration by parts gives $$\frac{d\mu_0}{d\mu_1}(x) = \exp \left\{ D_3 + \left( D_2 -2\,\frac{g(a)}{v(a)} \right) x(a)
+ 2 \,\frac{g(b)}{v(b)}\, x(b) - 2 \int_a^b \frac{x(t)}{v(t)}\, dg(t) \right\} ,$$ with $$D_3 = D_1 - \int_a^b g(t) \, d\left( \frac{m(t)}{v(t)} \right) .$$ Thus $d\mu_0/d\mu_1$, and hence $\eta$, are continuous and the consistency of the $k$-NN classifier holds also in this case.
3. Let us denote by $P_{m,\Gamma}$ the distribution of the Gaussian process with mean $m$ and covariance function $\Gamma$. Then $\frac{d\mu_0}{d\mu_1}(x)$ is continuous since (see e.g. Folland 1999) $$\label{RNNS}
\frac{d\mu_0}{d\mu_1}(x) = \frac{dP_{m_0,\Gamma_0}}{dP_{m_1,\Gamma_1}} (x)
= \frac{dP_{m_0,\Gamma_0}}{dP_{0,\Gamma_0}} (x) \,
\frac{dP_{0,\Gamma_0}}{dP_{0,\Gamma_1}} (x) \,
\frac{dP_{0,\Gamma_1}}{dP_{m_1,\Gamma_1}} (x),$$ and, as we have shown in the proofs of (a) and (b), the Radon-Nikodym derivatives in the right-hand side of (\[RNNS\]) are all continuous.
[Remark 1 (Application to the Ornstein-Uhlenbeck processes).]{} Let $X|Y=i$, for $i=0,1$, be Gaussian processes on $[a,b]$, with zero mean and covariance function $\Gamma_i(s,t) = \sigma_i^2 \exp(-\beta_i|s-t|)$, for $s,t\in[a,b]$, where $\beta_i,\sigma_i>0$ for $i=0,1$. Assume that $\sigma_1^2\beta_1=\sigma_0^2\beta_0$. Then these processes satisfy the assumptions in Theorem 2(a).
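To see why, write the Ornstein-Uhlenbeck covariance in the triangular form of Theorem 2(a): $$\sigma_i^2 e^{-\beta_i|s-t|} = \sigma_i^2 e^{\beta_i \min(s,t)} \, e^{-\beta_i \max(s,t)},$$ so that $u_i(t)=\sigma_i^2 e^{\beta_i t}$ and $v_i(t)=e^{-\beta_i t}$, both positive on $[a,b]$ with $u_i(a)>0$. Then $u_iv_i'-u_i'v_i = -2\sigma_i^2\beta_i$, so $v_iu_i'-u_iv_i' = 2\sigma_i^2\beta_i$ is bounded away from zero, and the compatibility condition $u_1v_1'-u_1'v_1 = u_0v_0'-u_0'v_0$ reduces precisely to $\sigma_1^2\beta_1=\sigma_0^2\beta_0$.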
[Remark 2 (Application to the Brownian motion).]{} Theorem 2(b) can also be used to consistently discriminate between a Brownian motion without trend ($m_0=0$) and another one with trend ($m_1\neq 0$). It will suffice to consider the case where $u(t)=t$ and $v\equiv 1$.
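This Brownian-motion case is easy to examine by simulation. The sketch below is entirely our own illustration: it generates paths with and without a linear trend $m(t)=3t$ on a discretization grid and classifies fresh paths by sup-norm $k$-NN, in the spirit of Remark 2 (the trend size, grid, and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 50, 100                          # grid points, training paths per class
t = np.linspace(0, 1, N)

def brownian(rows):
    # standard Brownian motion sampled on the grid t
    return rng.normal(0.0, np.sqrt(1.0 / N), size=(rows, N)).cumsum(axis=1)

X = np.vstack([brownian(n), brownian(n) + 3 * t])   # class 1 carries trend 3t
y = np.r_[np.zeros(n, dtype=int), np.ones(n, dtype=int)]

def knn(x, k=5):
    d = np.abs(X - x).max(axis=1)       # sup-norm distances to training paths
    return int(y[np.argsort(d)[:k]].mean() > 0.5)

acc = (np.mean([knn(x) == 0 for x in brownian(20)]) +
       np.mean([knn(x) == 1 for x in brownian(20) + 3 * t])) / 2
```

With this amount of trend the empirical accuracy is well above chance, consistent with the weak consistency guaranteed by Theorem 2(b).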
[Remark 3 (On triangular covariance functions).]{} Covariance functions of type $\Gamma(s,t) = u(\min(s,t)) \, v(\max(s,t))$, called [*triangular*]{}, have received considerable attention in the literature. For example, Sacks and Ylvisaker (1966) use this condition in the study of optimal designs for regression problems where the errors are generated by a zero mean process with covariance function $K(s,t)$. It turns out that the Hilbert space with reproducing kernel $K$ plays an important role in the results and, as these authors point out, the norm of this space is particularly easy to handle when $K$ is triangular. On the other hand, Varberg (1964) has given an interesting representation of the processes $X(t),\ 0\leq t<b$, with zero mean and triangular covariance function, by proving that they can be expressed in the form $$X(t)=\int_0^bW(u)d_uR(t,u),$$ where $W$ is the standard Wiener process and $R=R(t,u)$ is a function, of bounded variation with respect to $u$, defined in terms of $K$.
[Remark 4 (On plug-in functional classifiers).]{} The explicit knowledge of the conditional expectation (\[etax\]) in the cases considered in Theorem 2 could be explored from the statistical point of view as they suggest to use “plug-in” classifiers obtained by replacing $\eta(x)$ in (\[opt\]) with suitable parametric or semiparametric estimators.
[Remark 5 (On equivalent Gaussian measures and their supports).]{} According to a well-known result by Feldman and Hájek, for any given pair of Gaussian processes, there is a dichotomy in such a way that they are either equivalent or mutually singular. In the first case both measures $\mu_0$ and $\mu_1$ have a common support $S$ so that Theorem 1 is applicable with $S=S_0=S_1$. As for the identification of the support, Vakhania (1975) has proved that if a Gaussian process, with trajectories in a separable Banach space ${\cal F}$, is not degenerate (i.e., the distribution of every non-trivial continuous linear functional is non-degenerate) then the support of such process is the whole space ${\cal F}$. Again, expression (\[etax\]) of the regression functional $\eta$ suggests the possibility of investigating possible nonparametric estimators for the Radon-Nikodym derivative $d\mu_0/d\mu_1$ which would in turn provide plug-in versions of the Bayes rule $g^*(x) = {\mathbbm 1}_{ \{ \eta(x) > 1/2 \}}$ with no further assumption on the structure of the involved Gaussian processes, apart from their equivalence.
**3. Some numerical comparisons**
The aim of this section is to compare (numerically) the performance of several supervised functional classification procedures already introduced in the literature. The procedures are the $k$-NN rule, computed both with respect to the supremum norm $\|\;\|_\infty$ and the $L^2$ norm $\|\;\|_2$, and other discrimination rules reviewed in Section 3.1. One of the objectives of this numerical study is to gain some insight into which classification procedures perform well regardless of the type of functional data under consideration and could thus be considered a sort of benchmark for the functional discrimination problem. Section 3.2 contains a Monte Carlo study carried out on two different functional data generating models. In Section 3.3 we consider six functional real data sets taken from the literature.
*3.1 Other functional classifiers*
Here we will review other classification techniques that have been used in the literature in the context of functional data. From now on we denote by $(t_1,\ldots,t_N)$ the nodes where the functional predictor $X$ has been observed.
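Before reviewing them, note that the baseline $k$-NN rule on such discretized curves is direct to implement for either norm. The following is a minimal sketch; the synthetic data and the value of $k$ are illustrative assumptions, not those used in the study.

```python
import numpy as np

def knn_classify(x, Xtr, ytr, k, norm="sup"):
    """k-NN rule on discretized curves: distances in the sup or L2 norm,
    then a majority vote among the k nearest training curves."""
    diff = Xtr - x
    d = np.abs(diff).max(axis=1) if norm == "sup" else np.linalg.norm(diff, axis=1)
    nearest = ytr[np.argsort(d)[:k]]
    return int(np.mean(nearest) > 0.5)   # ties resolved in favor of class 0

# Two well-separated synthetic populations of discretized curves
rng = np.random.default_rng(6)
Xtr = np.vstack([rng.normal(0.0, 0.3, (30, 51)),
                 rng.normal(1.0, 0.3, (30, 51))])
ytr = np.repeat([0, 1], 30)

lab_sup = knn_classify(np.full(51, 1.0), Xtr, ytr, k=5, norm="sup")
lab_l2 = knn_classify(np.zeros(51), Xtr, ytr, k=5, norm="l2")
```

In practice $k$ would be chosen by cross-validation, as done throughout this section for the tuning parameters of the other methods.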
[*Partial Least Squares (PLS) classification*]{}
Let us first describe the procedure in the context of a multivariate predictor $\mathbf X$. PLS is actually a dimension reduction technique for regression problems with predictor $\mathbf X$ and a response $Y$ (which in the case of classification takes only two values, 0 or 1, depending on which population the individual comes from). The dimension reduction is carried out by projecting $\mathbf X$ onto a lower-dimensional space such that the coordinates of the projected $\mathbf X$, the PLS coordinates, are uncorrelated with each other and have maximum covariance with $Y$. Then, if the aim is classification, Fisher’s linear discriminant is applied to the PLS coordinates of $\mathbf X$ (see Barker and Rayens 2003, Liu and Rayens 2007). In the case of a functional predictor $X$ (see Preda et al. 2007), the procedure described above is applied to the discretized version of $X$, $\mathbf X=(X(t_1),X(t_2),\ldots,X(t_N))$. Here we have chosen the number of PLS directions, among the values 1,…,10, by cross-validation.
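The PLS-plus-Fisher pipeline just described can be sketched with a minimal NIPALS-style implementation. The data-generating step and all parameter values below are hypothetical, chosen only to exercise the code; this is not the implementation used in the study.

```python
import numpy as np

def pls_scores(X, y, n_comp):
    """Extract n_comp PLS score vectors for a single response y
    (NIPALS-style deflation); returns the n x n_comp score matrix."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    scores = []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)                    # weight of maximal covariance
        t = X @ w
        scores.append(t)
        X = X - np.outer(t, X.T @ t / (t @ t))    # deflate the predictors
    return np.column_stack(scores)

# Toy two-class sample of "discretized curves" (hypothetical data)
rng = np.random.default_rng(0)
n, N = 60, 51
X = np.vstack([rng.normal(0.0, 1.0, (n, N)),
               rng.normal(1.0, 1.0, (n, N))])
y = np.repeat([0.0, 1.0], n)

T = pls_scores(X, y, n_comp=3)

# Fisher's linear discriminant applied to the PLS scores
m0, m1 = T[y == 0].mean(0), T[y == 1].mean(0)
Sw = np.cov(T[y == 0], rowvar=False) + np.cov(T[y == 1], rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)
pred = (T @ w > w @ (m0 + m1) / 2).astype(float)
accuracy = (pred == y).mean()
```

In practice the number of components (here fixed at 3) would be selected by cross-validation over 1,…,10, as in the text.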
[*Reproducing Kernel Hilbert Space (RKHS) classification*]{}
We will also define this technique initially for a multivariate predictor $\mathbf X$. For simplicity, we will assume that $\mathbf X$ takes values in $[0,1]^N$. Let $\kappa$ be a function defined on $[0,1]^N\times[0,1]^N$. An RKHS with kernel $\kappa$ is the vector space generated by all finite linear combinations of functions of the form $\kappa_{\mathbf t^*}(\cdot)=\kappa(\mathbf t^*,\cdot)$, for any $\mathbf t^*\in[0,1]^N$, and endowed with the inner product given by $\langle \kappa_{\mathbf t^*}, \kappa_{\mathbf t^{**}}\rangle_\kappa=\kappa(\mathbf t^*,\mathbf t^{**})$. RKHSs are frequently used in the context of Machine Learning (see Evgeniou [*et al.*]{} 2002, Wahba 2002); for their applications in Statistics the reader is referred to the monograph of Berlinet and Thomas-Agnan (2004). In this work we use the Gaussian kernel $\kappa(\mathbf s,\mathbf t) = \exp( -\|\mathbf s-\mathbf t\|_2^2/\sigma_\kappa^2 )$, where $\sigma_\kappa>0$ is a fixed parameter. The classification problem is solved by plugging a regression estimator of the type $\eta_n(\mathbf x) = \sum_{i=1}^n c_i \, \kappa(\mathbf x,\mathbf X_i)$ into the Bayes classifier. When $X$ is a random function, this procedure is applied to the discretized $X$. The parameters $c_i$, for $i=1,\ldots,n$, are chosen to minimize the risk functional $n^{-1} \sum_{i=1}^n (Y_i-\eta_n(X_i))^2 + \lambda \langle \eta,\eta\rangle_\kappa$, where $\lambda>0$ is a penalization parameter. In this work the values of the parameters $\lambda$ and $\sigma_\kappa$ have been chosen by cross-validation via a leave-one-out procedure. According to our results, the performance of the RKHS methodology seems rather sensitive to changes in these parameters and even to the starting point of the leave-one-out procedure just mentioned.
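The penalized fit above has a standard closed form: minimizing $n^{-1}\sum_i(Y_i-\eta_n(X_i))^2+\lambda\langle\eta,\eta\rangle_\kappa$ over coefficients $c$ gives $c=(K+n\lambda I)^{-1}Y$, where $K_{ij}=\kappa(\mathbf X_i,\mathbf X_j)$. A minimal sketch follows; the data and the values of $\sigma_\kappa$ and $\lambda$ are illustrative, not those tuned in the study.

```python
import numpy as np

def gaussian_gram(A, B, sigma):
    """Gram matrix of the Gaussian kernel exp(-||s - t||^2 / sigma^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2)

def rkhs_train(X, y, sigma, lam):
    """Closed-form minimizer of the penalized empirical risk."""
    n = len(y)
    K = gaussian_gram(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def rkhs_classify(Xnew, Xtr, c, sigma):
    """Plug eta_n into the Bayes rule: classify as 1 iff eta_n(x) > 1/2."""
    return (gaussian_gram(Xnew, Xtr, sigma) @ c > 0.5).astype(float)

# Hypothetical discretized curves from two populations
rng = np.random.default_rng(1)
n, N = 50, 51
Xtr = np.vstack([rng.normal(0.0, 1.0, (n, N)),
                 rng.normal(1.0, 1.0, (n, N))])
ytr = np.repeat([0.0, 1.0], n)

sigma = 10.0      # illustrative; the study tunes sigma and lambda by cross-validation
c = rkhs_train(Xtr, ytr, sigma, lam=1e-3)
train_acc = (rkhs_classify(Xtr, Xtr, c, sigma) == ytr).mean()
```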
[*Classification via depth measures*]{}
The idea is to assign a new observation $x$ to that population, $P_0$ or $P_1$, with respect to which $x$ is deeper (see Ghosh and Chaudhuri 2005, Cuevas et al. 2007). From the five functional depth measures considered by Cuevas et al. (2007) we have taken the $h$-mode depth and the random projection (RP) depth.
Specifically, the $h$-mode depth of $x$ with respect to the population given by the random element $X$ is defined as $f_h(x) = E(K_h(\|x-X\|_2))$, where $K_h(\cdot) = h^{-1} K(\cdot/h)$, $K$ is a kernel function (here we have taken the Gaussian kernel $K(t) = \sqrt{2/\pi} \exp(-t^2/2)$) and $h$ is a smoothing parameter. As the distribution of $X$ is usually unknown, in the simulations we actually use the empirical version of $f_h$, $ \hat f_h(x) = n^{-1} \sum_{i=1}^n K_h(\|x-X_i\|_2) $. The smoothing parameter has been chosen as the 20th percentile of the $L^2$ distances between the functions in the training sample (see Cuevas et al. 2007).
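A direct transcription of $\hat f_h$ with the Gaussian kernel and the 20th-percentile bandwidth reads as follows; the synthetic populations are only an illustration of the deeper-population assignment rule.

```python
import numpy as np

SQRT_2_OVER_PI = np.sqrt(2 / np.pi)

def hmode_depth(x, sample, h):
    """Empirical h-mode depth: average of K_h(||x - X_i||_2) over the sample,
    with K_h(t) = K(t / h) / h and Gaussian K."""
    d = np.linalg.norm(sample - x, axis=1)
    return np.mean(SQRT_2_OVER_PI * np.exp(-(d / h) ** 2 / 2) / h)

def percentile_bandwidth(sample, q=20):
    """Bandwidth = q-th percentile of the pairwise L2 distances in the sample."""
    D = np.linalg.norm(sample[:, None, :] - sample[None, :, :], axis=-1)
    return np.percentile(D[np.triu_indices(len(sample), 1)], q)

# Synthetic populations: the depth rule assigns x to the deeper population
rng = np.random.default_rng(2)
P0 = rng.normal(0.0, 1.0, (80, 51))
P1 = rng.normal(3.0, 1.0, (80, 51))
h0, h1 = percentile_bandwidth(P0), percentile_bandwidth(P1)

x = rng.normal(0.0, 1.0, 51)          # a new curve, drawn from population 0
label = 0 if hmode_depth(x, P0, h0) >= hmode_depth(x, P1, h1) else 1
```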
To compute the RP depth the training sample $X_1,\ldots,X_n$ is projected onto a (functional) random direction $a$ (independent of the $X_i$). The sample depth of an observation $x$ with respect to $P_i$ is defined as the univariate depth of the projection of $x$ onto $a$ with respect to the projected training sample from $P_i$. Since $a$ is a random element this definition leads to a random measure of depth, but a single representative value has been obtained by averaging these random depths over 50 independent random directions (see Cuevas and Fraiman 2008 for a certain theoretical development of this idea). If we are working with discretized versions $(x(t_1),\ldots,x(t_N))$ of the functional data $x(t)$, we may take $a$ according to a uniform distribution on the unit sphere of ${\mathbb R}^N$. This can be achieved, for example, setting $a=Z/\|Z\|$, where $Z$ is drawn from standard Gaussian distribution on ${\mathbb R}^N$.
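A sketch of the averaged RP depth follows. The particular univariate depth used on each projection, $\min(F(u),1-F(u))$ with $F$ the empirical distribution of the projected sample, is a halfspace-type choice made here for illustration; the cited works admit other univariate depths.

```python
import numpy as np

def rp_depth(x, sample, n_dir=50, rng=None):
    """Random-projection depth of a discretized curve x: average, over n_dir
    random unit directions a, of the univariate depth of <x, a> within the
    projected sample.  Univariate depth min(F(u), 1 - F(u)) is an assumption."""
    if rng is None:
        rng = np.random.default_rng(0)
    depths = []
    for _ in range(n_dir):
        z = rng.standard_normal(sample.shape[1])
        a = z / np.linalg.norm(z)        # uniform direction on the unit sphere of R^N
        F = np.mean(sample @ a <= x @ a)
        depths.append(min(F, 1.0 - F))
    return float(np.mean(depths))

# Sanity check: a central point is deeper than a far-away one (synthetic data)
rng = np.random.default_rng(3)
sample = rng.standard_normal((100, 51))
d_center = rp_depth(np.zeros(51), sample)
d_far = rp_depth(np.full(51, 5.0), sample)
```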
[*Moving window rule*]{}
The moving window classifier is given by $$g_n(x) = \left\{ \begin{array}{ll}
0 & \mbox{if } \sum_{i=1}^n \mathbbm{1}_{\{Y_i=0,X_i\in B(x,h)\}}
\geq \sum_{i=1}^n \mathbbm{1}_{\{Y_i=1,X_i\in B(x,h)\}} , \\
1 & \mbox{otherwise} ,
\end{array} \right.$$ where $h=h_n>0$ is a smoothing parameter. This classification rule was considered in the functional setting, for instance, by Abraham et al. (2006). In this work the parameter $h$ has been chosen again via cross-validation.
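The rule above is a direct majority count inside the ball $B(x,h)$. The sketch below uses the $L^2$ norm on the discretized curves; the choice of norm, the data, and the value of $h$ are illustrative assumptions (in the study $h$ is chosen by cross-validation).

```python
import numpy as np

def moving_window(x, Xtr, ytr, h):
    """Moving window rule: class 0 iff at least as many class-0 as class-1
    training curves fall in B(x, h) (L2 norm on the grid, an assumption)."""
    in_ball = np.linalg.norm(Xtr - x, axis=1) <= h
    n0 = np.count_nonzero(in_ball & (ytr == 0))
    n1 = np.count_nonzero(in_ball & (ytr == 1))
    return 0 if n0 >= n1 else 1

# Tiny illustration: curves near the class-1 cluster get label 1
rng = np.random.default_rng(4)
Xtr = np.vstack([rng.normal(0.0, 0.2, (30, 51)),
                 rng.normal(2.0, 0.2, (30, 51))])
ytr = np.repeat([0, 1], 30)
h = 5.0
label = moving_window(np.full(51, 2.0), Xtr, ytr, h)
```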
*3.2 Monte Carlo results*
In this section we study two functional data models already considered by other authors. More specifically, in Model 1, similar to the one used in Cuevas et al. (2007), $X|Y=i$ is a Gaussian process with mean $ m_i(t) = 30 \, (1-t)^{1.1^i} \, t^{1.1^{1-i}} $ and covariance function $\Gamma_i(s,t)=0.25\exp(-|s-t|/0.3)$, for $i=0,1$. Observe that this model with smooth trajectories satisfies the assumptions in Theorem 2 and thus we would expect the $k$-NN classification rule (with respect to the $\|\;\|_\infty$ norm) to perform well. Let us note that the value of 1.1 in the exponent of $m_i(t)$ is in fact the one used in Model 1, p. 487, of Cuevas et al. (2007), although a 1.2 was misprinted there instead.
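Model 1 can be simulated on a grid by evaluating $m_i$ and $\Gamma_i$ at the nodes and drawing from the resulting multivariate normal distribution. A sketch (the seed and sample sizes are arbitrary):

```python
import numpy as np

def model1_sample(i, n, N=51, rng=None):
    """Draw n discretized trajectories of X | Y = i under Model 1."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, N)
    m = 30 * (1 - t) ** (1.1 ** i) * t ** (1.1 ** (1 - i))
    Gamma = 0.25 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.3)
    return rng.multivariate_normal(m, Gamma, size=n)

X0 = model1_sample(0, n=200)
X1 = model1_sample(1, n=200)
```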
Model 2 appears in Preda et al. (2007), but here the functions $h_i$, used to define the mean, have been rescaled to have domain $[0,1]$. The trajectories of $X|Y=i$ are given by $$\label{Model2}
X_i(t)=U \, h_1(t) + (1-U) \, h_{i+2}(t) + \epsilon(t) \qquad \mbox{for } i=0,1,$$ where $U$ is uniformly distributed on $[0,1]$, $h_1(t) = 2 \max(3-5|2t-1|,0)$, $h_2(t) = h_1(t-1/5)$, $h_3(t) = h_1(t+1/5)$ and $\epsilon(t)$ is an approximation to continuous-time white noise. In practice, this means that in the discretized approximations $(X(t_1),\ldots,X(t_N))$ to $X(t)$, the variables $\epsilon(t_1),\ldots,\epsilon(t_N)$ are independently drawn from a standard normal distribution.
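Model 2 can be simulated in the same discretized form; as stated above, the white-noise term becomes i.i.d. standard normals at the nodes. A sketch with arbitrary seed and sample size:

```python
import numpy as np

def h1(t):
    return 2 * np.maximum(3 - 5 * np.abs(2 * t - 1), 0.0)

def model2_sample(i, n, N=51, rng=None):
    """Draw n discretized trajectories of X | Y = i under Model 2."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, N)
    shifted = h1(t - 0.2) if i == 0 else h1(t + 0.2)   # h_2 or h_3
    U = rng.uniform(size=(n, 1))
    eps = rng.standard_normal((n, N))                  # discretized white noise
    return U * h1(t) + (1 - U) * shifted + eps

X0 = model2_sample(0, n=400)
```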
The simulation results are summarized in Tables 1 and 2. The number of equispaced nodes where the functional data have been evaluated is the same for both models, $51$. The number of Monte Carlo runs is 100. In every run we generated two training samples (from $X|Y=0$ and $X|Y=1$ respectively) each with sample size 100, and we also generated a test sample of size 50 from each of the two populations. The tables display the descriptive statistics of the proportion of correctly classified observations from these test samples.
\[SimMod1\]
$k$-NN$|_\infty$ $k$-NN$|_2$ PLS RKHS $h$-modal RP(hM) MWR
------------------------- ------------------ ------------- -------- -------- ----------- -------- -------- --
Minimum 0.6200 0.6600 0.6000 0.4800 0.6400 0.5400 0.6600
First quartile 0.8000 0.8000 0.8000 0.6600 0.8000 0.7800 0.8000
Median 0.8400 0.8400 0.8400 0.8400 0.8400 0.8400 0.8400
Mean 0.8396 0.8354 0.8371 0.7999 0.8409 0.8260 0.8393
Third quartile 0.8800 0.8800 0.8800 0.9400 0.8800 0.8800 0.8800
Maximum 0.9800 0.9600 0.9800 1.0000 0.9800 0.9800 1.0000
Std. deviation 0.0603 0.0572 0.0668 0.1457 0.0589 0.0725 0.0634
: Simulation results for Model 1
\[SimMod2\]
$k$-NN$|_\infty$ $k$-NN$|_2$ PLS RKHS $h$-modal RP(hM) MWR
------------------------- ------------------ ------------- -------- -------- ----------- -------- -------- -- --
Minimum 0.8400 0.8400 0.8800 0.8400 0.8600 0.8400 0.8200
First quartile 0.9200 0.9400 0.9600 0.9600 0.9400 0.9400 0.9400
Median 0.9600 0.9600 0.9800 0.9800 0.9800 0.9600 0.9600
Mean 0.9522 0.9558 0.9686 0.9688 0.9657 0.9522 0.9570
Third quartile 0.9800 0.9800 0.9800 1.0000 1.0000 0.9800 0.9800
Maximum 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
Std. deviation 0.0335 0.0355 0.0279 0.0313 0.0308 0.0345 0.0349
: Simulation results for Model 2
Regarding Model 1, observe that there is little difference between the correct classification rates of any of the methods, except for the RKHS procedure which performs worse. In Model 2 the PLS, RKHS and $h$-modal methods slightly outperform the others. When the Monte Carlo study with this model was carried out, we also applied the $k$-NN classification procedures to a spline-smoothed version of the $X$ trajectories. The result was that the mean correct classification rate increased to 0.9582 in the case of the supremum norm and to 0.9624 in the case of the $L^2$ norm. This, together with the analysis of the flies data in the next subsection, seems to suggest that, when the curves $X$ are irregular, smoothing these functions will enhance the $k$-NN discrimination procedure.
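The spline pre-smoothing step mentioned above can be sketched as follows; the smoothing factor and the synthetic noisy trajectories are illustrative assumptions (the exact smoothing setup of the study is not specified here).

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_curves(X, t, s=None):
    """Spline-smooth each discretized trajectory (row of X) on the grid t.
    The smoothing factor s is illustrative."""
    return np.vstack([UnivariateSpline(t, row, s=s)(t) for row in X])

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 51)
truth = np.sin(2 * np.pi * t)
X = truth + rng.normal(0, 0.5, (20, 51))       # noisy synthetic trajectories
Xs = smooth_curves(X, t, s=51 * 0.25)          # s ~ n * noise variance

# Sup-norm error to the underlying smooth curve, before and after smoothing
err_raw = np.abs(X - truth).max(axis=1).mean()
err_smooth = np.abs(Xs - truth).max(axis=1).mean()
```

The point of the exercise is that smoothing brings each observed curve closer, in the supremum norm, to the underlying smooth trajectory, which is precisely the regime where the $\|\;\|_\infty$-based $k$-NN rule performed best.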
*3.3. Some comparisons based on real data sets*
*3.3.1. Brief description of the data sets*
[*Berkeley Growth Data:*]{} The Berkeley Growth Study (Tuddenham and Snyder 1954) recorded the heights of $n_0=54$ girls and $n_1=39$ boys between the ages of 1 and 18 years. Heights were measured at 31 ages for each child. These data have been previously analyzed by Ramsay and Silverman (2002).
[*ECG data:*]{} These are electrocardiogram (ECG) data, studied by Wei and Keogh (2006), from the MIT-BIH Arrhythmia database (see Goldberger [*et al.*]{} 2000). Each observation contains the successive measurements recorded by one electrode during one heartbeat and was normalized and rescaled to have length 85. A group of cardiologists have assigned a label of normal or abnormal to each data record. Due to computational limitations, of the original $2026$ records in the data set, we have randomly chosen only $200$ observations from each group.
[*MCO data:*]{} The variable under study is the mitochondrial calcium overload (MCO), measured every 10 seconds during an hour in isolated mouse cardiac cells. The data come from research conducted by Dr. David García-Dorado at the Vall d’Hebron Hospital (see Ruiz-Meana et al. 2003, Cuevas, Febrero and Fraiman 2004, 2007). In order to assess if a certain drug increased the MCO level, a sample of functions of size $n_0 = 45$ was taken from a control group and $n_1 = 44$ functions were sampled from the treatment group.
[*Spectrometric data:*]{} For each of 215 pieces of meat a spectrometer provided the absorbance attained at 100 different wavelengths (see Ferraty and Vieu 2006 and references therein). The fat content of the meat was also obtained via chemical processing and each of the meat pieces was classified as low- or high-fat.
[*Phoneme data:*]{} The $X$ variable is the log-periodogram (discretized to 150 nodes) of a phoneme. The two populations correspond to phonemes “aa” and “ao” respectively (see more information in Ferraty and Vieu 2006). We have considered a sample of 100 observations from each phoneme.
[*Medflies data:*]{} This dataset was obtained by Prof. Carey from U.C. Davis (see Carey et al. 1998) and has been studied, for instance, by M[ü]{}ller and Stadtm[ü]{}ller (2005). The predictor $X$ is the number of eggs laid daily by a Mediterranean fruit fly for a 30-day period. The fly is classified as long-lived if its remaining lifetime past 30 days is more than 14 days and short-lived otherwise. The number of long- and short-lived flies observed was 256 and 278 respectively.
*3.3.2. Results*
We have applied the classification techniques reviewed in Section 3.1 to the real data sets just described. While carrying out the simulations of Subsection 3.2, we observed that the performance of the RKHS procedure was very dependent on the initial values of the parameters $\sigma_\kappa$ and $\lambda$ provided for the cross-validation algorithm. In fact, finding initial values for these parameters that would finally yield competitive results with respect to the other methods took a considerable time. Thus we decided to exclude the RKHS classification method from the study with real data.
We have computed, via a cross-validation procedure, the mean correct classification rates attained by the different discrimination methods on the real data sets. In Table 3 we display the results. Since the egg-laying trajectories in the medflies data set were very irregular and spiky, we have computed the correct classification rate for both the original data and a smoothed version obtained with splines. The smoothing leads to a better performance of the $k$-NN procedure with the supremum metric, just as it happened in the simulations with Model 2.
\[RealDat\]
Data set $k$-NN$|_\infty$ $k$-NN$|_2$ PLS $h$-modal RP(hM) MWR
------------------------- ------------------ ------------- -------- ----------- -------- -------- -- --
Growth 0.9462 0.9677 0.9462 0.9462 0.9462 0.9570
ECG 0.9900 0.9950 0.9825 0.9900 0.8575 0.8850
MCO 0.8427 0.8315 0.8876 0.7640 0.7079 0.6854
Spectrometric 0.9070 0.8558 0.9163 0.6791 0.6930 0.6558
Phoneme 0.7300 0.7800 0.7400 0.7300 0.7450 0.6950
Medflies (non-smoothed) 0.5468 0.5412 0.5262 0.4925 0.5056 0.5431
(smoothed) 0.5712 0.5431 0.5094 0.5075 0.5543 0.5206
: Mean correct classification rates for the real data sets
As a conclusion we would say that the $k$-NN classification methodology with respect to the $L^\infty$ norm is consistently among the best performing ones when the $X$ trajectories are smooth. The $k$-NN procedure with respect to the $L^2$ norm and the PLS methodology also give good results, although the latter has the drawback of a much higher computation time.
References
Abraham, C., Biau, G. and Cadre, B. (2006). On the kernel rule for function classification. Annals of the Institute of Statistical Mathematics 58, 619-633.
Barker, M. and Rayens, W. (2003). Partial least squares for discrimination. Journal of Chemometrics 17, 166–173.
Berlinet, A. and Thomas-Agnan, C. (2004). Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers.
Biau, G., Bunea, F. and Wegkamp, M. (2005). Functional classification in Hilbert spaces. IEEE Transactions on Information Theory 51, 2163-2172.
Carey, J.R., Liedo, P., M[ü]{}ller, H.G., Wang, J.L. and Chiou, J.M. (1998). Relationship of age patterns of fecundity to mortality, longevity, and lifetime reproduction in a large cohort of Mediterranean fruit fly females. Journal of Gerontology, Ser. A 53, 245–251.
Cérou, F. and Guyader, A. (2006). Nearest neighbor classification in infinite dimension. ESAIM: Probability and Statistics 10, 340-355.
Cuevas, A., Febrero, M. and Fraiman, R. (2004). An ANOVA test for functional data. Computational Statistics and Data Analysis 47, 111–122.
Cuevas, A., Febrero, M. and Fraiman, R. (2007). Robust estimation and classification for functional data via projection-based depth notions. Computational Statistics 22, 481–496.
Cuevas, A. and Fraiman, R. (2008). On depth measures and dual statistics. A methodology for dealing with general data. *Manuscript.*
Devroye, L., Györfi, L. and Lugosi, G. (1996). A Probabilistic Theory of Pattern Recognition. Springer-Verlag.
Evgeniou, T., Poggio, T., Pontil, M. and Verri, A. (2002). Regularization and statistical learning theory for data analysis. Computational Statistics and Data Analysis 38, 421–432.
Ferraty, F. and Vieu, P. (2003). Curves discrimination: A nonparametric functional approach. Computational Statistics and Data Analysis 44, 161–173.
Ferraty, F. and Vieu, P. (2006). Nonparametric Modelling for Functional Data. Springer.
Ferré, L. and Villa, N. (2006). Multilayer perceptron with functional inputs: an inverse regression approach. Scandinavian Journal of Statistics 33, 807–823.
Fisher, R.A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics 7, 179–188.
Folland, G. B. (1999). Real analysis. Modern techniques and their applications. Wiley.
Ghosh, A. K. and Chaudhuri, P. (2005). On maximal depth and related classifiers. Scandinavian Journal of Statistics 32, 327–350.
Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P., Mark, R., Mietus, J., Moody, G., Peng, C., and Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 101, 215–220.
Hand, D.J. (1997). Construction and Assessment of Classification Rules. Wiley.
Hand, D.J. (2006). Classifier technology and the illusion of progress. Statistical Science 21, 1–14.
Hastie, T., Tibshirani, R. and Friedman, J. (2001). The Elements of Statistical Learning. Springer.
James, G.M. and Hastie, T.J. (2001). Functional linear discriminant analysis for irregularly sampled curves. Journal of the Royal Statistical Society, Ser. B 63, 533-550.
Jørsboe, O. G. (1968). Equivalence or Singularity of Gaussian Measures on Function Spaces. Various Publications Series, No. 4, Matematisk Institut, Aarhus Universitet, Aarhus.
Liu, Y. and Rayens, W. (2007). PLS and dimension reduction for classification. Computational Statistics 22, 189–208.
Müller, H.G. and Stadtmüller, U. (2005). Generalized functional linear models. The Annals of Statistics 33, 774-805.
Preda, C. (2007). Regression models for functional data by reproducing kernel Hilbert spaces methods. Journal of Statistical Planning and Inference 137, 829–840.
Preda, C., Saporta, G. and Lévéder, C. (2007). PLS classification of functional data. Computational Statistics 22, 223–235.
Ramsay, J.O. and Silverman, B.W. (2002). Applied Functional Data Analysis. Methods and Case Studies. Springer-Verlag.
Ramsay, J.O. and Silverman, B.W. (2005). Functional Data Analysis. Second edition. Springer.
Ruiz-Meana, M., García-Dorado, D., Pina, P., Inserte, J., Agulló, L. and Soler-Soler, J. (2003). Cariporide preserves mitochondrial proton gradient and delays ATP depletion in cardiomyocites during ischemic conditions. American Journal of Physiology - Heart and Circulatory Physiology 285, 999–1006.
Sacks, J. and Ylvisaker, N.D. (1966). Designs for regression problems with correlated errors. Annals of Mathematical Statistics 37, 66–89.
Stone, C. J. (1977). Consistent nonparametric regression. The Annals of Statistics 5, 595-645.
Tuddenham, R. D. and Snyder, M. M. (1954). Physical growth of California boys and girls from birth to eighteen years. University of California Publications in Child Development 1, 183–364.
Vakhania, N.N. (1975). The topological support of Gaussian measure in Banach space. Nagoya Mathematical Journal 57, 59–63.
Varberg, D.E. (1961). On equivalence of Gaussian measures. Pacific Journal of Mathematics 11, 751–762.
Varberg, D.E. (1964). On Gaussian measures equivalent to Wiener measure. Transactions of the American Mathematical Society 113, 262–273.
Wahba, G. (2002). Soft and hard classification by reproducing kernel Hilbert space methods. Proceedings of National Academy of Sciences 99, 16524–16530.
Wei, L. and Keogh, E. (2006). Semi-Supervised Time Series Classification. Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 748–753, Philadelphia, U.S.A.
[^1]: Corresponding author. Phone: +34 914978640, e-mail: amparo.baillo@uam.es
[^2]: The research of both authors was partially supported by Spanish grant MTM2007-66632 and the IV PRICIT program titled [*Modelización Matemática y Simulación Numérica en Ciencia y Tecnología*]{} (SIMUMAT).
---
author:
- 'A. Stupakov'
- 'A. V. Bagdinov'
-
-
-
-
- 'K. I. Kugel'
- 'A. A. Gorbatsevich'
- 'F. A. Pudonin'
- 'N. N. Kovaleva'
title: 'Out-of-plane and in-plane magnetization behavior of dipolar interacting FeNi nanoislands around the percolation threshold'
---
[**Abstract\
**]{} Magnetic properties of inhomogeneous nanoisland FeNi films were studied by SQUID magnetometry. The FeNi films with nominal thickness ranging from 0.6 to 2.0 nm were deposited by rf sputtering on Sitall glass substrates and covered by a protecting Al$_2$O$_3$ layer on the top. The SQUID data indicate pronounced irreversibility behavior for the out-of-plane temperature-dependent magnetization response (measured at $H$$\simeq$100Oe) using zero-field cooling (ZFC) and field cooled warming (FCW) after the applied dc magnetizing field $H_m$$\simeq$2T for the FeNi samples with the nominal thickness 1.1nm$\lesssim$$d$$\lesssim$1.8nm, below the percolation threshold. The positive difference between the FCW and ZFC data identifies two irreversibility temperature scales, $T_B$$\approx$50 K and $T^*$$\approx$200 K, which can be associated with the superparamagnetic and superferromagnetic behavior in inhomogeneous nanoisland FeNi films, respectively. However, above the film percolation threshold, we observed a crossover from the out-of-plane to in-plane magnetization orientation. Here, the in-plane FCW-ZFC difference reveals a negative remanent magnetization response in the temperature range $T_B$$\lesssim$$T$$\lesssim$$T^*$. The observed magnetization properties can be associated with the presence of the superferromagnetic phase in self-assembled clusters of quasi-2D metallic magnetic FeNi nanoislands.
rf sputtered nanoisland FeNi films, dipolar magnets, superparamagnetic and superferromagnetic properties, SQUID magnetometry, zero-field-cooling (ZFC) and field cooled warming (FCW) magnetization measurements.
\
Arrays of magnetic nanoparticles (dipolar magnets) are considered to form the basis of novel ultrahigh density magnetic data storage technology [@Sun; @Reiss; @Frey]. In dipolar magnets, each magnetic nanoparticle (NP) is in a single-domain ferromagnetic (FM) state with parallel orientation of intraparticle atomic moments arising due to strong exchange interactions. If the temperature is high enough to overcome the energy barrier between different orientations of the NP net magnetic moments, the NPs exhibit Curie-like behavior above the blocking temperature ($T_B$) in the so-called superparamagnetic (SPM) phase. Meanwhile, owing to strong long-range dipolar interactions, the arrays of NPs can possess magnetic ordering at comparatively high temperatures, and such a regime is called superferromagnetic (SFM). Indeed, dipolar interactions of single-domain NPs comprising many atomic moments ($\sim$10$^3$$\div$10$^5$$\mu_B$) can be much stronger than ordinary dipolar interactions of localized atomic moments $\sim$$\mu_B$, where $\mu_B$=9.27$\cdot$10$^{-21}$ emu is the Bohr magneton, which have a characteristic scale much less than 1 K. For example, one can estimate that the magnetic moment $m$ of Fe$_{21}$Ni$_{79}$ permalloy nanodiscs with diameter $a$$\sim$10$\div$30nm and height 1 nm varies from 6.8$\cdot$10$^3$ to 6.1$\cdot$10$^4$$\mu_B$ [@footnote1]. Then, the characteristic energy of dipole-dipole interactions, $E_{dip}$=$\frac{2m^2}{r^3}$, of two similar single-domain nanodiscs located at the distance $r$=30nm varies from 2 to 180K.
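The estimate above is easy to reproduce in CGS units; the following back-of-the-envelope sketch is only a numerical check of the quoted order of magnitude, using standard CGS constants.

```python
MU_B = 9.27e-21   # Bohr magneton, emu (CGS)
K_B = 1.38e-16    # Boltzmann constant, erg/K

def dipolar_energy_kelvin(n_bohr, r_cm):
    """E_dip = 2 m^2 / r^3 for two parallel moments m = n_bohr * mu_B
    separated by r (CGS units), expressed in kelvin."""
    m = n_bohr * MU_B
    return 2 * m ** 2 / r_cm ** 3 / K_B

r = 30e-7                                   # 30 nm in cm
E_small = dipolar_energy_kelvin(6.8e3, r)   # smallest nanodisc moment
E_large = dipolar_energy_kelvin(6.1e4, r)   # largest nanodisc moment
```

This yields roughly 2 K for the smallest moment and of the order of 2$\times$10$^2$ K for the largest, in line with the 2–180 K range quoted above.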
In the SFM phase, the magnetic structure in regular arrays of single-domain NPs depends essentially on the lattice type and on the particle magnetic anisotropy. The latter, in turn, strongly depends on the particle shape and size. For example, in small enough nanoplatelets, a spin reorientation transition (SRT) may lead to effective perpendicular magnetic anisotropy [@Vedmedenko]. For 2D lattices of NPs with high perpendicular anisotropy, the magnetic structure corresponds to various types of two-sublattice antiferromagnetic (AF) order. For square lattices with in-plane anisotropy, the magnetic structure has four-sublattice AF order, whereas for triangular lattices a FM order is implemented [@Rozenbaum]. Here, for finite fragments of triangular lattices, the ground state can be a vortex state formed by the magnetic moments lying in the plane (a supervortex) [@Politi; @Dzian]. As for soft magnetic dots [@Im; @Cowburn; @Shinjo; @Wachowiak], a supervortex represents a topologically nontrivial ordered state, more intricate than the standard domain structure.
Magnetic properties in quasi-2D systems of inhomogeneously distributed magnetic NPs usually exhibit complex behavior as a consequence of the SPM and SFM phase coexistence [@DanaMiu]. Here, local magnetic order determined by geometric self-arrangements of neighboring interacting NPs can exist. In quasi-2D systems of inhomogeneous dipolar magnets, fairly separated weakly interacting NPs determine SPM behavior above the blocking temperature $T_B$, whereas strongly interacting NPs in the close-packed assemblies are responsible for SFM behavior. By using electron holography with sub-particle resolution, local in-plane FM vs. AF order was observed in assemblies of $\sim$15 nm Co nanoparticles, depending on the close-packed triangular vs. square arrangements, and even several flux-closed regions, which can be associated with a supervortex state, were recognized [@Dunin-Borkowski]. These results are supported by the numerical simulations, which emphasize that local dipolar magnetism in quasi-2D inhomogeneous NP systems survives even at a pronounced structural disorder [@Dunin-Borkowski].
Here, by using SQUID magnetometry, we study in-plane and out-of-plane magnetic response in inhomogeneous nanoisland FeNi films, composed of flat nanoislands with lateral sizes of 5$\div$30 nm, with the nominal film thickness varying from 0.6 to 2.0 nm (as schematically illustrated by Fig.\[Fig1\]). The inhomogeneous FeNi films were grown by rf sputtering deposition on Sitall glass substrates, commonly used as film substrates in microelectronics. The experimental observations presented in Ref. [@Boltaev_APL], where the equatorial magneto-optical Kerr effect was studied in periodic structures of alternating FeNi and Co nanoisland layers, suggest that a supervortex collective state is essentially relevant here. We present evidence of the out-of-plane SFM magnetization behavior for the FeNi films with the nominal film thickness 1.1nm$\lesssim$$d$$\lesssim$1.8nm, below the physical percolation threshold. Recently, the out-of-plane SFM behavior in quasi-2D Fe(2.5nm)/Al$_2$O$_3$ multilayer composites was reported [@DanaMiu]. Moreover, we found a crossover from the out-of-plane to in-plane magnetization orientation with increasing nominal FeNi film thickness across the physical percolation threshold and discovered a negative remanent magnetization response. This is in agreement with the earlier observation of negative remanence in nearly percolating magnetic granular (Ni,Fe) films in an insulating amorphous SiO$_2$ matrix by Yan [*et al.*]{} [@YanXu]. Understanding the observed out-of-plane and in-plane magnetization properties, associated with self-organized ensembles of quasi-2D single-domain FM FeNi nanoislands, is quite important for the fundamental physics of magnetism, as well as for technological applications.
[**2. Materials and Methods**]{}\
[*2.1. FeNi film growth and characterization*]{}\
The nanoisland FeNi films were grown by rf sputtering deposition from Fe$_{21}$Ni$_{79}$ targets at a base vacuum pressure less than 2$\times$10$^{-6}$ Torr and a background argon pressure of 4$\times$10$^{-4}$ Torr. Glass-like Sitall was used as the substrate material. The analysis of an X-ray diffraction pattern of the used Sitall substrate showed that it is represented by the TiO$_2$ rutile phase [@kovaleva_apl_Ta]. During the deposition, the substrate temperature was 73$\pm$3$^\circ$C. The nominal film thickness (that is, the thickness of the corresponding continuous film) was controlled by the deposition rate and time (see more details in Ref.[@Boltaev_FTT]). We prepared the nanoisland FeNi films with nominal thickness varying from 0.6 to 2.0 nm. Our earlier spectroscopic ellipsometry studies of the nanoisland FeNi films of different thickness grown on the Sitall glass demonstrated that their dielectric permittivity changes from insulating- to metallic-like at a nominal FeNi film thickness of about 1.8 nm. In addition, the temperature dependence of dc conductivity suggests the existence of the percolation threshold at the same nominal FeNi film thickness [@Pudonin_tezisi]. To avoid oxidation of the films at ambient conditions, the grown FeNi films were covered [*in situ*]{} by an Al$_2$O$_3$ capping layer 2.1 nm thick.
[*2.2. Atomic-force microscopy study of the grown FeNi films*]{}\
Surface morphology of the nanoisland FeNi films grown by the rf sputtering deposition on the Sitall glass substrates was studied by atomic-force microscopy (AFM) using the Ntegra Prima (NT-MDT, Zelenograd, Russia) facility. Figure\[AFM\](a) represents a large-scale AFM image of the FeNi film sample \[Al$_2$O$_3$(2.1nm)/FeNi($d$)/Sitall substrate\] with the nominal film thickness $d$$\simeq$1.2nm. The shown large-scale topography profile indicates the height variation in the range 1$\div$3nm, which characterizes the surface roughness. The smaller-size image in Fig.\[AFM\](b) clearly identifies the grainy structure with the grain lateral dimensions in the range of 15$\div$25nm. Due to the intrinsically uneven surface of the substrate, the grain height fluctuates strongly. The typical height is of about 1nm \[see the height profile in Fig.\[AFM\](b)\]. The inevitable AFM broadening does not allow estimating the real width of gaps between the grains. Figure\[AFM\](c) represents an AFM image of the FeNi film sample \[Al$_2$O$_3$(2.1nm)/FeNi($d$)/Sitall substrate\] with the nominal film thickness $d$$\simeq$1.9nm. Due to appreciable coalescence of the nanoislands in the film with thickness above the percolation threshold at $d_c$$\simeq$1.8nm [@Pudonin_tezisi], the topography profile is shallower in the percolating regions.
[*2.3. SQUID measurements*]{}\
For magnetization measurements, we cut out the FeNi film samples of approximate dimensions 3$\times$3 mm$^2$. Using the SQUID magnetometer MPMS XL 7 T, we were able to measure magnetization in the temperature range from 2 to 300 K. High sensitivity of magnetic measurements (2$\times$10$^{-8}$ emu) was enabled by reciprocating sample transport. Recently, a total magnetization of the order of 10$^{-6}$ emu was reported for a $d^0$ charge-imbalanced FM interface between nonmagnetic perovskites, measured with an MPMS XL 7 T SQUID magnetometer [@Oja]. For the zero-field-cooled (ZFC) measurements, a sample was first cooled down to $T$$\simeq$5K in zero magnetic field. Then, the magnetic field $H$$\simeq$100Oe was applied, and the ZFC data were collected, while the sample was slowly warmed up above the irreversibility temperature. We would like to point out that for single-domain Fe$_{21}$Ni$_{79}$ nanoislands, the stray magnetic fields at the edge constitute 1.08 T. Therefore, to avoid uncertainties in the determination of the equilibrium magnetization at small $H$ and to achieve a fully polarized magnetic state, we applied the large magnetizing field $H_m$$\simeq$2 T while the film sample was cooled down to 5 K, and then switched the field off. The field cooled warming (FCW) data were then collected, while the sample was slowly warmed up in the same measurement field $H$$\simeq$100Oe. In a more usual protocol, after the ZFC part, a second set of data is collected while the sample is slowly cooled down in the same field, the field-cooled (FC) part. However, use of the ZFC-FC protocol, when the measurement field is not large, usually of about 100$\div$200 Oe, cannot guarantee a fully polarized magnetic state in our case.
[ **3.Results and Discussion**]{}\
Figure \[FCWZFC\](a-d) shows the ZFC and FCW magnetization response of the FeNi film samples (schematically illustrated by Fig.\[Fig1\]) with different nominal film thickness, varying from 0.61 to 2.04 nm, registered in the in-plane and out-of-plane geometries of the applied magnetic field. To compare the ZFC and FCW magnetization response, $M_0(T)$, measured for the different FeNi film samples, the data were normalized to an equal sample area, $S$=10$^{-1}$cm$^2$ (which is typical for our samples measured here by SQUID magnetometry), using the following formula $$\begin{aligned}
M_0^{norm}(T)\simeq S\rho_SL\frac{M_0(T)}{m_0},
\label{Mnorm}\end{aligned}$$ where $m_0$ is the sample mass, $\rho_S$$\simeq$2.72$\pm$0.08 g/cm$^3$ is the Sitall substrate density, and $L$$\simeq$0.056$\pm$0.004 cm is the substrate thickness. The normalized magnetization response of the FeNi film samples shown in Fig. \[FCWZFC\](a-d) is of a typical paramagnetic-like form, owing to the dominant contribution of the Sitall substrate, which contains clustered magnetic impurities and/or defects [@footnote2].
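The normalization of Eq. (\[Mnorm\]) can be sketched in a few lines (a minimal illustration; the variable names and the example mass are ours):

```python
# Area normalization of the raw SQUID moment, Eq. (Mnorm):
# M0_norm = S * rho_S * L * M0 / m0, which rescales a sample of mass
# m0 (hence area m0 / (rho_S * L)) to the reference area S = 0.1 cm^2.

RHO_S = 2.72   # g/cm^3, Sitall substrate density
L_SUB = 0.056  # cm, substrate thickness
S_REF = 0.1    # cm^2, common reference area

def normalize_moment(m0_emu, sample_mass_g):
    return S_REF * RHO_S * L_SUB * m0_emu / sample_mass_g

# A 0.09 cm^2 (3x3 mm^2) sample weighs about rho_S * L * 0.09 g, so
# the normalization simply rescales its moment by 0.1/0.09.
mass = RHO_S * L_SUB * 0.09
print(normalize_moment(9.0e-6, mass))   # ~1.0e-5 emu
```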
One can notice that the out-of-plane magnetization response, $M_0^{norm}(T)$, clearly demonstrates a difference between the FCW and ZFC curves for all studied samples \[see the top panels of Fig. \[FCWZFC\](a-d)\]. However, we found that the FCW-ZFC difference is remarkably pronounced for the FeNi film samples with the nominal film thickness 1.1nm$\lesssim$$d$$\lesssim$1.8nm, where the associated irreversibility behavior persisted up to $T^*$$\approx$200 K \[see Figs.\[FCWZFC\](b-d) and \[Dif\](b)\]. In addition, a clear kink was observed there in the out-of-plane FCW-ZFC difference at $T_B$$\approx$50K \[as illustrated by Fig.\[Dif\](b)\]. The revealed temperatures, $T_B$$\approx$50K and $T^*$$\approx$200K, indicate the existence of two different temperature scales for the out-of-plane irreversibility behavior of the studied nanoisland FeNi films. We notice an apparent analogy to the results for the quasi-2D Fe(2.5nm)/Al$_2$O$_3$ multilayer composites at a low filling factor [@DanaMiu]. In line with Ref.[@DanaMiu], the temperatures $T_B$$\approx$50K and $T^*$$\approx$200K can be associated with the SPM and SFM behavior, respectively. Here, fairly separated and weakly interacting small FeNi nanoislands determine the SPM behavior above the blocking temperature $T_B$$\approx$50 K, whereas strongly interacting FeNi nanoislands in their dispersive assemblies are responsible for the induced out-of-plane SFM behavior. The latter is indicated by the additional hysteretic-like contribution persistent up to the higher irreversibility temperature $T^*$$\approx$200K \[as schematically illustrated by Fig.\[Dif\_Demo\](b)\].
From Fig.\[Dif\](b) one can estimate that the SFM hysteretic-like contribution attains its maximum of about 1.8$\cdot$10$^{-6}$ emu at low temperatures ($\approx$20% of the saturation magnetization of Fe$_{21}$Ni$_{79}$ permalloy [@footnote1]; here one should take into account that the saturation magnetization of NPs is usually somewhat lower than that of the bulk material).
Analyzing the in-plane normalized magnetization response, $M_0^{norm}(T)$, obtained from the FCW and ZFC measurements, we can distinguish three different types of behavior, depending on the nominal FeNi film thickness \[see the bottom panels of Fig.\[FCWZFC\](a-d) and Fig.\[Dif\](a-c)\]. For the thinnest investigated FeNi film sample (i), with the nominal thickness $d$$\simeq$0.61nm, the FCW-ZFC difference is quite small \[see Fig.\[FCWZFC\](a)\]. Here, the in-plane FCW-ZFC difference starts to deviate from nearly zero values (within the experimental accuracy) below $\approx$60$\div$70K \[see Fig.\[Dif\](a)\], indicating the associated SPM character of the irreversibility behavior below the blocking temperature \[as schematically demonstrated by Fig.\[Dif\_Demo\](a)\]. The normalized FCW-ZFC difference attains its maximum of about 2$\cdot$10$^{-6}$ emu at 5 K ($\approx$45% of the saturation magnetization of Fe$_{21}$Ni$_{79}$ permalloy [@footnote1]) \[see Fig.\[Dif\](a)\].
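The quoted percentages can be checked against the saturation moment of a continuous film of the nominal thickness. The sketch below is our own rough estimate, using $M_S$$\simeq$800 emu/cm$^3$ from [@footnote1] and the reference area $S$=0.1 cm$^2$; the small residual discrepancies presumably reflect the difference between nominal and effective film volume:

```python
# Measured moment as a fraction of the full saturation moment of a
# film of area S = 0.1 cm^2 and nominal thickness d, with permalloy
# M_S ~ 800 emu/cm^3.  Rough check of the percentages quoted in text.

M_S = 800.0   # emu/cm^3
S = 0.1       # cm^2

def percent_of_saturation(moment_emu, d_nm):
    m_sat = M_S * S * (d_nm * 1e-7)   # emu; 1 nm = 1e-7 cm
    return 100.0 * moment_emu / m_sat

print(percent_of_saturation(2e-6, 0.61))   # ~41%, vs ~45% quoted
print(percent_of_saturation(6e-6, 2.04))   # ~37%, as quoted for d = 2.04 nm
```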
However, for the studied FeNi film samples (ii), with nominal thickness in the range 1.1nm$\lesssim$$d$$\lesssim$1.8nm, the in-plane response did not show a clearly perceptible difference between the ZFC and FCW temperature-dependent magnetization curves after applying the dc magnetizing field $H_m$$\simeq$2T \[see Figs.\[FCWZFC\](b,c) and \[Dif\](b)\].
Finally, there is a remarkable difference between the in-plane FCW and ZFC normalized magnetization curves for the FeNi film sample (iii), with the thickness $d$$\simeq$2.04nm, above the physical percolation threshold at $d_c$$\simeq$1.8nm [@Pudonin_tezisi] \[see the bottom panel of Fig.\[FCWZFC\](d)\]. Here, the in-plane ZFC curve shows a pronounced maximum at the blocking temperature $T_B^{c}$$\simeq$25K. This implies the existence of a crossover from the out-of-plane to the in-plane magnetization orientation. The maximum at $T_B^{c}$$\simeq$25K seemingly manifests SPM properties in the FeNi film close to the percolation threshold at $d_c$$\simeq$1.8nm [@Pudonin_tezisi]. Below $T_B^{c}$, the normalized FCW-ZFC difference attains its maximum of about 6$\cdot$10$^{-6}$emu at 10K ($\approx$37% of the saturation magnetization of Fe$_{21}$Ni$_{79}$ permalloy [@footnote1]) \[see Fig.\[Dif\](c)\]. Surprisingly, here the in-plane FCW curve exhibits lower magnetization values than the corresponding ZFC curve from about $T_B^{c}$ up to $T^*$$\approx$200K \[see the bottom panel of Fig.\[FCWZFC\](d)\]. As a result, the normalized FCW-ZFC difference turns out to be negative, with a pronounced minimum of about $-1.8$$\cdot$10$^{-6}$ emu around 50K \[see Fig.\[Dif\](c)\]. With rising temperature above the SFM irreversibility temperature $T^*$$\approx$200K, the FCW-ZFC difference vanishes. In contrast to the induced out-of-plane SFM behavior found below the percolation threshold, where the FCW curve lies above the ZFC curve, here the FCW curve lies below the ZFC curve. According to our simple modeling, this behavior can be reproduced by adding a negative SFM contribution \[as schematically illustrated by Fig.\[Dif\_Demo\](c)\].
The observed behavior indicates that the magnetization response is in the direction opposite to the applied dc magnetizing field $H_m$$\simeq$2T (applied while the film sample was cooled down from above $T^*$$\approx$200 K to 5 K), and can be associated with the SFM component.
Figure \[Rel\](a,b) displays the temperature dependence of the in-plane and out-of-plane normalized remanent magnetization response of the nanoisland FeNi film samples \[capping Al$_2$O$_3$ layer (2.1 nm)/FeNi film ($d$)/Sitall substrate\] produced by the magnetizing field $H_m$$\simeq$1T at 10K. Here, after the magnetizing field was switched off, the remanent magnetization was recorded at zero external magnetic field ($H$$\simeq$0 T) on raising the temperature from 10 to 300 K, and then on cooling down to the lowest measuring temperature. The magnetization measurements at $H$$\simeq$0 T allowed us to eliminate the paramagnetic-like contribution of the Sitall substrate \[which dominates the magnetization response shown in Fig.\[FCWZFC\](a-d)\]. One can notice from Fig.\[Rel\](a) that the out-of-plane remanent magnetization of the FeNi film with the thickness $d$$\simeq$1.1 nm decreases from about 5$\cdot$10$^{-6}$ emu at 10K ($\approx$57% of the saturation magnetization of Fe$_{21}$Ni$_{79}$ permalloy [@footnote1]) to about 2$\cdot$10$^{-6}$ emu at 300K ($\approx$23%, respectively). The observed trends of the in-plane remanent magnetization are most pronounced for the FeNi film sample with the nominal thickness $d$$\simeq$2.04nm \[see Fig.\[Rel\](b)\]. Here, the FeNi film was initially magnetized along the magnetic field ($H_m$$\simeq$1T) direction applied in the film plane at 10K. Surprisingly, on raising the temperature up to 50 K, the in-plane normalized remanent magnetization relaxed down to negative values of about $-$2$\cdot$10$^{-6}$ emu. Here, above the blocking temperature $T_B$, the large magnetic moments of individual SPM nanoislands fluctuate strongly, and the resulting contribution from the SPM component is zero at $H$$\simeq$0. This means that some parts of the FeNi film, related to the SFM component, retain a magnetization opposite to the applied magnetizing field $H_m$$\simeq$1T.
Indeed, with a further increase in temperature above the irreversibility temperature $T^*$, the negative magnetization response disappeared, and the in-plane remanent magnetization attained a small value at 300 K. Interestingly, the in-plane temperature dependence of the magnetization was reproducible on subsequent cooling, exhibiting the same negative magnetization response from 150K down to 50K. This means that the negative magnetization component is related to a magnetically ordered ground state of the SFM phase. It remains insensitive to the applied magnetizing field $H_m$$\simeq$1T, preserving the initial magnetization orientation. The result shown in Fig.\[Rel\](b) suggests that around 50 K an inverted hysteresis $M$$-$$H$ loop should be observed for the in-plane measurements, with a negative magnetization value of $-$12 % at $H$$\simeq$0 (normalized to the positively saturated value). One can notice from Fig.\[Rel\](a) that the temperature dependence of the in-plane remanent magnetization of the FeNi film with the nominal thickness $d$$\simeq$1.1nm showed broadly similar trends. However, the observed temperature effects were less pronounced here and shifted to lower temperatures. We note that the observed trends of the remanent magnetization bear a strong resemblance to the results of the independent FCW-ZFC measurements \[see Fig.\[Dif\](b,c)\], providing supporting experimental evidence.
Now we discuss the observed magnetization properties. The observed out-of-plane SFM behavior below the percolation threshold can be associated with an effective perpendicular magnetic anisotropy in the rf sputtered FeNi films composed of quasi-2D FM FeNi nanoislands. For example, an effective perpendicular anisotropy can exist in sufficiently small quasi-2D FeNi nanoislands due to the SRT [@Vedmedenko]. As a result, the local magnetic structure may correspond to an AF ground state on square or triangular lattice fragments of self-assembled quasi-2D FeNi nanoislands. In addition, here we observed a crossover from the out-of-plane to the in-plane magnetization orientation with increasing nanoisland size close to the percolation transition. Here, the in-plane magnetic structure may have a four-sublattice AF order for square lattice fragments, whereas a FM order may be realized on the triangular lattice fragments.
We would like to note that such a system with an effective perpendicular magnetic anisotropy also has a tendency toward an inhomogeneous distribution of magnetic moments in the form of supervortices [@Dzian]. We infer that the studied nanoisland FeNi films, composed of inhomogeneously distributed FM single-domain nanoislands, represent a unique playground to probe a supervortex state and its magnetic properties. For small magnetic anisotropy, a purely planar vortex can exist on close-packed hexagonal fragments of a triangular lattice, with an in-plane distribution of the FM NPs' magnetic moments. Here, the total out-of-plane projection of the magnetic moment vanishes, but the in-plane component of the total magnetization is nonzero, as the magnetic moment of the vortex core remains uncompensated [@Dzian]. This may be relevant to the in-plane magnetization properties found in the present study for the nanoisland FeNi film with the nominal film thickness above the percolation threshold at $d_c$$\simeq$1.8nm. With an increase of the particle magnetic anisotropy, the vortex core starts to protrude out of plane. With a further increase, the symmetry of the vortex ground state increases, the planar magnetization component vanishes (featuring a zero net magnetic moment), but the perpendicular component changes significantly [@Dzian]. This may be consistent with the magnetization properties found in the present study for the nanoisland FeNi films with the nominal film thickness 1.1nm$\lesssim$$d$$\lesssim$1.8nm.
Recently, the out-of-plane SFM behavior in quasi-2D Fe(2.5nm)/Al$_2$O$_3$ multilayer composites was reported [@DanaMiu]. The results by Miu [*et al.*]{} indicate that the dipolar interactions are not the major interactions and support the relevance of two-dimensionality and of additional short-range “superexchange” interactions for the occurrence of the out-of-plane SFM behavior in quasi-2D FM NPs above some critical filling factor (see Ref.[@DanaMiu] and references therein). In particular, the stray fields of non-point-like magnetic dipoles can promote this “superexchange” and drive SFM order in quasi-2D FM NPs. In addition, an indirect Ruderman–Kittel–Kasuya–Yosida (RKKY) interaction between FM NPs in quasi-2D magnetic structures can contribute to the short-range “superexchange” coupling mechanism and be responsible for SFM behavior [@Du]. Thus, the SFM phase, associated with complex magnetic behavior in quasi-2D clusters of large NP localized magnetic moments, essentially implies a Many-Body Localized (MBL) state [@SciRep].
In addition, here we demonstrated that the discovered in-plane negative magnetization response above the nanoisland FeNi film percolation threshold can be associated with the SFM component. Earlier, negative remanent magnetization was observed in nearly percolating magnetic granular (Ni,Fe) films in an insulating amorphous SiO$_2$ matrix [@YanXu]. The temperature-dependent remanent magnetization observed there and shown in Fig.2 of Ref.[@YanXu] is similar to the in-plane magnetization behavior found in the present study for FeNi nanoislands above the percolation threshold \[see Figs.\[Dif\](c) and \[Rel\](b)\]. The remanence observed in Ref.[@YanXu] was as large as $-$9% compared to the positively saturated value. It was suggested in that study that near the percolation threshold the magnetostatic interaction between coexisting SPM and FM components, with a special geometry of the FM nanoclusters, favors their opposite alignment, induced by the applied magnetizing field. An alternative interpretation in terms of interface exchange interaction or exchange anisotropy was suggested for similar phenomena observed in amorphous and multilayered materials (see Ref.[@YanXu] and references therein). A negative remanent magnetization is possible in an exchange-coupled bilayer, when a magnetically soft material is influenced by the demagnetizing field of the hard material. However, all this bears a strong similarity to the exchange bias effect, actively studied in many composite magnetic materials.
[**4.Conclusions**]{}\
We present evidence of the out-of-plane SFM behavior for the nanoisland FeNi films with the nominal film thickness 1.1nm$\lesssim$$d$$\lesssim$1.8nm, below the percolation threshold at $d_c$$\simeq$1.8nm [@Pudonin_tezisi], in a temperature range that fits well with the estimated characteristic energy of the long-range dipolar interactions of about 180 K (the estimate is given in the Introduction). The SFM behavior is indicated by the additional hysteretic-like contribution persistent up to the irreversibility temperature $T^*$$\approx$200K. In addition, an admixture of the SPM phase was identified here by a clear kink in the out-of-plane FCW-ZFC difference at $T_B$$\simeq$50 K.
Above the film percolation threshold, we observed a crossover from the out-of-plane to the in-plane magnetization orientation. Here, the ZFC curve shows a clear maximum near the blocking temperature $T^c_B$$\simeq$25 K, which certifies the presence of an SPM component. The in-plane FC-ZFC difference turns out to be negative in the temperature range $T_B$$\lesssim$$T$$\lesssim$$T^*$, implying that the magnetization response is in the direction opposite to the applied dc magnetizing field $H$$\simeq$2T. The investigation of $M(H)$ hysteresis loops at small applied magnetic fields at different temperatures would be interesting and relevant here. We showed that the discovered in-plane negative magnetization response above the nanoisland FeNi film percolation threshold can be associated with the SFM component. From our study, we can conclude that the negative magnetization response is related to some parts of the FeNi film which retain a magnetization opposite to the applied magnetizing field, preserving their initial magnetization orientation. These parts, related to the SFM component, can be considered magnetically hard, so that the direction of their magnetization cannot be changed by the applied magnetizing field. The origin of the magnetically hard component in the studied nanoisland FeNi films needs to be investigated further. For example, its relevance to the core of a purely planar vortex, which can exist here on close-packed hexagonal fragments of a triangular lattice with an in-plane distribution of the FM NPs' magnetic moments, should be examined.
We conclude that the observed magnetization properties can be associated with the SFM behavior in self-assembled clusters of quasi-2D metallic magnetic FeNi nanoislands. The SFM phase, associated with complex magnetic behavior in quasi-2D clusters of large NP localized magnetic moments ($\sim$10$^3$$\div$10$^5$$\mu_B$), implies an MBL state. The electronic excitations [@SciRep; @Kovaleva_PRL; @Kovaleva_PRB] of this MBL state and its response to strong applied magnetic fields need further fundamental study. Also, the understanding of the observed out-of-plane and in-plane SFM behavior, associated with self-organized ensembles of quasi-2D single-domain nanoislands, requires further studies using, for example, magnetic imaging techniques.
[ **Abbreviations**]{}\
ZFC: zero-field cooling, FC: field cooling, FCW: field-cooled warming, NP: nanoparticle, FM: ferromagnetic, SPM: superparamagnetic, SFM: superferromagnetic, SRT: spin reorientation transition, AF: antiferromagnetic, AFM: atomic-force microscopy.
[ **Conflict of Interests**]{}\
The authors declare no competing financial interests.
[ **Acknowledgments**]{}\
The authors acknowledge fruitful discussions with M. Forrester, F. Kusmartsev, and N. Sibeldin. This work was supported by the Czech Science Foundation GA CR (Grant No. 15-13778S) and by the Russian Foundation for Basic Research (projects 14-02-00276 and 16-02-00304). Our experiments were performed in MLTL (http://mltl.eu), which is supported within the program of Czech Research Infrastructures (project no. LM2011025).
References {#references .unnumbered}
==========
[99]{}
S. Sun and C. B. Murray, “Synthesis of monodisperse cobalt nanocrystals and their assembly into magnetic superlattices,” [*Journal of Applied Physics*]{}, vol. 85, no. 8, pp. 4325–4330, 1999.
G. Reiss and A. Hütten, “Applications beyond data storage,” [*Nature Materials*]{}, vol. 4, no. 10, pp. 725–726, 2005.
N. A. Frey and S. Sun, “Magnetic Nanoparticle for Information Storage Applications. Inorganic Nanoparticles: Synthesis, Applications, and Perspectives,” [*CRC Press*]{}, 2011.
Here the saturation magnetization $M_S$$\simeq$ 800 emu/cm$^3$.
E. Y. Vedmedenko, H. P. Oepen, and J. Kirschner, “Size-dependent spin reorientation transition in nanoplatelets,” [*Physical Review B*]{}, vol. 67, no. 1, pp. 012409–1–4, 2003.
V. M. Rozenbaum, V. M. Ogenko, and A. A. Chuiko, “Vibrational and orientational states of surface atomic groups,” [*Soviet Physics Uspekhi*]{}, vol. 34, no. 10, pp. 883–902, 1991.
P. Politi, M. G. Pini and R. L. Stamps, “Dipolar ground state of planar spins on triangular lattices,” [*Physical Review B*]{}, vol. 73, no. 2, pp. 020405(R)–1–4, 2006.
S. A. Dzian, A. Yu. Galkin, B. A. Ivanov et al., “Vortex ground state for small arrays of magnetic particles with dipole coupling,” [*Physical Review B*]{}, vol. 87, no. 18, pp. 184404–1–6, 2013.
M.-Y. Im, P. Fischer, K. Yamada et al., “Symmetry breaking in the formation of magnetic vortex states in a permalloy nanodisk,” [*Nature Communications*]{}, vol. 3, pp. 983–988, 2012.
R. P. Cowburn, D. K. Koltsov, A. O. Adeyeye et al., “Single-domain circular nanomagnets,” [*Physical Review Letters*]{}, vol. 83, no. 5, pp. 1042–1045, 1999.
T. Shinjo, T. Okuno, R. Hassdorf et al., “Magnetic vortex core observation in circular dots of permalloy,” [*Science*]{}, vol. 289, no. 5481, pp. 930–932, 2000.
A. Wachowiak, J. Wiebe, M. Bode et al., “Direct observation of internal spin structure of magnetic vortex cores,” [*Science*]{}, vol. 298, no. 5593, pp. 577–580, 2002.
D. Miu, S. I. Jinga, B. S. Vasile et al., “Out of plane superferromagnetic behavior of quasi two-dimensional Fe/Al$_2$O$_3$ multilayer nanocomposites,” [*Applied Physics Letters*]{}, vol. 117, no. 7, pp. 074303–1–4, 2015.
M. Varón, M. Beleggia, T. Kasama et al., “Dipolar magnetism in ordered and disordered low-dimensional nanoparticle assemblies,” [*Scientific Reports*]{} vol. 3, pp. 1234–1–5, 2013.
A. P. Boltaev, F. A. Pudonin, and I. A. Sherstnev, “Vortex-like magnetization of multilayer magnetic nanoisland systems in weak magnetic fields,” [*Applied Physics Letters*]{}, vol. 102, no. 14, pp. 142404–1–3, 2013.
X. Yan and Y. Xu, “Negative remanence in magnetic nanostructures,” [*Journal of Applied Physics*]{}, vol. 79, no. 8, pp. 6013–6015, 1996.
N. N. Kovaleva, D. Chvostova, A. V. Bagdinov et al., “Interplay of electron correlations and localization in disordered $\beta$-tantalum films: Evidence from dc transport and spectroscopic ellipsometry study,” [*Applied Physics Letters*]{}, vol. 106, pp. 051907–1–5, 2015.
A. P. Boltaev, F. A. Pudonin, I. A. Sherstnev, “Specific features of the magnetoresistance in multilayer systems of magnetic nanoislands in weak magnetic fields,” [*Physics of the Solid State*]{}, vol. 53, no. 5, pp. 950–956, 2011.

I. A. Sherstnev, PhD thesis, “Electronic transport and magnetic structure of nanoisland ferromagnetic materials systems,” http://www.lebedev.ru/ru/dissertation-councils/vak.html?date=2014-04-28.
R. Oja, M. Tyunina, L. Yao et al., “Ferromagnetic interface between nonmagnetic perovskites,” [*Physical Review Letters*]{}, vol. 109, no. 12, pp. 127207–1–5, 2012.
We fitted the in-plane magnetization field dependence of the Sitall substrate measured at $T$$\simeq$10K with the Langevin function $M(H,T)=N_p \mu_p\left[{\rm coth}\left(\frac{\mu_p H}{k_{\rm B} T}\right)-\frac{k_{\rm B} T}{\mu_p H}\right]$, where $k_{\rm B}$ is the Boltzmann constant, and estimated the average magnetic moment of the magnetic impurities, $\mu_p$$\simeq$6$\mu_B$, and their concentration, $N_p$$\simeq$2.03$\cdot$10$^{19}$cm$^{-3}$.

J. Du, B. Zhang, R. K. Zheng et al., “Memory effect and spin-glass-like behavior in Co-Ag granular films,” [*Physical Review B*]{}, vol. 75, no. 1, pp. 014415–1–7, 2007.
N. N. Kovaleva, K. I. Kugel, A. V. Bazhenov et al., “Formation of metallic magnetic clusters in a Kondo-lattice metal: Evidence from an optical study,” [*Scientific Reports*]{}, vol. 2, pp. 890–1–7, 2012.
N. N. Kovaleva, A. V. Boris, C. Bernhard et al., “Spin-controlled Mott-Hubbard bands in LaMnO$_3$ probed by optical ellipsometry,” [*Physical Review Letters*]{}, vol. 93, no. 14, pp. 147204–1–4, 2004.
N. N. Kovaleva, A. V. Boris, P. Yordanov et al., “Optical response of ferromagnetic YTiO$_3$ studied by spectral ellipsometry,” [*Physical Review B*]{}, vol. 76, no. 15, pp. 155125–1–11, 2007.
![A schematic picture of the nanoisland FeNi film samples \[capping Al$_2$O$_3$ layer (2.1nm)/FeNi ($d$)/Sitall substrate\].](Fig1.eps){width="12.0cm"}
\[Fig1\]
![AFM images of the FeNi film samples \[Al$_2$O$_3$(2.1 nm)/FeNi($d$)/Sitall substrate\] with the nominal film thickness (a,b) $d$$\simeq$1.2 nm and (c) $d$$\simeq$1.9 nm.](Fig2.eps){width="8.0cm"}
\[AFM\]
{width="17.0cm"}
\[FCWZFC\]
{width="15.0cm"}
\[Dif\]
{width="15.0cm"}
![The temperature dependence of the normalized remanent magnetization, produced by the in-plane and out-of-plane magnetizing field $H_m$$\simeq$1T at $T$$\simeq$10K in the studied FeNi film samples \[Al$_2$O$_3$(2.1nm)/FeNi($d$)/Sitall substrate\] with the nominal film thickness (a) 1.10nm and (b) 2.04nm. Arrows indicate the magnetization response ($H$$\simeq$0) with cycling temperature from 10K to 300K and back, down to low temperatures. The displayed symbols are larger than the error bars. The solid curves are the guides to the eye.](Fig6.eps){width="11.0cm"}
\[Rel\]
---
abstract: 'Data clustering, including problems such as finding network communities, can be put into a systematic framework by means of a Bayesian approach. The application of Bayesian approaches to real problems can be, however, quite challenging. In most cases the solution is explored via Monte Carlo sampling or variational methods. Here we work further on the application of variational methods to clustering problems. We introduce generative models based on a hidden group structure and prior distributions. We extend previous attempts by Jaynes, and derive the prior distributions based on symmetry arguments. As a case study we address the problems of two-sided clustering of real value data and of clustering data represented by a hypergraph or bipartite graph. From the variational calculations, and depending on the starting statistical model for the data, we derive a variational Bayes algorithm, a generalized version of the expectation maximization algorithm with a built-in penalization for model complexity or bias. We demonstrate the good performance of the variational Bayes algorithm using test examples.'
author:
- |
Alexei Vazquez\
The Simons Center for Systems Biology\
Institute for Advanced Study, Einstein Drive, Princeton, New Jersey 08540, USA
title: |
Bayesian approach to clustering real value, hypergraph and bipartite graph data:\
solution via variational methods
---
Introduction
============
Mixture models provide an intuitive statistical representation of datasets structured in groups, clusters or classes [@maclachlan00]. A complex dataset is decomposed into the superposition of simpler datasets. The inverse problem consists in determining the group decomposition and the statistical parameters characterizing each group. For a fixed number of groups the expectation maximization (EM) algorithm provides a recursive solution to the inverse problem [@dempster77]. The estimation of the right number of groups has been, however, a great challenge. Corrections such as the Akaike information criterion (AIC) [@akaike74] and the Bayesian information criterion (BIC) [@schwarz78] have been derived, penalizing model complexity and overfitting. Yet, the number of groups estimated from these criteria is in general unsatisfactory.
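For reference, both criteria reduce to one-line formulas; this minimal sketch (ours) scores a fitted mixture with maximized log-likelihood `logL`, `p` free parameters, and `n` samples:

```python
import math

# Penalized model-selection scores mentioned above; lower is better.
# logL: maximized log-likelihood of the fitted mixture,
# p: number of free parameters, n: number of samples.

def aic(logL, p):
    return 2.0 * p - 2.0 * logL

def bic(logL, p, n):
    return math.log(n) * p - 2.0 * logL

# BIC penalizes each extra parameter more strongly than AIC once
# n > e^2 ~ 7.4 samples.
print(aic(-100.0, 5), bic(-100.0, 5, 100))
```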
In contrast, a Bayesian approach would not attempt to estimate what is the “optimal” number of groups, but instead average over models with a different number of groups [@jeffreys39]. The Bayesian approach is becoming a popular technique to solve problems in data analysis, model selection and hypothesis testing [@spirtes00; @mackay03; @robert07]. Many of the original ideas come from the early work of Jeffreys [@jeffreys39], but it is just recently that they are starting to be used widely [@spirtes00; @mackay03; @robert07]. The application of Bayesian approaches to real problems can be, however, quite challenging. In most cases the solution is explored via Monte Carlo sampling [@chen00; @mackay03] or variational methods [@mackay03; @beal03; @yedidia05]. The application of variational methods to Bayesian problems results in the variational Bayes (VB) algorithm [@mackay03; @beal03]. The VB algorithm is a set of self-consistent equations analogous to the EM algorithm. They can be solved recursively, obtaining an approximate solution to the inverse inference problem. These methods have been applied, for example, to Gaussian mixture models for real value data [@maclachlan00; @rasmussen00], Dirichlet mixture models for categorical data [@blei03] and the problem of finding graph modules [@hofman07].
Here we further study the use of variational methods in the context of Bayesian approaches, focusing on data clustering problems. In the first two sections we review the Bayesian approach. In Section \[variational\] we revisit the connection between the Bayesian formulation and statistical mechanics. In Section \[models\] we introduce the generalities of generative models with a hidden structure at the samples side and at both the samples and variables side. In Section \[S:priors\] we extend the previous work by Jaynes [@jaynes68], deriving prior distributions based on symmetry properties. We report a correction to his result for the model with a location and scale parameter and an extension of his result for the binomial model to the multinomial model. In the following sections we study the problem of two-sided clustering of real value data and of clustering data represented by a hypergraph or bipartite graph. Depending on our starting statistical model, we obtain a VB algorithm. Because of their Bayesian root, the VB algorithms have a built-in correction for model complexity or bias and, therefore, they do not require the use of additional complexity criteria. The performance of the VB algorithms is tested on some examples, obtaining satisfactory results whenever there is a significant distinction between the groups.
Bayesian approach and variational solution {#variational}
==========================================
The [*Bayesian approach*]{} is a systematic methodology to interpret complex datasets and to evaluate model hypothesis. Its main ingredients or steps are: given a dataset $D$, (i) introduce a statistical model with model parameters, $\phi$, (ii) write down the likelihood to observe the data given the proposed model and parameters, $P(D|\phi)$, (iii) determine the prior distribution for the model parameters based on our current knowledge, $P(\phi)$, and, finally, (iv) invert the statistical model of the data given the likelihood and prior distribution to obtain the posterior distribution of the model parameters given the model and data, $P(\phi|D)$. The latter step is based on Bayes rule
$$\label{Bayes-Theorem}
P(\phi|D) = \frac{1}{Z} P(D|\phi)P(\phi)$$
where
$$\label{Z}
Z = P(D)=\int d\phi P(D|\phi)P(\phi)\ .$$
Having obtained the distribution of the model parameters, at least formally, we can determine other magnitudes. For example, the average of a quantity $A(\phi)$ is given by
$$\label{Ave}
\langle A(\phi)\rangle = \int d\phi P(\phi|D) A(\phi)\ .$$
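Equations (\[Bayes-Theorem\])-(\[Ave\]) can be illustrated numerically. The sketch below (our own toy example, not from the text) applies them on a grid to a one-parameter coin-flip model with a uniform prior, where the exact posterior mean is $(k+1)/(n+2)$:

```python
import numpy as np

# Grid-based evaluation of Eqs. (Bayes-Theorem), (Z) and (Ave) for a
# binomial likelihood with k heads in n flips and a uniform prior on
# the head probability phi.  Exact posterior mean: (k+1)/(n+2).

def posterior_mean(k, n, grid=100001):
    phi = np.linspace(0.0, 1.0, grid)
    dphi = phi[1] - phi[0]
    likelihood = phi**k * (1.0 - phi)**(n - k)   # P(D|phi)
    prior = np.ones_like(phi)                    # P(phi), uniform
    Z = np.sum(likelihood * prior) * dphi        # Eq. (Z)
    posterior = likelihood * prior / Z           # Eq. (Bayes-Theorem)
    return np.sum(phi * posterior) * dphi        # Eq. (Ave)

print(posterior_mean(7, 10))   # ~0.6667 = 8/12
```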
In practice calculating (\[Z\]) or (\[Ave\]) is a formidable task. A very powerful approximation scheme is the [*variational method*]{} [@mackay03; @beal03]. The main idea of the variational method is to approximate the generally difficult to handle distribution $P(\phi|D)$ by a distribution $Q(\phi|D)$ of a more tractable form. In the following we omit the dependency of $Q$ on $D$ and just write $Q(\phi)$. Given $Q(\phi)$ we can obtain a bound for $F=-\ln Z$ using Jensen’s inequality
$$\begin{aligned}
\label{Jensen}
F & = & -\ln Z
\nonumber\\
& = & -\ln \int d\phi Q(\phi)
\frac{P(D|\phi)P(\phi)}{Q(\phi)}
\nonumber\\
& \leq & -\int d\phi Q(\phi)
\ln \frac{P(D|\phi)P(\phi)}{Q(\phi)}\end{aligned}$$
The latter equation can be rewritten as [@mackay03]
$$\label{F}
F \leq U - TS$$
where $T=1$,
$$\label{U}
U = -\int d\phi Q(\phi) \ln P(D|\phi)$$
is minus the average log likelihood and
$$\label{S}
S = - \int d\phi Q(\phi) \ln
\frac{Q(\phi)}{P(\phi)}$$
is the Kullback-Leibler divergence of $Q(\phi)$ relative to the prior distribution $P(\phi)$ [@kullback59]. Equation (\[F\]) resembles the usual free energy in statistical mechanics: $F = U - TS$, where $U$, $S$ and $T$ are the internal energy, entropy and temperature of the system, the temperature being expressed in units of the Boltzmann constant $k_{\rm B}$. Minus the average log likelihood plays the role of the internal energy, the Kullback-Leibler divergence of $Q(\phi)$ plays the role of the entropy, and the temperature equals one.
Equation (\[F\]) emphasizes the two components determining the best choice of variational distribution $Q(\phi)$: fit to the data and model bias. How well the data is fitted is quantified by the internal energy $U$ (\[U\]). To achieve the best fit, or internal energy ground state, $Q(\phi)$ should be concentrated around the regions of the parameter space where $P(D|\phi)$ is maximal. The best choice in this respect will be the maximum likelihood estimate (MLE)
$$\label{QMLE}
Q_{\rm MLE}(\phi) = \delta(\phi-\phi^*)$$
where
$$\label{phiMLE}
\phi^*=\arg\max_{\phi}P(D|\phi)\ .$$
In the opposite extreme, when no data is presented to us, the best distribution is the one maximizing the entropy term $S$, i.e. minimizing the Kullback-Leibler divergence relative to the prior distribution. This maximum entropy (ME) solution is the prior distribution itself
$$\label{QME}
Q_{ME}(\phi) = P(\phi)\ .$$
In general, the drive to better fit the data is opposed by the tendency towards the least biased model. The variational solution therefore lies between the extreme of biased models fitting the data very well and that of completely unbiased models giving a bad fit to the data. It is obtained by minimizing (\[F\]) with respect to $Q(\phi)$ over a restricted class of functions. This variational solution $Q(\phi)$ represents the closest distribution to $P(\phi|D)$ within the class of functions considered.
Statistical model with a population structure {#models}
=============================================
In this section we present the generalities of statistical models with a first-level population structure. Similar models have been studied in [@blei03; @hofman07]. Our working hypothesis is that there is a hidden population structure, characterized by the subdivision of the population samples into groups. We assume that we are given a dataset $D$ which, in some way to be determined, reflects the population structure. The problem consists of inferring this hidden structure and the associated model parameters from the data. To tackle this problem we introduce a statistical model with a built-in population structure as a generative model of the data. The population structure and the model parameters are then inferred by solving the inverse problem. More precisely:
- We consider a population composed of $n$ elements divided in $K$ groups.
- The assignment of samples to groups is generated by a multinomial model with probabilities $\pi_k$, $k=1,\ldots,K$. Denoting by $g_i$ the group to which the $i$-th sample belongs, we obtain
$$\label{Pgpi}
P(g|\pi) = \prod_{i=1}^n\pi_{g_i}\ .$$
- Given the group assignments $g_i$, and depending on the dataset, we write down the likelihood $P(D|g,\theta)$ to observe the data parametrized by the parameter set $\theta$.
- Putting all this together we obtain the posterior distribution
$$\label{GM1}
P(\phi|D) = \frac{1}{Z} P(D|g,\theta)P(g|\pi)P(\theta)P(\pi)P(K)\ ,$$
where $\phi=(g,\theta,\pi,K)$ and $P(\theta)$, $P(\pi)$ and $P(K)$ are the prior distributions of $\theta$, $\pi$ and $K$.
The form of the prior distributions, except for $P(K)$, is the subject of the next section. The distribution $P(K)$ is irrelevant for problems with large datasets: the difference between the log-likelihoods of models with different values of $K$ is in general of the order of the dataset size and, as a consequence, the contribution of $\ln P(K)$ is negligible. Thus, in the following sections we simply neglect the contribution given by $P(K)$. Finally, we specify the likelihood $P(D|g,\theta)$ when addressing specific problems.
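The group-assignment step (\[Pgpi\]) can be sketched directly; the following is a minimal illustration with arbitrary values of $n$, $K$ and $\pi$:

```python
import numpy as np

rng = np.random.default_rng(1)

n, K = 10, 3
pi = np.array([0.5, 0.3, 0.2])      # group probabilities pi_k, sum to 1

# Sample the group assignments g_i from the multinomial model.
g = rng.choice(K, size=n, p=pi)

# P(g|pi) = prod_i pi_{g_i}, eq. (Pgpi)
P_g = np.prod(pi[g])
log_P_g = np.log(pi[g]).sum()
```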
In some cases we are going to assume that the variables in our dataset are also divided in groups. Here we consider a set of $m$ variables divided in $L$ groups. The assignment of variables to groups is generated by a multinomial model with probabilities $\kappa_l$, $l=1,\ldots,L$. Denoting by $c_j$, $j=1,\ldots,m$, the group to which variable $j$ belongs, we can then write
$$\label{Pckappa}
P(c|\kappa) = \prod_{j=1}^m \kappa_{c_j}\ .$$
After adding this variable group structure, the posterior distribution (\[GM1\]) is replaced by
$$\begin{aligned}
\label{GM2}
P(\phi|D) &=& \frac{1}{Z} P(D|g,c,\theta)P(g|\pi)P(c|\kappa)
\nonumber\\
&\times& P(\pi)P(\kappa)P(\theta)P(K)\ ,\end{aligned}$$
where $\phi=(g,c,\theta,\pi,\kappa,K)$ and $P(\kappa)$ is the prior distribution of $\kappa$.
Prior distributions {#S:priors}
===================
\[priors\]
  ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Model         Likelihood                                                                        Conjugate prior                                                                                                              Invariant prior                            Renormalization limit
  ------------- --------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------- ------------------------------------------ -----------------------------------------------------------
  Binomial      $\binom{N}{n} p^n(1-p)^{N-n}$                                                      ${\rm Beta}(p;\tilde{\alpha},\tilde{\beta}) = \frac{1}{{\rm B}(\tilde{\alpha},\tilde{\beta})} p^{\tilde{\alpha}-1}(1-p)^{\tilde{\beta}-1}$                               ${\rm const.}\ p^{-1}(1-p)^{-1}$           $\tilde{\alpha}\rightarrow0$, $\tilde{\beta}\rightarrow0$
  Multinomial   $\frac{(\sum_{k=1}^K n_k)!}{\prod_{k=1}^K n_k!} \prod_{k=1}^K \pi_k^{n_k}$         ${\rm D}(\pi;\tilde{\gamma}) = \frac{1}{{\rm B}(\tilde{\gamma})} \prod_{k=1}^K \pi_k^{\tilde{\gamma}_k-1}$                                                               ${\rm const.}\ \prod_{k=1}^K \pi_k^{-1}$   $\tilde{\gamma}_k\rightarrow0$
  Normal        $\prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(X_i-\mu)^2}{2\sigma^2}}$   $\frac{2\left(\frac{\tilde{\alpha}}{2}\tilde{\sigma}^2\right)^{\frac{\tilde{\alpha}}{2}}}{\Gamma\left(\frac{\tilde{\alpha}}{2}\right)\sigma^{\tilde{\alpha}+1}} e^{-\frac{\tilde{\alpha}\tilde{\sigma}^2}{2\sigma^2}} \sqrt{\frac{\tilde{\alpha}}{2\pi\sigma^2}} e^{-\frac{\tilde{\alpha}(\mu-\mu_0)^2}{2\sigma^2}}$   $\frac{\rm const.}{\sigma^2}$              $\tilde{\alpha}\rightarrow0$
  ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The choice of the prior distribution $P(\phi)$ is probably one of the least obvious topics in Bayesian analysis. Currently the predominant choice is the use of conjugate priors. The form of conjugate priors is indicated by the likelihood, making the prior selection less ambiguous. For example, the binomial likelihood $P(n|p)\propto p^n (1-p)^{N-n}$ suggests a beta distribution for $P(p|n)$. Furthermore, by choosing a beta distribution as a prior, $P(p) \propto p^{\tilde{\alpha}-1}(1-p)^{\tilde{\beta}-1}$, the posterior distribution remains a beta distribution, but with exponents $\alpha=\tilde{\alpha}+n$ and $\beta=\tilde{\beta}+N-n$. In this sense, the beta distribution is the conjugate prior of the binomial likelihood. A list of conjugate priors relevant for this work is provided in Table \[priors\].
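The conjugacy can be verified numerically. The sketch below (arbitrary prior parameters and counts) normalizes the likelihood-times-prior product on a grid and compares it with ${\rm Beta}(p;\tilde{\alpha}+n,\tilde{\beta}+N-n)$:

```python
import numpy as np
from math import lgamma, comb

def beta_pdf(p, a, b):
    """Beta density via log-gamma, to avoid overflow."""
    ln_norm = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(ln_norm + (a - 1) * np.log(p) + (b - 1) * np.log(1 - p))

a0, b0 = 2.0, 3.0        # prior parameters alpha~, beta~ (arbitrary)
N, n = 20, 7             # trials and successes (arbitrary)

p = np.linspace(1e-4, 1 - 1e-4, 20001)
unnorm = comb(N, n) * p**n * (1 - p)**(N - n) * beta_pdf(p, a0, b0)
post = unnorm / (unnorm.sum() * (p[1] - p[0]))   # numerical normalization
```

The numerically normalized posterior coincides with the analytically updated beta density on the grid.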
Yet, the fact that the form of conjugate priors is suggested by the likelihood does not demonstrate that they are the correct choice of priors. Moreover, even if we accept their use, it is not clear what the correct choice is for the prior distribution parameters, e.g. $\tilde{\alpha}$ and $\tilde{\beta}$. Different methods have been proposed to determine these parameters. In general they are based on [*a posteriori*]{} analyses, i.e. calculations making use of the data in some way or another. Such methods violate, however, the concept of prior distribution, defined as the distribution of the model parameters in the absence of the data.
An alternative approach is that of Jaynes [@jaynes68]. According to Jaynes, in the absence of any data, the priors should be determined solely by the symmetries and constraints of the problem under consideration. In this work we make use of Jaynes's approach to determine the prior distribution. Below we derive Jaynes's priors for the cases relevant to this work.
Prior for a model with location and scale parameters {#LP}
----------------------------------------------------
Consider a problem where the data consist of identically distributed random variables $X_i$, $i=1,\ldots,n$, taking real values. Furthermore, let us assume that the likelihood has the form
$$\label{l1}
P(X|\mu,\sigma) = \prod_i f\left( \frac{X_i-\mu}{\sigma} \right) \frac{1}{\sigma}\ ,$$
where $f(x)$ is a probability density function on the real line and $\mu$ and $\sigma$ are a location and a scale parameter, respectively. Our task consists of determining the prior distribution of $\mu$ and $\sigma$. Now, suppose the $X_i$ represent positions, which could be measured from different systems of reference and using different units. In this context the prior distribution should be the same regardless of our system of reference and units. More precisely, our system is invariant under the transformations
$$\begin{aligned}
\label{t1}
x^\prime &=& a(x+b)\nonumber\\
\mu^\prime &=& a(\mu+b)\nonumber\\
\sigma^\prime &=& a\sigma\end{aligned}$$
where $b$ represents a translation and $a$ a change of scale or units. The likelihood is invariant under these transformations and so must be the prior distribution. Therefore,
$$\label{i1}
P(\mu^\prime,\sigma^\prime) d\mu^\prime d\sigma^\prime = P(\mu,\sigma) d\mu d\sigma$$
The solution to this functional equation is
$$\label{p1}
P(\mu,\sigma) = \frac{\rm const.}{\sigma^2}\ .$$
This analysis was first reported by Jaynes [@jaynes68]. He obtained, however, $P(\mu,\sigma)\propto 1/\sigma$. This discrepancy is rooted in the fact that Jaynes did not take into account that the location parameter $\mu$ follows the same transformation rules as $x$ under translations and changes of scale. He assumed $\mu^\prime=\mu+b$ [@jaynes68] while the correct transformation is $\mu^\prime=a(\mu+b)$ (\[t1\]).
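Since the Jacobian of (\[t1\]) restricted to $(\mu,\sigma)$ is $a^2$, the invariance condition (\[i1\]) reads $P(a(\mu+b),a\sigma)\,a^2 = P(\mu,\sigma)$. A quick numeric sketch (arbitrary values) confirms that ${\rm const.}/\sigma^2$ satisfies it while $1/\sigma$ does not:

```python
import numpy as np

def prior2(mu, sigma):
    """Candidate prior P ~ 1/sigma^2, eq. (p1)."""
    return 1.0 / sigma**2

def prior1(mu, sigma):
    """Jaynes's original candidate P ~ 1/sigma."""
    return 1.0 / sigma

mu, sigma = 1.3, 0.7
a, b = 2.5, -0.4                          # arbitrary scale and translation

mu_p, sigma_p = a * (mu + b), a * sigma   # transformed parameters
jac = a * a                               # |d(mu',sigma')/d(mu,sigma)|

# Invariance: P(mu',sigma') dmu' dsigma' = P(mu,sigma) dmu dsigma
lhs2, rhs2 = prior2(mu_p, sigma_p) * jac, prior2(mu, sigma)
lhs1, rhs1 = prior1(mu_p, sigma_p) * jac, prior1(mu, sigma)
```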
Prior for the multinomial model {#PM}
-------------------------------
Consider the multinomial model with $K$ states
$$\label{l2}
P(n|\pi) =\frac{ \left( \sum_{k=1}^Kn_k \right)! }{ \prod_{k=1}^K n_k! }
\prod_{k=1}^K \pi_k^{n_k}\ ,$$
where $n_k$ is the number of times state $k$ was observed and $\pi_k$ is the probability to observe state $k$ in one trial, $0\leq\pi_k\leq1$ and $\sum_{k=1}^K\pi_k=1$. Here we extend the approach followed by Jaynes for the binomial model [@jaynes68].
The probabilities $\pi_k$ may be different depending on our beliefs, e.g. that all states are equally probable. Different investigators may have different beliefs, resulting in different choices of $\pi_k$. The main assumption is that the prior distribution should be independent of our specific beliefs and, therefore, should be invariant under a belief transformation.
[*Belief transformation:*]{} Let us represent by $S_k$ the state $k$, and let $P(S_k|E)$ and $P(S_k|E^\prime)$ be the probabilities to observe state $S_k$ in one trial according to beliefs $E$ and $E^\prime$, respectively. From Bayes' rule it follows that
$$\label{t2}
P(S_k|E^\prime) = \frac{ P(E^\prime|S_k,E)P(S_k|E) }{ \sum_j
P(E^\prime|S_j,E)P(S_j|E) }$$
for $k=1,\ldots,K$. The latter equation can be rewritten as
$$\label{t3}
\pi_k^\prime = \frac{a_k}{A} \pi_k$$
for $k=1,\ldots,K-1$ and $\pi_K^\prime=1-\sum_{k<K}\pi_k^\prime$, where $\pi_k=P(S_k|E)$, $\pi_k^\prime=P(S_k|E^\prime)$,
$$\label{ai}
a_k = \frac{ P(E^\prime|S_k,E) }{ P(E^\prime|S_K,E) }$$
and
$$\label{A}
A = 1 + \sum_{k<K} (a_k-1)\pi_k\ .$$
Equation (\[t3\]) provides the transformation rules of the probabilities $\pi_k$ from one system of beliefs to another.
The invariance under the above transformation leads to the functional equation
$$\label{i2}
P(\pi^\prime) d\pi^\prime = P(\pi) d\pi\ .$$
To solve this equation we first need to compute the determinant of the transformation Jacobian. The Jacobian of the transformation (\[t3\]) has the matrix elements
$$\label{Jij}
J_{ij} = \frac{\partial\pi_i^\prime}{\partial\pi_j}
= \frac{a_i\delta_{ij}}{A} - \frac{a_i(a_j-1)\pi_i}{A^2}\ ,$$
$i,j=1,\ldots,K-1$. This matrix can be decomposed into the product $J=BC$, where $B_{ij}=a_i\delta_{ij}/A$ is a diagonal matrix and $C_{ij}=\delta_{ij}-(a_j-1)\pi_i/A$ has two eigenvalues, $\lambda_1=A^{-1}$ and a $(K-2)$-fold degenerate eigenvalue $\lambda_2=1$. Putting it all together we obtain
$$\label{dJ}
|J| = |B|\lambda_1\lambda_2^{K-2} = \frac{1}{A^K} \prod_{k=1}^K a_k\ .$$
The solution of (\[i2\]), with $d\pi^\prime = |J|d\pi$, is given by
$$\label{p2}
P(\pi) = {\rm const.} \prod_{i=1}^K \pi_i^{-1}\ .$$
Note that for $K=2$, $\pi_1=p$ and $\pi_2=1-p$, we recover the result by Jaynes for the binomial model
$$\label{p3}
P(p)\propto p^{-1}(1-p)^{-1}\ .$$
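The determinant (\[dJ\]) can be checked numerically. The sketch below uses arbitrary values of $\pi$ and $a$, with $a_K=1$ as implied by (\[ai\]), and compares the determinant of the explicit Jacobian (\[Jij\]) with the closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 5

pi = rng.dirichlet(np.ones(K))      # probabilities pi_k, sum to 1
a = rng.random(K) + 0.5             # likelihood ratios a_k
a[K - 1] = 1.0                      # a_K = 1, reference state k = K

A = 1.0 + np.sum((a[:K - 1] - 1.0) * pi[:K - 1])

# Jacobian (Jij) of the belief transformation, a (K-1) x (K-1) matrix:
# J_ij = a_i delta_ij / A - a_i (a_j - 1) pi_i / A^2
J = (np.diag(a[:K - 1]) / A
     - np.outer(a[:K - 1] * pi[:K - 1], a[:K - 1] - 1.0) / A**2)

detJ = np.linalg.det(J)
```

The numeric determinant agrees with $\prod_k a_k/A^K$ to machine precision.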
Improper priors renormalization
-------------------------------
The prior distributions (\[p1\]) and (\[p2\]) are improper, i.e. their integral over the parameter space is not finite. At first this may seem an unsuitable property for a prior distribution. Nevertheless, the improper nature of these prior distributions just indicates that the symmetries of our problem are not sufficient to fully determine them. Data is required to obtain a proper distribution. The best example for an intuitive understanding of these arguments is the prior distribution of the location parameter. In the absence of any data and under the assumption of translational invariance, it is clear that every value on the real line is an equally probable value for the location parameter, resulting in an improper prior.
From the operational point of view, the posterior distribution may be proper even when the prior is not. Indeed, while the integral $\int d\phi P(\phi)$ may diverge, $\int d\phi P(\phi|D) \propto \int d\phi P(D|\phi)P(\phi)$ may be finite. The posterior distribution can be improper when the inference problem has not been correctly formulated or there is not sufficient data to determine the model parameters.
To avoid dealing with improper distributions, we can treat improper priors as limits of proper distributions. Since conjugate priors facilitate analytical calculations they are a good starting point. This renormalization is illustrated in Table \[priors\] for selected examples. These are the prior distributions used herein. In particular, for the multinomial probabilities $\pi$ and $\kappa$ we use the renormalized invariant priors
$$\label{Ppi}
P(\pi) = \frac{1}{{\rm B}(\tilde{\gamma})}
\prod_{k=1}^K \pi_k^{\tilde{\gamma}_k-1}$$
$$\label{Pkappa}
P(\kappa) = \frac{1}{{\rm B}(\tilde{\epsilon})}
\prod_{l=1}^L \kappa_l^{\tilde{\epsilon}_l-1}$$
with $\tilde{\gamma}_k\rightarrow0$ and $\tilde{\epsilon}_l\rightarrow0$.
Mean-field approximation {#MF}
========================
In this section we specify the form of the variational function $Q(\phi)$. To allow for an analytical solution we neglect correlations between the group assignments and the remaining model parameters. We denote by $p_{ik}$ the probability that sample $i$ belongs to sample group $k$ and by $q_{jl}$ the probability that variable $j$ belongs to variable group $l$. Furthermore, given that $\theta$, $\pi$ and $\kappa$ always appear in different factors in (\[GM1\]) or (\[GM2\]), their joint distribution factorizes. Within the mean-field approximation for the group assignments and the latter factorization, the variational function can be written as
$$\label{MF1}
Q(\phi) = \prod_i p_{ig_i} R(\theta)R(\pi)$$
when dealing with the generative model (\[GM1\]) and
$$\label{MF2}
Q(\phi) = \prod_i p_{ig_i} \prod_j q_{jc_j} R(\theta)R(\pi)R(\kappa)$$
when dealing with the generative model (\[GM2\]), where $R(x)$ denotes a generic probability density function of $x$.
Summarizing, in the case studies below we solve the generative models (\[GM1\]) or (\[GM2\]), making use of renormalized invariant priors (Table \[priors\]) and the MF variational function (\[MF1\]) or (\[MF2\]), respectively. This approach is based on the following assumptions: the population is divided in groups, the group assignments are generated by a multinomial model, the priors are renormalized invariant distributions, and the variational solution is approximated by a MF factorization with respect to the group assignments.
Case study: Clustering real value data {#real}
======================================
Quite often we deal with datasets consisting of real-valued measurements $X_{ij}$ over $i=1,\ldots,n$ samples and $j=1,\ldots,m$ variables, where the samples and variables are not necessarily independent. For simplicity, the particular kind of dependency we focus on is the existence of sample and variable groups. Our problem is to infer the sample and variable groups and the statistical parameters characterizing them.
To address this problem we consider the generative model (\[GM2\]) with a normal likelihood, representing a two-sided Gaussian mixture model. The two-sided Gaussian mixture model is a natural extension of the Gaussian mixture model [@maclachlan00; @rasmussen00] to characterize datasets with a group structure for both the samples and the variables. Our contributions in this context are the use of prior distributions derived from symmetry arguments alone and the inclusion of a group structure on the variables side. The dataset, likelihood and priors associated with our statistical model are defined as follows:
[*Data:*]{} Consider $i=1,\ldots,n$ samples, $j=1,\ldots,m$ variables, and the real value measurements $X_{ij}$.
[*Likelihood:*]{} We assume that $X_{ij}$ are random variables with a normal distribution, with group dependent mean $\mu_{g_ic_j}$ and group independent variance $\sigma$, resulting in the likelihood
$$\label{Preal}
P(X|g,c,\mu,\sigma) = \prod_{ij} \frac{1}{ \sqrt{2\pi\sigma^2} }
e^{ - \frac{\left(X_{ij}-\mu_{g_ic_j}\right)^2}{2\sigma^2} }\ .$$
Here we are assuming that the main difference between groups is given by the means, while the variance is group independent. The latter is a good approximation when the source of noise is the measurement itself, behaving in the same way regardless of the sample and variable group.
[*Priors:*]{} For the prior $P(\mu,\sigma)$ we generalize the Normal distribution prior in Table \[priors\]. Accounting for more than one location parameter we obtain
$$\begin{aligned}
\label{PGm}
P(\mu,\sigma) &=& \frac{ 2\left(\frac{\tilde{\alpha}}{2}\tilde{\sigma}^2\right)
^\frac{\tilde{\alpha}}{2} }{ \Gamma\left(\frac{\tilde{\alpha}}{2}\right)
\sigma^{\tilde{\alpha}+1} } e^{ -\tilde{\alpha}\frac{\tilde{\sigma}^2}{2\sigma^2} }\\
&\times& \prod_{kl} \sqrt{ \frac{\tilde{\alpha}}{2\pi\sigma^2} }
e^{ -\frac{\tilde{\alpha}}{2\sigma^2} \left(\mu_{kl}-\tilde{\mu}_{kl}\right)^2 }\end{aligned}$$
and we work in the limit $\tilde{\alpha}\rightarrow0$.
To apply the variational method we consider the MF approximation (\[MF2\]). Substituting the likelihood (\[Preal\]), the priors (\[Ppi\]), (\[Pkappa\]) and (\[PGm\]) and the MF variational function (\[MF2\]) into (\[F\]), and integrating over $\phi$ (summing over $g_i$ and $c_j$ and integrating over $\mu_{kl}$, $\sigma$, $\pi_k$ and $\kappa_l$) we obtain
$$\begin{aligned}
\label{Freal}
F &\leq& {\rm const.} + (nm+KL+\tilde{\alpha}+1)\langle\ln\sigma\rangle
\nonumber\\
&+& \frac{1}{2}
\sum_{ijkl}
p_{ik}q_{jl}\left( \langle\frac{1}{\sigma^2}\rangle X_{ij}^2 -
2X_{ij}\langle\frac{\mu_{kl}}{\sigma^2}\rangle
+ \langle\frac{\mu_{kl}^2}{\sigma^2}\rangle \right)
\nonumber\\
&+& \frac{\tilde{\alpha}}{2} \left[
\langle\frac{1}{\sigma^2}\rangle \tilde{\sigma}^2
+ \sum_{kl} \left( \langle\frac{1}{\sigma^2}\rangle \tilde{\mu}_{kl}^2
- 2\tilde{\mu}_{kl}\langle\frac{\mu_{kl}}{\sigma^2}\rangle
+ \langle\frac{\mu_{kl}^2}{\sigma^2}\rangle \right) \right]
\nonumber\\
&-&\sum_k\left(\sum_ip_{ik}+\tilde{\gamma}_k-1\right)
\langle\ln\pi_k\rangle
\nonumber\\
&-&\sum_l\left(\sum_jq_{jl}+\tilde{\epsilon}_l-1\right)
\langle\ln\kappa_l\rangle
\nonumber\\
&+& \int d\mu d\sigma R(\mu,\sigma)\ln R(\mu,\sigma)
\nonumber\\
&+& \int d\pi R(\pi)\ln R(\pi)
+ \int d\kappa R(\kappa)\ln R(\kappa)
\nonumber\\
&+&\sum_{ik} p_{ik}\ln p_{ik} + \sum_{jl} q_{jl}\ln q_{jl}\end{aligned}$$
Minimizing (\[Freal\]) with respect to $p_{ik}$, $q_{jl}$, $R(\mu,\sigma)$, $R(\pi)$ and $R(\kappa)$ we obtain (VB-1):
$$\label{preal}
p_{ik} = \frac{ e^{\langle\ln\pi_k\rangle -\frac{1}{2\sigma_*^2}
\sum_{jl} q_{jl}\left(
\frac{\sigma_*^2}{\alpha_{kl}} + \left(X_{ij}-\langle \mu_{kl}\rangle\right)^2
\right) } }
{\sum_s e^{ \langle\ln\pi_s\rangle -\frac{1}{2\sigma_*^2}
\sum_{jl} q_{jl}\left(
\frac{\sigma_*^2}{\alpha_{sl}} + \left(X_{ij}-\langle \mu_{sl}\rangle\right)^2
\right) } }$$
$$\label{qreal}
q_{jl} = \frac{ e^{ \langle\ln\kappa_l\rangle -\frac{1}{2\sigma_*^2}
\sum_{ik} p_{ik}\left(
\frac{\sigma_*^2}{\alpha_{kl}} + \left(X_{ij}-\langle \mu_{kl}\rangle\right)^2
\right) } }
{ \sum_s e^{ \langle\ln\kappa_l\rangle -\frac{1}{2\sigma_*^2}
\sum_{ik} p_{ik}\left(
\frac{\sigma_*^2}{\alpha_{ks}} + \left(X_{ij}-\langle \mu_{ks}\rangle\right)^2
\right) } }$$
$$\begin{aligned}
\label{Rmusigma}
R(\mu,\sigma) &=& \frac{ 2\left(\frac{\alpha}{2}\sigma_*^2\right)^{\frac{\alpha}{2}} }
{ \Gamma\left(\frac{\alpha}{2}\right) \sigma^{\alpha+1} }
e^{ -\frac{\alpha\sigma_*^2}{2\sigma^2} }
\nonumber\\
&\times& \prod_{kl} \sqrt{ \frac{ \alpha_{kl} }{ 2\pi\sigma^2 } }
e^{ -\frac{\alpha_{kl}}{2\sigma^2} \left(\mu_{kl}-\langle\mu_{kl}\rangle\right)^2 }\end{aligned}$$
$$\label{alphakl}
\alpha_{kl} = \tilde{\alpha} + \sum_{ij}p_{ik}q_{jl}$$
$$\label{alpha1}
\alpha = \tilde{\alpha}+nm$$
$$\label{mu}
\langle\mu_{kl}\rangle =
\frac{ \tilde{\alpha}\tilde{\mu}_{kl} + \sum_{ij} p_{ik}q_{jl}X_{ij} }
{ \tilde{\alpha} + \sum_{ij} p_{ik}q_{jl} }$$
$$\begin{aligned}
\label{sigma}
\sigma_*^2 &=& \frac{1}{\tilde{\alpha}+nm} \left[
\tilde{\alpha} \left( \tilde{\sigma}^2
+\sum_{kl}\left(\tilde{\mu}_{kl}^2
-\langle\mu_{kl}\rangle^2\right) \right) \right.
\nonumber\\
&+& \left. \sum_{ijkl}p_{ik}q_{jl} \left( X_{ij}^2-\langle\mu_{kl}\rangle^2 \right) \right]\end{aligned}$$
$$\label{P_pi}
R(\pi)={\rm D}(\pi;\gamma)\ ,\ \ \ \
\gamma_k = \tilde{\gamma}_k+\sum_ip_{ik}$$
$$\label{P_kappa}
R(\kappa)={\rm D}(\kappa;\epsilon)\ ,\ \ \ \
\epsilon_l = \tilde{\epsilon}_l+\sum_jq_{jl}$$
$$\begin{aligned}
\label{F_real}
F^* &=& {\rm const.} + \sum_{ik} p_{ik}\ln p_{ik} + \sum_{jl} q_{jl}\ln q_{jl}
- \ln{\rm B}(\gamma)
\nonumber\\
&-& \ln{\rm B}(\epsilon) + \frac{1}{2}\sum_{kl}\ln\alpha_{kl}\ .\end{aligned}$$
These are a set of self-consistent equations which can be solved recursively to determine the probabilistic group assignments and the $\mu$, $\sigma$, $\pi$ and $\kappa$ distributions. They are the same in spirit as those for the EM algorithm [@dempster77]. Following [@mackay03; @beal03] we refer to them as [*variational Bayes*]{} (VB) algorithm.
The main difference between the EM and VB algorithms is that in the former case we would take the average of the log likelihood over the group assignments but not over the distributions of $\mu$, $\sigma$, $\pi$ and $\kappa$. By taking the average over $\mu$ and $\sigma$ we obtain the additional $1/\alpha_{kl}$ term within the parenthesis in equations (\[preal\]) and (\[qreal\]). According to (\[alphakl\]), $\alpha_{kl}$ equals $\tilde{\alpha}$ plus the product of the average number of samples in sample group $k$ ($\sum_ip_{ik}$) and the average number of variables in variable group $l$ ($\sum_jq_{jl}$). Therefore, the $1/\alpha_{kl}$ term penalizes assignments to small groups, balancing the contribution of $(X_{ij}-\langle\mu_{kl}\rangle)^2$, which drives the estimates towards a better fit and, consequently, towards groups of minimal size.
VB implementation, real value data
----------------------------------
The actual implementation of the VB-1 algorithm in the context of real value data proceeds as follows. Set sufficiently large values for $K$ and $L$, larger than our expectation for the actual number of groups. In the following test examples we use $K=L=20$. Set the parameters $\tilde{\alpha}$, $\tilde{\mu}_{kl}$, $\tilde{\sigma}$, $\tilde{\gamma}_k$ and $\tilde{\epsilon}_l$. We set $\tilde{\alpha}=\tilde{\gamma}_k=\tilde{\epsilon}_l=10^{-6}$, $\tilde{\mu}_{kl}=0$ and $\tilde{\sigma}=1$. The choice of $\tilde{\mu}_{kl}$ and $\tilde{\sigma}$ is practically irrelevant provided we have chosen a sufficiently small $\tilde{\alpha}$. Set random initial conditions for $p_{ik}$ and $q_{jl}$ and iterate equations (\[preal\])-(\[F\_real\]) until the solution converges to some predefined accuracy; we require a relative error of $F^*$ smaller than $10^{-6}$. In practice, compute $\langle\mu_{kl}\rangle$, $\alpha_{kl}$, $\sigma_*$, $\gamma_k$, $\langle\ln\pi_k\rangle$, $\epsilon_l$, $\langle\ln\kappa_l\rangle$, $p_{ik}$, $q_{jl}$ and $F^*$, in that order. To explore different potential local minima, use different initial conditions and select the solution with the lowest $F^*$. Since this algorithm penalizes groups with few members, for sufficiently large $K$ and $L$ some sample and condition groups turn out to be empty. If this is not the case, $K$ and/or $L$ should be increased until at least one sample group and one variable group is empty.
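One sweep of this iteration can be sketched in a few lines of numpy. The sketch below is a simplified, hypothetical implementation: it fixes $\tilde{\mu}_{kl}=0$ and $\tilde{\sigma}=1$, drops the negligible $O(\tilde{\alpha})$ prior terms, uses $\langle\ln\pi_k\rangle=\psi(\gamma_k)-\psi(\sum_s\gamma_s)$ for the Dirichlet factors, and omits the restart loop and the convergence test on $F^*$:

```python
import numpy as np
from scipy.special import digamma


def softmax_rows(logw):
    """Row-wise normalized exponential, stable under large negative logs."""
    logw = logw - logw.max(axis=1, keepdims=True)
    w = np.exp(logw)
    return w / w.sum(axis=1, keepdims=True)


def vb1_step(X, p, q, a0=1e-6, g0=1e-6, e0=1e-6):
    """One sweep of the VB-1 equations (mu~_kl = 0, sigma~ = 1,
    O(a0) prior contributions dropped where negligible)."""
    n, m = X.shape
    N = np.outer(p.sum(0), q.sum(0))            # sum_ij p_ik q_jl
    akl = a0 + N                                # alpha_kl
    mu = (p.T @ X @ q) / akl                    # <mu_kl>
    s2 = ((X**2).sum() - (N * mu**2).sum()) / (a0 + n * m)   # sigma_*^2
    gam = g0 + p.sum(0)                         # Dirichlet parameters, pi
    eps = e0 + q.sum(0)                         # Dirichlet parameters, kappa
    ln_pi = digamma(gam) - digamma(gam.sum())   # <ln pi_k>
    ln_ka = digamma(eps) - digamma(eps.sum())   # <ln kappa_l>
    # sample responsibilities p_ik; sum_jl q_jl (X_ij - <mu_kl>)^2 expanded
    mj = q.sum(0)
    D = (X**2).sum(1)[:, None] - 2 * (X @ q) @ mu.T + (mu**2 @ mj)[None, :]
    pen = s2 * (1.0 / akl) @ mj                 # sum_jl q_jl s2/alpha_kl
    p = softmax_rows(ln_pi[None, :] - (D + pen[None, :]) / (2 * s2))
    # variable responsibilities q_jl, using the updated p
    ni = p.sum(0)
    E = (X**2).sum(0)[:, None] - 2 * (X.T @ p) @ mu + (ni @ mu**2)[None, :]
    pen = s2 * ni @ (1.0 / akl)
    q = softmax_rows(ln_ka[None, :] - (E + pen[None, :]) / (2 * s2))
    return p, q, mu, s2


# Synthetic example: two well-separated sample and variable groups.
rng = np.random.default_rng(3)
gt = np.repeat([0, 1], 20)                      # true sample groups
ct = np.repeat([0, 1], 20)                      # true variable groups
mu_true = np.array([[0.0, 3.0], [3.0, 6.0]])
X = mu_true[np.ix_(gt, ct)] + 0.1 * rng.standard_normal((40, 40))

# Near-truth initialization, used here only to illustrate the fixed point;
# in practice one samples several random initial conditions.
p = np.eye(2)[gt] + 0.3 * rng.random((40, 2))
p /= p.sum(1, keepdims=True)
q = np.eye(2)[ct] + 0.3 * rng.random((40, 2))
q /= q.sum(1, keepdims=True)
for _ in range(30):
    p, q, mu, s2 = vb1_step(X, p, q)
```

With well-separated means the assignments sharpen to the correct partition within a few sweeps; a full implementation would also evaluate $F^*$ and keep the best of several restarts.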
![[**Clustering real value data:**]{} Mutual information $I=I(p^O,p^*)$ between the original $p^O$ and estimated $p^*$ group assignments, relative to its maximum value $I_0$ attained when $p^*=p^O$. The original data was made of $n=100$ samples divided in $K$ groups and $m=100$ conditions divided in $L$ groups. The values of $X_{ij}$ were extracted from a normal distribution with mean $\mu_{kl}=k+l$ and variance $\sigma$. The figure shows the mutual information between the original groups and the group assignment estimated by the VB-1 algorithm, as a function of the variance $\sigma$. The dashed-dotted, solid and dashed lines correspond to the worst, average and best cases over 100 test examples, respectively. In a) $K=L=2$ and in b) $K=L=4$. In both cases the mutual information is approximately equal to its maximum $I_0$ for values of $\sigma$ less than one, the minimum difference between the original means $\mu_{kl}$.[]{data-label="fig_real"}](bc.fig.gaussian.eps){width="3.2in"}
Test examples
-------------
To test the performance of the VB-1 algorithm, (\[preal\])-(\[F\_real\]), we consider test examples generated by the likelihood (\[Preal\]) itself. Our aim is to test the variational result in the context of a relatively small number of samples and conditions. To quantify the goodness of the group assignment we consider the mutual information between the original $p^O$ ($p^O_{ik}=\delta_{g_ik}$) and estimated $p^*$ sample group assignments,
$$\label{Ip0p}
I(p^O,p^*) = \sum_{kk^\prime} \rho_{kk^\prime} \ln
\frac{ \rho_{kk^\prime} }{ \rho^O_k\rho^*_{k^\prime} }$$
where
$$\label{rhopp}
\rho_{kk^\prime} = \frac{1}{n}\sum_i p^O_{ik} p^*_{ik^\prime}$$
$$\label{rhop0}
\rho^O_{k} = \frac{1}{n}\sum_i p^O_{ik}$$
$$\label{rhop}
\rho^*_k = \frac{1}{n}\sum_i p^*_{ik}\ .$$
Note that $I(p^O,p^*)$ takes its maximum value when $p^*=p^O$, denoted by $I_0=I(p^O,p^O)$. Of course, the same could be done for the condition group assignments as well.
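Equations (\[Ip0p\])-(\[rhop\]) translate into a few lines of numpy; the sketch below (hypothetical function name) also verifies that two equal hard groups give $I_0=\ln 2$:

```python
import numpy as np

def assignment_mutual_information(pO, pS):
    """Mutual information between two soft group assignments, eq. (Ip0p)."""
    n = pO.shape[0]
    rho = (pO.T @ pS) / n            # rho_kk' = (1/n) sum_i pO_ik pS_ik'
    rO = pO.sum(0) / n               # rho^O_k
    rS = pS.sum(0) / n               # rho^*_k'
    mask = rho > 0                   # skip empty cells (0 ln 0 = 0)
    return np.sum(rho[mask] * np.log(rho[mask] / np.outer(rO, rS)[mask]))

# Two equal hard groups of 50 samples each: I(p^O, p^O) = ln 2.
pO = np.eye(2)[np.repeat([0, 1], 50)]
I0 = assignment_mutual_information(pO, pO)
```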
In our test examples the original data was made of $n=100$ samples divided in $K$ groups and $m=100$ conditions divided in $L$ groups. The values of $X_{ij}$ were extracted from a normal distribution with mean $\mu_{kl}=k+l$ and variance $\sigma$. We estimate the group assignment using the VB-1 algorithm, sampling one initial condition. Figure \[fig\_real\] shows the mutual information between the original and estimated groups as a function of the variance $\sigma$. In a) $K=L=2$ and in b) $K=L=4$. In both cases the mutual information is approximately equal to its maximum $I_0$ for values of $\sigma$ less than 1. Since 1 is the minimum difference between the original means $\mu_{kl}$, we conclude that the VB-1 algorithm performs well when there is a significant difference between the distributions associated with different groups. For larger values of $\sigma$ the VB-1 algorithm performance starts to decrease. This is not, however, a deficiency of the algorithm but an unavoidable consequence of the mixing between the distributions coming from different groups. It is worth noticing that we obtain similar results for the case $K=4$ and $L=1$, indicating that the method works when there is no group structure on one side, in this case the conditions.
Case study: clustering data represented by hypergraphs and bipartite graphs
===========================================================================
Several datasets consist of a certain number of properties together with the information of whether or not each sample exhibits each of the properties. For example, the dataset in Fig. \[fig\_hg\_bg\] describes a population of three animals characterized by two attributes, hair and legs. The attribute hair can take the value YES (has hair) or NO (does not have hair), while the attribute legs takes the values 2 or 4 (at least within this dataset). The mathematical treatment of this problem is significantly simplified if the variables are mapped onto Boolean variables. To each $S$-state variable we associate $S$ Boolean variables, each representing the occurrence or not of a specific state. For example, the attribute hair is associated with hair-YES and hair-NO, and the attribute legs with legs-2 and legs-4 (Fig. \[fig\_hg\_bg\]b). The outcome of this mapping is represented by the Boolean matrix $a_{ij}$, taking the value 1 if the answer to the Boolean variable $j$ is YES on sample $i$ and 0 otherwise.
![[**Hypergraph and bipartite graph data representations:**]{} a) An example of a problem with categorical data. b) Mapping of the categorical variables onto augmented Boolean variables. c) Hypergraph representation of the categorical dataset in a). d) Bipartite graph representation of the categorical dataset in a). e) A graph example. f) Nearest-neighbor mapping of the graph in e) onto a hypergraph, where each hyper-edge represents a set of nearest neighbors of a vertex in the original graph, indicated by (1), (2), (3) and (4). g) Nearest-neighbor mapping of the graph in e) onto a bipartite graph. The original graph vertices are represented by 1, 2, 3 and 4. The augmented bipartite graph vertices, representing nearest-neighbor sets, are represented by (1), (2), (3) and (4).[]{data-label="fig_hg_bg"}](fig_hg_bg.eps){width="3.2in"}
Depending on our aim, the Boolean matrix can be represented either by a hypergraph or by a bipartite graph. When we aim to cluster the samples without attempting to cluster the Boolean variables, $a_{ij}$ is better interpreted as the adjacency matrix of a hypergraph. A hypergraph is an intuitive extension of the concept of graph that allows for connections between more than two elements. In our case, the hypergraph vertices represent samples and the hyper-edges, one associated with each Boolean variable, represent the sets of all samples with the answer YES to the corresponding Boolean variable (Fig. \[fig\_hg\_bg\]c). On the other hand, when we aim to cluster both the samples and the Boolean variables, a bipartite graph interpretation is more appropriate, with one class of vertices for the samples and another for the Boolean variables, and an edge connecting sample $i$ and variable $j$ whenever $a_{ij}=1$ (Fig. \[fig\_hg\_bg\]d). The differences between these two approaches will become clear below.
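The mapping onto the Boolean matrix $a_{ij}$ can be sketched as follows; the animal rows are hypothetical, since only the attributes and their states are given in the text:

```python
import numpy as np

# Hypothetical sample rows for the animals example (the actual
# figure entries may differ).
samples = ["dog", "cat", "duck"]
attributes = {"hair": ["YES", "YES", "NO"],
              "legs": ["4", "4", "2"]}

# Map each S-state attribute onto S Boolean variables (attribute-state pairs).
bool_vars = []
columns = []
for name, values in attributes.items():
    for state in sorted(set(values)):
        bool_vars.append(f"{name}-{state}")
        columns.append([1 if v == state else 0 for v in values])

# Boolean matrix a_ij: rows are samples, columns are Boolean variables.
a = np.array(columns).T
```

Each column of $a$ is one hyper-edge (the set of samples answering YES), and the same matrix serves as the bi-adjacency matrix of the bipartite graph.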
One side clustering: Statistical model on hypergraphs
-----------------------------------------------------
In this case the samples are assumed to be divided in groups while the hypergraph edges are modeled as independent. Here we follow the statistical model introduced in [@vazquez08]:
[*Data:*]{} Consider a hypergraph with a vertex set representing $n$ samples and $m$ edges characterizing the relationships among them. The hypergraph is specified by its adjacency matrix $a$, where $a_{ij}=1$ if element $i$ belongs to edge $j$ and it is 0 otherwise.
[*Likelihood:*]{} The adjacency matrix elements are generated by a binomial model with sample group and variable dependent probabilities $\theta_{kj}$, $k=1,\ldots,K$ and $j=1,\dots,m$, resulting in
$$\label{Phg}
P(a|g,\theta) = \prod_{ij} \theta_{g_i j}^{a_{ij}}
\left(1- \theta_{g_i j}\right)^{1-a_{ij}}\ .$$
[*Priors:*]{} As priors we use the renormalized invariant prior of the binomial model (Table \[priors\]). Taking into account that we have a binomial model for each pair of sample group and edge, we obtain
$$\label{P_hg}
P(\theta) = \prod_{kj}{\rm Beta}(\theta_{kj};\tilde{\alpha}_{kj},\tilde{\beta}_{kj})$$
with $\tilde{\alpha}_{kj}\rightarrow0$ and $\tilde{\beta}_{kj}\rightarrow0$.
Substituting the likelihood (\[Phg\]), the priors (\[Ppi\]) and (\[P\_hg\]), and the MF variational function (\[MF1\]) into (\[F\]), and integrating over $\phi$ (summing over $g_i$ and integrating over $\theta_{kj}$ and $\pi_k$), we obtain
$$\begin{aligned}
\label{F_hg}
F &\leq& - \sum_{jk} \left(\sum_ip_{ik}a_{ij}+\tilde{\alpha}_{kj}-1\right)
\langle\ln\theta_{kj}\rangle
\nonumber\\
&-& \sum_{jk}\left(\sum_ip_{ik}(1-a_{ij})+\tilde{\beta}_{kj}-1\right)
\langle\ln(1-\theta_{kj})\rangle
\nonumber\\
&+& \sum_{ik} p_{ik}\ln p_{ik}
+ \int d\theta R(\theta)\ln R(\theta)
\nonumber\\
&+& \int d\pi R(\pi)\ln R(\pi)
+{\rm const.}\end{aligned}$$
Minimizing (\[F\_hg\]) with respect to $p_{ik}$, $R(\theta)$ and $R(\pi)$ we obtain (VB-2)
$$\label{p_hg}
p_{ik} = \frac{ e^{ \langle\ln\pi_k\rangle +
\sum_j \left[
a_{ij}\langle\ln\theta_{kj}\rangle
+(1-a_{ij})\langle\ln(1-\theta_{kj})\rangle
\right] } }
{ \sum_s e^{ \langle\ln\pi_s\rangle +
\sum_j \left[
a_{ij}\langle\ln\theta_{sj}\rangle
+(1-a_{ij})\langle\ln(1-\theta_{sj})\rangle
\right] } }$$
$$\label{Qtheta_hg}
R(\theta) = \prod_{kj}
{\rm Beta}(\theta_{kj};\alpha_{kj},\beta_{kj})\ ,$$
$$\label{alpha_hg}
\alpha_{kj} = \tilde{\alpha}_{kj}+\sum_{i}p_{ik}a_{ij}$$
$$\label{beta_hg}
\beta_{kj} = \tilde{\beta}_{kj}+\sum_{i}p_{ik}(1-a_{ij})\ .$$
$$\label{Qpi_hg}
R(\pi)={\rm D}(\pi;\gamma)\ ,\ \ \ \
\gamma_k = \tilde{\gamma}_k + \sum_ip_{ik}$$
$$\begin{aligned}
\label{F_hg_min}
F^*&=& {\rm const.} +
\sum_{ik} p_{ik}\ln p_{ik}
\nonumber\\
&-& \sum_{kj}\ln {\rm B}(\alpha_{kj},\beta_{kj})
-\ln{\rm B}(\gamma)\end{aligned}$$
These equations represent the VB algorithm for the statistical model on hypergraphs. In this case we have not been able to disentangle the contributions weighting the fit to the data and the model bias, both being included in the averages $\langle\ln(\theta_{kj})\rangle$ and $\langle\ln(1-\theta_{kj})\rangle$.
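In practice these averages have closed forms: under the Beta and Dirichlet variational posteriors, $\langle\ln\theta_{kj}\rangle=\psi(\alpha_{kj})-\psi(\alpha_{kj}+\beta_{kj})$, $\langle\ln(1-\theta_{kj})\rangle=\psi(\beta_{kj})-\psi(\alpha_{kj}+\beta_{kj})$ and $\langle\ln\pi_k\rangle=\psi(\gamma_k)-\psi(\sum_s\gamma_s)$, where $\psi$ is the digamma function. A minimal sketch (names are illustrative):

```python
from scipy.special import digamma

def expected_logs(alpha, beta, gamma):
    """Standard expected-log identities for the Beta and Dirichlet
    variational posteriors; alpha, beta, gamma are the posterior
    parameters of the updates above."""
    ln_theta = digamma(alpha) - digamma(alpha + beta)    # <ln theta_kj>
    ln_1mtheta = digamma(beta) - digamma(alpha + beta)   # <ln(1-theta_kj)>
    ln_pi = digamma(gamma) - digamma(gamma.sum())        # <ln pi_k>
    return ln_theta, ln_1mtheta, ln_pi
```
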
VB algorithm implementation, statistical model on hypergraphs
-------------------------------------------------------------
The implementation of the VB algorithm for the statistical model on hypergraphs proceeds as follows. Set a sufficiently large value of $K$, larger than the expected number of groups; we use $K=20$ in the test examples below. Set the parameters $\tilde{\alpha}_{kj}$, $\tilde{\beta}_{kj}$ and $\tilde{\gamma}_k$; we set $\tilde{\alpha}_{kj}=\tilde{\beta}_{kj}=\tilde{\gamma}_k=10^{-6}$. Set random initial conditions for $p_{ik}$. Starting from these initial conditions, iterate equations (\[p\_hg\])-(\[F\_hg\_min\]) until the solution converges to some predefined accuracy; we require a relative error of $F^*$ smaller than $10^{-6}$. In practice, compute $\alpha_{kj}$, $\beta_{kj}$, $\langle\ln\theta_{kj}\rangle$, $\langle\ln(1-\theta_{kj})\rangle$, $\gamma_k$, $\langle\ln\pi_k\rangle$, $p_{ik}$ and $F^*$, in that order. To explore different potential local minima, use different initial conditions and select the solution with the lowest $F^*$. Since this algorithm penalizes groups with few members, for sufficiently large $K$ some groups turn out to be empty; if this is not the case, increase $K$ until at least one group is empty. A MATLAB code implementing this algorithm can be found at http://www.sns.ias.edu/~vazquez/hgc.html.
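The steps above can be sketched as follows. This is an illustrative Python re-implementation, not the authors' MATLAB code; function and variable names are our own:

```python
import numpy as np
from scipy.special import digamma, betaln, gammaln

def vb2_hypergraph(a, K=20, tol=1e-6, max_iter=500, prior=1e-6, seed=0):
    """Sketch of one run of the VB-2 iteration for the hypergraph model;
    a is the n x m Boolean adjacency matrix, prior = alpha~ = beta~ = gamma~."""
    rng = np.random.default_rng(seed)
    n, m = a.shape
    p = rng.dirichlet(np.ones(K), size=n)   # responsibilities p_ik
    F_old = np.inf
    for _ in range(max_iter):
        # Posterior Beta/Dirichlet parameters
        alpha = prior + p.T @ a             # alpha_kj
        beta = prior + p.T @ (1 - a)        # beta_kj
        gamma = prior + p.sum(axis=0)       # gamma_k
        # Expected logs under the variational posteriors
        lt = digamma(alpha) - digamma(alpha + beta)
        l1t = digamma(beta) - digamma(alpha + beta)
        lpi = digamma(gamma) - digamma(gamma.sum())
        # Update p_ik in the log domain for numerical stability
        logp = lpi[None, :] + a @ lt.T + (1 - a) @ l1t.T
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # Variational free energy F*, up to an additive constant
        F = ((p * np.log(p + 1e-300)).sum()
             - betaln(alpha, beta).sum()
             - (gammaln(gamma).sum() - gammaln(gamma.sum())))
        if abs(F - F_old) < tol * abs(F):
            break
        F_old = F
    return p, F
```

In an actual run one would call this for many random seeds and keep the solution with the lowest returned $F$; groups whose $\gamma_k$ stays close to the prior are empty.
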
### Test example: zoo problem
Consider the animal population in Fig. \[fig\_zoo\]a together with their attributes: habitat, nutrition behavior, etc. Figure \[fig\_zoo\]b shows the mapping of this dataset onto a hypergraph. The hypergraph vertices represent animals and the edges represent the association between all animals with a given attribute: edge 1, all non-airborne animals; edge 2, all airborne animals, and so on.
The animal population stratification was already addressed in [@vazquez08], finding the solution in Fig. \[fig\_zoo\]c. Although the starting statistical model is the same, the solution in [@vazquez08] was found by fixing the number of groups and estimating the group assignment using the EM algorithm (essentially a maximum likelihood estimate). Then, in an attempt to focus on the solution with the best consensus, solutions for different numbers of groups were obtained and the most representative solution was selected.
Here we address the same problem using a Bayesian approach and the variational solution. We start from the same statistical model on hypergraphs but now obtain a solution using the VB-2 algorithm (\[p\_hg\])-(\[F\_hg\_min\]), sampling 10,000 initial conditions as in [@vazquez08]. The solution found by the VB-2 algorithm (Fig. \[fig\_zoo\]d) is quite similar to that previously found in [@vazquez08] (Fig. \[fig\_zoo\]c). The main differences are the splitting of the terrestrial mammals, and the exclusion of the platypus and the tortoise from the amphibia-reptiles group and of the scorpion from the terrestrial arthropods. More importantly, in both cases the main groups represent terrestrial mammals, aquatic mammals, birds, fishes, amphibia-reptiles, terrestrial arthropods and aquatic arthropods. The VB-2 algorithm (\[p\_hg\])-(\[F\_hg\_min\]) nevertheless represents a significant improvement over the approach followed in [@vazquez08]: it finds the consensus solution in one run, because the balance between better fitting and less bias is built in.
![[**Finding graph modules, hypergraph model:**]{} Mutual information $I=I(p^O,p^*)$ between the original $p^O$ and estimated $p^*$ group assignments, relative to its maximum value $I_0$, attained when $p^*=p^O$. The original data consisted of a graph with $n=100$ vertices divided into $K=2$ groups, with intra- and inter-community connection probabilities $p_1$ and $p_2$, respectively. The figure shows the mutual information, between the original groups and the group assignment estimated by the VB-2 algorithm (\[p\_hg\])-(\[F\_hg\_min\]), as a function of the inter-community connectivity $p_2$. The dashed-dotted, solid and dashed lines correspond to the worst, average and best cases over 100 test examples. In a) we deal with dense communities ($p_1=0.9$) and the algorithm performs well ($I/I_0\approx1$) for small values of the inter-community connection probability $p_2$. In b) we deal with sparse communities ($p_1=0.1$) and the algorithm performs well for large values of the inter-community connection probability $p_2$.[]{data-label="fig_hg"}](bc.fig.hypergraph.eps){width="3.2in"}
### Test example: finding network modules
The work by Newman and Leicht [@newman07] provides a hint on how to apply hypergraph clustering to the problem of finding modules or communities in a graph or network. A graph is made of a set of vertices and a set of edges, the latter being pairs of connected vertices. The idea of Newman and Leicht is a “guilt by association” principle: vertices within the same module of a graph tend to be connected to the same other vertices. This problem can be translated into a hypergraph problem, where the vertices are the graph's vertices, the hyper-edges are the sets of nearest neighbors, and the Boolean variables characterize whether or not a vertex belongs to a given set of nearest neighbors [@vazquez08] (Fig. \[fig\_hg\_bg\]e and f). More precisely, to each vertex we associate a hyper-edge, given by the set of its nearest neighbors. There are therefore $m=n$ hyper-edges, one for every vertex in the original graph. The hypergraph adjacency matrix has matrix element $a_{ij}=1$ if vertex $i$ belongs to hyper-edge $j$, i.e. if vertex $i$ belongs to the nearest-neighbor set of vertex $j$, and $a_{ij}=0$ otherwise. If we label the nearest-neighbor sets with the same labels as the vertices, the hypergraph adjacency matrix coincides with the adjacency matrix of the original graph. Thus, there is an exact mapping from the statistical model proposed by Newman and Leicht [@newman07] to the statistical model on hypergraphs.
Having specified this mapping, we use the VB-2 algorithm (\[p\_hg\])-(\[F\_hg\_min\]), sampling one initial condition, to find the modules in the original graph. To illustrate its performance we consider as a case study a graph composed of two communities, with probabilities $p_1$ and $p_2$ that two vertices within the same or different communities, respectively, are connected. As already anticipated by Newman and Leicht [@newman07], the nearest-neighbor approach can resolve both dense communities with fewer inter-community connections ($p_1\gg p_2$) and sparse communities with more inter-community connections ($p_1\ll p_2$). Figure \[fig\_hg\] shows that the VB-2 algorithm performs quite well in both regimes.
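The benchmark just described can be reproduced with a short script. This is a sketch: the mutual-information estimator below is the standard plug-in one, and all names are illustrative:

```python
import numpy as np

def two_community_graph(n=100, p1=0.9, p2=0.1, seed=0):
    """Planted benchmark: n vertices in K=2 equal groups, edge probability
    p1 within a group and p2 between groups (undirected, no self-loops)."""
    rng = np.random.default_rng(seed)
    labels = np.repeat([0, 1], n // 2)
    probs = np.where(labels[:, None] == labels[None, :], p1, p2)
    upper = np.triu(rng.random((n, n)) < probs, 1)
    a = (upper | upper.T).astype(int)
    return a, labels

def relative_mutual_information(x, y):
    """I(x, y) / I(x, x): equals 1 when partition y recovers x up to relabeling."""
    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    joint = np.histogram2d(x, y, bins=(x.max() + 1, y.max() + 1))[0]
    Hx, Hy, Hxy = entropy(joint.sum(1)), entropy(joint.sum(0)), entropy(joint.ravel())
    return (Hx + Hy - Hxy) / Hx

a, labels = two_community_graph()
# `a` doubles as the hypergraph adjacency matrix: column j is the
# nearest-neighbor hyper-edge of vertex j.
```
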
Two sides clustering: statistical model on bipartite graphs
-----------------------------------------------------------
We may also face situations where there are groups of Boolean variables, requiring the clustering of both the samples and the Boolean variables. In this case the bipartite graph representation is more appropriate, with one class of vertices representing the samples and another representing the Boolean variables. More precisely,
[*Data:*]{} Consider a bipartite graph with two vertex subsets, representing $n$ samples and $m$ Boolean variables. The graph is specified by its adjacency matrix $a$, where $a_{ij}=1$ when sample $i$ is connected to Boolean variable $j$, i.e. if Boolean variable $j$ is true for sample $i$, and $a_{ij}=0$ otherwise.
[*Likelihood:*]{} The adjacency matrix elements are generated by a binomial model with sample group and variable group dependent probabilities $\theta_{kl}$, $k=1,\ldots,K$ and $l=1,\dots,L$, resulting in
$$\label{Pbg}
P(a|g,c,\theta) = \prod_{ij} \theta_{g_ic_j}^{a_{ij}}
\left(1- \theta_{g_ic_j}\right)^{1-a_{ij}}\ .$$
[*Priors:*]{} For $P(\theta)$ we use the renormalized invariant prior of the binomial model. Taking into account that we have one binomial model for each pair of sample group and variable group, we obtain
$$\label{P_bg}
P(\theta) =
\prod_{kl}{\rm Beta}(\theta_{kl};\tilde{\alpha}_{kl},\tilde{\beta}_{kl})$$
with $\tilde{\alpha}_{kl}\rightarrow0$ and $\tilde{\beta}_{kl}\rightarrow0$.
The likelihood (\[Pbg\]) is quite similar to (\[Phg\]), the main difference being that now the statistical properties of the Boolean variables appear through their corresponding group assignments $c_j$. This increases the model complexity by considering a group structure for the Boolean variables and, at the same time, reduces the number of $\theta$ parameters. Furthermore, (\[Pbg\]) contains (\[Phg\]) as the particular case $L=m$, with one group associated with each Boolean variable.
Substituting the likelihood (\[Pbg\]), the priors (\[P\_bg\]), (\[Ppi\]) and (\[Pkappa\]), and the MF variational function (\[MF2\]) into (\[F\]), and integrating over $\phi$ (summing over $g_i$ and $c_j$ and integrating over $\theta_{kl}$, $\pi_k$ and $\kappa_l$), we obtain
$$\begin{aligned}
\label{F_bg}
F &\leq& - \sum_{kl}
\left(\sum_{ij}p_{ik}q_{jl}a_{ij}+\tilde{\alpha}_{kl}-1\right)
\langle\ln\theta_{kl}\rangle
\nonumber\\
&-&
\sum_{kl}\left(\sum_{ij}p_{ik}q_{jl}(1-a_{ij})+\tilde{\beta}_{kl}-1\right)
\langle\ln(1-\theta_{kl})\rangle
\nonumber\\
&+& \sum_{ik} p_{ik}\ln p_{ik} +
\sum_{jl} q_{jl}\ln q_{jl}
\nonumber\\
&+& \int d\theta R(\theta)\ln R(\theta)
+ \int d\pi R(\pi)\ln R(\pi)
\nonumber\\
&+& \int d\kappa R(\kappa)\ln R(\kappa)
+{\rm const.}\end{aligned}$$
Minimizing (\[F\_bg\]) with respect to $p_{ik}$, $q_{jl}$, $R(\theta)$, $R(\pi)$ and $R(\kappa)$ we obtain (VB-3)
$$\label{p_bg}
p_{ik} = \frac{ e^{ \langle\ln\pi_k\rangle +
\sum_{jl} q_{jl} \left[
a_{ij}\langle\ln\theta_{kl}\rangle
+(1-a_{ij})\langle\ln(1-\theta_{kl})\rangle
\right] } }
{ \sum_s e^{ \langle\ln\pi_s\rangle +
\sum_{jl} q_{jl} \left[
a_{ij}\langle\ln\theta_{sl}\rangle
+(1-a_{ij})\langle\ln(1-\theta_{sl})\rangle
\right] } }$$
$$\label{q_bg}
q_{jl} = \frac{ e^{ \langle\ln\kappa_l\rangle +
\sum_{ik} p_{ik} \left[
a_{ij}\langle\ln\theta_{kl}\rangle
+(1-a_{ij})\langle\ln(1-\theta_{kl})\rangle
\right] } }
{ \sum_s e^{ \langle\ln\kappa_s\rangle +
\sum_{ik} p_{ik} \left[
a_{ij}\langle\ln\theta_{ks}\rangle
+(1-a_{ij})\langle\ln(1-\theta_{ks})\rangle
\right] } }$$
$$\label{Qtheta_bg}
R(\theta) = \prod_{kl}
{\rm Beta}(\theta_{kl};\alpha_{kl},\beta_{kl})\ ,$$
$$\label{alpha_bg}
\alpha_{kl} = \tilde{\alpha}_{kl} + \sum_{ij}p_{ik}q_{jl}a_{ij}$$
$$\label{beta_bg}
\beta_{kl} = \tilde{\beta}_{kl} + \sum_{ij}p_{ik}q_{jl}(1-a_{ij})\ .$$
$$\label{P_pi_bg}
R(\pi)={\rm D}(\pi;\gamma)\ ,\ \ \ \
\gamma_k = \tilde{\gamma}_k + \sum_ip_{ik}$$
$$\label{P_kappa_bg}
R(\kappa)={\rm D}(\kappa;\epsilon)\ ,\ \ \ \
\epsilon_l = \tilde{\epsilon}_l + \sum_jq_{jl}$$
$$\begin{aligned}
\label{F_bg_min}
F^* &=& {\rm const.} +
\sum_{ik} p_{ik}\ln p_{ik} + \sum_{jl} q_{jl}\ln q_{jl}
\nonumber\\
&-& \sum_{kl}\ln {\rm B}(\alpha_{kl},\beta_{kl})
- \ln{\rm B}(\gamma) -\ln{\rm B}(\epsilon)\end{aligned}$$
Equations (\[p\_bg\])-(\[F\_bg\_min\]) represent the VB algorithm for the statistical model on bipartite graphs. They can be used to find modules or communities in graphs with a bipartite structure, including those representing samples and Boolean variables.
VB algorithm implementation, statistical model on bipartite graphs
------------------------------------------------------------------
The implementation of the VB-3 algorithm (\[p\_bg\])-(\[F\_bg\_min\]) for the statistical model on bipartite graphs proceeds as follows. Set sufficiently large values of $K$ and $L$, larger than the expected numbers of groups. Set the parameters $\tilde{\alpha}_{kl}$, $\tilde{\beta}_{kl}$, $\tilde{\gamma}_k$ and $\tilde{\epsilon}_l$; we set $\tilde{\alpha}_{kl} = \tilde{\beta}_{kl} = \tilde{\gamma}_k =
\tilde{\epsilon}_l = 10^{-6}$. Set random initial conditions for $p_{ik}$ and $q_{jl}$. Starting from these initial conditions, iterate equations (\[p\_bg\])-(\[F\_bg\_min\]) until the solution converges to some predefined accuracy; we require a relative error of $F^*$ smaller than $10^{-6}$. In practice, compute $\alpha_{kl}$, $\beta_{kl}$, $\langle\ln\theta_{kl}\rangle$, $\langle\ln(1-\theta_{kl})\rangle$, $\gamma_k$, $\langle\ln\pi_k\rangle$, $\epsilon_l$, $\langle\ln\kappa_l\rangle$, $p_{ik}$, $q_{jl}$ and $F^*$, in that order. To explore different potential local minima, use different initial conditions and select the solution with the lowest $F^*$. Since this algorithm penalizes groups with few members, for sufficiently large $K$ and $L$ some sample and/or variable groups turn out to be empty. If this is not the case, increase $K$ and/or $L$ until at least one sample group and one variable group is empty.
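These steps can be sketched as follows (an illustrative Python implementation of the update order above, not the authors' code; names are our own):

```python
import numpy as np
from scipy.special import digamma, betaln, gammaln

def vb3_bipartite(a, K=10, L=10, tol=1e-6, max_iter=500, prior=1e-6, seed=0):
    """Sketch of one run of the VB-3 iteration for the bipartite model;
    a is the n x m Boolean bi-adjacency matrix, prior plays the role of
    all the tilde hyper-parameters."""
    rng = np.random.default_rng(seed)
    n, m = a.shape
    p = rng.dirichlet(np.ones(K), size=n)   # sample responsibilities p_ik
    q = rng.dirichlet(np.ones(L), size=m)   # variable responsibilities q_jl
    F_old = np.inf
    for _ in range(max_iter):
        alpha = prior + p.T @ a @ q         # alpha_kl
        beta = prior + p.T @ (1 - a) @ q    # beta_kl
        gamma = prior + p.sum(axis=0)       # gamma_k
        eps = prior + q.sum(axis=0)         # epsilon_l
        lt = digamma(alpha) - digamma(alpha + beta)
        l1t = digamma(beta) - digamma(alpha + beta)
        lpi = digamma(gamma) - digamma(gamma.sum())
        lka = digamma(eps) - digamma(eps.sum())
        # p_ik update, then q_jl update, both in the log domain
        logp = lpi[None, :] + a @ q @ lt.T + (1 - a) @ q @ l1t.T
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        logq = lka[None, :] + a.T @ p @ lt + (1 - a).T @ p @ l1t
        q = np.exp(logq - logq.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
        # Variational free energy F*, up to an additive constant
        F = ((p * np.log(p + 1e-300)).sum() + (q * np.log(q + 1e-300)).sum()
             - betaln(alpha, beta).sum()
             - (gammaln(gamma).sum() - gammaln(gamma.sum()))
             - (gammaln(eps).sum() - gammaln(eps.sum())))
        if abs(F - F_old) < tol * abs(F):
            break
        F_old = F
    return p, q, F
```
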
### Test example: zoo problem
Let us go back to the zoo problem (Fig. \[fig\_zoo\]a). Now we represent this dataset by a bipartite graph, with one class of vertices representing the animals and the other class the Boolean variables (e.g. Fig. \[fig\_hg\_bg\]a, b and d). Using the VB-3 algorithm (\[p\_bg\])-(\[F\_bg\_min\]), sampling 10,000 initial conditions as in [@vazquez08], we perform a two-side clustering of the bipartite graph, obtaining the animal population stratification in Fig. \[fig\_zoo\]e and the Boolean variable stratification in Fig. \[fig\_zoo\]f. The animal clusters are similar to those previously obtained using the statistical model on hypergraphs (Fig. \[fig\_zoo\]c and d). The main difference is the more refined subdivision of the terrestrial mammals, now split into four groups (1, 2, 3 and 4).
In addition to the animal population stratification, the two-side clustering provides association groups between the Boolean variables (Fig. \[fig\_zoo\]f). These associations reflect the fact that not all Boolean variables are independent; some of them are linked. For example, group 2 clusters four typical attributes of terrestrial mammals: they have hair, do not lay eggs, produce milk and have four legs. In the same way, group 3 clusters attributes of fishes and group 4 attributes of birds. Thus, in general, the bipartite graph model and the resulting two-side clustering provide more information than the hypergraph approach.
![[**Finding graph modules, bipartite model:**]{} Mutual information $I=I(p^O,p^*)$ between the original $p^O$ and estimated $p^*$ group assignments, relative to its maximum value $I_0$, attained when $p^*=p^O$. The original data consisted of a graph with $n=100$ vertices divided into $K=2$ groups, with intra- and inter-community connection probabilities $p_1$ and $p_2$, respectively. The figure shows the mutual information, between the original groups and the group assignment estimated by the VB-3 algorithm (\[p\_bg\])-(\[F\_bg\_min\]), as a function of the inter-community connectivity $p_2$. The dashed-dotted, solid and dashed lines correspond to the worst, average and best cases over 100 test examples. In a) we deal with dense communities ($p_1=0.9$) and the algorithm performs well ($I/I_0\approx1$) for small values of the inter-community connection probability $p_2$. In b) we deal with sparse communities ($p_1=0.1$) and the algorithm performs well for large values of the inter-community connection probability $p_2$.[]{data-label="fig_bg"}](bc.fig.bipartite_graph.eps){width="3.2in"}
### Test example: finding network modules
The bipartite graph model can be used to find network modules as well. In this case one class of vertices represents the original graph vertices and the other represents the sets of nearest neighbors (Fig. \[fig\_hg\_bg\]g). The two-side clustering thus attempts to cluster both the original graph vertices and the sets of nearest neighbors. When the original graph is undirected the problem is symmetric (e.g. see Fig. \[fig\_hg\_bg\]g): if vertex $i$ belongs to the nearest-neighbor set of vertex $j$ then vertex $j$ belongs to the nearest-neighbor set of vertex $i$. As a consequence, the clustering on the original-vertices side cannot be differentiated from the clustering of the nearest-neighbor sets. Intuitively, this means that when two vertices belong to the same graph module, their nearest-neighbor sets belong to the same nearest-neighbor-set group.
Having specified this mapping, we use the VB-3 algorithm (\[p\_bg\])-(\[F\_bg\_min\]), sampling one initial condition, to find the modules in the original graph. To illustrate its performance we consider once again a graph composed of two communities, with probabilities $p_1$ and $p_2$ that two vertices within the same or different communities, respectively, are connected. Figure \[fig\_bg\] shows that the VB-3 algorithm can resolve both dense communities with fewer inter-community connections ($p_1\gg p_2$) and sparse communities with more inter-community connections ($p_1\ll p_2$).
The comparison of Figs. \[fig\_bg\] and \[fig\_hg\] indicates that the bipartite graph model performs slightly better than the hypergraph model. For example, focusing on the average performance for $p_1=0.9$, the VB-3 algorithm performs almost perfectly up to $p_2=0.6$, while the VB-2 algorithm does so only up to $p_2=0.5$. This could, however, be specific to the tested set of examples. Further research is required to determine which version performs better depending on the dataset under consideration.
Discussion and conclusions
==========================
The Bayesian approach allows for a systematic solution of data analysis problems. Its starting point is a statistical model of the data under consideration. From there, using Bayes' rule, we can invert the statistical model to obtain the posterior distribution of the model parameters. The latter can be used, in principle, to compute averages and other quantities of interest.
One of the main criticisms of the Bayesian approach is the apparent ambiguity in selecting the prior distributions. Here we have worked further on Jaynes' method [@jaynes68], claiming that the prior distributions are given by the most general distribution dictated by the symmetries of the problem under consideration. One undesired consequence of this method is that, when the symmetries do not provide sufficient constraints, we obtain improper prior distributions. Yet the use of improper priors can be avoided by working with renormalized distributions that are proper and approach the improper prior in a certain limit. Using this approach, we report here a correction to Jaynes' prior for a likelihood with translation and scale invariance, and a generalization of Jaynes' prior for the binomial model to the multinomial model.
Having resolved the issue of the prior distributions, we can proceed to apply the Bayesian approach to resolving population structure. Taking inspiration from mixture models [@maclachlan00], in particular Dirichlet mixture models [@blei03], we introduce general statistical models with a built-in population structure at the sample level, and at the sample and variable levels. The model with structure at the sample level targets one-side clustering problems, where the variables are assumed to be independent measurements. The model with structure at both the sample and variable levels targets two-side clustering problems, where there are classes of variables. These statistical models are then postulated as generative models of a dataset. Introducing a MF approximation as the variational function, we then resolve the population structure by solving the inverse problem, i.e. determining the sample and/or variable groups and the model parameters from the data.
To illustrate the applicability and systematicity of the variational method, we study here the problem of data clustering, in the context of real-valued and Boolean variables. The outcome is a variational Bayes (VB) algorithm: a self-consistent set of equations determining the group assignments and the model parameters. The VB algorithm is based on recursive equations similar to those of the EM algorithm, but with an intrinsic penalization of model bias. In the case of real-valued data, and under the assumption of normal distributions, the contributions favoring fitting and penalizing model bias are clearly disentangled. The fit is quantified, as expected for normally distributed variables, by the mean square deviation. The model bias is quantified by the inverse of the square root of the mean cluster sizes. The tendency to reduce the mean square deviation is thus balanced by a tendency to increase the cluster sizes.
In the case of Boolean variables our analysis is based on a mapping into a hypergraph or bipartite graph. When we cluster the samples but not the Boolean variables the problem is mapped onto a statistical model on hypergraphs [@vazquez08]. On the other hand, when we perform a two-side clustering, clustering both the samples and the Boolean variables, the problem is mapped onto a statistical model on bipartite graphs.
The VB algorithms associated with the statistical models on hypergraphs and bipartite graphs can be used to find modules in a graph. Starting from an idea by Newman and Leicht [@newman07], we show that the problem of finding graph modules can be mapped onto the problem of finding hypergraph or bipartite graph modules, where the hypergraph edges and the augmented bipartite graph vertices represent nearest-neighbor sets in the original graph. The resulting VB algorithms represent a significant improvement over the maximum likelihood approaches followed in [@newman07] and [@vazquez08], by including a self-consistent correction for model complexity and bias.
It is worth mentioning that, depending on the starting statistical model, we can arrive at different versions of the VB algorithm. Indeed, for the problem of finding graph modules we can use both the hypergraph and bipartite graph models. Furthermore, Hofman and Wiggins [@hofman07] have obtained another version based on a statistical model with different intra- and inter-community connection probabilities. These approaches differ in the definition of what constitutes a group, community or module. We use the definition of Newman and Leicht [@newman07] based on topological similarity, i.e. two vertices are topologically identical if they are connected to the same other vertices in the graph. We thus obtain groups of vertices whose patterns of connectivity are similar. The definition used by Hofman and Wiggins [@hofman07], on the other hand, is based on the existence of two edge densities, characterizing the tendency of having an edge between intra- and inter-group pairs of vertices. Depending on the problem and the question being asked, we may adopt one definition or the other, and use the corresponding clustering method.
UCI Machine Learning Repository, <http://www.ics.uci.edu/~mlearn/MLRepository.html>.
---
author:
- Nicholas Jennings
bibliography:
- 'Mendeley.bib'
title: 'Estimates for a DM$\rightarrow a \rightarrow \gamma$ 3.55keV line in the radio lobes of Centaurus A'
---
Introduction
============
The claimed detections of an unidentified emission line at 3.55keV in stacked samples of galaxy clusters and individual galaxies [@Bulbul2014DETECTIONCLUSTERS; @Boyarsky2014UnidentifiedCluster.] have generated a huge amount of interest among particle physicists. While the possibility remains that this could be an astrophysical effect, no single theory has been able to account for all features of the data. This has led to many models of a Dark Matter (DM) particle decaying to photons to explain the line (see [@Iakubovskyi:2015wma] for a review). However, the differing strengths of the line in different systems are in tension with a direct decay to photons.\
\
This discrepancy could be resolved by a DM $\rightarrow a \rightarrow \gamma$ decay, where an intermediate ALP particle then converts to photons in the presence of a magnetic field. Such a decay would explain many features of the data [@Conlon:2014xsa; @Conlon:2014wna], including: why the strength of the line is bounded to be weak for dwarf spheroidal galaxies (dSph), where there are no substantial magnetic fields [@Malyshev:2014xqa; @Ruchayskiy:2015onc; @Jeltema:2015mee]; and why it is measured to be strongest in the Perseus cluster, where the presence of an extended $\mathcal{O}(\mu\rm{G})$ magnetic field in the intracluster medium has been established [@Alvarez:2014gua]. It therefore behoves us to consider targets where the morphology for a DM $\rightarrow a \rightarrow \gamma$ decay differs substantially from direct DM $\rightarrow \gamma$.\
\
The giant lobes of radio galaxies represent a promising environment in which to test the DM $\rightarrow a \rightarrow \gamma$ model. These lobes can extend for hundreds of kiloparsecs and contain $\mathcal{O}(\mu\rm{G})$ magnetic fields and low electron densities ($\lesssim\,10^{-4}\,\rm{cm}^{-3}$), similar to the galaxy clusters that have been shown to be efficient ALP-photon converters [@Conlon:2015uwa]. Objects such as dSphs in or behind the lobes, with large DM to baryonic matter ratios, might produce a 3.55keV line competitive with the small X-ray background. In contrast, dSphs not along the l.o.s. to the lobes will have no associated 3.55keV line.\
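To get a rough feel for why $\mu$G fields coherent over hundreds of kpc can be efficient converters, one can evaluate the standard single-domain, small-mixing estimate $P_{a\rightarrow\gamma}\approx(g_{a\gamma}B_\perp L/2)^2$ in natural units. This simplification ignores the electron-density and ALP-mass phase factors treated in Section \[conversion\], and the coupling value below is purely illustrative:

```python
# Natural-unit conversion factors (hbar = c = 1)
GAUSS_TO_GEV2 = 1.95e-20   # 1 Gauss in GeV^2
KPC_TO_INV_GEV = 1.56e35   # 1 kpc in GeV^-1

def conversion_probability(g_agamma, B_gauss, L_kpc):
    """Small-mixing single-domain estimate P ~ (g B L / 2)^2.
    g_agamma in GeV^-1, B (transverse) in Gauss, L in kpc."""
    B = B_gauss * GAUSS_TO_GEV2
    L = L_kpc * KPC_TO_INV_GEV
    return (0.5 * g_agamma * B * L) ** 2

# Illustrative numbers: g ~ 1e-12 GeV^-1, a 1 muG field coherent over 100 kpc
P = conversion_probability(1e-12, 1e-6, 100.0)  # of order a few times 1e-2
```
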
\
Centaurus A is the best radio galaxy candidate to search for a line as it is the closest, with 500kpc long lobes and an associated population of dSphs [@Karachentsev2007TheComplex]. There is also some evidence that the magnetic field is very strong ($13.4\,\mu\rm{G})$ in a region of the southern lobe far removed from the host galaxy, while other regions have magnetic fields $\sim 1\,\mu\rm{G}$ [@Sun:2016ibh]. This could allow for a direct test of the DM$\rightarrow a \rightarrow \gamma$ model, as the signal from a dSph here should be much stronger than for a dSph in another part of the lobe. It would also be interesting to determine the strength of a signal from the DM halo of Centaurus A and how this compares to dSphs.\
\
We review the current status of the 3.55keV line in Section \[3.5kev\], and discuss potential radio galaxy targets in Section \[radio\]. In Section \[conversion\], we review ALP-photon conversion in magnetic fields, and describe the magnetic field model used to derive conversion probabilities. We compare 3.55keV signatures for dSphs behind the lobes and the DM halo of Centaurus A in Section \[calculation\], and conclude in Section \[conclusion\].
Sample Instrument Energy $\sin^2(2\theta) \times 10^{-11}$
------------------------------ ---------------- ----------------- -----------------------------------
Perseus XMM-MOS $3.57$ $23.3^{+7.6}_{-8.9}$
XMM-PN $< 18~(90\%)$
Chandra ACIS-I $3.56 \pm 0.02$ $28.3^{+11.8}_{-12.1}$
Chandra ACIS-S 3.56 $40.1^{+14.5}_{-13.7}$
Coma + Centaurus + Ophiuchus XMM-MOS 3.57 $18.2^{+4.4}_{-5.9}$
XMM-PN $< 11~(90\%)$
69 stacked clusters (Bulbul) XMM-MOS 3.57 $6.0^{+1.1}_{-1.4}$
XMM-PN 3.57 $5.4^{+0.8}_{-1.3}$
M31 on-centre XMM-MOS $3.53 \pm 0.03$ 2–20
Stacked galaxies XMM-Newton $< 2.5~(99\%)$
Stacked galaxies Chandra $< 5~(99\%)$
Stacked dwarves XMM-Newton $< 4~(95\%)$
Draco XMM-Newton $\lesssim 2 - 5~(95\%)$
: The inferred line strength in different systems, observed with different instruments. References for the values can be found in Section \[3.5kev\].[]{data-label="linestrengths"}
Review of the 3.55keV line {#3.5kev}
==========================
The 3.55keV line was initially detected in 2014 in a stacked sample of 73 galaxy clusters and in the Perseus cluster [@Bulbul2014DETECTIONCLUSTERS], and by a separate group in M31 and the Perseus cluster [@Boyarsky2014UnidentifiedCluster.]. These studies involved a total of 4 separate detectors on two different satellites (ACIS-I and ACIS-S onboard [*Chandra*]{}, and MOS and PN onboard [*XMM-Newton*]{}), making it difficult to explain as a systematic effect. A potential New Physics (NP) explanation proposed for the 3.55keV excess was a 7.1keV DM sterile neutrino $\psi$ that decays to a 3.55keV photon and neutrino. The decay rate is given by:
$$\Gamma_{\psi\rightarrow\nu\gamma} = \frac{9\,\alpha\,G_F^2}{1024\,\pi^4}\,\sin^2(2\theta)\,m_\psi^5\ .$$\
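As a numerical sanity check of this rate (a sketch; constants are rounded standard values, and the function name is ours):

```python
import math

ALPHA = 7.297e-3   # fine-structure constant
G_F = 1.166e-5     # Fermi constant, GeV^-2
HBAR = 6.582e-25   # GeV * s

def sterile_decay_rate(m_kev, sin2_2theta):
    """Radiative decay rate Gamma = 9 alpha G_F^2 / (1024 pi^4)
    * sin^2(2 theta) * m^5, converted from GeV to s^-1."""
    m = m_kev * 1e-6   # keV -> GeV
    gamma_gev = 9 * ALPHA * G_F**2 / (1024 * math.pi**4) * sin2_2theta * m**5
    return gamma_gev / HBAR

# A 7.1 keV sterile neutrino with sin^2(2 theta) = 1e-10
rate = sterile_decay_rate(7.1, 1e-10)  # ~ 2.5e-28 s^-1: lifetime far exceeds the age of the Universe
```
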
Since then, analyses have disagreed on the existence of the line in the centre of the Milky Way [@Jeltema:2014qfa; @Boyarsky:2014ska; @Riemer-Sorensen:2014yda] and in [*Suzaku*]{} observations of the Perseus cluster [@Urban:2014yda; @Tamura:2014mta; @Franse:2016dln]. Observations of dwarf spheroidals [@Malyshev:2014xqa; @Jeltema:2015mee] have not found evidence for a 3.55keV line. The decay rates inferred from the various (non-)observations are in tension with one another (though not formally excluded). They are summarised in Table \[linestrengths\], parametrised in terms of $\sin^2(2\theta)$.\
\
Several astrophysical explanations of the 3.55keV line have been proposed. The most compelling have been: an emission line caused by higher-than-expected abundances of ionised potassium (K XVIII) at 3.51keV [@Jeltema:2014qfa]; or sulphur charge exchange at 3.44–3.47keV, where highly ionised gas interacts with cold neutral gas, causing transitions from high-$n$ orbitals to the ground state [@Gu:2015gqm; @Shah:2016efh]. The latter explanation would require an almost identical gain miscalibration, in the same direction, across 4 different detectors to explain why the line appears at 3.55keV.\
\
In 2016 the [*Hitomi*]{} satellite observed the Perseus cluster for 230ks, and found no evidence for a 3.55keV line [@Aharonian:2016gzq]. This observation ruled out the potassium explanation, and was in $3\sigma$ disagreement with the higher inferred line strength from [*XMM-Newton’s*]{} MOS instrument (it was, however, consistent with line strengths inferred from other instruments and sources). Unfortunately, with the loss of [*Hitomi*]{}, confirmation that the line has a New Physics origin will have to wait either for the [*Hitomi*]{} replacement X-ray Astronomy Recovery Mission [*XARM*]{} or, looking further ahead, the [*Athena*]{} X-ray observatory due to launch in 2028 [@Nandra:2013shg].\
\
The variation in the inferred line strengths, and the lack of a definitive astrophysics explanation, motivates consideration of alternative NP models to explain the line. One possibility is that there is no overall emission of 3.55keV photons, but instead an absorption and rapid re-emission of 3.55keV photons from point sources that were excluded from the above analyses. This would point to a 2-state Fluorescent Dark Matter model [@Berg:2016ese; @Conlon:2016lxl]. Another model that explains the data well is a decay from DM to an intermediate particle, whose conversion to photons depends on the astrophysical environment between the DM halo and us. If the intermediate particle is an axion-like particle (ALP), the probability of conversion to photons is greatly enhanced in the presence of large astrophysical magnetic fields [@Raffelt1988MixingParticles], producing a substantially different morphology for the line [@Cicoli:2014bfa], as the decay rate:
$$\label{decayrate}
\Gamma_{\rm{DM} \rightarrow a \rightarrow \gamma}({\bf B}) = \Gamma_{\rm{DM} \rightarrow a} P_{a \rightarrow \gamma}({\bf B})$$
\
depends on the magnetic field ${\bf B}$ through the ALP-photon conversion term $P_{a \rightarrow \gamma}$.\
\
The dependence on ${\bf B}$ could explain the variety of inferred values for $\sin^2(2\theta)$ in Table \[linestrengths\]. The Perseus cluster is known to host a large magnetic field that efficiently converts ALPs to photons, so it would be likely to produce a strong signal [@Berg:2016ese]. An extremely weak signal would be expected from dSphs, as these small objects cannot host a magnetic field that could cause substantial ALP-photon conversion [@Beck:2013bxa; @Spekkens:2013ik]. The predictions for the Milky Way are less clear due to the uncertainties in the structure of the magnetic field towards the centre. In the case that the magnetic field is $\mathcal{O}(10-100\,\mu\rm{G})$, no 3.55keV line would be expected; in the case of a 1mG poloidal central magnetic field, a line could be produced [@Alvarez:2014gua]. However, the model does predict a stronger line in M31 than the Milky Way [@Conlon:2014xsa], consistent with the positive detection of [@Boyarsky2014UnidentifiedCluster.].\
\
The unique morphology of a DM $\rightarrow a \rightarrow \gamma$ line would allow future observations to differentiate it from a DM $\rightarrow \gamma$ scenario. Predicted line strengths differ for cool-core and non-cool-core clusters [@Conlon:2014wna]. Stacked samples of nearly edge-on galaxies could provide good targets, as the distance the ALP travels through the galactic magnetic field is maximised [@Alvarez:2014gua]. Recently, it has been proposed that a DM $\rightarrow a \rightarrow \gamma$ line could be detected in upcoming polarisation satellite experiments, such as IXPE [@Gong:2016zsb], albeit with a very long exposure time. The focus of this paper will be the potential to detect a 3.55keV line in the giant lobes of radio galaxies. These provide good environments for ALP-photon conversion, and therefore merit a detailed consideration to determine the expected flux from DM halos in or behind the lobes.
Candidate Radio Galaxies {#radio}
========================
We consider nearby elliptical galaxies hosting AGNs with 10-100kpc scale jets, residing in poor groups or in cluster outskirts. These jets can produce giant radio lobes extending up to 500kpc, which contain a tangled relic magnetic field. For most radio galaxies, the strengths and coherence lengths of the magnetic fields are not known. The lobe magnetic field strengths of NGC 6251 and DA 240 have been estimated [@Sambruna2004TheNGC6251; @Takeuchi2012SUZAKUJ1629.4+8236; @Isobe2011Suzaku240], but their coherence lengths remain unknown.\
\
The nearest radio galaxy to us, Centaurus A, represents the best candidate in which to search for a DM $\rightarrow a \rightarrow \gamma$ line. Its outer lobes are around 500kpc long and 200kpc wide. The coherence length of the magnetic field is constrained to be $\gtrsim 10\,\rm{kpc}$ by [@OSullivan2009StochasticGalaxies] and $\gtrsim 30\,\rm{kpc}$ by [@Wykes2013MassCentaurusA]. The electron number density is limited to $n_{e} \leq 7 \times 10^{-5}\,$cm$^{-3}$ by [@Wykes2013MassCentaurusA], while [@Feain2009FARADAYA] limit it to $n_{e} \leq 5 \times 10^{-5}\,$cm$^{-3}$. While the AGN and jets produce significant 0.2-10 keV emission, the lobes produce very little. In an analysis of two patches of the southern lobe using the X-ray Imaging Spectrometer (XIS) onboard [*Suzaku*]{} [@Koyama2007X-RaySuzaku], the absorbed X-ray energy flux in the 2-10 keV range was found to be $F = 6.5 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ per 0.35deg$^{2}$, a value consistent with the lobe emission being no higher than 10% of the CXB [@Stawarz2013GIANTSATELLITE].\
\
Estimates of its magnetic field strength are typically $\sim 1\,\mu\rm{G}$, derived from: Fermi-LAT detection of gamma rays interpreted as inverse Compton scattering of CMB photons ($0.89\,\mu\rm{G}$ in the northern lobe, $0.85\,\mu\rm{G}$ in the southern lobe [@Abdo2010FermiGalaxy.b]); synchrotron emission ($1\,\mu\rm{G}$ [@Hardcastle2013SynchrotronDistributions]); and equipartition arguments ($1.3\,\mu\rm{G}$ [@Hardcastle2009High-energyA]). More recently, Fermi-LAT data were used to infer magnetic field strengths of $1\,\mu\rm{G}$ for most of the radio lobes, apart from a region in the southern lobe furthest from the galaxy, where the best-fit value was $13.4\,\mu\rm{G}$ [@Sun:2016ibh]. It remains to be seen whether this high value will be corroborated by other studies, and the subtleties of disentangling electron distributions from magnetic field strengths mean this should be treated with caution. Possible contributions from hadronic processes could push the value down to $1\,\mu\rm{G}$. We will therefore consider magnetic field strengths of $1\,\mu\rm{G}$ and $13.4\,\mu\rm{G}$ in our analysis, and stress that the higher value may not be robust[^1].\
\
Crucially, Centaurus A has a number of galaxies, including identified dSphs, in its vicinity [@Karachentsev2007TheComplex]. Currently there are 40 confirmed dSphs [@1997AJ....114.1313C; @2000AJ....119..609V; @2014ApJ...795L..35C; @2016ApJ...823...19C; @2015ApJ...802L..25T], some of which lie along the line of sight to the radio lobes, which extend between approximately $-38^{\circ}$ and $-48^{\circ}$ Declination and between 13h40m and 13h20m Right Ascension [@Keivani2015MagneticA] (a list of objects that could lie within the radio lobes, based on the Catalog of Neighboring Galaxies, is given in Table \[dsph\] [@Jerjen2000SurfaceGroups; @Karachentsev2004AGalaxies]). Inferred distances indicate some could lie in or behind the lobes, providing a potential source for a 3.55keV ALP line. A more recent survey with the [*Dark Energy Camera*]{} [@2015AJ....150..150F] has uncovered 41 potential new dSph candidates. Dwarf spheroidals make ideal sources of DM decay processes due to their high mass-to-luminosity ratios and negligible X-ray emission. In addition, the velocity broadening of a 3.55keV line from a dSph is far below 1eV, making it easier to discriminate the line from the background than in galaxy clusters [@Walker:2007ju]. However, there is currently no information on the DM profiles of these objects. We therefore estimate a signal from a dSph based on DM profiles for the classical dSphs.\
\
**Object** **Type** **R.A.** **Dec.** **Distance(Mpc)**
------------ ---------- ------------ ----------- -------------------
Cen A 13 25 28.9 -43 01 00 $3.77 \pm 0.38$
KK 196 dIrr 13 21 47.1 -45 03 48 $3.98 \pm 0.29$
KK 197 dSph 13 22 01.8 -42 32 08 $3.87 \pm 0.27$
KKs 55 dSph 13 22 12.4 -42 43 51 $3.94 \pm 0.27$
KK 203 ? 13 27 28.1 -45 21 09 3.8
E324-24 lsb 13 27 37.4 -41 28 50 $3.73 \pm 0.43$
E270-17 SBm 13 34 47.3 -45 32 51 $4.3 \pm 0.8$
: Coordinates and distances of objects near the lobes of Centaurus A[]{data-label="dsph"}
Modelling ALP-photon conversion in giant radio lobes {#conversion}
====================================================
Review of ALP-photon conversion
-------------------------------
An ALP couples to electromagnetism through the Lagrangian term:
$$\mathcal{L} \supset \frac{1}{8M} aF_{\mu\nu}\tilde{F}^{\mu\nu} \equiv \frac{1}{M}a\vec{E}\cdot\vec{B}$$
where $M^{-1} = g_{a\gamma\gamma}$ is the ALP-photon coupling. For a homogeneous magnetic field domain of length $L$, the probability of ALP-photon conversion is [@Raffelt1988MixingParticles; @Sikivie:1983ip]:
$$P(a \to \gamma) = \sin^2(2\theta)\sin^2\bigg(\frac{\Delta}{\cos(2\theta)}\bigg)$$
where $\tan(2\theta) = \frac{2B_\perp\omega}{Mm_{eff}^2}, \Delta = \frac{m_{eff}^2L}{4\omega}$, for energy $\omega$ and magnetic field perpendicular to the ALP wave vector $B_\perp$, and $m_{eff}^2 = |m_a^2 - \omega_{pl}^2|$ for an ALP mass $m_a$ and plasma frequency $\omega_{pl} = \sqrt{4\pi\alpha n_e/m_e}$. Henceforth we assume $m_a \ll \omega_{pl}$ and set it to zero. After plugging in constants, $\tan(2\theta)$ and $\Delta$ evaluate to:
$$\label{theta}
\tan(2\theta) = 4.9 \times 10^{-2} \bigg( \frac{10^{-4} \rm{cm}^{-3}}{n_e} \bigg) \bigg( \frac{B_\perp}{1 \mu \mathrm{G}} \bigg)\bigg( \frac{\omega}{3.5 \mathrm{keV}} \bigg)\bigg( \frac{10^{13} \mathrm{GeV}}{M} \bigg)$$
$$\label{Delta}
\Delta = 1.5 \times 10^{-2}\bigg( \frac{n_e}{10^{-4} \rm{cm}^{-3}} \bigg)\bigg( \frac{3.5 \mathrm{keV}}{\omega} \bigg)\bigg( \frac{L}{10\,\mathrm{kpc}} \bigg)$$
\
For $\Delta \ll 1$ and $\theta \ll 1$ the conversion probability simplifies to:
$$\label{Pag}
P(a \to \gamma) = 2.3 \times 10^{-8}\bigg(\frac{B_\perp}{1 \mu \mathrm{G}}\frac{L}{1 \mathrm{kpc}}\frac{10^{13}\mathrm{GeV}}{M}\bigg)^2$$
\
For $M \gtrsim 10^{13}\,\rm{GeV}$, this condition holds in radio lobes, as well as in galaxy clusters. The value of $\Gamma_{\rm{DM} \rightarrow a}$ inferred from equation \[decayrate\] is then proportional to $M^2$, so our calculations of the 3.55keV line strength in radio galaxies are independent of $M$.
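As a cross-check of these orders of magnitude, the small-angle, small-$\Delta$ limit can be evaluated directly, where $P(a\to\gamma) \simeq (B_\perp L/2M)^2$ and $\Delta = \omega_{pl}^2 L/4\omega$. This is a sketch in natural units; the conversion factors (1 gauss $\approx 1.95\times10^{-2}$ eV$^2$, 1 cm $\approx 5.07\times10^{4}$ eV$^{-1}$) are standard values:

```python
import math

# Single-domain ALP-photon conversion in the small-mixing, small-Delta
# limit, checking the quoted values P ~ 2.3e-8 (B_perp = 1 uG, L = 1 kpc,
# M = 1e13 GeV) and Delta ~ 1.5e-2 (n_e = 1e-4 cm^-3, L = 10 kpc,
# omega = 3.5 keV).  All quantities in natural units (eV).
G_TO_EV2 = 1.95e-2                     # 1 gauss in eV^2
CM_TO_INV_EV = 5.068e4                 # 1 cm in eV^-1
KPC_TO_INV_EV = 3.0857e21 * CM_TO_INV_EV

B_perp = 1e-6 * G_TO_EV2               # 1 uG
M = 1e13 * 1e9                         # ALP-photon coupling scale, eV
omega = 3.5e3                          # photon energy, eV
n_e = 1e-4 * (1.0 / CM_TO_INV_EV)**3   # electron density in eV^3

# Plasma frequency squared: omega_pl^2 = 4 pi alpha n_e / m_e
alpha = 1.0 / 137.036
m_e = 5.11e5                           # electron mass, eV
omega_pl_sq = 4 * math.pi * alpha * n_e / m_e

P_conv = (B_perp * 1 * KPC_TO_INV_EV / (2 * M))**2       # L = 1 kpc
Delta = omega_pl_sq * 10 * KPC_TO_INV_EV / (4 * omega)   # L = 10 kpc
```

Both numbers land within a few per cent of the coefficients quoted in equations \[Delta\] and \[Pag\].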
Magnetic field model {#magnetic}
--------------------
We model the magnetic field of the radio lobes far from the jet region as a multi-scale, random-domain tangled field which was used to model synchrotron emission from Centaurus A in [@Hardcastle2013SynchrotronDistributions]. These models have also been used for magnetic fields of galaxy clusters [@Murgia2004MagneticGalaxies; @Angus2014SoftBackground]. We generate a random vector potential with a power spectrum:
$$\langle |\tilde{A}_k|^2\rangle \sim |k|^{-n}$$
\
The magnitude and phase are uniform for each domain. The magnitude is randomly selected from a Rayleigh distribution:
$$p(\tilde{A}_k) = \frac{\tilde{A}_k}{|k|^{-n}} \exp{\bigg(-\frac{\tilde{A}_k^2}{2|k|^{-n}}\bigg)}$$
\
while the phase is uniformly distributed from 0 to $2\pi$. The one-dimensional power spectrum of the magnetic field is then: $$\mathcal{P}(k) \sim 2\pi k^2 |\tilde{B}_k|^2 \propto k^{-n+4}$$ for $\tilde{B}_k = ik \times \tilde{A}_k$. The value of $n$ is inferred from synchrotron data [@Hardcastle2013SynchrotronDistributions], which supports a value close to Kolmogorov ($n = 17/3$). We model the magnetic field along a 200kpc line of sight, which is the width of the radio lobes. We truncate the power spectrum to $k_{min} < k < k_{max}$, where $k_{min} = 2\pi/\Lambda_{max}$ and $k_{max} = 2\pi/\Lambda_{min}$. The minimum length scale $\Lambda_{min} = 10\,\rm{kpc}$ uses the value derived in [@OSullivan2009StochasticGalaxies], while we examine the effect of allowing $\Lambda_{max}$ to vary. We conservatively take the electron number density $n_e = 10^{-4}\,\rm{cm}^{-3}$ to be constant throughout the lobe. We likewise take the magnetic field strength to be constant across the 200kpc, and zero outside the lobe. We model field strengths of both $1\,\mu\rm{G}$ and $13.4\,\mu\rm{G}$.\
\
We generated 1000 different magnetic field configurations for each field strength. We propagated a 3.55keV ALP from cell to cell and calculated the total conversion probability for each configuration. In all cases we used $M = 10^{13}\,\rm{GeV}$. For a 20 cell model with 10kpc length domains (i.e. $\Lambda_{max} = \Lambda_{min} = 10\,\rm{kpc}$), an average conversion probability of $(2.5 \pm 0.1) \times 10^{-5}$ was derived for a magnetic field strength of $1\,\mu\rm{G}$, and $(4.4 \pm 0.1) \times 10^{-3}$ for a magnetic field strength of $13.4\,\mu\rm{G}$. Here we quote the mean $\pm$ the standard deviation over magnetic field configurations; this is not a full account of the uncertainty. If we take $\Lambda_{max} = 30\,\rm{kpc}$, over 200kpc the conversion probabilities are enhanced by no more than a factor of 2. The conversion probability for $1\,\mu\rm{G}$ is 2 orders of magnitude lower than that typical for galaxy clusters, while for $13.4\,\mu\rm{G}$ it is of the same order.
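A stripped-down version of this Monte Carlo can be sketched as follows. It replaces the $k^{-n}$ power-spectrum field with 20 independent 10kpc domains whose field direction is drawn isotropically, and adds the per-domain amplitudes coherently in the small-$\Delta$ limit; it therefore reproduces only the order of magnitude of the quoted $(2.5\pm0.1)\times10^{-5}$, not the exact value:

```python
import numpy as np

# Toy Monte Carlo for ALP -> photon conversion through a tangled lobe
# field: 20 domains of 10 kpc, |B| = 1 uG with isotropically random
# direction per domain, M = 1e13 GeV.  Each domain contributes an
# amplitude a0 times the transverse unit-vector of B, and the two photon
# polarisations add coherently across domains.
rng = np.random.default_rng(0)

a0 = 1.95e-8 * 1.564e27 / (2 * 1e22)   # B[eV^2] * L_dom[eV^-1] / (2 M[eV])
n_dom, n_config = 20, 1000

# Isotropic unit vectors for the field direction in each domain
v = rng.normal(size=(n_config, n_dom, 3))
v /= np.linalg.norm(v, axis=-1, keepdims=True)

# Conversion amplitude: vector sum of transverse (x, y) components
A = a0 * v[..., :2].sum(axis=1)        # shape (n_config, 2)
P = (A**2).sum(axis=-1)                # conversion probability per config

mean_P, std_P = P.mean(), P.std()      # mean ~ 3e-5 for these assumptions
```

The mean comes out at a few $\times10^{-5}$, in order-of-magnitude agreement with the full simulation; scaling the field to $13.4\,\mu\rm{G}$ multiplies the probability by $(13.4)^2 \approx 180$, consistent with the quoted $4.4\times10^{-3}$.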
Estimating X-ray signals {#calculation}
========================
Signal from dark matter in a dSph
---------------------------------
The brightnesses of the dSphs near Centaurus A are not well constrained, making an estimation of their DM profiles challenging [@Karachentsev2002NewGroup]. We therefore estimate DM profiles based on those of the classical dSphs. The astrophysical D-factors of the classical and ultrafaint dSphs have been calculated in [@Bonnivard2015DarkDSphs]. As the strongest signal will come from the centre of the dSph, we use the best-fit Einasto profiles calculated in that work rather than just the D-factor, where the Einasto density profile is given by: $$\rho_{Ein}(r) = \rho_{-2}\exp{\Big\{-\frac{2}{\alpha}\Big[\Big( \frac{r}{r\,_{-2}}\Big)^\alpha - 1 \Big]\Big\}}.$$\
We use the best-constrained dSph density profiles (Leo I and II, CVI, Carina, Fornax, Sculptor, Draco, Ursa Minor and Sextans) and integrate along the l.o.s. to produce a 2D integrated DM column density profile. While the DM density profile at the centre is model-dependent, the integrated density profile receives only a small contribution from the central volume of the dSph. We find the column densities of the central kpc$^{2}$ of the dSphs to lie between $10^7$ and $10^8$ M$_{\odot}$/kpc$^2$. We use this to estimate the ALP flux from the central kpc$^2$, which corresponds to 1arcmin at 3.8Mpc. The decay rate from the DM particle to ALPs is inferred from Perseus observations to be:
$$\Gamma_{\rm{DM} \rightarrow a} \sim 2 \times 10^{-25}\bigg(\frac{M}{10^{13} \, \mathrm{GeV}}\bigg)^2 \,\rm{s}^{-1}$$
\
where the dependence on $M$ compensates for the fact that $P_{a \rightarrow \gamma} \propto 1/M^2$. Using this decay rate and the values of $P_{a \rightarrow \gamma}$ calculated in Section \[magnetic\], we calculate the total 3.55keV flux:
$$\label{DM_flux1}
F_{\rm{DM} \rightarrow a \rightarrow \gamma} = \frac{\Gamma_{\rm{DM} \rightarrow a}}{4\pi d^2}\, P_{a \rightarrow \gamma} \int_V \rho_{DM}\, dV$$
where the DM density $\rho_{DM}$ is integrated over a volume equal to 1kpc$^2 \times l$, where $(4/3)\pi(l/2)^3$ is large enough to include more than 99% of the DM mass, and $d$ is the distance to the dSph. From a typical dSph, the flux is found to be $(1 - 10) \times 10^{-20}$ erg s$^{-1}$ cm$^{-2}/\rm{arcmin}^2$ in the case of a $1\,\mu\rm{G}$ magnetic field strength and $(1 - 10) \times 10^{-18}$ erg s$^{-1}$ cm$^{-2}/\rm{arcmin}^2$ for $13.4\,\mu\rm{G}$. The X-ray background between 2-10 keV is $1.2 \times 10^{-14}$ erg s$^{-1}$ cm$^{-2}/\rm{arcmin}^2$. Therefore, within a 100eV detector resolution element (such as for the instruments onboard [*Chandra*]{} and [*XMM-Newton*]{}) we estimate a background flux of $\sim 1.5 \times 10^{-16}$ erg s$^{-1}$ cm$^{-2}/\rm{arcmin}^2$, and within a 2.5eV resolution element (such as for the X-ray Integral Field Unit onboard [*Athena*]{} [@Barret:2016ett]) a background flux of $\sim 4 \times 10^{-18}$ erg s$^{-1}$ cm$^{-2}/\rm{arcmin}^2$.\
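The background estimates follow from scaling the 2-10keV CXB surface brightness to the width of a detector resolution element, assuming for simplicity a flat distribution of the CXB flux across the 8keV band:

```python
# Fraction of the 2-10 keV CXB falling within one detector resolution
# element at 3.55 keV, assuming (as a simplification) a flat spectral
# distribution across the 8 keV band.
cxb = 1.2e-14                       # erg s^-1 cm^-2 arcmin^-2, 2-10 keV
band_ev = 8.0e3                     # band width in eV

bg_100ev = cxb * 100.0 / band_ev    # CCD-like resolution (Chandra, XMM)
bg_2p5ev = cxb * 2.5 / band_ev      # calorimeter resolution (Athena X-IFU)
```

This reproduces the $\sim 1.5\times10^{-16}$ and $\sim 4\times10^{-18}$ erg s$^{-1}$ cm$^{-2}$/arcmin$^2$ figures above.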
\
In the case that we have a dSph behind a region of the radio lobe where the magnetic field strength is $1\,\mu\rm{G}$, it would be very challenging to detect a 3.55keV line, as we would need to be sensitive to a 1% effect with a satellite like [*Athena*]{}. This would require substantial satellite time and also precise modelling of the contribution from X-ray emission from the radio lobe. In the case of a magnetic field strength of $13.4\,\mu\rm{G}$, the situation is more promising. With [*Athena*]{}, the 3.55keV line would be an effect of the same order as the CXB, which could be detectable. However, a $13.4\,\mu\rm{G}$ magnetic field strength is anomalously high compared to the rest of the radio lobe, and the data may be explained by other sources of emission not taken into account by the model. Were this value to be confirmed, it would be worth observing dSphs within this region to search for a 3.55keV line. If the magnetic field strength is found to be $\sim 1\,\mu\rm{G}$ everywhere in the lobe, our analysis shows that dSphs are not promising targets for a 3.55keV DM $\rightarrow a \rightarrow \gamma$ line.
Signal from the Dark Matter halo of Centaurus A {#dmcena}
-----------------------------------------------
It is instructive to compare the signals we would expect from dSphs to that produced by the DM halo of Centaurus A itself. In the case of a DM$\rightarrow \gamma$ decay the line strength should be substantially greater in Centaurus A, as it hosts a much larger DM halo. However, with a $1\,\mu\rm{G}$ magnetic field strength in the regions of the lobes near the galaxy, we might expect the DM$\rightarrow a \rightarrow \gamma$ line to be weaker.\
\
To estimate the DM profile of Centaurus A we follow the procedure used in [@Anderson2015Non-detectionSpectra]. In order to calculate an NFW profile: $$\rho_{\scriptscriptstyle{DM}}(r)=\frac{\rho_0}{\frac{r}{r_s}\left(1+\frac{r}{r_s}\right)^{2}},$$\
we infer the parameters $\rho_0$ and $r_s$ from the K-band apparent magnitudes listed in the 2MASS All-Sky Extended Source Catalog. From the K-band magnitudes ($m_{\scriptscriptstyle{K}}$) we infer the stellar masses $m_s$ by setting the K-band mass-to-light ratio: $$\label{mL} \frac{m_s}{L_{\scriptscriptstyle{K}}}=0.5\,\frac{M_{\odot}}{L_{\odot}}.$$\
We then determine the total DM mass ${M_{\scriptscriptstyle{DM}}}$ within the virial radius using eq. 13 in [@Moster2010CONSTRAINTSREDSHIFT]: $$m_s=2\,{M_{\scriptscriptstyle{DM}}}\left(\frac{m_s}{{M_{\scriptscriptstyle{DM}}}}\right)_0\left[\left(\frac{{M_{\scriptscriptstyle{DM}}}}{M_1}\right)^{-\beta}+\left(\frac{{M_{\scriptscriptstyle{DM}}}}{M_1}\right)^{\gamma}\right]^{-1},$$\
where $\left(\frac{m_s}{{M_{\scriptscriptstyle{DM}}}}\right)_0=0.0282$, $\beta=1.06$, $\gamma=0.556$, $\log M_1=11.884$. We estimate the virial radius as: $$R_{\rm{vir}}=\left(\frac{3{M_{\scriptscriptstyle{DM}}}}{4\pi\cdot 200\,\rho_c}\right)^{1/3},$$\
with $\rho_c=9.1\cdot\,10^{-30}\, \text{g\, cm}^{-3}$. Following [@Prada2012HaloCosmology], we compute the halo concentration $c=c({M_{\scriptscriptstyle{DM}}}, z)$.\
Finally we compute $\rho_0$ and $r_s$ from $$c=\frac{R_{\rm{vir}}}{r_s} \qquad \text{and} \qquad {M_{\scriptscriptstyle{DM}}}=4\pi\int_0^{R_{\rm{vir}}} \rho_{\scriptscriptstyle{DM}}\, r^2\, dr.$$\
For Centaurus A we find that $\rho_0 = 0.003\,(M_{\odot} /\text{pc}^3)$ and $r_s = 29.3\,\rm{kpc}$. Alternative derivations from direct kinematic measurements and an isothermal fit produce similar results [@Peng2004The5128]. The three profiles are compared in Figure \[CentA\_nfw\].\
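The last step can be sketched numerically. The halo mass and concentration below ($M_{DM} = 10^{12}\,M_\odot$, $c = 7$) are illustrative placeholders, not the values derived in the text, and the overdensity $\Delta = 200$ is likewise an assumption; the check is that the closed-form NFW mass, $M = 4\pi\rho_0 r_s^3\left[\ln(1+c) - c/(1+c)\right]$, agrees with the direct integral:

```python
import math
import numpy as np

# Recovering (rho_0, r_s) of an NFW profile from a halo mass and
# concentration, then verifying M_DM = 4 pi int rho r^2 dr numerically.
# M_DM and c are illustrative placeholders, not the fitted Cen A values.
rho_crit = 9.1e-30 * 2.938e55 / 1.989e33   # g/cm^3 -> M_sun/pc^3
M_DM = 1e12        # halo mass in M_sun (assumed)
c = 7.0            # concentration (assumed); Delta = 200 overdensity

R_vir = (3 * M_DM / (4 * math.pi * 200 * rho_crit))**(1 / 3)   # pc
r_s = R_vir / c
mu = math.log(1 + c) - c / (1 + c)
rho_0 = M_DM / (4 * math.pi * r_s**3 * mu)    # M_sun/pc^3

# Direct check of the mass integral (manual trapezoid rule)
r = np.linspace(1e-3, R_vir, 200_000)
rho = rho_0 / ((r / r_s) * (1 + r / r_s)**2)
f = 4 * math.pi * rho * r**2
M_num = float(((f[:-1] + f[1:]) / 2 * np.diff(r)).sum())
```

With these placeholder inputs the recovered parameters come out of the same order as the $\rho_0$ and $r_s$ quoted above for Centaurus A.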
\
In this case, the 3.55keV flux calculation is subtly different to Equation \[DM\_flux1\], as we must consider the ALP flux from the DM halo behind the lobe separately from the DM halo within the lobe:
![Comparison of Centaurus A dark matter profiles, $\rho$ as a function of $r$ (pc). Green represents the NFW profile derived from the K-band magnitude [@Anderson2015Non-detectionSpectra]; blue, the NFW profile fitted from kinematic measurements in [@Peng2004The5128]; purple, the isothermal profile fitted in [@Peng2004The5128].[]{data-label="CentA_nfw"}](CentA_nfw)
$$\label{DM_flux2}
F_{\rm{DM} \rightarrow a \rightarrow \gamma} \sim \frac{\Gamma_{\rm{DM} \rightarrow a}}{4\pi d^2} \left( P_{a \rightarrow \gamma}^{l} \int_{V_1} \rho_{DM}\, dV + \int_{V_2} \rho_{DM}\, P_{a \rightarrow \gamma}(V)\, dV \right)$$
where $P_{a\rightarrow \gamma}^l$ is the ALP-photon conversion probability across the propagation length $l$, $P_{a\rightarrow \gamma}(V)$ the ALP-photon conversion probability across a volume $V$, $V_1$ is the non-magnetised volume within the field of view back to the magnetised lobes, and $V_2$ is the lobe volume within the FOV, approximated as a cylinder, as shown in Figure \[lobe\]. $P_{a \rightarrow \gamma}$ is calculated using the magnetic field model of Section \[magnetic\] with a field strength of $1\,\mu\rm{G}$.\
\
The expected 3.55keV line flux from the DM halo of Centaurus A is plotted against distance in Figure \[CentA\] for two different NFW profiles. It is also plotted against the expected fluxes for a DM$\rightarrow \gamma$ line of the strength observed in the Perseus Cluster, and an order of magnitude weaker as observed in other galaxy clusters. As can be seen, the DM $\rightarrow a \rightarrow \gamma$ flux is substantially smaller than that of a direct decay to photons, which is over 10% of the CXB up to 50kpc from the centre for the higher value of $\sin^2(2\theta)$. Therefore, within a DM $\rightarrow a \rightarrow \gamma$ model, a non-observation of the 3.55keV line from the DM halo of Centaurus A could provide a consistency check on an observation of a 3.55keV line in dSphs behind the radio lobes.
![Diagram of the regions $V_1$ and $V_2$ as defined in Equation \[DM\_flux2\]. The observer is to the right of the diagram, with $V_1$ being the volume of the Centaurus A DM halo behind the lobe, and $V_2$ the volume within the lobe.[]{data-label="lobe"}](lobe.pdf)
![The 3.55keV line surface brightness $S$ $\left(\text{cnts}\,\text{s}^{-1}\,\text{arcmin}^{-2}\,\text{cm}^{-2}\right)$ as a function of the distance $r$ (kpc) from the core of Centaurus A. The solid black line corresponds to a $DM\rightarrow \gamma$ direct decay at the line strength inferred from the Perseus cluster, computed with two different NFW profiles (see Section \[dmcena\]); the dotted black line corresponds to a line strength an order of magnitude weaker. The solid red line corresponds to a $DM\rightarrow a \rightarrow \gamma$ decay. The dotted blue lines show the brightness at 10%, 1% and 0.1% of the background emission within a 100eV range.[]{data-label="CentA"}](CentA)
Conclusion
==========
The origin of the 3.55keV line remains an intriguing open problem. The DM $\rightarrow a \rightarrow \gamma$ decay model produces a unique morphology for the line, yielding testable predictions for different astrophysical environments. It is therefore important to analyse all promising targets to search for a DM $\rightarrow a \rightarrow \gamma$ line. The giant radio lobes of Centaurus A contain a relic magnetic field that extends over hundreds of kiloparsecs, and could have a magnetic field strength up to $13.4\,\mu\rm{G}$. We have discussed the possibility of detecting a signal by observing dwarf spheroidal galaxies behind these radio lobes. For a magnetic field strength of $1\, \mu\rm{G}$, and a galaxy of similar dark matter profile to the classical dSphs, the ALP-photon conversion probability is $\mathcal{O}(10^{-5})$, which would produce a signal $\mathcal{O}(10^{-4} - 10^{-3})$ of the CXB within a 100eV energy range, and $\mathcal{O}(10^{-2} - 10^{-1})$ for a 2.5eV range, meaning that we are unlikely to be able to detect a signal. If the anomalously high magnetic field strength of $13.4\,\mu\rm{G}$ inferred for a region in the southern lobe is confirmed, this could boost the signal to be of the same order as the CXB in a 2.5eV range. In this case, it could be possible to detect a 3.55keV line from a dSph, which would provide a compelling test of the DM $\rightarrow a \rightarrow \gamma$ hypothesis. Further observations to better determine the dSph density profiles, and the magnetic field strength of the radio lobes, are needed to determine whether a DM $\rightarrow a \rightarrow \gamma$ line could be observed.
[^1]: The jets likely have much stronger magnetic fields associated with them but over shorter coherence lengths. In addition there is much more background X-ray emission. The complexities of modelling such magnetic fields are beyond the scope of this paper, therefore we do not consider conversion in regions near the jets.
---
abstract: 'Let $\mathcal{A}$ be a Noetherian ring and $\mathcal{B}$ be a finitely generated $\mathcal{A}$-algebra. Denote by $\overline{\mathcal{A}}$ the integral closure of $\mathcal{A}$ in $\mathcal{B}$. We give necessary and sufficient conditions for a prime $\mathfrak{p}$ in $\mathcal{A}$ to be in $\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$ generalizing and strengthening classical results for rings of special type.'
address: |
Department of Mathematics\
University of Chicago\
Chicago, IL 60637\
Institute of Mathematics\
and Informatics, Bulgarian Academy of Sciences\
Akad. G. Bonchev, Sofia 1113, Bulgaria
author:
- Antoni Rangachev
title: Associated primes and integral closure of Noetherian rings
---
Introduction
============
Let $\mathcal{A} \subset \mathcal{B}$ be commutative rings with identity. Denote the integral closure of $\mathcal{A}$ in $\mathcal{B}$ by $\overline{\mathcal{A}}$. The main result of the paper is the following theorem.
\[main\] Suppose $\mathcal{A}$ is Noetherian and $\mathcal{B}$ is a finitely generated $\mathcal{A}$-algebra.
1. Suppose that the minimal primes of $\mathcal{B}$ contract to primes of height at most one in $\mathcal{A}$. If $\mathfrak{q} \in \mathrm{Ass}_{\overline{\mathcal{A}}}(\mathcal{B}/\overline{\mathcal{A}})$, then $\mathrm{ht}(\mathfrak{q}) \leq 1$. If $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$, then there exists a prime $\mathfrak{q}$ in $\overline{\mathcal{A}}$ with $\mathrm{ht}(\mathfrak{q}) \leq 1$ that contracts to $\mathfrak{p}$.
2. Suppose that the minimal primes of $\mathcal{B}$ contract to minimal primes of $\mathcal{A}$ and $\mathcal{A}_{\mathfrak{p}_{\mathrm{min}}}=\overline{\mathcal{A}}_{\mathfrak{p}_{\mathrm{min}}}$ for each minimal prime $\mathfrak{p}_{\mathrm{min}}$ of $\mathcal{A}$. Then there exists $f \in \mathcal{A}$ such that $$\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}}) \subseteq \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}_{f}/\mathcal{A}_f) \cup \mathrm{Ass}_{\mathcal{A}}(\mathcal{A}_{\mathrm{red}}/f\mathcal{A}_{\mathrm{red}}).$$ Furthermore, $\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$ is finite.
3. Suppose $\mathcal{A}$ is equidimensional and universally catenary and suppose that the minimal primes of $\mathcal{B}$ contract to minimal primes of $\mathcal{A}$. If $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$, then $\mathrm{ht}(\mathfrak{p}) \leq 1$.
If $\mathcal{A}$ is reduced, and if $\mathcal{B}$ is contained in the ring of fractions of $\mathcal{A}$ and is integrally closed, then $\overline{\mathcal{A}}$ is the integral closure of $\mathcal{A}$ in its ring of fractions. In this case Thm. \[main\] [(i)]{} is part of the Mori–Nagata theorem (see Prp. 4.10.2 and Thm. 4.10.5 in [@Huneke]).
Suppose $R$ is a Noetherian ring, $I$ is an ideal in $R$, and $t$ is a variable. The [*Rees algebra*]{} of $I$, denoted $\mathcal{R}(I)$, is the subring of $R[t]$ defined as $\oplus_{n=0}^{\infty} I^{n}t^n$. Then Thm. \[main\] [(ii)]{} with $\mathcal{A}:= \mathcal{R}(I)$ and $\mathcal{B}:= R[t]$ along with Prp. \[finite\] imply that the set $\mathrm{Ass}_{R}(R/\overline{I^n})$ is finite. This is a result of Rees [@Rees81], who derived it as a consequence of his valuation theorem (cf. Sct. 8 in [@Sharp]), and of Ratliff [@Ratliff] and of McAdam and Eakin [@Eakin], who treated the case $\mathrm{ht}(I) \geq 1$. In Cor. \[Rees finite\] we show that Thm. \[main\] [(ii)]{} leads to an explicit description of $\mathrm{Ass}_{R}(R/\overline{I^n})$. An application of Thm. \[main\] ([ii]{}) to valuation theory will be considered in a sequel to this paper ([@Rangachev3]).
Suppose $(R,\mathfrak{m})$ is a local Noetherian universally catenary ring. Theorem \[main\] [(iii)]{} was proved by various authors in the case when $\mathcal{A}$ is the Rees algebra of an ideal or an $R$-module and $\mathcal{B}$ is a polynomial ring over $R$.
Concretely, if $\mathcal{A}:= \mathcal{R}(I)$ is the Rees algebra of an ideal $I$ in $R$ and $\mathcal{B}:=R[t]$ is the polynomial ring in one variable, then Thm. \[main\] [(iii)]{} is a classical result of McAdam [@McAdam]. His result was generalized by Katz and Rice (Thm. 3.5.1 in [@Katz2]) to the case when $\mathcal{A}$ is the Rees algebra of a finitely generated module $\mathcal{M}$ and $\mathcal{B}$ is the symmetric algebra of a free $R$-module $\mathcal{F}$ of finite rank that contains $\mathcal{M}$ with $\mathcal{M}$ and $\mathcal{F}$ generically equal. The author (Thm. 5.4 in [@Rangachev]) proved Thm. \[main\] [(iii)]{} in the case when $\mathcal{A}$ and $\mathcal{B}$ are the Rees algebras of a pair of finitely generated $R$-modules $\mathcal{M} \subset \mathcal{N}$. In Cor. \[suv\] we show that Thm. \[main\] [(iii)]{} recovers at once a criterion for integral dependence of Simis, Ulrich and Vasconcelos (Thm. 4.1 in [@SUV]).
We prove a converse to Thm. \[main\] ([iii]{}) under additional hypotheses on $\mathcal{B}$, without requiring $\mathcal{A}$ to be universally catenary.
\[converse\] Let $(R,\mathfrak{m})$ be a local Noetherian ring contained in $\mathcal{A}$. Suppose $\mathfrak{m}\mathcal{B}$ is of height at least $2$. If $\mathfrak{p}$ is a minimal prime of $\mathfrak{m}\mathcal{A}$ with $\mathrm{ht}(\mathfrak{p}) \leq 1$, then $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$.
A typical situation in which such a $\mathcal{B}$ arises is when $\mathcal{A}$ is a Rees algebra of a module $\mathcal{M}$ that sits inside a free module $\mathcal{F}$ over a local ring $(R,\mathfrak{m})$ of dimension at least $2$. Then $\mathcal{B}$ can be taken to be the symmetric algebra of $\mathcal{F}$, which is a polynomial ring over $R$ and thus $\mathrm{ht}(\mathfrak{m}\mathcal{B}) \geq 2$.
In this setup Thm. \[converse\] along with Prp. 8.5 in [@Rangachev], which treats the case $\dim R=1$, recovers results of Burch [@Burch] for Rees algebras of ideals (cf. Thm. 5.4.7 in [@Huneke]), and of Rees [@Rees81], Katz and Rice [@Katz2] (Thm. 3.5.1 in [@Katz3]) for Rees algebras of modules embedded in free modules. More generally, the author proved Thm. \[converse\] assuming that $\mathcal{A}$ and $\mathcal{B}$ are standard graded $R$-algebras (Thm. 8.3 in [@Rangachev]).
In Prp. \[nice embedding\] we prove that if $\mathcal{A}$ is a graded domain over $R$ with $\dim R \geq 2$, then an embedding of $\mathcal{A}$ in a graded $R$-algebra $\mathcal{B}$ satisfying the hypothesis of Thm. \[converse\] exists as a consequence of Noether normalization. Such embeddings play an important role in the theory of local volumes [@Rangachev2].
[**Acknowledgements.**]{} I would like to thank Steven Kleiman and Madhav Nori for helpful and stimulating conversations, and for providing me with comments on improving the exposition of this paper. I was partially supported by the University of Chicago FACCTS grant “Conormal and Arc Spaces in the Deformation Theory of Singularities.”
Proofs
======
We begin with two propositions. Prp. \[finite\] generalizes Lem. 3.1 in [@Katz3] (cf. Prp. 7.1 in [@Rangachev]). It’s central to the proof of Thm. \[main\] [(ii)]{} and Thm. \[converse\]. The proof of Prp. \[finite\] is based on the generic freeness lemma of Hochster and Roberts (see Lem. 8.1 in [@Hochster] or the lemma preceding Thm. 24.1 in [@Matsumura]).
The second proposition Prp. \[faithful flatness\] and the corollary that follows it generalize Lem. 5.3 (1) in [@Rangachev]. We use these statements in the proof of Thm. \[main\] to pass to the situation when $\mathcal{A}$ is a reduced Nagata (pseudo-geometric) ring, which guarantees that $\overline{\mathcal{A}}$ is Noetherian.
\[finite\] Let $R$ be a Noetherian ring and let $\mathcal{A} \subset \mathcal{B}$ be Noetherian $R$-algebras. Assume $\mathcal{B}$ is a finitely generated $\mathcal{A}$-algebra. Then $\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\mathcal{A})$ is finite and each prime in $\mathrm{Ass}_{R}(\mathcal{B}/\mathcal{A})$ is a contraction of a prime in $\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\mathcal{A})$.
First, we show that $\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\mathcal{A})$ is finite. Let $t_1, \ldots, t_k$ be the generators of $\mathcal{B}$ as an $\mathcal{A}$-algebra. Then by breaking the filtration $$\mathcal{A} \subset \mathcal{A}[t_1] \subset \ldots
\subset \mathcal{A}[t_1, \ldots, t_k] = \mathcal{B}$$ into short exact sequences, we see it’s enough to show that $$\bigcup_{i=1}^{k}\mathrm{Ass}_{\mathcal{A}}(\mathcal{A}[t_1, \ldots, t_{i}]/\mathcal{A}[t_1, \ldots, t_{i-1}])$$ is finite. Hence we can assume that $\mathcal{B}=\mathcal{A}[t]$ for $t \in \mathcal{B}$. Set $\mathcal{A}_i := \mathcal{A} + \mathcal{A}t+ \cdots + \mathcal{A}t^i$. Note that each $\mathcal{A}_i$ is a finitely generated $\mathcal{A}$-module. Consider the sequence of maps $$\mathcal{A}_1/\mathcal{A}_0 \rightarrow \mathcal{A}_{2}/\mathcal{A}_{1} \rightarrow \cdots \rightarrow \mathcal{A}_{n+1}/\mathcal{A}_{n} \rightarrow \cdots$$ where each arrow is given by multiplication by $t$ and thus is surjective. Denote by $\phi_{n}$ the composite map from $\mathcal{A}_1/\mathcal{A}_0$ to $\mathcal{A}_{n+1}/\mathcal{A}_{n}$. Then the chain $$\mathrm{Ker}(\phi_1)\subset \ldots \subset \mathrm{Ker}(\phi_{n}) \subset \ldots$$ must stabilize eventually as $\mathcal{A}_1/\mathcal{A}_0$ is a Noetherian module. This shows that for $n \gg 0$ the map $\mathcal{A}_n/\mathcal{A}_{n-1} \rightarrow \mathcal{A}_{n+1}/\mathcal{A}_{n}$ is an isomorphism. Hence $\bigcup_{i=0}^{\infty} \mathrm{Ass}_{\mathcal{A}}(\mathcal{A}_{i+1}/\mathcal{A}_{i})$ is finite.
Let $b \in \mathcal{B}$ be an element whose image $\tilde{b}$ in $\mathcal{B}/\mathcal{A}$ is such that $\mathrm{Ann}_{\mathcal{A}}(\tilde{b}) = \mathfrak{p}$ where $\mathfrak{p}$ is a prime ideal in $\mathcal{A}$. There exists $j \geq 1$ such that $b \in \mathcal{A}_j$ but $b \not \in \mathcal{A}_{j-1}$. Then $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{A}_j/\mathcal{A}_{j-1})$. Hence $\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\mathcal{A})$ is finite.
The second part of the proposition is [@Stacks [Tag 05DZ](http://stacks.math.columbia.edu/tag/05DZ)]. For completeness we include our own proof which is part of Prp. 7.1 in [@Rangachev]. Let $\mathfrak{q} \in \mathrm{Ass}_{R}(\mathcal{B}/\mathcal{A})$ and let $\tilde{b} \in \mathcal{B}/\mathcal{A}$ be such that $\mathfrak{q} = \mathrm{Ann}_{R}(\tilde{b})$. Set $\mathcal{I}(\tilde{b})=\mathrm{Ann}_{\mathcal{A}}(\tilde{b})$. Then $\mathcal{I}(\tilde{b}) \cap R = \mathfrak{q}$. Because $\mathfrak{q}$ is prime in $R$, then the contraction of the radical of $\mathcal{I}(\tilde{b})$ to $R$ is $\mathfrak{q}$. But the radical of $\mathcal{I}(\tilde{b})$ is the intersection of finitely many primes in $\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\mathcal{A})$. Thus, there exists a prime $Q \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\mathcal{A})$ whose contraction to $R$ is $\mathfrak{q}$.
\[faithful flatness\] Let $R$ be a ring and let $\mathcal{A} \subset \mathcal{B}$ be $R$-algebras. Let $R \rightarrow S$ be a faithfully flat ring map. Denote by $\overline{\mathcal{A} \otimes_{R}S}$ the integral closure of $\mathcal{A} \otimes_{R}S$ in $\mathcal{B} \otimes_{R}S$. Then $$\overline{\mathcal{A} \otimes_{R}S} \cap \mathcal{B} = \overline{\mathcal{A}}.$$
By flatness of $R \rightarrow S$ it follows that $\mathcal{A} \otimes_{R} S$ is contained in $\mathcal{B} \otimes_{R} S$. By faithful flatness of $R \rightarrow S$ it follows that $\mathcal{A}$ and $\mathcal{B}$ inject into $\mathcal{A} \otimes_{R} S$ and $\mathcal{B} \otimes_{R} S$ respectively. Suppose $b \in \mathcal{B}$ is integral over $\mathcal{A} \otimes_{R}S$, i.e. $b$ satisfies the following relation $$\label{int. dep.}
b^n + a_{n-1}b^{n-1}+ \cdots + a_0 = 0$$ where $a_i \in \mathcal{A} \otimes_{R}S$. Set $\mathcal{M}:= \mathcal{A}+ \mathcal{A}b+ \cdots + \mathcal{A}b^{n-1}$. Then $\mathcal{M} \subset \mathcal{A}[b]$. By (\[int. dep.\]) we have $\mathcal{M} \otimes_{R}S=\mathcal{A}[b] \otimes_{R}S$, or $(\mathcal{A}[b]/\mathcal{M}) \otimes_{R}S=0$. Because $R \rightarrow S$ is faithfully flat, then $\mathcal{A}[b]=\mathcal{M}$ which implies that $b$ is integral over $\mathcal{A}$. Hence $\overline{\mathcal{A} \otimes_{R}S} \cap \mathcal{B} \subset \overline{\mathcal{A}}$. The opposite inclusion follows trivially from persistence of integral dependence.
\[ass. ff.\] Assume $(R,\mathfrak{m})$ is a Noetherian local ring. Denote by $\widehat{R}$ the completion of $R$ with respect to $\mathfrak{m}$. Set $\widehat{\mathcal{A}}=\mathcal{A} \otimes_{R}\widehat{R}$ and $\widehat{\mathcal{B}}=\mathcal{B} \otimes_{R}\widehat{R}$. If $\widehat{\mathfrak{m}} \not \in \mathrm{Ass}_{\widehat{R}}(\widehat{\mathcal{B}}/\overline{\widehat{\mathcal{A}}})$, then $\mathfrak{m} \not \in \mathrm{Ass}_{R} (\mathcal{B}/\overline{\mathcal{A}})$.
Suppose that there exists $b \in \mathcal{B}$ such that $\mathfrak{m}b \in \overline{\mathcal{A}}$ with $b \not \in \overline{\mathcal{A}}$. Then $\widehat{\mathfrak{m}}b \in \overline{\widehat{\mathcal{A}}}$ by persistence of integral closure. But $\widehat{\mathfrak{m}} \not \in \mathrm{Ass}_{\widehat{R}}(\widehat{\mathcal{B}}/\overline{\widehat{\mathcal{A}}})$. Thus $b \in \overline{\widehat{\mathcal{A}}}$. By Prp. \[faithful flatness\] $b \in \overline{\widehat{\mathcal{A}}} \cap \mathcal{B} = \overline{\mathcal{A}}$ which is a contradiction.
[*Proof of Theorem \[main\]*]{}
Consider $\rm{(i)}$. We perform several reduction steps that will allow us to assume that $\mathcal{A}$ is a reduced local complete ring. Let $b \in \mathcal{B}$ be such that $\mathfrak{q}$ is the annihilator of the image of $b$ in $\mathcal{B}/\overline{\mathcal{A}}$. Set $\mathfrak{p}:=\mathfrak{q} \cap \mathcal{A}$. Suppose $\mathrm{ht}(\mathfrak{q}) \geq 2$. By incomparability $\mathrm{ht}(\mathfrak{p}) \geq 2$. Then by prime avoidance we select $h \in \mathfrak{q}$ that avoids the minimal primes of $\mathcal{B}$. By Prp. 2.16 in [@Huneke] $\overline{\mathcal{A}_\mathfrak{p}}=\overline{\mathcal{A}}_{\mathfrak{p}}$. Consider $\mathcal{A}_{\mathfrak{p}} \subset \overline{\mathcal{A}_\mathfrak{p}} \subset \mathcal{B}_\mathfrak{p}$. Then $\mathfrak{q}\overline{\mathcal{A}_\mathfrak{p}}$ is associated to $\mathcal{B}_\mathfrak{p}/\overline{\mathcal{A}_\mathfrak{p}}$. Moreover, $\mathfrak{q}\overline{\mathcal{A}_\mathfrak{p}}$ is maximal because $\overline{\mathcal{A}_\mathfrak{p}}$ is integral over $\mathcal{A}_\mathfrak{p}$ and $\mathfrak{q}\overline{\mathcal{A}_\mathfrak{p}} \cap \mathcal{A}_\mathfrak{p} = \mathfrak{p}\mathcal{A}_\mathfrak{p}$. So we can assume that $\mathcal{A}$ is local with maximal ideal $\mathfrak{p}$.
Let $\widehat{\mathcal{A}}$ be the completion of $\mathcal{A}$ with respect to $\mathfrak{p}$. Set $\mathcal{A}':=\overline{\mathcal{A}} \otimes_{\mathcal{A}} \widehat{\mathcal{A}}$ and $\mathcal{B}':= \mathcal{B} \otimes_{\mathcal{A}} \widehat{\mathcal{A}}$. By flatness $\mathcal{A}'/\mathfrak{q}\mathcal{A}' = (\overline{\mathcal{A}}/\mathfrak{q}\overline{\mathcal{A}}) \otimes_{\mathcal{A}} \widehat{\mathcal{A}}.$ But $\overline{\mathcal{A}}/\mathfrak{q}\overline{\mathcal{A}} \otimes_{\mathcal{A}} \mathcal{A}/\mathfrak{p}=\overline{\mathcal{A}}/\mathfrak{q}\overline{\mathcal{A}}$ and $\mathcal{A}/\mathfrak{p} \otimes_{\mathcal{A}} \widehat{\mathcal{A}}=\mathcal{A}/\mathfrak{p}.$ Thus by the associativity of the tensor product $(\overline{\mathcal{A}}/\mathfrak{q}\overline{\mathcal{A}}) \otimes_{\mathcal{A}} \widehat{\mathcal{A}}= \overline{\mathcal{A}}/\mathfrak{q}\overline{\mathcal{A}}$. Therefore, $\mathfrak{q}\mathcal{A}'$ is maximal. Because $\mathcal{B} \rightarrow \mathcal{B}'$ and $\overline{\mathcal{A}} \rightarrow \mathcal{A}'$ are faithfully flat, then by Lem. B.1.3 and Prp. B.2.4 in [@Huneke] $h$ avoids the minimal primes of $\mathcal{B}'$ and $\mathrm{ht}(\mathfrak{q}\mathcal{A}') \geq 2$. By Lem. 5.2 in [@Rangachev] we can replace $\widehat{\mathcal{A}}, \mathcal{A}'$ and $\mathcal{B}'$ by their reduced structures. The height hypotheses remain intact. Because $\widehat{\mathcal{A}}$ is a reduced complete local ring and $\mathcal{B}'$ is a finitely generated $\widehat{\mathcal{A}}$-algebra, then $\overline{\widehat{\mathcal{A}}}$ is module-finite over $\widehat{\mathcal{A}}$ by [@Stacks [Tag 03GH](http://stacks.math.columbia.edu/tag/03GH)] (cf. Ex. 9.7 in [@Huneke] and [@Stacks [Tag 037J](http://stacks.math.columbia.edu/tag/037J)]). But so is $\mathcal{A}'$ as $\mathcal{A}' \subset \overline{\widehat{\mathcal{A}}}$. In particular, $\mathcal{A}'$ is Noetherian and so $\mathfrak{q}\mathcal{A}'$ is finitely generated.
Clearly $b \cdot \mathfrak{q}\mathcal{A}' \subset \mathcal{A}'.$ So, either $b \in \mathcal{A}'$ or $\mathfrak{q}\mathcal{A}'$ is the annihilator of the image of $b$ in $\mathcal{B}'/\mathcal{A}'$ viewed as an $\mathcal{A}'$-module. The former is impossible because if $b \in \mathcal{A}'$, then by Prp. \[faithful flatness\] we would get $b \in \overline{\mathcal{A}}$.
As $h$ avoids the minimal primes of $\mathcal{B}'$ and the latter is reduced, then $h$ is regular in $\mathcal{B}'$. Suppose $b\cdot \mathfrak{q}\mathcal{A}' \subset \mathfrak{q}\mathcal{A}'$. As $h \in \mathfrak{q}\mathcal{A}'$, then $\mathfrak{q}\mathcal{A}'$ is a finitely generated faithful $\mathcal{A}'$-module. Then by the Determinantal Trick Lemma (see Lem. 2.1.8 in [@Huneke]) $b$ is integral over $\mathcal{A}'$. Thus by Prp. \[faithful flatness\] we get $b \in \overline{\widehat{\mathcal{A}}} \cap \mathcal{B} = \overline{\mathcal{A}}$ which is impossible.
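For the reader's convenience, here is the determinantal trick in the two-generator case. This is the standard Cayley–Hamilton argument and is included only as an aside; the notation $M$, $m_i$, $a_{ij}$ is local to this block and not part of the proof above.

```latex
% Determinantal trick, 2-generator case (standard argument; notation local to this block).
Suppose $M = A m_1 + A m_2$ is a faithful $A$-module with $bM \subseteq M$, say
$b m_1 = a_{11} m_1 + a_{12} m_2$ and $b m_2 = a_{21} m_1 + a_{22} m_2$
with $a_{ij} \in A$. Then the determinant
$$\det\begin{pmatrix} b - a_{11} & -a_{12} \\ -a_{21} & b - a_{22} \end{pmatrix}
  = b^{2} - (a_{11}+a_{22})\,b + (a_{11}a_{22}-a_{12}a_{21})$$
annihilates $M$ by Cramer's rule. Since $M$ is faithful, this monic expression in $b$
is zero, which is an equation of integral dependence of $b$ over $A$.
```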
Therefore, $b\cdot \mathfrak{q}\mathcal{A}' \not \subset \mathfrak{q}\mathcal{A}'.$ Because $h$ is regular in $\mathcal{B}'$ we have $$\label{saturation}
\mathfrak{q}\mathcal{A}' = (h\mathcal{A}':_{ \mathcal{A}'} hb).$$
Replace $\mathcal{A}'$ by its localization at $\mathfrak{q}\mathcal{A}'$. Then there exists $z \in \mathfrak{q}\mathcal{A}'$ such that $bz=1$ which yields $hbz=h$. Because $h$ is regular, then so is $hb$. Let $z' \in \mathfrak{q}\mathcal{A}'$. Then by (\[saturation\]) there exists $u \in h\mathcal{A}'$ such that $hbz'=hu$ or $hb(z'-uz)=0$. But $hb$ is regular, so $z'=uz$. Thus $\mathfrak{q}\mathcal{A}'=(z)$. But by assumption $\mathrm{ht}(\mathfrak{q}\mathcal{A}') \geq 2$. We reached a contradiction. Therefore, $\mathrm{ht}(\mathfrak{q})\leq 1$. This completes the proof of the first part of $(\rm{i})$.
Suppose $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$. As above, we can assume that $\mathcal{A}$ is local with maximal ideal $\mathfrak{p}$. We want to show that there exists a maximal ideal $\mathfrak{q}$ in $\overline{\mathcal{A}}$ with $\mathrm{ht}(\mathfrak{q}) \leq 1$. Let $b_1 \in \mathcal{B}$ be such that the annihilator of the image of $b_1$ in $\mathcal{B}/\overline{\mathcal{A}}$ viewed as an $\mathcal{A}$-module is $\mathfrak{p}$. Preserve the notation from above. Let $\mathcal{I}$ be the annihilator of the image of $b_1$ in $\mathcal{B}'_{\mathrm{red}}/\mathcal{A}'_{\mathrm{red}}$ viewed as an $\mathcal{A}'_{\mathrm{red}}$-module. As $\mathcal{A}'_{\mathrm{red}}$ is Noetherian, there exists a primary decomposition of $\mathcal{I}$ in $\mathcal{A}'_{\mathrm{red}}$ $$\mathcal{I} = V_1 \cap \ldots \cap V_s.$$ Because $\mathcal{I}$ contains the maximal ideal $\mathfrak{p}\widehat{\mathcal{A}}_{\mathrm{red}}$, each $V_i$ is primary to a maximal ideal $\mathfrak{m}_i$ in $\mathcal{A}'_{\mathrm{red}}$. By faithful flatness each $\mathfrak{m}_i$ is equal to $\mathfrak{q}_i\mathcal{A}'_{\mathrm{red}}$ where $\mathfrak{q}_i$ is a maximal ideal in $\overline{\mathcal{A}}$. Indeed, suppose $\mathfrak{m}_{i}'$ is maximal in $\mathcal{A}'$ such that $\mathfrak{m}_{i}'\mathcal{A}_{\mathrm{red}}' = \mathfrak{m}_i$. Then $\mathfrak{m}_{i}' \cap \mathcal{A} = \mathfrak{p}$ and so $\mathfrak{m}_{i}' \cap \overline{\mathcal{A}} = \mathfrak{q}_i$ where $\mathfrak{q}_i$ is maximal in $\overline{\mathcal{A}}$. But as shown above $\mathfrak{q}_i\mathcal{A}'$ is maximal and is contained in $\mathfrak{m}_{i}'$. Thus $\mathfrak{q}_i \mathcal{A}' = \mathfrak{m}_{i}'$ and so $\mathfrak{q}_i \mathcal{A}_{\mathrm{red}}' = \mathfrak{m}_i$.
For each $i=2, \ldots, s$ select $c_i \in \mathfrak{q}_{i}^{n_i}$ with $c_i \not \in \mathfrak{q}_1$ and $n_i$ sufficiently large so that $\Pi_{i=2}^{s} c_i$ is in $V_2 \cap \ldots \cap V_s$. Let $n_1$ be the smallest positive integer such that $\mathfrak{m}_{1}^{n_1}=\mathfrak{q}_1^{n_1}\mathcal{A}'_{\mathrm{red}} \subset V_1$. If $n_{1}>1$, select $c_1 \in \mathfrak{q}_{1}^{n_{1}-1}$ with $c_1 \not \in V_1$. If $n_1=1$, set $c_1:=1$. Set $c:= \Pi_{i=1}^{s}c_i$ and $b_2:=cb_1$. Then $b_2 \not \in \mathcal{A}'_{\mathrm{red}}$ as $c \not \in \mathcal{I}$. Thus the annihilator of the image of $b_2$ in $\mathcal{B}'_{\mathrm{red}}/\mathcal{A}'_{\mathrm{red}}$ is $\mathfrak{m}_1$. Repeating the argument from above we get $\mathrm{ht}(\mathfrak{m}_1) \leq 1$. By faithful flatness $ \mathrm{ht}(\mathfrak{m}_1)= \mathrm{ht}(\mathfrak{q}_1)$. So $\mathrm{ht}(\mathfrak{q}_1)\leq 1$. This proves the existence of $\mathfrak{q}:= \mathfrak{q}_1$ with the desired properties.
Consider $\rm{(ii)}$. By Prp. \[finite\] there exist finitely many primes in $\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\mathcal{A})$. But $\mathrm{Ass}_{\mathcal{A}}(\overline{\mathcal{A}}/\mathcal{A}) \subset \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\mathcal{A})$. So $\mathrm{Ass}_{\mathcal{A}}(\overline{\mathcal{A}}/\mathcal{A})$ is finite, too. Because $\overline{\mathcal{A}}_{\mathfrak{p}_{\mathrm{min}}} = \mathcal{A}_{\mathfrak{p}_{\mathrm{min}}}$ for each minimal prime $\mathfrak{p}_{\mathrm{min}}$ of $\mathcal{A}$, then by prime avoidance we can select $f$ from the intersection of the minimal primes in $\mathrm{Ass}_{\mathcal{A}}(\overline{\mathcal{A}}/\mathcal{A})$ so that $f$ avoids each $\mathfrak{p}_{\mathrm{min}}$. Let $g \in \overline{\mathcal{A}}$. Then $\mathcal{A}[g]$ is module-finite over $\mathcal{A}$. Hence there exists $u$ such that $f^u$ annihilates $\mathcal{A}[g]/\mathcal{A}$. In particular, $f^{u}g \in \mathcal{A}$. Thus $\mathcal{A}_f=\overline{\mathcal{A}}_f$.
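The choice of the element $f$ can be illustrated by a standard example, included here only for orientation and not drawn from the argument above.

```latex
% Standard illustration of the element f; not part of the proof.
Let $k$ be a field and $\mathcal{A} = k[t^2, t^3] \subset \mathcal{B} = k[t]$.
Then $\overline{\mathcal{A}} = k[t]$, and
$\mathrm{Ass}_{\mathcal{A}}(\overline{\mathcal{A}}/\overline{\vphantom{\mathcal{A}}\smash{\mathcal{A}}}\cap\mathcal{A}) = \mathrm{Ass}_{\mathcal{A}}(\overline{\mathcal{A}}/\mathcal{A})$
consists of the single prime $\mathfrak{n} = (t^2, t^3)$, which avoids the unique
minimal prime $(0)$ of $\mathcal{A}$. Taking $f := t^2 \in \mathfrak{n}$ gives
$t = t^3/t^2 \in \mathcal{A}_f$, hence $\mathcal{A}_f = \overline{\mathcal{A}}_f$.
Concretely, for $g = t$ one can take $u = 1$: $fg = t^3 \in \mathcal{A}$.
```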
Suppose $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$. If $f \not \in \mathfrak{p}$, then $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}_f/\overline{\mathcal{A}}_f)$. But $\mathcal{A}_f=\overline{\mathcal{A}}_f$. So $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}_f/\mathcal{A}_f)$, which is a finite set by Prp. \[finite\].
Next suppose $f \in \mathfrak{p}$. Preserve the setup from the proof of part $\rm{(i)}$. Assume $\mathcal{A}$ is local with maximal ideal $\mathfrak{p}$. First we show that there exists a module-finite extension of $\mathcal{A}$ and a prime ideal $\mathfrak{P}$ in it of height at most one that contracts to $\mathfrak{p}$. By part $\rm{(i)}$ there exists a maximal ideal $\mathfrak{q}_1$ in $\overline{\mathcal{A}}$ with $\mathrm{ht}(\mathfrak{q}_1) \leq 1$ and $\mathfrak{q}_1 \cap \mathcal{A} = \mathfrak{p}$. By prime avoidance we can select $x \in \mathfrak{q}_1$ whose image in $\mathcal{A}'_{\mathrm{red}}$ avoids $\mathfrak{m}_i$ for $i=2, \ldots, s$. Consider $\mathcal{A}[x]$. Because $x$ is integral over $\mathcal{A}$, then $\mathcal{A}[x]$ is module-finite over $\mathcal{A}$. Set $\mathfrak{P}:= \mathfrak{q}_1 \cap \mathcal{A}[x]$. All maximal ideals in $\overline{\mathcal{A}}$ that contract to $\mathfrak{P}$ contain $x$. Thus they all have $\mathfrak{m}_1$ as their image in $\mathcal{A}'$. But $\mathrm{ht}(\mathfrak{m}_1) \leq 1$. So by faithful flatness all maximal ideals in $\overline{\mathcal{A}}$ that contract to $\mathfrak{P}$ are of height at most one. Consider the integral extension $\mathcal{A}[x]_{\mathfrak{P}} \hookrightarrow \overline{\mathcal{A}}_{\mathfrak{P}}$. Then $$\mathrm{ht}(\mathfrak{P})=\dim \mathcal{A}[x]_{\mathfrak{P}}=\dim \overline{\mathcal{A}}_{\mathfrak{P}} = \sup_{j \in J} \{\mathrm{ht}(\mathfrak{q}_j) \}$$ where $J$ is an index set for the maximal ideals in $\overline{\mathcal{A}}$ contracting to $\mathfrak{P}$ (we show in [@Rangachev3] that the number of maximal ideals in $\overline{\mathcal{A}}$ is finite as a direct consequence of the proof of part $\rm{(i)}$). But $\mathrm{ht}(\mathfrak{q}_j) \leq 1$, so $\mathrm{ht}(\mathfrak{P}) \leq 1$.
Because $f$ avoids the minimal primes of $\mathcal{A}$, if $fd=0$ for some $d \in \mathcal{B}$, then $d$ must be nilpotent. After passing to the reductions of $\mathcal{A}$ and $\mathcal{A}[x]$ the contraction of $\mathfrak{P}$ to $\mathcal{A}$ is still $\mathfrak{p}$ and the height of $\mathfrak{P}$ remains intact. To keep the notation simple we will identify $\mathcal{A}$ and $\mathcal{A}[x]$ with their corresponding reductions. Because $\mathcal{A}[x]$ is module-finite over $\mathcal{A}$ there exists a positive integer $n$ such that $f^{n}\mathcal{A}[x] \subset \mathcal{A}$. Observe that $f$ is regular in $\mathcal{A}[x]$ because $\mathcal{A}[x]$ is reduced.
Next we proceed as in the proof of Lem. 4.9.5 in [@Huneke]. Set $\mathfrak{J}:= (\mathcal{A}: _{\mathcal{A}} \mathcal{A}[x])$. Suppose $\mathfrak{J} \not \subset \mathfrak{P}$. Then $\mathfrak{J}$ contains a unit, so $\mathcal{A}[x]=\mathcal{A}$ and $\mathfrak{P} =\mathfrak{p}$. But $\mathrm{ht}(\mathfrak{P}) \leq 1$. So $\mathrm{ht}(\mathfrak{p}) \leq 1$. But $f \in \mathfrak{p}$. Moreover, $f$ avoids the minimal primes of $\mathcal{B}$. So $f$ avoids the minimal primes of $\mathcal{A}$. Thus $\mathfrak{p}$ is an associated prime of $\mathcal{A}/f\mathcal{A}$.
Assume $\mathfrak{J} \subset \mathfrak{P}$. Suppose that $\mathfrak{p}$ is not an associated prime of $\mathcal{A}/f^{n}\mathcal{A}$. Then there exists $z \in \mathfrak{p}$ that is regular in $\mathcal{A}/f^{n}\mathcal{A}$. Let $\mathfrak{P}=\mathfrak{P}_1, \ldots, \mathfrak{P}_k$ be the minimal primes of $f^{n}\mathcal{A}[x]$. Select $e \in \mathfrak{P}_2 \cap \ldots \cap \mathfrak{P}_k$ and $e \not \in \mathfrak{P}$ such that $z^{l}e \in f^{n}\mathcal{A}[x]$ for some $l$. Then $$(f^{n}e\mathcal{A}[x])z^l = f^n(z^{l}e\mathcal{A}[x]) \subseteq f^{n}(f^{n}\mathcal{A}[x]) \subseteq f^{n}\mathcal{A}.$$ But $z$ is regular in $\mathcal{A}/f^{n}\mathcal{A}$. So $f^{n}e\mathcal{A}[x] \subseteq f^{n}\mathcal{A}$. But $f$ is regular in $\mathcal{A}[x]$, so $e\mathcal{A}[x] \subseteq \mathcal{A}$. Thus $e \in \mathfrak{J}$. But $e \not \in \mathfrak{P}$ which is a contradiction. Hence $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{A}/f^{n}\mathcal{A})$. But $f$ is regular in $\mathcal{A}$. So $\mathrm{Ass}_{\mathcal{A}}(\mathcal{A}/f^{n}\mathcal{A})= \mathrm{Ass}_{\mathcal{A}}(\mathcal{A}/f\mathcal{A})$. Therefore, $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{A}/f\mathcal{A})$.
In summary, we have shown that if $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$, then $$\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}}) \subseteq \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}_{f}/\mathcal{A}_f) \cup \mathrm{Ass}_{\mathcal{A}}(\mathcal{A}_{\mathrm{red}}/f\mathcal{A}_{\mathrm{red}}).$$ Because $\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}_{f}/\mathcal{A}_f)$ and $\mathrm{Ass}_{\mathcal{A}}(\mathcal{A}_{\mathrm{red}}/f\mathcal{A}_{\mathrm{red}})$ are finite, then so is $\mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$.
Consider [(iii)]{}. Let $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$. Suppose $\mathrm{ht}(\mathfrak{p}) \geq 2$. Following the steps in the proof of part [(i)]{} we can assume that $\widehat{\mathcal{A}}$ is a local reduced ring of dimension at least $2$ with maximal ideal $\mathfrak{p}\widehat{\mathcal{A}}$. Because $\widehat{\mathcal{A}}$ is Nagata and reduced, then $\overline{\widehat{\mathcal{A}}}$ is module-finite over $\widehat{\mathcal{A}}$. Because $\mathcal{A}$ is universally catenary and equidimensional, then by Thm. 31.7 in [@Matsumura] $\widehat{\mathcal{A}}$ is equidimensional, too.
Now $\overline{\widehat{\mathcal{A}}}$ is semi-local with each of its maximal ideals contracting to $\mathfrak{p}\widehat{\mathcal{A}}$. Let $\mathfrak{m}$ be a maximal ideal of $\overline{\widehat{\mathcal{A}}}$ and let $\mathfrak{q}_{\mathrm{min}}$ be a minimal prime of $\overline{\widehat{\mathcal{A}}}$ contained in $\mathfrak{m}$. Then $\mathfrak{p}_{\mathrm{min}}=\mathfrak{q}_{\mathrm{min}} \cap \widehat{\mathcal{A}}$ is a minimal prime of $\widehat{\mathcal{A}}$ as the minimal primes of $\widehat{\mathcal{B}}$ contract to minimal primes of $\widehat{\mathcal{A}}$ by our assumptions and faithful flatness. Because $\widehat{\mathcal{A}}$ is equidimensional, then $\mathrm{ht}(\mathfrak{p}(\widehat{\mathcal{A}}/\mathfrak{p}_{\mathrm{min}})) \geq 2$. Applying the dimension formula (Thm. B.5.1 in [@Huneke]) for the extension $\widehat{\mathcal{A}}/\mathfrak{p}_{\mathrm{min}} \hookrightarrow \overline{\widehat{\mathcal{A}}}/\mathfrak{q}_{\mathrm{min}}$ we get $$\mathrm{ht}(\mathfrak{p}(\widehat{\mathcal{A}}/\mathfrak{p}_{\mathrm{min}})) = \mathrm{ht}(\mathfrak{m}(\overline{\widehat{\mathcal{A}}}/\mathfrak{q}_{\mathrm{min}})) \geq 2.$$
Thus $\mathrm{ht}(\mathfrak{m}) \geq 2$. Following the proof of part [(i)]{} we conclude that $\mathfrak{m} \not \in \mathrm{Ass}_{\overline{\widehat{\mathcal{A}}}}(\widehat{\mathcal{B}}/\overline{\widehat{\mathcal{A}}})$ for each maximal ideal $\mathfrak{m}$ of $\overline{\widehat{\mathcal{A}}}$. But $\overline{\widehat{\mathcal{A}}}$ is Noetherian. So by Prp. \[finite\] each prime in $\mathrm{Ass}_{\widehat{\mathcal{A}}}(\widehat{\mathcal{B}}/\overline{\widehat{\mathcal{A}}})$ is a contraction of a prime in $\mathrm{Ass}_{\overline{\widehat{\mathcal{A}}}}(\widehat{\mathcal{B}}/\overline{\widehat{\mathcal{A}}})$. Thus $\mathfrak{p}\widehat{\mathcal{A}} \not \in \mathrm{Ass}_{\widehat{\mathcal{A}}}(\widehat{\mathcal{B}}/\overline{\widehat{\mathcal{A}}})$. By Cor. \[ass. ff.\] $\mathfrak{p} \not \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$ which is a contradiction. Thus $\mathrm{ht}(\mathfrak{p}) \leq 1$.
Let $R$ be a Noetherian ring and $I$ an ideal in $R$ and $t$ a variable. The [*Rees algebra*]{} of $I$, denoted $\mathcal{R}(I)$, is the subring of $R[t]$ defined as $\oplus_{n=0}^{\infty} I^{n}t^n$. Denote the $k$th graded pieces of $R[t]$ and $\mathcal{R}(I)$ by $R[t]_{k}$ and $\mathcal{R}(I)_k$, respectively. Denote by $\overline{I^n}$ the integral closure of $I^n$ in $R$. The integral closure $\overline{\mathcal{R}(I)}$ of $\mathcal{R}(I)$ in $R[t]$ is $\oplus_{n=0}^{\infty} \overline{I^{n}}t^n$.
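To fix ideas, here is a standard example of these notions, included only for orientation and not drawn from the references above.

```latex
% Standard example of the integral closure of an ideal and of a Rees algebra.
Let $R = k[x,y]$ be a polynomial ring over a field $k$ and $I = (x^2, y^2)$.
The element $xy$ satisfies the equation of integral dependence
$(xy)^2 - x^2 y^2 = 0$ with $x^2 y^2 \in I^2$, so $xy \in \overline{I}$;
in fact $\overline{I} = (x^2, xy, y^2)$. Consequently
$\overline{\mathcal{R}(I)} = \oplus_{n=0}^{\infty} \overline{I^{n}}t^n$
strictly contains $\mathcal{R}(I)$ already in degree one, as
$xyt \in \overline{\mathcal{R}(I)} \setminus \mathcal{R}(I)$.
```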
Note that if $P \in \mathrm{Ass}_{R}(R/\overline{I^n})$, then there exists a minimal prime $P_{\mathrm{min}}$ of $R$ with $P_{\mathrm{min}} \subseteq P$ such that $P/P_{\mathrm{min}}$ is associated to the integral closure of $I^{n}(R/P_{\mathrm{min}})$ in $R/P_{\mathrm{min}}$ (see Lem. 5.4.4 in [@Huneke]). So to keep the exposition as simple as possible we assume that $R$ is a domain.
\[Rees finite\] Let $R$ be a Noetherian domain, $I$ an ideal in $R$ and $a \in I$. Suppose $P \in \mathrm{Ass}_{R}(R/\overline{I^n})$ for some $n$. Then $P$ is a contraction of an associated prime of $a\mathcal{R}(I)$.
Apply Thm. \[main\] [(ii)]{} with $\mathcal{A}:= \mathcal{R}(I)$ and $\mathcal{B}:= R[t]$. Because $a^{k}R[t]_{k} \subseteq \mathcal{R}(I)_{k}$ for each $k \geq 1$ we can select $f:=a$. By [@Stacks [Tag 05DZ](http://stacks.math.columbia.edu/tag/05DZ)] or the proof of the second part of Prp. \[finite\], $P$ is a contraction of $\mathfrak{p} \in \mathrm{Ass}_{\mathcal{R}(I)}(R[t]/\overline{\mathcal{R}(I)})$. As $R[t]_{a}=\mathcal{R}(I)_a$ and $\mathcal{R}(I)$ is a domain, Thm. \[main\] [(ii)]{} implies that $\mathfrak{p}$ is associated to $a\mathcal{R}(I)$.
Next we show that Thm. \[main\] [(iii)]{} recovers a criterion for integral dependence due to Simis, Ulrich and Vasconcelos (Thm. 4.1 in [@SUV]).
(Simis–Ulrich–Vasconcelos)\[suv\] Let $\mathcal{A} \subset \mathcal{B}$ be an extension of rings with $\mathcal{A}$ Noetherian, equidimensional and universally catenary. Assume that $\mathcal{A}_{\mathfrak{p}} \subset \mathcal{B}_{\mathfrak{p}}$ is integral for each prime $\mathfrak{p}$ in $\mathcal{A}$ with $\mathrm{ht}(\mathfrak{p}) \leq 1$. Further, assume that each minimal prime of $\mathcal{B}$ contracts to a minimal prime of $\mathcal{A}$. Then $\mathcal{B}$ is integral over $\mathcal{A}$.
We can assume that $\mathcal{B}$ is reduced as each nilpotent element of $\mathcal{B}$ is integral over $\mathcal{A}$. Suppose $b \in \mathcal{B}$ and $b$ is not integral over $\mathcal{A}$. Denote by $\mathcal{A}[b]$ the algebra generated by $\mathcal{A}$ and $b$. Denote by $\overline{\mathcal{A}}$ the integral closure of $\mathcal{A}$ in $\mathcal{A}[b]$. Each minimal prime of $\mathcal{A}[b]$ is contracted from a minimal prime of $\mathcal{B}$ and thus each minimal prime of $\mathcal{A}[b]$ contracts to a minimal prime of $\mathcal{A}$. By Thm. \[main\] [(iii)]{} we know that the minimal primes in $\mathrm{Supp}_{\mathcal{A}}(\mathcal{A}[b]/\overline{\mathcal{A}})$ are of height at most one. But by assumption none of the primes in $\mathcal{A}$ of height at most one is in $\mathrm{Supp}_{\mathcal{A}}(\mathcal{A}[b]/\overline{\mathcal{A}})$. We reached a contradiction. Thus $\mathcal{B}$ is integral over $\mathcal{A}$.
[*Proof of Theorem \[converse\]*]{}
The proof is an improvement of the proof of Thm. 8.3 in [@Rangachev].
Suppose $\mathfrak{m} \not \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$. By prime avoidance and Prp. \[finite\] we can select $h_1 \in \mathfrak{m}$ so that $h_1$ avoids all primes in $\mathrm{Ass}_{R}(\mathcal{B}/\mathcal{A}) \setminus \{\mathfrak{m}\}$, and the minimal primes of $\mathcal{A}$ and $\mathcal{B}$. Consider the map $$\psi_{h_1} \colon \mathcal{A}/h_{1}\mathcal{A} \rightarrow \mathcal{B}/h_{1}\mathcal{B}.$$ Set $\mathcal{A}(h_1):= \operatorname{Im}\psi_{h_{1}}$. Observe that $\operatorname{Ker}\psi_{h_1} = (h_{1}\mathcal{B} \cap \mathcal{A})/h_{1}\mathcal{A}$. Suppose $b \in \mathcal{B}$ with $h_{1}b \in \mathcal{A}$. Then $b \in \overline{\mathcal{A}}$. Indeed, if $\mathcal{I}:= \mathrm{Ann}(\tilde{b})$ where $\tilde{b}$ is the image of $b$ in $\mathcal{B}/\mathcal{A}$ viewed as an $R$-module, then the radical of $\mathcal{I}$ is the intersection of associated primes of $\mathcal{B}/\mathcal{A}$. But $h_1$ avoids all of them but $\mathfrak{m}$. Hence $\mathcal{I}$ must be $\mathfrak{m}$-primary. But $\mathfrak{m} \not \in \mathrm{Ass}_{\mathcal{A}}(\mathcal{B}/\overline{\mathcal{A}})$. So $b \in \overline{\mathcal{A}}$. Then $$b^{s}+a_{1}b^{s-1}+\cdots+a_{s}=0$$ for some positive integer $s$ and $a_i \in \mathcal{A}$. Multiplying both sides of the last equation by $h_{1}^s$ we obtain $$(h_{1}b)^{s}+h_{1}a_{1}(h_{1}b)^{s-1}+\cdots+h_{1}^{s}a_{s}=0.$$ Thus $(h_{1}b)^s=0$ in $\mathcal{A}/h_{1}\mathcal{A}$. Then $\operatorname{Ker}\psi_{h_1}$ consists of nilpotents and thus $\mathcal{A}(h_1)$ and $\mathcal{A}/h_{1}\mathcal{A}$ have the same reduced structures. Because each minimal prime of $\mathfrak{m}\mathcal{B}$ is of height at least $2$, we can find $h_2 \in \mathfrak{m}$ such that $h_2$ avoids the minimal primes of $\mathcal{B}/h_{1}\mathcal{B}$. Because $\operatorname{Ker}\psi_{h_1}$ consists of nilpotents, $h_2$ avoids the minimal primes of $\mathcal{A}/h_{1}\mathcal{A}$.
Thus each minimal prime of $\mathfrak{m}\mathcal{A}$ is of height at least $2$ which contradicts our assumption that there exists a minimal prime $\mathfrak{p}$ of $\mathfrak{m}\mathcal{A}$ of height at most one. Thus $\mathfrak{m} \in \mathrm{Ass}_{R}(\mathcal{B}/\overline{\mathcal{A}})$. Let $b \in \mathcal{B}$ be such that $\mathrm{Ann}_{R}(\tilde{b}) = \mathfrak{m}$ where $\tilde{b}$ is the image of $b$ in $\mathcal{B}/\overline{\mathcal{A}}$. Then we can select $c \in \mathcal{A}$ so that $\mathrm{Ann}_{\mathcal{A}}(c \tilde{b})= \mathfrak{p}$ as we did in the proof of Thm. \[main\] [(i)]{}.
When $\mathcal{A}$ is the Rees algebra of a module, then $\mathcal{A}$ comes equipped with an embedding in a polynomial ring $\mathcal{B}$. For a general finitely generated $R$-algebra $\mathcal{A}$ an embedding into a polynomial ring over $R$ may not exist. However, in the next proposition we show that under mild assumptions, $\mathcal{A}$ always has an embedding in a finitely generated graded $R$-algebra $\mathcal{B}$ satisfying the hypothesis of Thm. \[converse\] provided that $\dim R \geq 2$.
\[nice embedding\] Suppose $R$ is a reduced equidimensional universally catenary Noetherian ring of positive dimension or an infinite field. Assume $\mathcal{A} = \oplus_{i=0}^{\infty} \mathcal{A}_{i}$ is a reduced equidimensional standard graded algebra over $R$. Assume that the minimal primes of $\mathcal{A}$ contract to minimal primes of $R$. Then there exists a standard graded $R$-algebra $\mathcal{B} = \oplus_{i=0}^{\infty} \mathcal{B}_{i}$ such that
1. $\mathcal{B}$ is a birational extension of $\mathcal{A}$, and the inclusion $\mathcal{A}
\subset \mathcal{B}$ is homogeneous;
2. For each prime $\mathfrak{p}$ in $R$ the minimal primes of $\mathfrak{p}\mathcal{B}$ are of height at least $\mathrm{ht}(\mathfrak{p}/\mathfrak{p}_{\mathrm{min}})$ where $\mathfrak{p}_{\mathrm{min}}$ is a minimal prime of $R$ contained in $\mathfrak{p}$.
Denote by $K$ the total ring of fractions of $R$. Because $\mathcal{A}$ is reduced and its minimal primes contract to minimal primes of $R$, then $\mathcal{A}$ is $R$-torsion free. Hence $\mathcal{A}$ injects into $\mathcal{A} \otimes K$. Set $e:= \dim \mathcal{A} \otimes K$. Let $\mathfrak{p}_1, \ldots, \mathfrak{p}_l$ be the minimal primes of $R$ and $\mathfrak{q}_1, \ldots, \mathfrak{q}_t$ be the minimal primes of $\mathcal{A}$. Fix a minimal prime $\mathfrak{q}_u$. Assume $\mathfrak{q}_u$ contracts to $\mathfrak{p}_i$. Set $\kappa(\mathfrak{q}_u):=\mathrm{Frac}(\mathcal{A}/\mathfrak{q}_u)$ and $\kappa(\mathfrak{p}_i):=\mathrm{Frac}(R/\mathfrak{p}_i)$. Then by [@Stacks [Tag 02JX](http://stacks.math.columbia.edu/tag/02JX)] or Lem. 3.1 (ii) in [@KT-Al] we get $$\dim \mathcal{A}/\mathfrak{q}_u=\dim R/\mathfrak{p}_i + \mathrm{tr.\ deg}_{\kappa(\mathfrak{p}_i)}\kappa(\mathfrak{q}_u).$$ Because $\mathcal{A}$ and $R$ are equidimensional we obtain that $\mathcal{A}_{\mathfrak{p}_i}$ is equidimensional with $\dim \mathcal{A}_{\mathfrak{p}_i}= \dim \mathcal{A}/\mathfrak{q}_{u}-\dim R/\mathfrak{p}_i=e$. Because $R$ is reduced, $K=R_{\mathfrak{p}_1} \times \cdots \times R_{\mathfrak{p}_l}$. Thus $\mathcal{A} \otimes K = \mathcal{A}_{\mathfrak{p}_1} \times \cdots \times \mathcal{A}_{\mathfrak{p}_l}$.
For each $i=1, \ldots, l$ denote by $\pi_{i}$ the projection homomorphism $\mathcal{A} \otimes K \rightarrow \mathcal{A}_{\mathfrak{p}_i}$. Because $R$ is equidimensional of positive dimension or $R$ is an infinite field, then each field $R_{\mathfrak{p}_i}$ is infinite. Thus by Noether normalization we can select $e$ elements $b_{1}', \ldots, b_{e}'$ in $\mathcal{A}_1 \otimes K$ such that $\mathcal{A}_{\mathfrak{p}_i}$ is integral over $R_{\mathfrak{p}_i}[\pi_{i}(b_{1}'), \ldots, \pi_{i}(b_{e}')]$ for each $i$.
Let $a_1, \ldots, a_s$ be degree one generators of $\mathcal{A}$ over $R$. Then each $\pi_{i}(a_j)$ for $j=1, \ldots, s$ satisfies an equation of integral dependence over $R_{\mathfrak{p}_i}[\pi_{i}(b_{1}'), \ldots, \pi_{i}(b_{e}')]$. For each $i=1, \ldots, l$ let $d_i \in R_{\mathfrak{p}_i}$ be the product over $j$ of all (nonzero) denominators appearing in the relation of integral dependence of $\pi_{i}(a_j)$. For each $k=1, \ldots, e$ set $$b_{k}:= \Big( \frac{\pi_{1}(b_{k}')}{d_1}, \ldots, \frac{\pi_{l}(b_{k}')}{d_l} \Big) \ \text{and} \ \mathcal{B}:= \mathcal{A}[b_1, \ldots, b_e].$$
As each $b_k$ is a fraction with numerator in $\mathcal{A}_1$ and denominator in $R$, the ring $\mathcal{B}$ naturally inherits a grading from $\mathcal{A}$ with $\deg (b_k)=1$ for each $k=1, \ldots, e$. Thus $\mathcal{A} \subset \mathcal{B}$ is a homogeneous inclusion. Because each $b_{k}$ lies in the total ring of fractions of $\mathcal{A}$, the extension $\mathcal{A} \subset \mathcal{B}$ is birational. This proves $\rm{(1)}$.
Observe that $\mathcal{B}$ is integral over $R[b_1, \ldots, b_e]$ because each $a_j$ for $j=1, \ldots, s$ is integral over $R[b_1, \ldots, b_e]$. Note that $\mathfrak{p}\mathcal{B} \neq \mathcal{B}$. Indeed, $\mathcal{A}_0 = R$, so $\mathcal{B}_0 = R$ and so $\mathfrak{p}\mathcal{B}_0 = \mathfrak{p} \neq R$. Let $Q$ be a minimal prime of $\mathfrak{p}\mathcal{B}$. Set $\mathfrak{p}':=Q \cap R$. Denote by $\widetilde{b_k}$ the images of $b_k$ in $\mathcal{B}/Q\mathcal{B}$. Then $$\label{int. dim.}
(R/\mathfrak{p}')[\widetilde{b_1}, \ldots, \widetilde{b_e}] \hookrightarrow \mathcal{B}/Q\mathcal{B}$$ is an integral extension. Thus, $\mathrm{tr.\ deg}_{\kappa(\mathfrak{p}')}\kappa(Q) \leq e$, where $\kappa(\mathfrak{p}'):= \mathrm{Frac}(R/\mathfrak{p}')$ and $\kappa(Q):= \mathrm{Frac}(\mathcal{B}/Q\mathcal{B})$.
Let $Q_{\mathrm{min}}$ be a minimal prime of $\mathcal{B}$ contained in $Q$. Set $\mathfrak{p}_{\mathrm{min}}:= Q_{\mathrm{min}} \cap R$. Because $R$ is universally catenary and $\mathcal{B}$ is finitely generated over $R$, the dimension formula (Thm. B.5.1 in [@Huneke]) gives $$\mathrm{ht}(Q/Q_{\mathrm{min}}) + \mathrm{tr.\ deg}_{\kappa(\mathfrak{p}')}\kappa(Q) = \mathrm{ht}(\mathfrak{p}'/\mathfrak{p}_{\mathrm{min}}) + \mathrm{tr.\ deg}_{\kappa(R/\mathfrak{p}_{\mathrm{min}})}\kappa(\mathcal{B}/Q_{\mathrm{min}})= \mathrm{ht}(\mathfrak{p}'/\mathfrak{p}_{\mathrm{min}}) + e.$$ But $\mathrm{tr.\ deg}_{\kappa(\mathfrak{p}')}\kappa(Q) \leq e$ and $\mathfrak{p} \subset \mathfrak{p}'$. Therefore, $\mathrm{ht}(Q/Q_{\mathrm{min}}) \geq \mathrm{ht}(\mathfrak{p}'/\mathfrak{p}_{\mathrm{min}}) \geq \mathrm{ht}(\mathfrak{p}/\mathfrak{p}_{\mathrm{min}})$.
Burch, L., [*On ideals of finite homological dimension in local rings,*]{} Proc. Cambridge Phil. Soc., [**64**]{} (1968), 941–948.
Hochster, M., Roberts, J., [*Rings of invariants of reductive groups acting on regular rings are Cohen-Macaulay,*]{} Adv. in Math. (1974), 115–175.
Katz, D., Rice, G., [*Asymptotic prime divisors of torsion-free symmetric powers of a module,*]{} Journal of Algebra, [**319**]{} (2008), 2209–2234.
Katz, D., Puthenpurakal, T., [*Quasi-finite modules and asymptotic prime divisors,*]{} Journal of Algebra [**380**]{} (2013), 18–29.
Kleiman, S., Thorup, A., [*A geometric theory of the Buchsbaum-Rim multiplicity,*]{} J. Algebra [**167**]{} (1994), 168–231.
Matsumura, H., “Commutative ring theory.” Cambridge University Press, 1987.
McAdam, S., [*Asymptotic prime divisors and analytic spread*]{}, Proc. Amer. Math. Soc., [**90**]{} (1980), 555–559.
McAdam, S., Eakin, P., [*The asymptotic Ass,*]{} J. Algebra, [**61**]{} (1979), 71–81.
Rangachev, A., [*Associated points and integral closure of modules*]{}, Journal of Algebra, [**508**]{} (2018), 301–338.
Rangachev, A., [*Local volumes and equisingularity*]{} (in preparation).
Rangachev, A., [*Relative integral closure of Noetherian rings*]{} (in preparation).
Ratliff, L. J. Jr., [*On prime divisors of $I^{n}$, $n$ large,*]{} Michigan Math. J., [**23**]{} (1976), 337–352.
Rees, D., [*Rings associated with ideals and analytic spread,*]{} Math. Proc. Cambridge Philos. Soc., [**89**]{} (1981), 423–432.
The [Stacks Project Authors]{}, [*Stacks Project*]{}, <http://stacks.math.columbia.edu>, 2016.
Swanson, I., and Huneke, C., “Integral closure of ideals, rings, and modules.” London Mathematical Society Lecture Note Series, vol. 336, Cambridge University Press, Cambridge, 2006.
Sharp, R., [*David Rees, FRS 1918–2013,*]{} Bull. London Math. Soc., [**48**]{} (3) (2016), 557–576.
Simis, A., Ulrich, B., Vasconcelos, W., [*Codimension, multiplicity and integral extensions,*]{} Math. Proc. Camb. Phil. Soc., [**130**]{} (2001), 237–257.
---
abstract: 'One model for the origin of typical galactic star clusters such as the Orion Nebula Cluster (ONC) is that they form via the rapid, efficient collapse of a bound gas clump within a larger, gravitationally-unbound giant molecular cloud. However, simulations in support of this scenario have thus far not included the radiation feedback produced by the stars; radiative simulations have been limited to significantly smaller or lower density regions. Here we use the ORION adaptive mesh refinement code to conduct the first ever radiation-hydrodynamic simulations of the global collapse scenario for the formation of an ONC-like cluster. We show that radiative feedback has a dramatic effect on the evolution: once the first $\sim 10-20\%$ of the gas mass is incorporated into stars, their radiative feedback raises the gas temperature high enough to suppress any further fragmentation. However, gas continues to accrete onto existing stars, and, as a result, the stellar mass distribution becomes increasingly top-heavy, eventually rendering it incompatible with the observed IMF. Systematic variation in the location of the IMF peak as star formation proceeds is incompatible with the observed invariance of the IMF between star clusters, unless some unknown mechanism synchronizes the IMFs in different clusters by ensuring that star formation is always truncated when the IMF peak reaches a particular value. We therefore conclude that the global collapse scenario, at least in its simplest form, is not compatible with the observed stellar IMF. We speculate that processes that slow down star formation, and thus reduce the accretion luminosity, may be able to resolve the problem.'
author:
- 'Mark R. Krumholz, Richard I. Klein, and Christopher F. McKee'
bibliography:
- 'refs.bib'
title: |
Radiation-Hydrodynamic Simulations of the Formation of Orion-Like Star Clusters\
I. Implications for the Origin of the Initial Mass Function
---
Introduction {#sec:intro}
============
The origin of the stellar initial mass function (IMF) is one of the outstanding problems in the modern theory of star formation. While there have been numerous analytic and numerical studies purporting to explain its origin (e.g. see the review by @mckee07a, and references therein), much of this work has been hampered by the limited number of physical processes that are included in models of how gas fragments. In particular, while both simulations and analytic work reveal that how gas fragments into stars is extremely sensitive to how the temperature of the gas varies with its density [@larson05a; @jappsen05a], it has been common until very recently to approximate this relationship with a simple equation of state [e.g. @bate05a; @bonnell06d; @offner08a; @hennebelle11a]. Since the characteristic masses of the stars formed in a collapse are largely determined by the temperature-density relationship, predictions about the location of the IMF peak in these simulations are only as good as their adopted equations of state. Given this realization, attention in recent years has shifted to models that attempt to determine the temperature-density relationship from first principles, or to include a self-consistent treatment of the thermal evolution of the gas in numerical simulations. In the former category, much work has focused on the effects of imperfect coupling between gas and dust grains on gas thermodynamics. For example, @larson05a and @elmegreen08a both argue that the characteristic stellar mass is set by the Jeans mass at the density and temperature where dust grains and gas become thermally coupled due to collisions. According to these models, at low densities where grain-gas coupling is poor, the gas is slightly sub-isothermal, while at higher densities it is slightly super-isothermal, and this effect favors fragmentation near the coupling density.
However, this argument faces a major difficulty in explaining the IMF in the dense, cluster-forming regions where much Galactic star formation appears to take place. The density at which grains and gas become well-coupled is $\sim 10^4 - 10^5$ H$_2$ molecules cm$^{-3}$ [@goldsmith01a], roughly independent of the metallicity and of ambient radiation field intensity [@krumholz08a; @elmegreen08a; @krumholz11b]. In comparison, observations now show that the typical site of star cluster formation has a mass of $\sim 10^3-10^4$ ${M_{\odot}}$, and a radius of $\sim 0.3-0.5$ pc (e.g. see @shirley03a, @faundez04a, @fontani05a, or the summary plot combining these data sets in @fall10a), giving a mean density $\sim 10^5$ cm$^{-3}$. Similarly, the present-day Orion Nebula Cluster has a mass of $2400$ ${M_{\odot}}$ within a half mass radius of $0.8$ pc, corresponding to $2\times 10^4$ cm$^{-3}$, and within the $\sim 0.2$ pc core the mean density reaches $4\times 10^5$ cm$^{-3}$ [@hillenbrand98a]. Since the star formation efficiency was certainly less than unity, and the cluster has likely expanded some since the gas was expelled [@kroupa01b; @tan06a], the density at which most of the stars formed must have been higher by at least a factor of a few. Thus the typical site of star cluster formation in the Galaxy, of which the ONC is an example, is in the regime where essentially all the mass is at densities where grain-gas coupling is very strong. It is therefore hard to see how grain-gas coupling could be relevant for determining how this gas fragments. This argument can be made even stronger by noting that globular clusters with mean densities $\sim 10^7$ cm$^{-3}$ in their centers, $2-3$ orders of magnitude above the grain-gas decoupling density, also appear to have the same IMF peak as the Galactic field [@marchesini09a].
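The conversion from clump mass and radius to mean density used in these comparisons is straightforward to verify. The sketch below is our own illustration (not from the paper; cgs constants, and the paper's convention of $\mu = 2.33$ per molecule) and reproduces the quoted $\sim 10^5$ cm$^{-3}$ scale:

```python
from math import pi

# Illustrative check of the mean densities quoted above (cgs units).
M_SUN, PC, M_H = 1.989e33, 3.086e18, 1.6726e-24
MU = 2.33  # mean molecular weight per particle adopted in this paper

def mean_density(mass_msun, radius_pc):
    """Mean mass density (g cm^-3) and number density (cm^-3) of a uniform sphere."""
    rho = 3.0 * mass_msun * M_SUN / (4.0 * pi * (radius_pc * PC) ** 3)
    return rho, rho / (MU * M_H)

# Typical cluster-forming clump: ~3000 Msun within ~0.4 pc.
rho, n = mean_density(3000.0, 0.4)
print(f"n = {n:.1e} cm^-3")   # ~10^5 cm^-3, as quoted in the text
```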
A second class of models for the temperature-density relationship focuses on the interaction of gas with the radiation produced by stars in the star formation process. In these models one assumes good grain-gas coupling, as is appropriate at the high densities where most stars form. The gas temperature and its relationship with the density is then determined primarily by the light produced by stars in the process of formation. Conceptually, the idea is that the luminosity from an accreting star warms the gas in its immediate vicinity, inhibiting the ability of that gas to fragment, and that this process determines characteristic stellar masses. Analytically, @krumholz06b and @krumholz08a have argued that this process explains how massive stars are able to form under certain circumstances, while @bate09a argues that it can explain the characteristic peak of the IMF.
However, numerical studies of the second class of models have thus far been limited in various ways. @krumholz07a [@krumholz10a] and @myers11a conduct simulations including stellar feedback and radiative transfer (including re-radiation by dust grains, which is the critical process in determining the gas temperature), but focus on single massive cores that do not (and are not expected to) form a full IMF. @commercon10a report similar simulations focusing on single low-mass cores. @bate09a, @offner09a, @price09a, and @peters10b [@peters11a] simulate the formation of star clusters, but consider only low-density regions similar to those found in nearby clouds like Taurus, rather than conditions typical of Galactic star formation sites.[^1] As we will see, this makes a large difference in the outcome, because under low-density conditions the regions of heating around each star are non-overlapping, while in denser conditions they are not. Moreover, of these simulations, only @offner09a and @peters10b include stellar luminosity, so the amount of heating in the other two simulations is underestimated.
Other simulations of star cluster formation do not include radiative transfer at all, and instead approximate it in various ways. For example, @smith09a and @urban10a study the fragmentation of dense gas clouds similar to typical star-forming regions, but they determine the gas temperature around each star via a rough fitting formula based on static radiative transfer calculations. This approximation may be reasonable as long as the heating at a given point is dominated by a single star, but it almost certainly fails once the regions of heating around stars begin to overlap, as occurs in dense regions. In summary, to date there have been no simulations capable of studying how the peak of the IMF is set under typical Galactic conditions, including the all-important effects of stellar feedback and re-radiation by dust grains. The goal of this paper, the first in a series, is to remedy that lack. We use the ORION adaptive mesh refinement radiation-hydrodynamics code to simulate a typical galactic star-forming clump including stellar feedback and radiative transfer. As this is a first attack on the problem, we choose the simplest possible scenario. We do not include magnetic fields or any form of feedback other than radiation, and we allow the initial turbulence in the cloud to decay freely, leading to a rapid global collapse. Our simulation therefore represents a minimalist scenario for the formation of a star cluster such as the ONC similar to that proposed by, for example, @bonnell03a. Previous authors who have studied such conditions report that they produce stellar mass distributions consistent with the observed IMF at all times in the simulation, but it is clear in retrospect that this result simply reflects the imposed equation of state. Our work therefore revisits the critical question of whether such a scenario is capable of reproducing the observed IMF.
The remainder of this paper proceeds as follows. In Section \[sec:method\] we describe our numerical method and simulation setup. In Section \[sec:results\] we report the results of our simulations. In Section \[sec:discussion\] we discuss the implications of our findings, and present simple analytic models to aid in understanding them. Finally, we summarize in Section \[sec:summary\].
Simulation Description {#sec:method}
======================
Simulation Initial Conditions {#sec:ic}
-----------------------------
[cccccccccccc]{} LR & Yes & 1000 & 1.0 & $9.4\times 10^{-19}$ & 68.6 & 1.9 & 256 & 4 & 98 & 0.94 & 0.51\
HR & Yes & 1000 & 1.0 & $9.4\times 10^{-19}$ & 68.6 & 1.9 & 256 & 5 & 49 & 0.94 & 0.52\
ISO & No & 1000 & 1.0 & $9.4\times 10^{-19}$ & 68.6 & 1.9 & 256 & 5 & 49 & 0.94 & 0.65\
We conduct two simulations that are identical in every respect except that they have different maximum AMR levels, meaning that the peak resolution is different in the two runs. We refer to these as the low-resolution (LR) and high-resolution (HR) simulations. The two simulations enable us to determine to what extent our results are converged, although we caution that the two simulations differ only in their peak resolution, which is deployed near stars and in regions of high density or large radiation gradients (see Section \[sec:conditions\]). Thus we have not tested the sensitivity of our results to variations in the resolution used in low density regions far from stars. We also carry out a third simulation with identical initial conditions at the same resolution as run HR, but with an isothermal equation of state, i.e. with the radiative transfer module in our code disabled. We refer to this as run ISO. This simulation enables us to determine what effects in our simulation are due to radiative transfer.
We summarize the key parameters of the runs in Table \[runs\]. The initial conditions for both consist of a $M_c = 1000$ ${M_{\odot}}$ spherical gas cloud with a mean surface density $\Sigma_c = 1$ g cm$^{-2}$, corresponding to a mean volume density of $9.4\times 10^{-19}$ g cm$^{-3}$, or $2.4\times 10^5$ H$_2$ molecules cm$^{-3}$. The corresponding cloud radius is $R_c = \sqrt{M_c / (\pi \Sigma_c)}=0.26$ pc, and we place the cloud in a cubical computational domain of size $L_{\rm box} = 1.9$ pc, roughly four times the cloud diameter. We have chosen this mass and surface density because they are typical of regions of clustered star formation in the Galaxy (e.g. see @shirley03a, @faundez04a, @fontani05a, and a summary of the data in Figure 1 of @fall10a.). They are also roughly the estimated parameters of the progenitor of the core of the Orion Nebula Cluster [e.g. @kroupa01b; @tan06a]. It is worth noting that our initial conditions are significantly denser than has been used for some previous simulations of massive star formation. For example, @bonnell03a use initial mean volume and column densities of $1.3\times 10^{-19}$ g cm$^{-3}$ ($3.3\times 10^4$ cm$^{-3}$) and $0.26$ g cm$^{-2}$, respectively; @peters10b use $3.9\times 10^{-21}$ g cm$^{-3}$ ($1.0\times 10^3$ cm$^{-3}$) and $0.026$ g cm$^{-2}$. However, our parameter choices are much closer to what is actually observed in regions of massive star formation. For example, in their survey of $146$ Southern massive star-forming regions, @faundez04a find a typical mass and radius of $5000$ ${M_{\odot}}$ and $0.4$ pc, corresponding to a volume density of $1.2\times 10^{-18}$ g cm$^{-3}$ ($3.1\times 10^5$ cm$^{-3}$) and a column density of $2.1$ g cm$^{-2}$, similar to what we use.
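The quoted cloud parameters are mutually consistent; the following sketch (ours, in cgs units) recovers the radius and densities from $M_c$ and $\Sigma_c$:

```python
from math import pi, sqrt

M_SUN, PC, M_H = 1.989e33, 3.086e18, 1.6726e-24

M_c = 1000.0 * M_SUN      # cloud mass (g)
Sigma_c = 1.0             # mean surface density (g cm^-2)

R_c = sqrt(M_c / (pi * Sigma_c))            # R_c = sqrt(M_c / (pi Sigma_c))
rho_bar = 3.0 * M_c / (4.0 * pi * R_c**3)   # mean volume density
n_bar = rho_bar / (2.33 * M_H)              # number density with mu = 2.33

print(f"R_c     = {R_c / PC:.2f} pc")        # ~0.26 pc
print(f"rho_bar = {rho_bar:.2e} g cm^-3")    # ~9.4e-19 g cm^-3
print(f"n_bar   = {n_bar:.1e} cm^-3")        # ~2.4e5 cm^-3
```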
Our initial cloud has a density structure described by $$\rho = \left\{
\begin{array}{ll}
\rho_c, & r < R_c/2 \\
\rho_c (2r/R_c)^{-1.5}, \qquad & R_c/2 \leq r < R_c \\
2^{-1.5}\rho_c/100 , & r \geq R_c
\end{array}
\right.,$$ where $r$ is the distance from the cloud center and $\rho_c = 6\Sigma_c/[(2^{2.5}-1) R_c]=1.6\times 10^{-18}$ g cm$^{-3}$ is the core density. Thus our density profile consists of a constant density in the inner half of the radius, coupled with a power-law falloff in the outer half of the radius. Outside this cloud we place a low density ambient medium with a density that is 100 times smaller than the cloud edge density. We choose this density structure because observations indicate the presence of a roughly $r^{-1.5}$ density gradient on large scales in star-forming clumps [e.g. @caselli95a; @beuther02c; @beuther05b; @beuther06a; @mueller02a; @sridharan05a]. By choosing a flat inner density profile, however, we minimize tidal forces in the cloud core, thereby ensuring maximum opportunity for fragmentation.
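One can check that the normalization $\rho_c = 6\Sigma_c/[(2^{2.5}-1)R_c]$ indeed makes the profile integrate to a cloud of mean surface density $\Sigma_c$, i.e. mass $M_c = \pi R_c^2 \Sigma_c$, by integrating the profile numerically. This sketch is our own, in scaled units:

```python
from math import pi

def rho_profile(r, R_c, rho_c):
    """Piecewise initial density: flat core, r^-1.5 envelope, tenuous ambient gas."""
    if r < R_c / 2.0:
        return rho_c
    elif r < R_c:
        return rho_c * (2.0 * r / R_c) ** -1.5
    return 2.0 ** -1.5 * rho_c / 100.0

# Work in units with R_c = Sigma_c = 1, so that M_c = pi R_c^2 Sigma_c = pi.
R_c = Sigma_c = 1.0
rho_c = 6.0 * Sigma_c / ((2.0 ** 2.5 - 1.0) * R_c)

# Midpoint-rule shell integration of the cloud mass out to R_c.
N = 200000
dr = R_c / N
M = sum(4.0 * pi * ((i + 0.5) * dr) ** 2 * rho_profile((i + 0.5) * dr, R_c, rho_c) * dr
        for i in range(N))
print(M / (pi * R_c ** 2 * Sigma_c))   # -> 1.0, i.e. mean surface density is Sigma_c
```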
We initialize the cloud velocity with a Gaussian-random velocity field with a power spectrum $P(k)\propto k^{-2}$ and a one-dimensional velocity dispersion $\sigma_c = \sqrt{G M_c/ 2 R_c}=2.9$ km s$^{-1}$. The corresponding virial parameter is $\alpha = 5 \sigma_c^2 R_c/(G M_c) = 2.5$, so that the turbulent kinetic energy is larger than the potential energy at time zero. However, we do not include any feedback processes (e.g. winds or H <span style="font-variant:small-caps;">ii</span> regions) capable of driving the turbulence, nor do we have other potential driving mechanisms, such as a turbulent cascade from larger scales or ongoing infall. As a result, the turbulence undergoes a rapid decay, which quickly renders the cloud gravitationally bound.
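For reference, the quoted velocity dispersion follows directly from the cloud mass and radius (our check, in cgs units):

```python
from math import sqrt

G, M_SUN, PC = 6.674e-8, 1.989e33, 3.086e18

M_c = 1000.0 * M_SUN
R_c = 0.26 * PC

sigma_c = sqrt(G * M_c / (2.0 * R_c))          # 1D velocity dispersion
print(f"sigma_c = {sigma_c / 1e5:.1f} km/s")   # ~2.9 km/s
```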
Throughout the computational domain, we initialize the radiation energy density to that of a blackbody with a temperature $T_r = 10$ K. Thus we have $E = a T_r^4 = 7.56 \times 10^{-11}$ erg cm$^{-3}$. Similarly, we initialize the gas temperature within the cloud ($r<R_c$) to $T_g = 10$ K. Outside the cloud ($r>R_c$), we set the temperature to $T_g = 1000$ K. Since the density outside the cloud is $1/100$ that of the density at the cloud edge, this ensures thermal pressure balance across the cloud boundary. We also set the Planck and Rosseland opacities of the material with $T_g > 500$ K and $\rho < 2^{-1.5}\rho_c/50$ to zero, to ensure that the hot ambient medium does not interact with the radiation field, and is not able to cool.
Evolution Equations
-------------------
The simulations we present in this paper use the ORION adaptive mesh refinement code. The numerical method is nearly identical to that in our previous papers [e.g. @krumholz07a; @krumholz09c; @krumholz10a; @myers11a; @cunningham11a]. Here we only summarize the physics, and we refer readers to the numerical method papers referenced in Section \[sec:numerics\] for a full description of ORION’s workings. ORION works by solving the equations of compressible gas dynamics including self-gravity, radiative transfer, and radiating star particles, all on an adaptive grid. In our computational domain, we describe every cell with a vector of conserved quantities $(\rho, \rho{\mathbi{v}}, \rho e, E)$, where $\rho$ is the density, $\rho{\mathbi{v}}$ is the momentum density, $\rho e$ is the total internal plus kinetic gas energy density, and $E$ is the radiation energy density in the rest frame of the computational grid. In addition to gas quantities, we also track an arbitrary number of point mass star particles, each of which is described by a position ${\mathbi{x}}_i$, a momentum ${\mathbi{p}}_i$, and an instantaneous luminosity $L_i$, where the subscript $i$ refers to the particle number.
Given this description of the problem, the full set of evolution equations is $$\begin{aligned}
\label{masscons}
\frac{\partial}{\partial t}\rho & = & - \nabla\cdot(\rho{\mathbi{v}}) - \sum_i \dot{M}_i W({\mathbi{x}}-{\mathbi{x}}_i) \\
\frac{\partial}{\partial t}(\rho {\mathbi{v}}) & = & -\nabla\cdot(\rho {\mathbi{v}}{\mathbi{v}}) - \nabla P - \rho \nabla \phi - \lambda \nabla E
\nonumber \\
& & {} - \sum_i \dot{{\mathbi{p}}}_i W({\mathbi{x}}-{\mathbi{x}}_i)
\label{momcons}
\\
\frac{\partial}{\partial t}(\rho e) & = & -\nabla \cdot [(\rho e+P){\mathbi{v}}] - \rho {\mathbi{v}}\cdot \nabla \phi - \kappa_{\rm 0P} \rho (4 \pi B - c E)
\nonumber \\
& & {} + \lambda\left(2 \frac{\kappa_{\rm 0P}}{\kappa_{\rm 0R}} - 1\right) {\mathbi{v}}\cdot \nabla E - \left(\frac{\rho}{m_p}\right)^2 \Lambda(T_g)
\nonumber \\
& & {} - \sum_i \dot{\mathcal{E}}_i W({\mathbi{x}}- {\mathbi{x}}_i)
\label{econsgas}
\\
\frac{\partial}{\partial t}E & = & \nabla \cdot \left(\frac{c\lambda}{\kappa_{\rm 0R} \rho} \nabla E\right) + \kappa_{\rm 0P} \rho (4 \pi B - c E)
\nonumber \\
& & {} - \lambda \left(2\frac{\kappa_{\rm 0P}}{\kappa_{\rm 0R}} - 1\right) {\mathbi{v}}\cdot \nabla E - \nabla \cdot \left(\frac{3 - R_2}{2} {\mathbi{v}}E\right)
\nonumber \\
& & {}
+ \left(\frac{\rho}{m_p}\right)^2 \Lambda(T_g) + \sum_i L_i W({\mathbi{x}}- {\mathbi{x}}_i)
\label{econsrad}
\\
\label{starmass}
\frac{d}{dt} M_i &= & \dot{M}_i \\
\label{starpos}
\frac{d}{dt} {\mathbi{x}}_i & = & \frac{{\mathbi{p}}_i}{M_i} \\
\label{starmom}
\frac{d}{dt} {\mathbi{p}}_i & = & -M_i \nabla \phi + \dot{{\mathbi{p}}}_i
\\
\label{poisson}
\nabla^2\phi & = & 4\pi G \left[ \rho + \sum_i M_i \delta({\mathbi{x}}-{\mathbi{x}}_i)\right].\end{aligned}$$ Equations (\[masscons\]), (\[momcons\]), and (\[econsgas\]) represent the conservation laws for gas mass, momentum, and energy, including terms describing the exchange of these quantities with star particles and with the radiation field. Equation (\[econsrad\]) is the corresponding conservation of energy equation for the radiation field. Similarly, equations (\[starmass\]), (\[starpos\]), and (\[starmom\]) are the equations of mass and momentum conservation, and the equation of motion, for the point particles. Finally, equation (\[poisson\]) is the Poisson equation for the gravitational potential $\phi$. Note that we compute the gas-radiation exchange terms using the mixed frame formulation [@mihalas82a; @krumholz07b], allowing us to write them in a form that is manifestly and exactly energy-conserving.
In these equations, the terms $\dot{M}_i$, $\dot{p}_i$, and $\dot{\mathcal{E}}_i$ represent the rate at which mass, momentum, and energy accrete from the gas onto the $i$th star, and $L_i$ represents the luminosity of that star. We describe how we compute these quantities in Section \[sec:numerics\]. The quantities $P$ and $T_g$ are the pressure and gas temperature, respectively. These are related by the equation of state $$P = \frac{\rho k_B T_g}{\mu m_{\rm H}} = (\gamma-1) \rho \left(e - \frac{v^2}{2}\right),$$ where $\mu = 2.33$ is the mean molecular weight for molecular gas of Solar composition and $\gamma$ is the ratio of specific heats. We adopt $\gamma=5/3$, appropriate for gas too cool for hydrogen to be rotationally excited, but this choice is essentially irrelevant because $T_g$ is set almost purely by radiative effects. The quantities $\kappa_{\rm 0P}$ and $\kappa_{\rm 0R}$ are the specific Planck- and Rosseland-mean opacities in the rest frame of the gas, $B = c a_R T_g^4/4\pi$ is the Planck function, and $\lambda$ is the flux limiter, given by $$\begin{aligned}
\lambda & = & \frac{1}{R} \left(\mbox{coth} R - \frac{1}{R}\right) \\
R & = & \frac{|\nabla E|}{\kappa_{\rm 0R} \rho E} \\
R_2 & = & \lambda + \lambda^2 R^2.\end{aligned}$$ We compute the opacities as a function of the gas density and temperature using the iron normal, composite aggregates dust model of @semenov03a.
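The limiter defined above is the standard Levermore-Pomraning form. A minimal sketch of its behavior (our illustration, not ORION code; the series branch avoids catastrophic cancellation at small $R$) shows how it interpolates between the diffusion and free-streaming limits:

```python
from math import tanh

def flux_limiter(R):
    """Levermore-Pomraning flux limiter: lambda = (coth R - 1/R) / R."""
    if R < 1e-4:
        return 1.0 / 3.0 - R * R / 45.0   # Taylor series about R = 0
    return (1.0 / tanh(R) - 1.0 / R) / R

# Optically thick limit (R -> 0): lambda -> 1/3, recovering classical diffusion,
# F = -(c / 3 kappa rho) grad E.
print(flux_limiter(1e-8))          # ~0.3333
# Free-streaming limit (R -> inf): lambda -> 1/R, so the flux magnitude -> c E.
print(flux_limiter(1e4) * 1e4)     # ~1.0
```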
Finally, $\Lambda(T_g)$ represents the line cooling coefficient. We include this because the turbulence can be strong enough in our cluster so that, at isolated points, gas shock-heats to temperatures above a few thousand K. This exceeds the dust sublimation temperature, so the dust opacity becomes nearly zero in this gas. Instead, the gas in this temperature regime is cooled by line emission, which we cannot easily describe with a simple continuum opacity. In this gas, we transfer energy from the gas thermal reservoir to the radiation field at a rate $(\rho/m_p)^2 \Lambda(T_g)$, where $m_p$ is the proton mass, and the function $\Lambda(T_g)$ is taken from @cunningham06a. See @cunningham11a for more details of our line cooling approach.
An important subtlety in our evolution equations, which is worth noting, is that we [*do not*]{} differentiate between gas and dust grain temperatures. At low densities, gas-grain coupling can be imperfect, and it can be important to calculate the two temperatures separately, and to simulate the thermal exchange between dust and gas [e.g. @urban09a]. However, grains and gas become very tightly coupled in temperature once the density exceeds $n\sim 10^4 - 10^5$ cm$^{-3}$. For comparison, the mean density in our initial clouds is $n = \overline{\rho}/(\mu m_{\rm H}) = 2.4\times 10^5$ cm$^{-3}$. Thus our entire computation is in the strong coupling regime, and there is no need to treat dust and gas temperatures separately.
For simulation ISO, our isothermal run, we modify these equations as follows. First, we omit equation (\[econsrad\]) entirely. Second, we set to zero all terms proportional to $E$ or $\Lambda(T_g)$ in equations (\[momcons\]) and (\[econsgas\]). Third, instead of $\gamma=5/3$, we use $\gamma = 1.0001$. This corresponds to neglecting the effects of radiative transfer, and simply keeping the gas temperature almost completely fixed to its initial value.
Numerical Method {#sec:numerics}
----------------
The ORION code solves equations (\[masscons\]) – (\[poisson\]) in a series of operator-split steps. In each time step, we first integrate the hydrodynamic equations (\[masscons\]) – (\[econsgas\]), excluding the terms describing stars and the radiation field. This update uses a conservative Godunov scheme with an approximate Riemann solver, and is second-order accurate in time and space [@truelove98a; @klein99a]. Next we solve the Poisson equation (\[poisson\]) using a multigrid iteration scheme [@truelove98a; @klein99a; @fisher02a]. Third, in the runs where we include radiation, we update the radiation energy equation (\[econsrad\]) and the radiation terms in the hydrodynamic equations (\[masscons\]) – (\[poisson\]). This update uses the @krumholz07b conservative update scheme, in which we handle the dominant terms implicitly and the non-dominant terms explicitly. The update for the implicit terms uses the @shestakov08a pseudo-transient continuation scheme. Finally, we update the stellar quantities, equations (\[starmass\]) – (\[starmom\]), and update gas quantities for the gas-star exchange terms on the right hand sides of equations (\[masscons\]) – (\[poisson\]). We determine the accretion rates of mass, momentum, and energy onto each star by fitting the flow within a radius of four finest-level cells of each star particle to a Bondi-Hoyle flow, following the procedure described by @krumholz04a. We update the luminosity $L_i$ of each star using the protostellar evolution model described in the appendices of @offner09a.
Each of these update modules operates within the overall adaptive mesh framework of ORION [@berger84a; @berger89a; @bell94a]. In this scheme, we discretize the computational domain onto a series of levels $l=0,1,2,\ldots, L$. The coarsest level, level 0, has cells of linear size $\Delta x_0$, and covers the entire computational domain. All subsequent levels, with cells of size $\Delta x_l = \Delta x_0/2^l$, cover subregions of the computational domain. Each level consists of a union of rectangular grids of cells, and grids on different levels are nested such that every level $l$ grid with $l>0$ is fully contained within one or more level $l-1$ grids. To advance a level $l$ in time, we first advance all the grids on that level through a time step $\Delta t_l$, then advance grids on level $l+1$ by two timesteps of size $\Delta t_{l+1} = \Delta t_l / 2$. After the two level $l+1$ advances, we synchronize the boundaries between levels $l$ and $l+1$ to ensure exact conservation of mass, momentum, and energy across level boundaries. The entire update procedure is recursive, so a single advance on level $l+1$ entails two advances of size $\Delta t_{l+2} = \Delta t_{l+1}/2$ on level $l+2$, and so forth to the finest level present. The coarse level timestep $\Delta t_0$ is set by computing the Courant condition on each level (including a contribution to the signal speed from radiation pressure – @krumholz07b) and setting $\Delta t_0 = \min(2^l \Delta t_l)$.
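The recursive subcycling described above can be sketched in a few lines. This is a toy illustration of the control flow only, not ORION code:

```python
def advance(level, dt, max_level, log):
    """Advance `level` by dt, then recursively take two half-steps on level+1.
    In the real scheme, a flux synchronization follows the two fine-level steps."""
    log.append((level, dt))
    if level < max_level:
        advance(level + 1, dt / 2.0, max_level, log)
        advance(level + 1, dt / 2.0, max_level, log)

log = []
advance(0, 1.0, 2, log)
# Level 0 steps once, level 1 twice, level 2 four times, and every level
# advances through the same total time.
for l in range(3):
    steps = [dt for (lev, dt) in log if lev == l]
    print(l, len(steps), sum(steps))
```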
Boundary, Refinement, and Star Particle Conditions {#sec:conditions}
--------------------------------------------------
At the edge of the computational domain, we use reflecting boundary conditions for the hydrodynamics. However, this choice is irrelevant to the evolution, because our computational domain is large enough to ensure that no material from the cloud ever approaches it. For the gravity, we adopt Dirichlet boundary conditions, with the potential at the computational domain boundary set equal to a multipole expansion of the potential due to the matter in the domain interior, including terms up to the quadrupole. Finally, for radiation we adopt Marshak boundary conditions, with the flux into the computational domain set equal to the flux of an isotropic 10 K blackbody: $F_{\rm in} = c a T_r^4/4 = 0.57$ erg cm$^{-2}$ s$^{-1}$. The boundary condition is equivalent to allowing any radiation generated within the computational domain to escape freely, but also to bathing the computational domain in a 10 K blackbody radiation field.
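The quoted inflow flux is simply the one-sided flux of a 10 K blackbody radiation field (our check, in cgs units):

```python
A_RAD = 7.5657e-15   # radiation constant a, erg cm^-3 K^-4
C = 2.998e10         # speed of light, cm s^-1

T_r = 10.0
E = A_RAD * T_r ** 4       # blackbody radiation energy density
F_in = C * E / 4.0         # one-sided (Marshak) flux into the domain

print(f"E    = {E:.2e} erg cm^-3")           # ~7.56e-11
print(f"F_in = {F_in:.2f} erg cm^-2 s^-1")   # ~0.57
```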
In order to determine when AMR levels are added or removed, we must also specify refinement conditions. The conditions we use in our simulations are as follows. First, we refine any cell with a density greater than half the edge density of the initial cloud to at least level 1. This ensures that our initial cloud is well resolved. Second, we refine any cell on level $l$ that is within a distance $16\Delta x_l$ or less from any star particle. This ensures that the region around each star is resolved by at least 32 cells on all levels up to the finest one. Third, we refine any cell where the density exceeds the local Jeans density [@truelove97a], $$\label{eq:jeans}
\rho > \rho_J = J^2 \frac{\pi c_s^2}{G \Delta x_l^2},$$ where $c_s = \sqrt{k_B T_g/(\mu m_{\rm H})}$ is the sound speed. We use a Jeans number $J = 1/8$. Finally, we refine any cell where the local radiation energy gradient satisfies the condition $|\nabla E|/E > 0.15/\Delta x_l$. This ensures that gradients of radiation energy density are always well-resolved. If any of these conditions are met, we refine that point in the computational domain to a higher AMR level, up to the maximum level $L$ for that simulation (see Table \[runs\]).
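For a sense of scale, the Jeans refinement threshold of equation (\[eq:jeans\]) can be evaluated directly. The sketch below is ours (cgs units, with $c_s^2 = k_B T_g/\mu m_{\rm H}$ and $\mu = 2.33$); the 100 AU cell size is an illustrative choice, not a value from the paper:

```python
from math import pi

G, K_B, M_H = 6.674e-8, 1.3807e-16, 1.6726e-24
AU = 1.496e13

def jeans_density(T_g, dx, J=0.125, mu=2.33):
    """Refinement threshold rho_J = J^2 * pi * c_s^2 / (G * dx^2)."""
    cs2 = K_B * T_g / (mu * M_H)
    return J ** 2 * pi * cs2 / (G * dx ** 2)

# 10 K gas on a 100 AU cell: cells denser than ~10^-16 g cm^-3 get refined.
print(f"{jeans_density(10.0, 100.0 * AU):.1e} g cm^-3")
```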
Finally, we create a new star in any cell on the maximum level $L$ that violates the Jeans condition, equation (\[eq:jeans\]), using a Jeans number $J=1/4$. In contrast to previous runs, where we merged star particles together if they approached closer than a certain limit, here we do not allow any star particle that has a mass greater than $0.05$ ${M_{\odot}}$ to be destroyed by merging. Our motivation for choosing this mass limit is that it is roughly the mass at which second collapse to stellar densities occurs [e.g. @masunaga98a; @masunaga00a]. Objects of lower mass remain extended gas balls with physical sizes of a few AU, and thus are much more likely to merge than the much smaller, more compact protostars they become once they complete their collapse. Complete suppression of mergers for more massive objects is probably an extreme assumption, but as we will see in the discussion of our results, allowing mergers would only strengthen our conclusions, by moving the stellar mass distribution to higher values.
Simulation Results {#sec:results}
==================
For convenience, throughout this section we will report our results in terms of mean-density free-fall times, where the mean density is $\overline{\rho} = 3 M_c/(4 \pi R_c^3)= 9.4\times 10^{-19}$ g cm$^{-3}$ and the corresponding free-fall time is $t_{\rm ff} = \sqrt{3\pi/32 G \overline{\rho}} = 68.6$ kyr. The free-fall time in the high-density initial core is $\sim 30\%$ shorter, $t_{\rm ff,c} = 52.3$ kyr. In reporting stellar quantities, we only count as stars those star particles with masses above $0.05$ ${M_{\odot}}$, the mass at which second collapse to stellar dimensions occurs. However, this has little effect on our results, since objects below this mass never constitute more than a tiny fraction of the total mass in star particles.
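As a quick cross-check of these timescales (a standalone sketch in cgs units, not part of the simulation code):

```python
# Check the quoted mean-density free-fall time: rho_bar = 3 M_c / (4 pi R_c^3)
# and t_ff = sqrt(3 pi / (32 G rho_bar)). The mean density is taken from the
# text; the kyr conversion uses a Julian year.
import math

G = 6.674e-8          # [cm^3 g^-1 s^-2]
KYR = 1e3 * 3.156e7   # [s]

def free_fall_time(rho):
    """Free-fall time of a uniform sphere of density rho."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho))

print(free_fall_time(9.4e-19) / KYR)  # ~68.6 kyr, as quoted
```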
We ran these simulations on a combination of the supercomputers Pleiades at the NASA Advanced Supercomputing facility and Ranger at the Texas Advanced Computing Center. Runs LR, HR, and ISO required roughly 200,000, 850,000, and 60,000 CPU hours, respectively, and ran on between 256 and 960 CPUs (32 to 120 nodes), with the number of CPUs used increasing as a run progressed and the number of high resolution grids increased.
Large-Scale Evolution
---------------------
Figures \[evol\_lr\], \[evol\_hr\], and \[evol\_iso\] show the large-scale evolution of the cloud as it collapses in our simulations. As the plots make clear, the overall distribution of the gas and stellar mass, and the gas temperature structure, is very similar in all runs. As we will see in more quantitative detail later, the evolution of the two radiative runs is very similar in almost every respect, so that we may have confidence that the behavior we are seeing is physical and not a result of resolution effects. Even at late times, the only noticeable difference is the exact positions of individual stars on the periphery of the cloud. These differ primarily because the N-body interactions that occur late in the simulation are chaotic, and can therefore be altered significantly by the amount of gravitational softening in the gas-particle and particle-particle interactions, which is resolution-dependent.
In both radiative runs, we see that, for the first $(0.3-0.4)t_{\rm ff}$, the initial velocity perturbations we have injected are developing and creating structure, but that no stars have yet formed. The gas temperature remains locked at 10 K, the value imposed by the radiation field. Around $t/t_{\rm ff} = 0.45$, the first stars start to appear at the densest peaks created by the turbulent compression. The mass in stars is still tiny, well under 1% of the gas mass, and the stars themselves are all quite small. Nonetheless, the effects on the temperature are immediately apparent. Each star is surrounded by a clear region of gas at elevated temperatures. These regions are localized, so that the bulk of the gas remains cold, and the heated regions around different stars are, for the most part, non-overlapping.
It is not surprising that the formation of stars has such a strong effect. As pointed out by @offner09a, the energy budget of a star-forming cloud is dominated by the gravitational potential energy released by star formation, even when those stars constitute a tiny fraction of the total mass. This continues to be true up until the point when massive stars with short Kelvin times begin to dominate the bolometric output of the stellar population. In our simulations, even though we do produce $\sim 20$ ${M_{\odot}}$ stars with significant internal luminosities toward the end of the simulations, accretion luminosity is the dominant energy source over most of the simulation time.
This morphology of small regions of warm gas strung out along filaments continues to hold to some extent even at time $t/t_{\rm ff} = 0.6$, when the stellar mass has increased to a few percent of the gas mass. We can still identify distinct heated regions associated with individual stars or small stellar groups, and the bulk of the mass remains near 10 K. In the last two time slices, however, as a larger and larger fraction of the cloud mass is converted into stars, this ceases to be true. Even the coldest gas anywhere in the cloud is now at temperatures noticeably larger than the original background temperature, and the regions of very warm gas, $T \ga 100$ K, are beginning to overlap and merge. In the last time slice, the coldest gas anywhere in the computational domain is at $\sim 30$ K, and much of the mass is concentrated in a few compact regions where the temperature is significantly higher. Rather than a few warm, dense regions around individual stars [cf. @offner09a] the bulk of the gas is now concentrated into a smaller number of more massive regions that are heated by the collective effects of large numbers of stars.
Star Formation History and IMF
------------------------------
Figure \[starhist1\] shows the total mass of all stars as a function of time in the runs. Examining the figure shows that the total mass in stars is nearly identical in the two radiative runs, indicating that this aspect of the simulations is very well converged. Run ISO begins to form stars somewhat earlier, and the mass in stars present at equal times is somewhat higher. However, this difference mostly appears to be a time offset. The overall shape in Figure \[starhist1\] is the same, indicating a generally similar star formation history. The time offset is likely a result of the faster collapse that occurs in the isothermal run, where cooling is assumed to be infinitely rapid and efficient, compared to the radiative runs.
Figure \[starhist2\] shows the number of stars as a function of the total stellar mass in each simulation. The total number of stars is somewhat larger in run HR than in run LR, which is not surprising given the increased resolution. Observations indicate that the binary period distribution is extremely broad, covering separations from only a few stellar radii to $\ga 10^4$ AU [@duquennoy91a]. It is therefore not surprising that some binaries that might be resolved into two separate stars in run HR instead appear as a single star in run LR – indeed, we would expect this result in essentially any simulation that did not resolve the radii of individual stars. Nonetheless, notice that, if we normalize to the number of stars present at equal times and fractions of mass accreted, then the difference between the two runs disappears. The number of stars present at any given time in run HR is roughly $1.6$ times the number present at the same time in run LR. Thus the trend in terms of when the stars are formed in the simulations is nearly identical in the two cases, and the distribution of stellar formation times can be regarded as well-resolved.
The trend of number of stars versus mass shown in Figure \[starhist2\] is interesting. In the radiative runs, when $M_{*,\rm tot}/M_c \la 0.1$, the number of stars increases roughly linearly with the total stellar mass, as we might expect if the mass per star were constant. However, the rate at which new stars appear drops sharply once $M_{*,\rm tot}/M_c \ga 0.2$. Indeed, we see that $60-70\%$ of all stars have formed at a time when only $\sim 10\%$ of the cloud mass has been incorporated into stars. By the time 20% of the cloud mass has gone into stars, nearly 90% of all the stars are in place. In effect, the fragmentation of the gas into new stars has completely shut down. Given that this effect occurs nearly identically in runs LR and HR, this cannot be a resolution effect. In contrast, run ISO shows very different behavior. The number of stars as a function of total stellar mass is almost the same as in run HR up to the point where $\sim 15\%$ of the mass has been incorporated into stars, but the two runs diverge after that. New stars continue forming all the way through run ISO, at a rate that is only slightly less after $M_{*,\rm tot}/M_c \ga 0.2$ than it was earlier in the simulation. This strongly suggests that the shutdown in new star formation we observe in runs LR and HR is a radiative effect, a topic to which we will return in Section \[sec:thermo\].
As one might expect, this shutoff of fragmentation into new stars in runs LR and HR even as the total stellar mass continues to increase produces a dramatic effect on the stellar mass distribution. Figures \[imfplot1\] and \[imfplot2\] show the cumulative and differential mass distributions of the stars formed in our simulations at the times when the total mass in stars is $10-50\%$ of the initial cluster mass. All these plots show that the stellar mass distribution in the radiative runs moves continuously to higher masses as the simulation proceeds. This is because mass is accreting onto existing stars, which rise in mass, but very few new, lower-mass stars are forming. Note that, while the mean stellar masses are slightly different in runs LR and HR, the systematic drift of these means to higher masses as the total stellar mass rises appears to occur about equally in both runs. In run ISO, on the other hand, there is much less evolution in the shape of the IMF. The fraction of mass in very small objects does decrease slightly with time, but the IMF in run ISO peaks at $\sim 1$ ${M_{\odot}}$ in every time slice. Quantitatively, we find that, from the point where $M_{\rm *,tot}/M_c \approx 0.15$ and the star formation histories in runs ISO and HR begin to diverge, up to the point when $M_{\rm *,tot}/M_c \approx 0.5$ and run HR ends, the mass-weighted median stellar mass[^2] in run ISO increases by only a quarter of a dex, while in run HR it increases by half a dex. Thus the behavior of run ISO is similar to that in previous simulations done with prescribed equations of state[^3] (e.g. see Figure 1 of @bonnell04a, which shows a similar increase in median mass from $0.7-1.0$ free-fall times in their simulation.)[^4]
For comparison, we have generated 10,000 clusters each of mass 100, 200, 300, 400, and 500 ${M_{\odot}}$, randomly drawn from a @chabrier05a IMF,[^5] with a minimum mass of $0.05$ ${M_{\odot}}$ and a maximum of $150$ ${M_{\odot}}$. We properly account for finite sampling using the procedure described in Appendix \[app:imfsample\]. As the plots show, the mass distribution of stars formed in the radiative simulations drifts to systematically higher masses than the observed IMF once $\sim 30-50\%$ of the mass has been turned into stars. The disagreement is highly significant, and occurs at stellar masses that are extremely well-resolved in the simulations. For example, consider Figure \[imfplot2\] at the time when $M_{*,\rm tot}/M_c = 0.5$. For run HR at that time, the mass in almost every bin from $1-10$ ${M_{\odot}}$ is above the 90th percentile of random drawings from a Chabrier IMF, while the mass in almost every bin below $1$ ${M_{\odot}}$ is below the 10th percentile of random drawings from a Chabrier IMF. Indeed, a Kolmogorov-Smirnov comparison between the mass functions produced in the simulations and the Chabrier IMF shows that, with the exception of the HR run at the point when $M_{*,\rm tot}/M_c = 0.3$, all the mass functions shown in Figures \[imfplot1\] and \[imfplot2\] are inconsistent with having been drawn from the Chabrier IMF at confidence levels better than 1 part in $10^6$.
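For readers who wish to experiment with finite-sampling effects, the following is a minimal illustrative sketch of drawing a cluster from a Chabrier-like IMF. It is *not* the procedure of Appendix \[app:imfsample\]; the lognormal parameters ($m_c = 0.2$ ${M_{\odot}}$, $\sigma = 0.55$ dex) and the high-mass slope are assumed values chosen for illustration only.

```python
# Illustrative sketch only: draw stars from an approximate Chabrier-like IMF
# (lognormal below 1 Msun, power law above, continuous at 1 Msun) by
# rejection sampling in log m, until the cluster reaches a target mass.
import math, random

LOGM_MIN, LOGM_MAX = math.log10(0.05), math.log10(150.0)

def dn_dlogm(logm, mc=0.2, sigma=0.55, slope=-1.35):
    """Unnormalized dN/dlog(m); parameters are assumed, not from the text."""
    if logm < 0.0:
        return math.exp(-(logm - math.log10(mc))**2 / (2.0 * sigma**2))
    amp = math.exp(-(math.log10(mc))**2 / (2.0 * sigma**2))  # continuity at 1 Msun
    return amp * 10.0**(slope * logm)

def draw_star(rng):
    while True:  # rejection sampling; dn_dlogm <= 1 everywhere by construction
        logm = rng.uniform(LOGM_MIN, LOGM_MAX)
        if rng.random() < dn_dlogm(logm):
            return 10.0**logm

def draw_cluster(m_target, rng):
    """Add stars until the total mass reaches m_target (in Msun)."""
    stars = []
    while sum(stars) < m_target:
        stars.append(draw_star(rng))
    return stars

rng = random.Random(42)
cluster = draw_cluster(100.0, rng)
```

Repeating `draw_cluster` many times gives the sampling scatter against which the simulated mass functions can be compared.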
In contrast, run ISO is consistent with the IMF at the low mass end at essentially all times. It is deficient in massive stars compared to a Chabrier IMF, an effect that has been observed before in simulations without radiative transfer [@maschberger10a] and taken as evidence for the so-called “IGIMF (integrated galactic initial mass function) effect". We obtain the same result here, but find that it disappears in simulations that include radiation.
One might be tempted to fix this problem simply by scaling all the stellar masses by some factor less than unity, to account for mass ejected by protostellar outflows, which we have not included. However, because the peak of the IMF is evolving with time in our simulations, a scaling factor that produces agreement between the simulated IMF and the observed one at one time would not produce agreement at earlier or later times. The central problem is not so much that the IMF in the simulation is too top-heavy, but that the median mass increases continuously with time. However, it does seem likely that protostellar outflows can help solve the problem by reducing the star formation rate and thus the luminosity, as we discuss further in Section \[sec:solutions\]; such an effect cannot be captured by a simple rescaling of the masses.
Gas Thermodynamics and Fragmentation {#sec:thermo}
------------------------------------
The reason for the shutdown in fragmentation and the drift to systematically higher stellar masses with time in the radiative runs becomes clear if we consider how the gas density and temperature evolve with time. Figures \[phaseplot\_lr\] and \[phaseplot\_hr\] show the distribution of gas mass in the density - temperature plane as star formation proceeds in runs LR and HR. For comparison, we also overlay lines of constant Bonnor-Ebert mass, where $$\label{eq:mbe}
M_{\rm BE} = 1.18 \frac{c_s^3}{\sqrt{G^3 \rho}},$$ and $c_s = \sqrt{k_B T/\mu}$ is the isothermal sound speed. The Bonnor-Ebert mass is significant because objects with masses below $M_{\rm BE}$ can be supported against collapse by thermal pressure. We therefore expect that the lowest-mass stars formed will tend to have masses comparable to the smallest values of $M_{\rm BE}$ found in the gas. Even if turbulence does create fragments with masses below $M_{\rm BE}$, these will be stable against collapse as a result of their thermal pressure. Figure \[phaseplot\_1d\] summarizes this result by showing how the gas mass is distributed with respect to $M_{\rm BE}$ at different times in the simulation. As the plot shows, the runs are not completely converged at the low $M_{\rm BE}$ end, but the general trend that the mean Bonnor-Ebert mass systematically increases is clear in both runs.
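Equation (\[eq:mbe\]) is easy to evaluate numerically. The sketch below (assuming a mean particle mass $\mu = 2.33\,m_{\rm H}$, a value not quoted in the text) evaluates $M_{\rm BE}$ at the initial cloud conditions and illustrates the $T^{3/2}$ scaling that drives the evolution discussed here:

```python
# Sketch: Bonnor-Ebert mass M_BE = 1.18 c_s^3 / sqrt(G^3 rho), evaluated at
# T = 10 K and the mean cloud density. mu = 2.33 m_H is an assumed value.
import math

G, K_B, M_H = 6.674e-8, 1.381e-16, 1.674e-24  # cgs
MSUN = 1.989e33                               # [g]

def bonnor_ebert_mass(T, rho, mu=2.33 * M_H):
    cs = math.sqrt(K_B * T / mu)  # isothermal sound speed [cm/s]
    return 1.18 * cs**3 / math.sqrt(G**3 * rho)

print(bonnor_ebert_mass(10.0, 9.4e-19) / MSUN)  # a few tenths of a Msun
```

Since $M_{\rm BE}\propto T^{3/2}\rho^{-1/2}$, heating gas from 10 K to 40 K at fixed density raises $M_{\rm BE}$ by a factor of 8, which is why the stellar heating described in this section so effectively suppresses the collapse of small fragments.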
Figures \[phaseplot\_lr\] – \[phaseplot\_1d\] show that, immediately before any stars have formed, the great majority of the mass has a temperature within a few K of 10 K, the initial temperature and the temperature imposed by the background radiation field. Consequently, the densest material in the cloud is in a density and temperature regime where $M_{\rm BE} \sim 0.01$ ${M_{\odot}}$, and objects of this mass are able to collapse. Nearly half the mass in the cloud lies in the region between $M_{\rm BE} = 0.01$ ${M_{\odot}}$ and $0.1$ ${M_{\odot}}$, so there is plenty of material available to make low mass stars. Once stars begin to form, however, their radiation raises the temperature significantly, pushing the gas to higher $M_{\rm BE}$. This increase is partly offset by an increase in the mean density, but the density does not increase quickly enough to compensate for the rapidly rising temperature – likely because the density rise occurs on a timescale associated with the mean density free-fall time, while the temperature rise is driven by stellar accretion occurring at the peaks of the density distribution, which operate on a much shorter timescale. As a result of this evolution, there is not much material that is dense and cold enough to make small stars. For example, if we look at run HR, we find that $20\%$ of the gas mass has $M_{\rm BE} < 0.05$ ${M_{\odot}}$ just before the first star forms, and thus is able to make the smallest stars we consider. In contrast, the mass fraction able to create such small stars drops to 10% at the point when $M_{*,\rm tot}/M_c = 0.2$, and to only 2% when $M_{*,\rm tot}/M_c = 0.5$. Thus we see that the formation of new stars has stopped because it is no longer possible for small fragments to gravitationally collapse. By the end of the run, the smallest gravitationally-unstable fragments are approaching 1 ${M_{\odot}}$ in mass.
The underlying physical reason for this effect, of course, is the radiation released by the already-formed stars. This, in turn, is primarily driven by accretion luminosity, with a subdominant contribution from nuclear burning and Kelvin-Helmholtz contraction later in the simulation.
Discussion {#sec:discussion}
==========
The Overheating Problem
-----------------------
A systematic increase in the mean stellar mass induced by heating of the gas due to accretion luminosity is a new phenomenon in simulations of star cluster formation. Radiative suppression of fragmentation has been reported in the literature before, but no previous simulation has observed it to shift the typical stellar mass scale in regions as large as entire star clusters. We emphasize that, even if we regard the absolute stellar mass peak we obtained as uncertain due to resolution effects, the trend of increasing mean mass is robust, and appears equally strong in both simulations. It has not been seen in previous work due to the limitations we outlined in Section \[sec:intro\]. Most simulations of large-scale cluster formation with initial conditions similar to ours, which are typical of Galactic star formation, have not included radiative transfer. They have adopted a simple equation of state, which puts in by hand the result that the peak stellar mass is invariant [e.g. @bonnell04a]. We effectively recover the same results in our run ISO: the median stellar mass does increase slightly with time, but the increase is significantly smaller than in the radiative runs, and is consistent with what has been found in earlier non-radiative simulations.
Those simulations that have included radiation have either focused on regions too small or with too few stars [e.g. @bate09a; @offner09a; @offner10a] to produce the overlap of the heating regions around many stars we observe, or have focused on single massive cores, where suppression of fragmentation is expected [e.g. @krumholz07a; @krumholz10a; @myers11a]. Indeed, in these contexts, radiative suppression of fragmentation is necessary to obtain agreement between simulations and observations. For single massive cores, suppressed fragmentation has tentatively been seen in high resolution interferometric observations [@bontemps10a; @longmore11a]. In the absence of radiation, the disks formed in simulations of low mass star formation tend to undergo excessive fragmentation, leading to an overproduction of brown dwarfs relative to stars and to various other conflicts with observation [@luhman07a]. Including radiation fixes this problem [@bate09a; @offner09a; @offner10a]. Indeed, @bate09a argues that the observed peak of the IMF can be explained as arising from the mass scale at which radiative feedback halts fragmentation. While this argument is plausible, it relies on the assumption that we can consider the bubble of radiatively-warmed gas around each star to be isolated from other stars, amidst a background of cool gas. This assumption holds in the low-mass, low-density regions simulated by @bate09a and @offner09a [@offner10a], where regions of heating are $\sim 0.05$ pc in size, much smaller than the interstellar separation. It clearly does not hold in our simulation, both because our stars are closer together than in a low mass star-forming region, and because our heating regions are larger due to the higher accretion rates produced by the higher gas densities. This suggests that the critical problem in our simulation is that the regions of warm gas around individual stars begin to overlap. 
As a result, all the gas in the cluster is heated, rather than simply discrete regions.
One might hope to avoid this problem by halting star formation early on, before enough mass goes into stars to allow the heated regions to overlap. However, such a solution seems to require improbable fine tuning. Examining Figures \[evol\_lr\] and \[evol\_hr\], we see that the overlap of hot regions is well underway by the time 30-40% of the mass has been incorporated into stars. Figures \[imfplot1\] and \[imfplot2\] show that the shift of the simulation IMF to higher masses than the @chabrier05a IMF is also largely complete by this point. Since this is about the minimum star formation efficiency required to have any possibility of making bound clusters [@kroupa01b; @fall10a], the fact that at least some star formation does result in bound clusters suggests that the star formation efficiency cannot be vastly lower than this value most of the time.
Understanding the Problem
-------------------------
We can estimate the dividing line between the two cases of heating in discrete regions around single stars and heating in the bulk of the protocluster gas using the analytic radiative transfer approximation of @chakrabarti05a [@chakrabarti08a], coupled to the formalism developed by @krumholz08a. @chakrabarti05a consider a spherical cloud of dusty gas with radius $R$, mass $M$, and a density profile $\rho\propto r^{-k_\rho}$, surrounding a point source of radiation of luminosity $L$, with dust whose specific opacity depends on wavelength as $\kappa=\kappa_0 (\lambda_0/\lambda)^{\beta}$. In such a cloud, they show that the temperature profile approximately follows $$\label{eq:tprofile}
T = T_{\rm ch} \left(\frac{r}{R_{\rm ch}}\right)^{-k_T},$$ where $r$ is the distance from the cloud center, $R_{\rm ch}$ and $T_{\rm ch}$ are the characteristic radius and temperature of the dust photosphere formed within the cloud, and $k_T$ is a powerlaw index to be approximated by a numerical fit. For convenience we define $\Sigma = M/\pi R^2$, $\eta = L/M$ (measured in cgs units, not Solar units), $\alpha = 1/[2\beta+4(k_\rho-1)]$, $\tilde{R} = R/R_{\rm ch}$, and $T_0 = hc/k_B \lambda_0$. For Milky Way dust, $\beta\approx 2$ and $\kappa_0 \approx 0.54$ cm$^2$ g$^{-1}$ at $\lambda_0 = 100$ $\mu$m [@weingartner01a], but the results depend on these parameters very weakly. With these definitions, the characteristic radius and temperature and the powerlaw index are given by $$\begin{aligned}
\frac{R_{\rm ch}}{R} & = & \left\{ \left(\frac{\eta}{4\sigma_{\rm SB} \tilde{L}}\right)^{\beta} \Sigma^{4+\beta} \left[\frac{(3-k_\rho)\kappa_0}{4(k_\rho-1)T_0^\beta}\right]^4\right\}^\alpha \\
T_{\rm ch} & = & \left\{ \left(\frac{\eta}{4\sigma_{\rm SB} \tilde{L}}\right)^{k_\rho-1} \Sigma^{k_\rho-3}
\left[\frac{4(k_\rho-1) T_0^\beta}{(3-k_\rho)\kappa_0}\right]^2\right\}^\alpha \\
\tilde{L} & \approx & 1.6 \tilde{R}^{0.1} \\
k_T & \approx & \frac{0.48 k_\rho^{0.05}}{\tilde{R}^{0.02k_\rho^{1.09}}} + \frac{0.1 k_{\rho}^{5.5}}{\tilde{R}^{0.7k_\rho^{1.9}}}.
\label{eq:kt}\end{aligned}$$ The latter two expressions are approximations based on fits to numerical solutions of the radiative transfer equation, and reproduce the numerical results with high accuracy.
A rough condition for the heating regions around individual protostars to merge and heat the bulk of the gas is that the combined luminosity $L$ of all the protostars, which we approximate as being near the cloud center, be high enough so that the temperature $T$ at the edge of the cloud, $r = R$, be higher than the background temperature $T_b\approx 10$ K to which gas settles when it is not heated by a nearby star. Thus, to avoid overheating we require that the luminosity to mass ratio $\eta$ be smaller than the value $\eta_{\rm crit}$ for which $T(R) = T_b$. For a given cloud mass $M$ and surface density $\Sigma$, it is straightforward to use equations (\[eq:tprofile\]) – (\[eq:kt\]) to numerically determine the value $\eta_{\rm crit}$ for which the condition $T(R) = T_b$ is satisfied. In what follows we do so for a background temperature $T_b = 10$ K and density profile $k_\rho = 3/2$, roughly what is seen in massive star-forming clumps [e.g. @mueller02a], but the result we obtain is not very sensitive to this choice.
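The numerical determination of $\eta_{\rm crit}$ described above can be sketched as follows. The fixed-point iteration for $\tilde{R}$ and the bisection over $\eta$ are our own numerical choices, not necessarily those used to produce the figures, and the fiducial parameters ($\beta = 2$, $\kappa_0 = 0.54$ cm$^2$ g$^{-1}$ at 100 $\mu$m, $k_\rho = 3/2$, $T_b = 10$ K) are the ones quoted in the text:

```python
# Sketch of the eta_crit calculation: solve R~ = R/R_ch self-consistently
# (L~ depends on R~), evaluate T(R) = T_ch * R~^(-k_T) from the fitting
# formulas, then bisect on eta until T(R) = T_b.
import math

SIGMA_SB = 5.6704e-5                 # [erg cm^-2 s^-1 K^-4]
H, C, K_B = 6.626e-27, 2.9979e10, 1.381e-16

def edge_temperature(eta, Sigma, k_rho=1.5, beta=2.0, kappa0=0.54, lam0=1e-2):
    """Temperature at r = R for a centrally illuminated cloud."""
    T0 = H * C / (K_B * lam0)
    alpha = 1.0 / (2.0 * beta + 4.0 * (k_rho - 1.0))
    B = (3.0 - k_rho) * kappa0 / (4.0 * (k_rho - 1.0) * T0**beta)
    Rt = 1.0                          # R/R_ch, refined by fixed-point iteration
    for _ in range(100):
        Ltil = 1.6 * Rt**0.1
        A = eta / (4.0 * SIGMA_SB * Ltil)
        Rch_over_R = (A**beta * Sigma**(4.0 + beta) * B**4) ** alpha
        Rt = 1.0 / Rch_over_R
    Tch = (A**(k_rho - 1.0) * Sigma**(k_rho - 3.0) * B**-2.0) ** alpha
    kT = 0.48 * k_rho**0.05 / Rt**(0.02 * k_rho**1.09) \
         + 0.1 * k_rho**5.5 / Rt**(0.7 * k_rho**1.9)
    return Tch * Rt**(-kT)

def eta_crit(Sigma, T_b=10.0, lo=1e-3, hi=1e3):
    for _ in range(60):               # geometric bisection; T(R) rises with eta
        mid = math.sqrt(lo * hi)
        lo, hi = (lo, mid) if edge_temperature(mid, Sigma) > T_b else (mid, hi)
    return math.sqrt(lo * hi)

print(eta_crit(1.0))  # [erg s^-1 g^-1], of order the solar L/M ratio
```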
The luminosity is in turn related to the star formation rate in the simulations. @krumholz08a show that accretion onto low mass stars yields an energy release per unit mass accreted $\psi \approx 2.1\times 10^{14}$ erg g$^{-1}$. This number is a result of stellar structure considerations, which fix the characteristic radii of protostars. Thus the light to mass ratio $\eta_{\rm acc}$ in a cloud of mass $M$ powered by accretion luminosity from stars forming at a rate $\dot{M}_*$ is $$\eta_{\rm acc} = \psi \frac{\dot{M}_*}{M} = \epsilon_{\rm ff} \frac{\psi}{t_{\rm ff}},$$ where $t_{\rm ff}=\sqrt{\pi^2 R^3/8 G M}$ is the mean-density free-fall time of the cloud and $\epsilon_{\rm ff}$ is the dimensionless star formation rate introduced by @krumholz05c.
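The accretion-powered light-to-mass ratio is then a one-line evaluation; in the sketch below the example clump mass and radius are illustrative choices, not values from the text:

```python
# Sketch: eta_acc = eps_ff * psi / t_ff for a cloud of mass M and radius R,
# with psi = 2.1e14 erg/g from the text and the mean-density free-fall time
# t_ff = sqrt(3 pi / (32 G rho_bar)) rewritten in terms of M and R.
import math

G = 6.674e-8
PSI = 2.1e14                    # [erg g^-1] energy released per gram accreted
MSUN, PC = 1.989e33, 3.086e18

def eta_acc(eps_ff, M, R):
    t_ff = math.sqrt(math.pi**2 * R**3 / (8.0 * G * M))
    return eps_ff * PSI / t_ff  # [erg s^-1 g^-1]

# Illustrative clump (assumed values): M = 1000 Msun, R ~ 0.26 pc, i.e.
# Sigma ~ 1 g cm^-2.
print(eta_acc(0.01, 1000 * MSUN, 0.26 * PC))  # ~1 erg s^-1 g^-1
```

For this illustrative clump, $\epsilon_{\rm ff} = 0.01$ gives $\eta_{\rm acc}$ of order the critical value, while $\epsilon_{\rm ff} = 0.5$ exceeds it by a factor of 50, consistent with the comparison shown in Figure \[fig:fragsuppress\].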
In Figure \[fig:fragsuppress\], we plot $\eta_{\rm crit}$ and $\eta$ for clouds of varying mass $M$ and surface density $\Sigma$ as a function of $\epsilon_{\rm ff}$. Our values of $M$ and $\Sigma$ are chosen to span the range of typical cluster-forming gas clumps in the Milky Way. The value of $\eta_{\rm crit}$ of course depends on $\Sigma$ alone, while that of $\eta_{\rm acc}$ is proportional to $\epsilon_{\rm ff}$. The plot shows that, for any plausible cloud mass and surface density, if $\epsilon_{\rm ff}\ga 0.1$ then $\eta > \eta_{\rm crit}$. This plot explains why, in our simulation, the stellar heating regions overlap. In our simulation, $\sim 50\%$ of the gas is in stars when $t/t_{\rm ff} \approx 1$, so we have $\epsilon_{\rm ff} \approx 0.5$. This puts us in the regime in which heating zones overlap, and fragmentation is suppressed. In contrast, the simulations of @bate09a, @offner09a, and @peters10b have substantially lower surface densities, $\Sigma \la 0.1$ g cm$^{-2}$, putting them in the regime where heating zones do not overlap and fragmentation is unlikely to be suppressed even with quite high $\epsilon_{\rm ff}$, except within the disks around each star. The simulations of @offner09a, since they include driven turbulence, also have a lower value of $\epsilon_{\rm ff}$.
Note that this argument is consistent with the one made by @elmegreen08a for why the Jeans mass should vary little in regions where the dust and gas temperatures are well-coupled, like those we study. The crux of their argument is that increases in the gas density lower the Jeans mass, but also produce a higher star formation rate, which in turn raises the dust temperature and the Jeans mass. The two effects nearly offset one another. However, this offset only occurs if the star formation rate and the density are related by a volumetric @schmidt59a [@schmidt63a] law with $\epsilon_{\rm ff} \approx 0.01$ (see their equation 6). If $\epsilon_{\rm ff}$ rises with time, as it does in our simulation, then the Jeans mass will not remain independent of density.
Possible Solutions to the Problem {#sec:solutions}
---------------------------------
Understanding the problem also suggests an immediate solution. @krumholz07e compile observations from the literature and find that, even in dense, cluster-forming gas clumps, $\epsilon_{\rm ff} \sim 0.01$. @evans09a obtain similar values of $\epsilon_{\rm ff}$ in cluster-forming regions observed by the c2d Survey, although c2d targets clusters considerably more diffuse and lower in mass than the one we have simulated here. Figure \[fig:fragsuppress\] shows that such clouds are not in the regime where heating zones overlap and fragmentation is suppressed, unless their masses are quite low, $M \la 10^2$ ${M_{\odot}}$. This explains why fragmentation is not suppressed in real clouds.
However, this result has important implications for simulations of star cluster formation. It implies that, once radiation physics is included in the simulations, one cannot expect to obtain the correct IMF without also obtaining the correct star formation rate, or at least a star formation rate that is roughly correct. In simulations that do not include radiation feedback, no such care is needed. Fortunately, obtaining the correct star formation rate in simulations is not terribly difficult. Simulations where the turbulence is driven artificially can reproduce the observed value $\epsilon_{\rm ff} \approx 0.01$. Even better, simulations that include stellar wind feedback naturally produce realistic, low values of $\epsilon_{\rm ff}$ without any artificial driving [e.g. @li06b; @nakamura07a; @wang10a], and preliminary evidence indicates that this does reduce accretion luminosities to the point where fragmentation is suppressed far less than we have found (Hansen et al., 2011, in preparation). It is not clear if this effect is scalable to all cluster masses [@fall10a], but it does suggest a way toward simulations of cluster formation that simultaneously obtain the correct star formation rate and the correct IMF.
It is thus possible that the problem might be solved by the inclusion of other physics that our simulation omits, such as outflows and photoionization. These mechanisms might be able to generate regions of high enough density that their Bonnor-Ebert masses will be low even in the presence of overlapping regions of radiative heating.
Implications for Competitive Accretion versus Core Accretion
------------------------------------------------------------
It is also interesting to consider the implications of our result for the competitive accretion versus core accretion models for the formation of star clusters and origin of the IMF. Roughly speaking, the competitive accretion model is that collapses that produce star clusters are global in nature, so all stars accrete from the same mass reservoir, and the stellar mass distribution is determined by a competition between formation of new, small fragments (which pushes the mean mass to lower values) and growth of existing fragments by Bondi-Hoyle accretion (which pushes the mean mass to higher values) [e.g. @bonnell01a; @bate05a; @bonnell06c]. In contrast, in the core accretion model, collapses that produce individual star systems are local rather than global, so that different protostars are for the most part not accreting from the same mass reservoir. In this case, the mass distribution of the stars is set by the mass distribution of the regions of localized collapse, the “cores" [e.g. @padoan02a; @mckee03a; @padoan07a; @alves07a; @hennebelle08b; @hennebelle09a]. Intermediate models are also possible, in which low mass stars form via local collapse, but either massive stars or the cores from which they grow form via a global collapse [e.g. @peretto06a; @wang10a].
@krumholz05a point out, and @bonnell06c and @offner08b confirm, that which mode of star formation takes place depends on the level of turbulence and on $\epsilon_{\rm ff}$. If the turbulence is sub-virial, or becomes sub-virial through decay that is not offset by internal feedback or external driving, then $\epsilon_{\rm ff}$ becomes large and competitive accretion is the dominant star formation mode; core accretion prevails if the turbulence remains at virial levels and $\epsilon_{\rm ff}$ is small. In our simulation we do not include either artificial driving or any physical feedback mechanisms capable of driving the turbulence (e.g. protostellar outflows or H <span style="font-variant:small-caps;">ii</span> regions), so our simulation produces large $\epsilon_{\rm ff}$ and we obtain a competitive accretion-like mode of star formation. However, crucially, we have shown that such a mode of star formation cannot produce the correct IMF due to the radiative suppression problem we have identified. The constant production of new, low-mass stars on which competitive accretion relies to keep accretion onto existing stars from pushing the IMF to ever-increasing masses does not happen once radiative feedback is included, at least in the minimal case where hydrodynamics, gravity, and radiative feedback are the only physical ingredients. It is conceivable that some mechanism we have omitted might still enable the production of low mass stars even in clusters with high $\epsilon_{\rm ff}$ (e.g. fragmentation induced by expanding H <span style="font-variant:small-caps;">ii</span> shells), but in this case that mechanism would be responsible for controlling the peak of the IMF. Our results therefore suggest the minimal competitive accretion model is not compatible with the observed IMF.
One might try to alleviate this problem by choosing significantly less dense initial conditions while retaining the high $\epsilon_{\rm ff}$ required for competitive accretion, i.e. by selecting a lower $\Sigma$ in Figure \[fig:fragsuppress\]. However, this solution faces a severe problem: the initial conditions we have selected are typical of the observed gaseous properties of clouds where massive star formation occurs [e.g. @shirley03a; @faundez04a; @fontani05a]. Surface densities are even larger in globular clusters, yet these show the same IMF peak as the field [@de-marchi00a; @de-marchi10a]. If the minimal competitive accretion model can only reproduce the observed IMF from initial conditions far less dense than we have simulated, then its applicability is limited to low-density regions like Taurus, which generally do not contain any massive stars.
Implications for Fragmentation-Induced Starvation
-------------------------------------------------
It is also interesting to examine how individual stars, and particularly the most massive stars, grow in mass. We show this in Figures \[mdot\] and \[starvation\], which show the mass versus time and the mass accretion rate versus time for a sample of stars in each run. In run LR the most massive star we form is 20.0 ${M_{\odot}}$, in run HR the most massive star is 16.2 ${M_{\odot}}$, and in run ISO it is 10.3 ${M_{\odot}}$. In runs LR and HR, the most massive stars are continuing to grow rapidly at the end of the simulation, with accretion rates that are generally flat or increasing with time. The most massive stars are also growing in run ISO, but more slowly and with accretion rates that are either constant or declining with time.
These results have potential implications for the idea of fragmentation-induced starvation proposed by @peters10b. In their simulations (which have a resolution comparable to that of our run LR), the most massive stars stop growing after a certain point because the accretion flow that is feeding them fragments to produce small stars rather than being accreted by the massive star. Thus massive stars exhibit accretion rates that fall with time. We do see something roughly consistent with this behavior in run ISO, but not in our radiative runs. This is likely an effect of radiative suppression of fragmentation.
@peters10b also include radiative transfer in their simulations, but they do not find strong suppression of fragmentation. This is probably because their simulated cloud has a much lower column density ($\Sigma\approx 0.03$ g cm$^{-2}$) than either our simulated clouds or typical regions of massive star formation in the Galaxy ($\Sigma\sim 1$ g cm$^{-2}$). @krumholz10a show that the amount by which radiation suppresses fragmentation is highly sensitive to the column density, and predict essentially no suppression at the column density used by @peters10b. The physical reason for this is that a cloud with $\Sigma=0.03$ g cm$^{-2}$ is optically thin even in the near-infrared, so starlight that is absorbed by dust grains promptly escapes, and most gas is not heated by the radiation. It is therefore not surprising that @peters10b see fragmentation-induced starvation and we do not.
We emphasize, however, that the absence of fragmentation-induced starvation in our radiative runs does not mean that fragmentation-induced starvation does not occur under typical Galactic star-forming conditions. We have just argued that fragmentation is suppressed too strongly in our simulations because star formation is too rapid. Indeed, simulations indicate that outflows allow more fragmentation to occur, even in single massive cores, than occurs in comparable simulations without outflows [@cunningham11a]. However, our results suggest that, before fragmentation-induced starvation can be considered an important mechanism in regulating massive star formation, it will be necessary to simulate the formation of a star cluster using typical Galactic conditions, as we do, and to include mechanisms that produce realistically low star formation rates.
Summary {#sec:summary}
=======
We report simulations of the formation of a massive star cluster comparable in size to the Orion Nebula Cluster. Our simulations use adaptive mesh refinement to obtain high resolution, and include radiation-hydrodynamics coupled to a realistic treatment of stellar radiative feedback. These are the first simulations reported in the literature that include radiation feedback in the context of a typical region of Galactic star cluster formation, as opposed to focusing on single low-mass [@commercon10a] or high-mass [@krumholz07a; @krumholz10a; @myers11a] cores, or on low-mass or low-density regions like Taurus [@bate09a; @offner10a; @peters10b].
Our simulations return a surprising result. At early times in the simulations, accreting stars produce bubbles of warm, radiatively-heated gas around themselves, and within these bubbles fragmentation is suppressed by the increased Bonnor-Ebert mass. However, we find that, once $\sim 10-20\%$ of the gas in the protocluster has been converted to stars, these bubbles of warm gas begin to overlap and merge. Rather than a few warm islands surrounded by a sea of cold gas, we instead have a cloud where all the gas is warmed by the collective luminosity of all the accreting stars.
Once the simulation reaches this state, radiation feedback raises the temperature and the Bonnor-Ebert mass throughout the remaining gas enough to essentially halt the formation of any further stars. Mass continues to be converted from gas to stars, but almost entirely through accretion onto existing stars rather than formation of new ones. As a result, when radiation is included, the stellar mass distribution in a globally-collapsing star cluster such as the one we simulate is no longer nearly constant or very slightly increasing with time, as has been reported in earlier, non-radiative simulations, and as we find here in a control run that does not include radiation. Instead, the stellar mass distribution shifts strongly to systematically higher masses as star formation proceeds, eventually becoming too top-heavy compared to the observed IMF. While the absolute mass scale in our simulations remains uncertain due to our inability to resolve tight binaries, the result that the IMF is non-constant and increasing with time is robust against changes in resolution. This implies that, unless there is also some mechanism to ensure that star formation in every protocluster stops when the IMF peak is in the same place, the global collapse scenario we have simulated cannot produce the invariant IMF peak that we observe.
We argue that the underlying reason that this problem occurs is that, in the absence of either external turbulent driving or any sort of internal mechanical feedback to slow star formation down, stars in our simulation form too quickly. Since accretion luminosity produced as gas falls onto stars is what ultimately drives the temperature increase in our simulations that shuts off fragmentation and leads to a top-heavy IMF, the problem is likely to be alleviated in simulations that include enough physics to obtain a low star formation rate similar to that observed in real star clusters. We are in the process of conducting such simulations now, and will report on the results in future publications.
We thank R. Banerjee, C. Federrath, R. Klessen, M. Mac Low, & T. Peters for helpful discussions. This work was supported by an Alfred P. Sloan Fellowship (MRK); the NSF through grants CAREER-0955300 (MRK) and AST-0807739 (MRK), and AST-0908553 (CFM and RIK); NASA through ATFP grant NNX09AK31G (RIK, CFM, and MRK) and a Spitzer Space Telescope Theoretical Research Program grant (CFM and MRK); and the US Department of Energy at LLNL under contract DE-AC52-07NA (RIK). Support for computer simulations was provided by an LRAC grant from the NSF through Teragrid resources and NASA through grants from the ATFP and Spitzer Theory Program.
Generating Comparison IMF Samples {#app:imfsample}
=================================
The statistical samples for the IMFs shown in Figures \[imfplot1\] – \[imfplot2\] consist of stellar populations drawn from a @chabrier05a IMF, subject to the constraint that the total mass of the population have specified value. We create each cluster by the following procedure. First, we draw stars from the @chabrier05a IMF, $$\label{eq:imf}
\frac{dn}{d\ln M_*} = \mathcal{N} \left\{
\begin{array}{ll}
\exp(-[\ln \{M_*/{M_{\odot}}\} - \ln 0.2]^2/2\sigma^2), & M_* \leq {M_{\odot}}\\
\exp[-(\ln 0.2)^2/2\sigma^2] (M_*/{M_{\odot}})^{-1.35}, \quad & M_* > {M_{\odot}}\end{array}
\right.,$$ where $\mathcal{N}$ is a normalization constant and $\sigma = 0.55\ln 10$. We truncate this mass function at $0.05$ ${M_{\odot}}$ on the lower end (to match our minimum stellar mass in the simulation) and at $120$ ${M_{\odot}}$ on the upper end. We continue to draw stars as long as the total mass of stars is smaller than the specified target mass. If we draw a star of a mass such that adding it to our population causes the total mass to exceed the target mass by more than $0.1$ ${M_{\odot}}$, we reject it and draw another. We continue drawing until the total mass of stars is within $0.1$ ${M_{\odot}}$ of the target mass.
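As an illustration, the constrained draw described above can be sketched in Python. The rejection-sampling envelope and all function names here are our own choices for illustration, not the code used for the paper:

```python
import math
import random

SIGMA = 0.55 * math.log(10)  # lognormal width of the Chabrier form used above

def chabrier_dn_dlnm(m):
    """Unnormalized dn/dln(M) of the Chabrier (2005) form used above."""
    if m <= 1.0:
        return math.exp(-(math.log(m / 0.2)) ** 2 / (2 * SIGMA ** 2))
    return math.exp(-(math.log(0.2)) ** 2 / (2 * SIGMA ** 2)) * m ** -1.35

def draw_star(rng, m_lo=0.05, m_hi=120.0):
    """Rejection-sample one stellar mass, uniform in ln(M) under a flat envelope."""
    peak = chabrier_dn_dlnm(0.2)  # the lognormal maximum, at M = 0.2 Msun
    while True:
        m = math.exp(rng.uniform(math.log(m_lo), math.log(m_hi)))
        if rng.uniform(0.0, peak) < chabrier_dn_dlnm(m):
            return m

def draw_cluster(rng, m_target, tol=0.1):
    """Draw stars until the total mass is within `tol` Msun of `m_target`,
    rejecting any star that would overshoot the target by more than `tol`."""
    stars, total = [], 0.0
    while total < m_target - tol:
        m = draw_star(rng)
        if total + m > m_target + tol:
            continue  # reject and draw another, as described above
        stars.append(m)
        total += m
    return stars
```

Because stars as small as $0.05$ ${M_{\odot}}$ remain available, the final gap to the target mass can always be filled and the loop terminates.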
Once we have a set of stars, we form the cumulative and differential distributions. We repeat this procedure 10,000 times each for clusters of total mass from $100-500$ ${M_{\odot}}$. To produce the values shown in Figures \[imfplot1\] and \[imfplot2\], at each mass $M_*$ on the $x$-axis, we sort the values of the 10,000 cumulative or differential distributions at that value of $M_*$. The 10th, 50th, and 90th percentiles shown at that mass point or mass bin are the 1,000th, 5,000th, and 9,000th values in the sorted lists.
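The percentile extraction at each mass point can be sketched similarly (an illustrative helper, not the paper's code; `values` holds the 10,000 realizations of the cumulative or differential distribution at a single mass point):

```python
def percentile_band(values, fracs=(0.10, 0.50, 0.90)):
    """Sort the realizations at one mass point and return the requested
    percentiles; with 10,000 samples these are the 1,000th, 5,000th,
    and 9,000th values in the sorted list."""
    ordered = sorted(values)
    n = len(ordered)
    return tuple(ordered[int(f * n) - 1] for f in fracs)
```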
[^1]: Although @peters10b [@peters11a] study regions with enough mass to form massive stars, the column densities of the regions they simulate are $\sim 0.01$ g cm$^{-2}$, rather than the $\sim 1$ g cm$^{-2}$ typical of Galactic star-forming sites. Their simulated clouds are therefore optically thin even in the near-IR, rendering radiative effects fairly unimportant. In this way their work is closer to that of @bate09a, @offner09a, and @price09a than to the simulations we present here.
[^2]: Defined as the mass $m$ for which stars with masses $m_*<m$ comprise half the total stellar mass.
[^3]: We cannot directly compare to the earlier radiative simulations of low mass clusters by @bate09a and @offner09a, because these produced fewer than 20 objects. Their IMFs are therefore much too sparsely sampled for it to be possible to make any meaningful statements about their time-dependence.
[^4]: There may also be differences between our simulations and those of @bonnell04a due to differences in initial conditions (partly centrally condensed for us versus uniform density for them) and equation of state (isothermal for our run ISO, barotropic for them).
[^5]: The argument has been advanced in the literature that the observed IMF in clusters with masses as small as a few hundred ${M_{\odot}}$ is truncated at high masses compared to a Chabrier or similar IMF (@weidner10a, but see @lamb10a, @calzetti10a, and @fumagalli11a for observational counterarguments). Here we are mostly interested in the peak of the IMF, not the high-mass end where our simulations have too few stars to make statistically strong statements. We therefore proceed with the simplest assumption that there is no high-mass truncation, since it makes no difference for our purposes.
|
---
abstract: 'Analysis of photometric data of the active giant PZ Mon is presented. Using ASAS-3 project data and new, more accurate photometry, we establish that over 15 years of CCD observations of PZ Mon the light curve has remained stable, and consequently the longitude of the active spotted area is stable. The small deviations may be explained by differential rotation or by an inhomogeneous distribution of spots on the active hemisphere of PZ Mon. The stability of the active longitude and of its location on the PZ Mon surface points to the secondary component as the cause of the stellar activity.'
author:
- 'Pakhomov Yu. V.$^1$, Antonyuk K. A.$^2$, Bondar’ N. I.$^2$, Pit N. V.$^2$'
title: Stability of an active longitude of the giant PZ Mon
---
Introduction
============
PZ Mon (HD289114, $V\approx9$ mag) is an active K2III star of RS CVn type with a radius of 7.7 $R_\odot$ and a mass of 1.5 $M_\odot$ [@2015MNRAS.446...56P]. The secondary component is a cool dwarf with a radius of 0.15 $R_\odot$ and a mass of 0.14 $M_\odot$, which moves on a circular orbit with a semimajor axis of 0.24 au [@2015AstL...41..677P]. Fig. \[model\] shows a scaled model of the PZ Mon system.
The photometric behaviour of the variable star PZ Mon is not ordinary; it reflects complex stellar activity. The global cycle has an amplitude of about 1$^m$ and a period of $\sim$50 years [@1995AAS..111..259B]. There are also smaller cycles with amplitudes of about 0.1$^m$ and periods of $\sim$22 and $\sim$6.7 years [@2007OAP....20...14B]. These periods were found from measurements of photographic plates taken since 1899 at several European and Russian observatories. Since no dedicated photometric study was planned, the shortest periods could not be found. Further research has been performed with CCDs. The most complete modern photometry of PZ Mon is contained in the ASAS-3 project [@1997AcA....47..467P], but its accuracy (0.02 – 0.05$^m$) is not sufficient for a detailed study of the stellar surface. Nevertheless, a photometric period of 34.13$\pm$0.02 days with an amplitude of about $0.01-0.05^m$ was determined. This period is the most pronounced and is always present, which allows us to attribute it to rotational modulation of the spotted star. A joint analysis of the light curve and the radial-velocity curve led us to the conclusion that the spotted area on the PZ Mon surface is located under the secondary component (Fig. \[model\]).
The radial-velocity period of 34.15$\pm$0.02 days is very close to the photometric one, which may be explained by synchronisation of the rotation and the orbital motion in the PZ Mon system. The mass ratio of the components, 0.09$\pm$0.03, is the smallest value among the known RS CVn-type giants [@2015AstL...41..677P]. The nature of synchronisation by a component of such small mass is unknown, so it is important to check the fact of synchronisation and the stability of the active longitude with new, more accurate photometric data.
Observations and analysis of the position of active longitude
=============================================================
We used photometric data from the ASAS-3 project together with our own new observations. The ASAS photometry of PZ Mon covers 2001–2009; over this interval the phase of variability was stable while the average magnitude and the amplitude changed (Fig. \[asas\]). For example, in 2002–2004 the amplitude was minimal, while later it began to increase following the activity. There were time spans when the variability disappeared for at least one period. The bottom of Fig. \[asas\] shows the phase curve for the 2005–2009 data sets, when the amplitude was more stable; the main trend has been subtracted. The dispersion of the data ($\sigma_{m_V}\approx 0.02^m$) is better than that announced by the authors for individual measurements. It is caused by photometric errors along the ordinate and possibly also by a shift of the active longitude within $\sim$0.1 of the period along the abscissa, which corresponds to $\sim$36$\deg$ on the PZ Mon surface. However, we cannot separate these two effects because of their similar contributions; this would be possible with more accurate photometry.
The new observations were taken in the $BVR_C$ bands with a CCD imaging system installed at the 1.25-m telescope AZT-11 of the Crimean Astrophysical Observatory during 23 Jan – 5 Apr 2015. Observations were obtained on 21 dates in this interval, with several records carried out on each date; one record is a sequence of image registrations of the objects in each of the three filters with a time resolution of about 3 minutes. The full dataset contains 94 measurements of brightness, covering about two periods of PZ Mon. The resulting dense set of data is distributed over most phases of a single period. To date, these are the first photometric observations of PZ Mon to demonstrate the rotational variability with good accuracy. All magnitudes were reduced to the standard system; for each date the $BVR$ values were averaged and the mean time of the records was determined. The accuracy of the observations, estimated from the data set of the comparison star HD 49477, does not exceed 0.007$^m$ in the $V$-band. In this paper we analyse the $V$-band data only.
Fig. \[cmp\] shows the photometric data in the $V$-band overlaid with the ephemeris calculated by @2015AstL...41..677P under the assumption of a large symmetric spotted area. In this case, we can use a cosine law with the average magnitude and the amplitude corresponding to the observed values: $m_V = 9.17+0.05\,\textrm{cos}(2\pi\,JD/P)$, where $JD=2454807.2+34.13\,E$. The shift between the data and the ephemeris is obvious and may be explained in two ways. The first is a correction of the period: the scattered ASAS photometry does not allow better accuracy, and the corrected value of 34.12 days, which is within the error limits, describes the 2015 observations more accurately. However, there is a second possibility. The observed shift of about 3 days corresponds to 0.09 of the period, or $\sim$32$\deg$ on the PZ Mon surface, and the same value was estimated from the ASAS data. Note that earlier we found the size of the spotted area to be close to that of a hemisphere [@2015AstL...41..677P], so the value of 32$\deg$ can be explained by differential rotation or by the spot distribution over the spotted hemisphere without any correction of the period.
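The conversion of the observed time shift into a shift of surface longitude is simple arithmetic; a short sketch (using the 3-day shift and the 34.13-day period quoted above):

```python
P_DAYS = 34.13    # photometric period of PZ Mon, days
SHIFT_DAYS = 3.0  # approximate shift of the 2015 light curve vs. the ephemeris

phase_shift = SHIFT_DAYS / P_DAYS      # fraction of the period, ~0.09
longitude_shift = 360.0 * phase_shift  # degrees on the stellar surface, ~32
print(f"{phase_shift:.2f} of the period = {longitude_shift:.0f} degrees")
```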
The shape of the light curve differs slightly from the cosine law constructed from the ephemeris in Fig. \[cmp\], which indicates a small asymmetry of the spotted area along longitude. Nevertheless, the location of the active region shows no significant changes, always remaining on the side located under the secondary component. This observational fact, derived from the new accurate photometry, confirms our conclusion that the activity of PZ Mon is caused by the small component. The same source of activity is observed for several RS CVn stars with low-mass secondary components [@2015AstL...41..677P]. However, those are asynchronous systems while PZ Mon is synchronous, so the question of the nature of the synchronisation remains open.
This investigation was supported by Basic Research Program P-7 of the Presidium of the Russian Academy of Sciences.
Pakhomov, Yu. V., Chugai, N. N., Bondar’, N. I., Gorynya, N. A. & Semenko, E. A. 2015, MNRAS, 446, 56

Pakhomov, Yu. V. 2015, Astronomy Letters, 41, 677

Bondar’, N. I. 1995, A&AS, 111, 259

Bondar’, N. I. & Prokof’eva, V. V. 2007, Odessa Astronomical Publications, 20, 14

Pojmanski, G. 1997, Acta Astron., 47, 467
|
---
abstract: 'Let $\F$ be a field of $q$ elements, where $q$ is a power of an odd prime. Fix $n = (q+1)/2$. For each $s \in \F$, we describe all the irreducible factors over $\F$ of the polynomial $g_s(y): = y^n + (1-y)^n -s$, and we give a necessary and sufficient condition on $s$ for $g_s(y)$ to be irreducible.'
author:
- |
\
\
Ron Evans\
Department of Mathematics\
University of California at San Diego\
La Jolla, CA 92093-0112\
revans@ucsd.edu\
\
and\
\
Mark Van Veen\
Varasco LLC\
2138 Edinburg Avenue\
Cardiff by the Sea, CA 92007\
mark@varasco.com
date: February 2018
title: Irreducible factorization of translates of reversed Dickson polynomials over finite fields
---
2010 *Mathematics Subject Classification*. 11T06, 12E10, 13P05.
*Key words and phrases*. reversed Dickson polynomials over finite fields, irreducible factorization of polynomials, second order linear recurring sequence, quadratic residuacity.
Introduction
============
Let $\F$ be a field of $q$ elements, where $q$ is a power of an odd prime $p$. Fix $$\label{eq:1.1}
n=(q+1)/2,$$ and define a polynomial $f(y) \in \F[y]$ of degree $[n/2]$ by $$f(y): = (1+\sqrt{y})^n + (1-\sqrt{y})^n = D_n(2,1-y),$$ where $D_n(2,1-y)$ is a reversed Dickson polynomial [@HMSY eq.(1)]. Our choice of $n$ in was motivated by Katz’s work on local systems [@Katz]. Indeed, by [@EV Lemma 2.1], $f(y)$ satisfies the equality $$\label{eq:1.2}
f(y)^2 = 2y^n + 2(1-y)^n +2,$$ which was instrumental in proving a theorem of Katz relating two twisted local systems [@Katz Theorem 16.8].
For each $s \in \F$, define the polynomial $g_s(y) \in \F[y]$ of degree $2[n/2]$ by $$\label{eq:1.3}
g_s(y): = y^n + (1-y)^n -s = (f(y)^2 -2s-2)/2.$$ Observe that $g_s(y)$ is a translate of the reversed Dickson polynomial $g_0(y) =D_n(1,y-y^2)$ [@HMSY eq.(3)]. For any zero $x$ of $g_s(y)$, can be written as $$\label{eq:1.4}
g_s(y) = (f(y)^2 -f(x)^2)/2.$$ By and [@EV Remark 2], the zeros of $g_s(y)$ are all distinct when $s \ne \pm 1$.
The goal of this paper is to describe the irreducible factorization of $g_s(y)$ over $\F$, for each $s \in \F$. We remark that irreducible factorizations of classical Dickson polynomials over $\F$ have been given by Bhargava and Zieve [@BZ Theorem 3]; for related work, see the references in [@WY Section 9.6.2].
Our study of the irreducible factors of $g_s(y)$ was initially motivated by the following conjecture of the second author:
[*For $s \in \{\pm 1/2\}$ and $q \equiv \pm 1 \pmod {12}$, every irreducible factor of $g_s(y)$ over $\F$ has the form $y^3 - (3/2)y^2 + (9/16)y - m$ for some $m \in \F$*]{}.
For example, over $\mathbb{F}_{13}$, we have the complete factorizations $$\label{eq:1.5}
\begin{split}
g_{-1/2}(y) &= y^7 +(1-y)^7 + 7=7(y^3+5y^2+3y+1)(y^3+5y^2+3y+3),\\
g_{1/2}(y)&= y^7 +(1-y)^7 - 7=7(y^3+5y^2+3y+6)(y^3+5y^2+3y+11).
\end{split}$$ We found such formulas intriguing, as we initially saw no reason why the zeros of $y^n + (1-y)^n \pm 1/2 $ should have degree 3 over $\F$ when $q \equiv \pm 1 \pmod {12}$, nor did we understand why all of the monic irreducible cubic factors over $\F$ should be identical except for their constant terms.
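These two factorizations over $\mathbb{F}_{13}$ are easy to check directly with polynomial arithmetic modulo 13. The sketch below (plain Python, coefficient lists in increasing-degree order; an illustration, not part of the paper's argument) expands the left-hand sides and both claimed products:

```python
from math import comb

P = 13  # work over F_13; note -1/2 = 6 and 1/2 = 7 (mod 13), so -s gives +7 and -7

def polymul(a, b):
    """Multiply coefficient lists (lowest degree first) modulo P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def g_poly(shift):
    """Coefficients of y^7 + (1-y)^7 + shift over F_13, lowest degree first."""
    c = [comb(7, k) * (-1) ** k % P for k in range(8)]  # expand (1-y)^7
    c[7] = (c[7] + 1) % P                               # add y^7 (degree-7 terms cancel)
    c[0] = (c[0] + shift) % P
    return c

# claimed right-hand sides, coefficients lowest degree first
rhs_minus = polymul([7], polymul([1, 3, 5, 1], [3, 3, 5, 1]))   # 7(y^3+5y^2+3y+1)(y^3+5y^2+3y+3)
rhs_plus = polymul([7], polymul([6, 3, 5, 1], [11, 3, 5, 1]))   # 7(y^3+5y^2+3y+6)(y^3+5y^2+3y+11)
```

Comparing `g_poly(7)` with `rhs_minus` and `g_poly(-7)` with `rhs_plus` (after discarding the vanishing degree-7 coefficient) confirms both identities.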
In Section 2, we present the irreducible factorizations of $g_s(y)$ corresponding to those $s$ for which the irreducible factors all have degree $\le 2$. Before dealing with the more difficult case involving irreducible factors of degree greater than $2$, we discuss properties of a second order linear recurring sequence $S$ in Section 3. The sequence $S$ plays a crucial role in our proofs, although at first glance it appears to have little to do with $g_s(y)$.
Our main results appear in Section 4. Theorem 4.4 shows that if $1-s^2$ is a square $\ne 1$ in $\F$, then every monic irreducible factor $I(y)$ of $g_s(y)$ has the same degree $d=e >2$, where $e$ is the period of the sequence $S$. Theorem 4.5 then shows that these factors $I(y)$ are all identical except for their constant terms. Theorem 4.5 also gives formulas in terms of $s$ for the coefficients of the nonconstant terms of the factors $I(y)$; these formulas are made explicit in Corollaries 4.6–4.12 for some specific values of $s$, yielding all cases where $d$ is in the set $\{3,4,5,6,8,10,12\}$. Corollary 4.6 in particular verifies the aforementioned conjecture of the second author. Corollary 4.13 gives a necessary and sufficient condition on $s$ for the irreducibility of $g_s(y)$. We remark that Gao and Mullen [@GM] gave necessary and sufficient conditions for the irreducibility of translates of classical Dickson polynomials over $\F$.
Let $1-s^2$ be a square $\ne 1$ in $\F$. Then the monic irreducible factors $I(y)$ of $g_s(y)g_{-s}(y)$ are polynomials of degree $d>2$ that are all identical except for their constant terms. When $d$ is odd, Theorem 5.1 gives a criterion that distinguishes the constant terms corresponding to $g_s(y)$ from those corresponding to $g_{-s}(y)$. This will explain, for example, why the constant terms of the cubic factors in the first equality of had to be quadratic residues in $\mathbb{F}_{13}$, while those in the second equality had to be quadratic non-residues.
Linear and quadratic irreducible factors
========================================
We first determine the irreducible factorization of $g_s(y)$ when $s \in \{\pm 1\}$. Let $\rho$ denote the quadratic character on $\F$. When $s=1$, [@EV Lemma 2.3] yields $$\label{eq:2.1}
2g_s(y) =
f(y)^2-4 = \tau^2 y(y-1)\prod\limits_{a \in \C}(y-a)^2,$$ where $\tau$, the leading coefficient of $f$, is given by $$\label{eq:2.2}
\tau =
\begin{cases}
1, &\mbox{ if } q \equiv 1 \mod 4 \\
2, &\mbox{ if } q \equiv 3 \mod 4,
\end{cases}$$ and the set $\C$ is defined by $$\label{eq:2.3}
\C: = \{a \in \F : \rho(a) = \rho(1-a) = 1\}.$$ When $s=-1$, it follows from [@EV Remark 3] that $$\label{eq:2.4}
2g_s(y) = f(y)^2 = \tau^2 \prod\limits_{j}(y-j)^2,$$ where the product is over all $j \in \F$ for which $\rho(j) = \rho(1-j) = -1$. (The factor $\tau^2$ was inadvertently omitted in [@EV Remark 3].) In summary, the irreducible factors of $g_s(y)$ have been completely determined when $s \in \{\pm 1\}$, and they are all linear.
The case $s=0$ will be handled in Theorem 2.4. The following theorem gives the irreducible factorization of $g_s(y)$ in the case that $g_s(y)$ has a zero in $\F$ and $s \notin \{0,\pm 1\}$.
\[Theorem 2.1\] Let $s \notin \{0, \pm 1\}$. The polynomial $g_s(y)$ has a zero in $\F$ if and only if $$\label{eq:2.5}
\rho((1+s)/2)=1, \quad \rho((1-s)/2)=-1.$$ When holds, $(1+s)/2$ and $(1-s)/2$ are the only zeros in $\F$, and $g_s(y)$ has the irreducible factorization $$\label{eq:2.6}
\begin{split}
&g_s(y)= \\
&\frac{\tau^2}{2} \Big(y-\frac{1+s}{2}\Big)\Big(y-\frac{1-s}{2}\Big)
\prod\limits_{a \in \C}\Big(y^2 + (2as-1-s)y +\frac{(2a-1-s)^2}{4} \Big).
\end{split}$$
Observe that $g_s(y)$ has a zero $x \in \F$ if and only if $$\label{eq:2.7}
0 = g_s(x) = x\rho(x) +(1-x)\rho(1-x) - s, \quad x \in \F,$$ and since $s \notin \{\pm 1\}$, is possible if and only if both $\rho(x) = -\rho(1-x)$ and $x \in \{(1 \pm s)/2\}$. The equivalence of and easily follows.
Suppose now that holds, so that $(1+s)/2$ and $(1-s)/2$ are the only zeros of $g_s(y)$ in $\F$. Replacing $x$ by $(1+s)/2$ in the two equations [@EV (1.3),(1.4)] and then multiplying these equations together, we obtain the factorization .
If one of the quadratic factors in were reducible, it would have a zero in $\F$. Then, since $(1+s)/2$ and $(1-s)/2$ are the only zeros of $g_s(y)$ in $\F$, $g_s(y)$ would have a double zero in $\F$, contradicting the fact that the zeros of $g_s(y)$ are all distinct when $s \ne \pm 1$. This proves that all the factors in are irreducible.
The next two theorems deal with the cases where every irreducible factor $I(y)$ of $g_s(y)$ is quadratic. (This is in contrast to Theorem 2.1, where both linear and quadratic factors $I(y)$ appeared.) We first prove a lemma.
\[Lemma 2.2\] The set $ W=\{w \in \F: \rho((1+w)/2) = \rho((1-w)/2) = -1 \}$ has cardinality $|W|=(q - \rho(-1))/4$.
We have $$\label{eq:2.8}
|W|=\frac{1}{4} \sum\limits_{w \in \F}
\Big( 1 -\rho\Big( \frac{1+w}{2} \Big) \Big)
\Big( 1 -\rho\Big( \frac{1-w}{2} \Big) \Big)
=\frac{q - \rho(-1)}{4},$$ where the last equality follows from [@BEW Theorem 2.1.2].
\[Theorem 2.3\] Suppose that $s\notin \{ 0, \pm 1 \}$. Then $$\label{eq:2.9}
\rho((1+s)/2)=-1, \quad \rho((1-s)/2)= 1$$ if and only if $g_s(y)$ has no zeros in $\F$ but has a zero $x$ of degree 2 over $\F$. When holds, $g_s(y)$ has the irreducible factorization $$\label{eq:2.10}
g_s(y) = \frac{\tau^2}{2} \prod\limits_{w}
\Big(y^2-(1+sw)y+(s+w)^2/4\Big),$$ where the product is over all $w \in \F$ for which $$\label{eq:2.11}
\rho((1+w)/2) = \rho((1-w)/2) = -1.$$
Suppose that holds. Then by Theorem 2.1, $g_s(y)$ has no zeros in $\F$. We proceed to show that $$\label{eq:2.12}
x:= \frac{1}{2} + \frac{sw}{2} +\frac{1}{2} \sqrt{1-w^2}\sqrt{1-s^2}$$ is a zero of $g_s(y)$ for any $w \in \F$ satisfying . We have the factorizations $$\label{eq:2.13}
x= \Big(\sqrt{(1+w)/2}\sqrt{(1+s)/2}
+\sqrt{(1-w)/2}\sqrt{(1-s)/2}\Big)^2$$ and $$\label{eq:2.14}
1-x=
\Big(\sqrt{(1+w)/2}\sqrt{(1-s)/2}
-\sqrt{(1-w)/2}\sqrt{(1+s)/2}\Big)^2,$$ for appropriate choices of the square roots. These two factorizations yield $$\label{eq:2.15}
\begin{split}
x^n+(1-x)^n &=\Big(\sqrt{(1+w)/2}\sqrt{(1+s)/2}
+\sqrt{(1-w)/2}\sqrt{(1-s)/2}\Big)^{q+1}\\
&+\Big(\sqrt{(1+w)/2}\sqrt{(1-s)/2}
-\sqrt{(1-w)/2}\sqrt{(1+s)/2}\Big)^{q+1}.
\end{split}$$ Whenever $A, B \in \F$ and $\sqrt{B} \notin \F$, we have $$(A+\sqrt{B})^{q+1} = (A+\sqrt{B})(A+\sqrt{B})^q =A^2 - B.$$ Thus when holds, the right member of equals $s$. This completes the proof that $x$ is a zero of $g_s(y)$ whenever $w$ satisfies .
Conversely, suppose that $g_s(y)$ has a zero of degree 2 over $\F$ but has no zeros in $\F$. We wish to prove . Denote the zero of degree 2 by $$\label{eq:2.16}
x:=u + \sqrt{v}, \quad u,v \in \F, \quad \rho(v)=-1.$$ Since $g_s(x)=0$, we have $(s-x^n)^2 =(1-x)^{q+1}$, so that $$s^2 + u^2 -v-2sx^n = (1-u)^2 - v.$$ Thus $2sx^n=s^2+2u-1$, and squaring gives $4s^2(u^2-v) = (s^2 +2u -1)^2$. Solving for $u$, we have $$\label{eq:2.17}
u = \frac{1}{2} + \frac{sw}{2}, \quad w:=\sqrt{1+4v/(s^2-1)}.$$ (This definition of $w$ is consistent with , as will be shown shortly.) Observe that $w=(2u-1)/s \in \F$. Since $\rho(v) = -1$ and $$\label{eq:2.18}
(1-w^2)(1-s^2) = 4v,$$ it follows that $$\label{eq:2.19}
\rho(1-w^2) = -\rho(1-s^2).$$ In particular, $$\label{eq:2.20}
s^2 \ne w^2, \quad w^2 \ne 1.$$ By –, $$\label{eq:2.21}
x= \frac{1}{2} + \frac{sw}{2} +\frac{1}{2} \sqrt{1-w^2}\sqrt{1-s^2}.$$ Thus by , $$\label{eq:2.22}
\begin{split}
s&=\Big(\sqrt{(1+w)/2}\sqrt{(1+s)/2}
+\sqrt{(1-w)/2}\sqrt{(1-s)/2}\Big)^{q+1}\\
&+\Big(\sqrt{(1+w)/2}\sqrt{(1-s)/2}
-\sqrt{(1-w)/2}\sqrt{(1+s)/2}\Big)^{q+1}.
\end{split}$$ Suppose for the purpose of contradiction that $\rho((1+w)/2) = -\rho((1-w)/2)$, so that by , $\rho((1+s)/2) = \rho((1-s)/2)$. If $\rho((1+s)/2) = \rho((1+w)/2)$, then $$\sqrt{(1+w)/2}\sqrt{(1+s)/2} \in \F, \quad
\sqrt{(1+w)/2}\sqrt{(1-s)/2} \in \F$$ and $$\sqrt{(1-w)/2}\sqrt{(1+s)/2} \notin \F, \quad
\sqrt{(1-w)/2}\sqrt{(1-s)/2} \notin \F,$$ so by , $s=w$, which contradicts . Similarly, if $\rho((1+s)/2) = \rho((1-w)/2)$, we obtain the contradiction $s = -w$. This contradiction shows that $$\label{eq:2.23}
\rho((1+w)/2) = \rho((1-w)/2).$$ Then by , $$\label{eq:2.24}
\rho(1-s^2) = -1.$$ By Theorem 2.1, cannot hold, so yields , as desired.
Next we show that $w$ satisfies . Suppose for the purpose of contradiction that the two members of are equal to $1$. Then by , $$\sqrt{(1+w)/2}\sqrt{(1+s)/2} \notin \F, \quad
\sqrt{(1-w)/2}\sqrt{(1+s)/2} \notin \F,$$ and $$\sqrt{(1-w)/2}\sqrt{(1-s)/2} \in \F, \quad
\sqrt{(1+w)/2}\sqrt{(1-s)/2} \in \F.$$ Then yields the contradiction $s = -s$. This contradiction shows that $w$ satisfies .
Assuming now , we have only to prove the irreducible factorization in . In view of , $$(y-x)(y-x^q) = y^2 - (1+sw)y +(s+w)^2/4$$ is an irreducible factor of $g_s(y)$ over $\F$, for each $w \in \F$ satisfying . Since the degree of $g_s(y)$ is equal to $(q - \rho(-1))/2$, it remains to show that there are $(q - \rho(-1))/4$ choices of $w \in \F$ for which holds. This follows from Lemma 2.2.
\[Theorem 2.4\] Let $s=0$. Then $g_s(y)$ has the irreducible factorization $$\label{eq:2.25}
g_s(y) = \frac{\tau^2}{2} \prod\limits_{v} (y^2-y+v/4),$$ where the product is over all $v \in \F$ for which $$\label{eq:2.26}
\rho(v) = \rho(1-v) = -1.$$
Suppose that holds and $$x:=(1+\sqrt{1-v})/2, \quad 1-x=(1-\sqrt{1-v})/2.$$ Then $$\begin{split}
(1-&\sqrt{1-v})^n \ 2^n (x^n + (1-x)^n) \\
&=(1-\sqrt{1-v})^n ( (1+\sqrt{1-v})^n + (1-\sqrt{1-v})^n) \\
&= v^n + (1-\sqrt{1-v})^{q+1}
= v^{(q+1)/2} + v =v (\rho(v) + 1) = 0.
\end{split}$$ Therefore $x$ is a zero of $g_0(y)$, so that $$(y-x)(y-x^q) = y^2-y+v/4$$ is an irreducible factor of $g_0(y)$ over $\F$. Since the degree of $g_0(y)$ is equal to $(q - \rho(-1))/2$, it remains to show that there are $(q - \rho(-1))/4$ choices of $v \in \F$ for which holds. This follows by comparing degrees on both sides of . (Alternatively, it can be deduced from Lemma 2.2.)
When $s=0$, Theorem 2.4 shows that the irreducible factors are all quadratic. When $s \in \{ \pm 1 \}$, and show that the irreducible factors are all linear. When $s \notin \{0, \pm 1 \}$, Theorems 2.1 and 2.3 show that the irreducible factors of $g_s(y)$ all have degree $\le 2$ if and only if $\rho(1-s^2) = -1$. In each case above, the irreducible factorization is completely determined. In Section 4, we consider those remaining $s$ for which $$\label{eq:2.27}
\rho(1 - s^2) = 1, \quad s \ne 0.$$ These are precisely the values of $s$ for which $g_s(y)$ has an irreducible factor of degree $>2$.
A second order linear recurrence sequence
=========================================
Define $c=1-s^2$. From here on, we will always assume that $$\label{eq:3.1}
\rho(c) = \rho(1-c) = 1, \ \mbox{ i.e., } \quad 1-s^2=c \in \C.$$ This is just a restatement of .
For $-\infty < k < \infty$, define a bilateral second order linear recurrence sequence $S(c):=\langle c_k \rangle$ in $\F$ by $$\label{eq:3.2}
c_{k+1} = (2-4c)c_k -c_{k-1} +2c, \quad c_0=0, \ c_1=c.$$ For example, $c_2 = -4c^2 +4c$, $\ c_3 = 16c^3 -24c^2 +9c$, and for a general positive integer $k$, $c_k=c_{-k}$ equals a polynomial in $c$ over the integers with leading term $(-4)^{k-1} c^k$. If each of the three $c_i$’s in is replaced by $c_i +1/2$, then the inhomogeneous sequence in is replaced by a homogeneous one. The characteristic polynomial corresponding to the homogeneous sequence is $$\label{eq:3.3}
y^2 + (4c-2)y + 1.$$ Let $i \in \FF$ denote a fixed square root of $-1$. The zeros of the polynomial in are $\beta^2$ and $\beta^{-2}$, where $$\label{eq:3.4}
\beta: = \sqrt{1-c} + i \sqrt{c}, \quad
\beta^2 = 1-2c + 2i\sqrt{1-c}\sqrt{c},$$ so that $$\beta^{-1} = \sqrt{1-c} - i \sqrt{c}, \quad
\beta^{-2} = 1-2c - 2i\sqrt{1-c}\sqrt{c}.$$ By , $\beta \in \F[i]$, and $\beta \in \F$ if and only if $\rho(-1)=1$. Using the well known evaluation of homogeneous linear recurrence sequences [@HN 10.2.17], we obtain the closed form evaluations $$\label{eq:3.5}
c_k = \frac{-1}{4}(\beta^k - \beta^{-k})^2$$ for every integer $k$. A direct calculation using yields the (nonlinear) recurrence relation $$\label{eq:3.6}
c_{k+1} c_{k-1} = (c - c_k)^2.$$
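These properties of $S(c)$ are easy to check numerically. The sketch below takes $q=19$ and $c=4$ (the values of the second worked example following Corollary 4.13), generates the sequence from the recurrence, and confirms the stated polynomial forms of $c_2, c_3$, the nonlinear relation $c_{k+1}c_{k-1}=(c-c_k)^2$, and the period facts established in the next paragraphs.

```python
# Generate S(c) over F_19 with c = 4 (a sketch; these are the values of a
# worked example later in the paper) and check the stated identities.
q, c = 19, 4
ck = [0, c]
for k in range(1, 40):
    ck.append(((2 - 4 * c) * ck[k] - ck[k - 1] + 2 * c) % q)

assert ck[2] == (-4 * c**2 + 4 * c) % q              # c_2 = -4c^2 + 4c
assert ck[3] == (16 * c**3 - 24 * c**2 + 9 * c) % q  # c_3 = 16c^3 - 24c^2 + 9c
for k in range(1, 39):
    assert (ck[k + 1] * ck[k - 1]) % q == (c - ck[k]) ** 2 % q

# period facts: here e = E = (q+1)/2 = 10, c_k = 0 iff e | k, and c_k = c_{e-k}
e = next(k for k in range(1, 41) if ck[k] == 0)
assert e == 10
assert all((ck[k] == 0) == (k % e == 0) for k in range(41))
assert all(ck[k] == ck[e - k] for k in range(e + 1))
```

In particular $c_0,\dots,c_5 = 0, 4, 9, 11, 16, 1$ over $\F_{19}$, consistent with $c_{e/2}=1$ for the even period $e=10$.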
Note that the right member of does not depend on which of the ambiguous signs of the square roots are chosen in the formula for $\beta$ in . Fix one of the choices of $\beta$. We may assume that $\beta$ has even order, since otherwise we could replace $\beta$ by $-\beta$.
By [@HN 10.2.4], the sequence $S(c)$ is purely periodic. Let $e$ denote its period. Thus for each integer $k$, $$\label{eq:3.7}
c_{e+k}=c_k = c_{-k} = c_{e-k}.$$ Write $\theta=\ord (\beta^2)$ (the order of $\beta^2$). Thus $\ord (\beta) = 2\theta$. By , $c_k = c_{k + \theta}$ for all $k$, so that $e$ divides $ \theta$. Also by , $c_k = 0$ if and only if $\theta$ divides $k$. Since $c_e = c_0 = 0$ by , $\theta$ divides $e$, so $\theta = e$. In summary, $$\label{eq:3.8}
\ord (\beta) =2e$$ and $$\label{eq:3.9}
c_k = 0 \ \mbox{ if and only if } \ e|k.$$

By , $$\label{eq:3.10}
1-c_k = \frac{1}{4}(\beta^k + \beta^{-k})^2,
\quad s=\pm \sqrt{1-c_1} = \pm ( \beta + \beta^{-1})/2.$$ Whether or not $\beta \in \F$, it follows from , and that $$\label{eq:3.11}
\rho(c_k) = \rho(1-c_k) = 1, \ \mbox{ i.e.,} \quad c_k \in \C,
\ \ \mbox{whenever} \quad c_k \notin \{0,1\}.$$ From , we also see that $$\label{eq:3.12}
c_k = 1 \ \mbox{ if and only if } 2|e \ \mbox{ and } k \equiv e/2 \pmod e.$$
For $0 \le j < k \le e/2$, we claim that $c_j \ne c_k$. To see this, suppose otherwise. Then by , $\beta^k + \beta^{-k} = \epsilon (\beta^j + \beta^{-j})$ where $\epsilon \in \{\pm 1\}$. Equivalently, $\beta^k (1 - \epsilon \beta^{j-k})
=\epsilon \beta^{-j} (1 - \epsilon \beta^{j-k})$. The factors in parentheses are nonzero, so they can be canceled to yield $\beta^{j+k} = \epsilon$. This is impossible, since $j+k$ lies strictly between 0 and $e$. Thus the claim is proved.
If $\beta \in \F$, then $\beta^{q-1}=1$. If $\beta \notin \F$, then $\beta^q = \beta^{-1}$ by , so that $\beta^{q+1}=1$. Therefore in all cases, it follows from that $e$ divides $E$, where $E$ is the even integer defined by $$\label{eq:3.13}
E:=2[n/2]=(q - \rho(-1))/2.$$
If $c = -(\zeta -\zeta^{-1})^2/4$ for some $\zeta \in \F[i]$ of order $2e$ with $e \mid E$, then $S(c)$ has period $e$ and holds. For example, suppose that $B$ is an element of $\F[i]$ of full order $2E$. (In the case $q \equiv 1 \pmod 4$, this means that $B$ is a primitive root in $\F$.) A special case of the general sequence $S(c)$ of period $e$ is the sequence $S(C)=\langle C_k \rangle$ of period $E$, where (cf. ), $$\label{eq:3.14}
C_k: = \frac{-1}{4}(B^k - B^{-k})^2, \quad C:=C_1 =\frac{-1}{4}(B - B^{-1})^2.$$ We have $C_0=0$ and $C_{E/2} = 1$, and by , the set $\{ C_j: 1 \le j \le E/2 -1\}$ is a subset of $\C$ of cardinality $E/2 - 1$. But $\C$ itself has cardinality $E/2 - 1$, which can be seen by comparing the degrees on both sides of . Therefore, $$\label{eq:3.15}
\{ C_j: 1 \le j \le E/2 -1\} = \C.$$
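A hedged numerical check of this equality, with $q=19$ (so $\rho(-1)=-1$ and $E=10$) and $B = 4+2i$ of order $2E=20$, the element used in the second worked example following Corollary 4.13; elements of $\F[i]$ are modeled as pairs $(a,b)=a+bi$ with $i^2=-1$.

```python
# Verify that ord(B) = 2E and that {C_1, ..., C_{E/2-1}} equals the set of
# all c in F_q with rho(c) = rho(1-c) = 1, for q = 19 and B = 4 + 2i.
q = 19
E = (q + 1) // 2                    # rho(-1) = -1 for q = 19

def cmul(x, y):                     # (a+bi)(u+vi) in F_q[i], i^2 = -1
    a, b = x
    u, v = y
    return ((a * u - b * v) % q, (a * v + b * u) % q)

B = (4, 2)
pows = [(1, 0)]
for _ in range(2 * E):
    pows.append(cmul(pows[-1], B))
assert pows[2 * E] == (1, 0) and all(p != (1, 0) for p in pows[1:2 * E])

inv4 = pow(4, q - 2, q)
Cj = set()
for j in range(1, E // 2):          # C_j = -(B^j - B^{-j})^2 / 4
    d = tuple((x - y) % q for x, y in zip(pows[j], pows[2 * E - j]))
    sq = cmul(d, d)
    assert sq[1] == 0               # each C_j lies in F_q
    Cj.add(-inv4 * sq[0] % q)

rho = lambda a: pow(a % q, (q - 1) // 2, q)
C = {a for a in range(1, q) if rho(a) == 1 and rho(1 - a) == 1}
assert Cj == C == {4, 9, 11, 16}
```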
For $u = \pm 1$, define, for each integer $k$, $$\label{eq:3.16}
A_u(k,c_1) = c_k+c_1-2c_kc_1+2u\sqrt{c_k-c_k^2}\sqrt{c_1-c_1^2}$$ and $$\label{eq:3.17}
A'_u(k,c_1) = c_k+c_1-2c_kc_1-2u\sqrt{c_k-c_k^2}\sqrt{c_1-c_1^2},$$ so that $A'_u = A_{-u}$. We drop the subscript $u$ when $u=1$. Define the set $$\label{eq:3.18}
Z(k,c_1): = \{ A(k,c_1), A'(k,c_1)\}.$$
By , the functions $A(k,c_1), A'(k, c_1)$ have values in $\F$. We can make these functions single-valued for each $k$ by specifying the signs of the square roots of $c_k$, $1-c_k$, and $c_k-c_k^2$ in terms of our fixed $\beta$, as follows: $$\label{eq:3.19}
\sqrt{c_k}: = \frac{-i}{2}(\beta^k - \beta^{-k}), \quad
\sqrt{1-c_k}: = \frac{1}{2}(\beta^k + \beta^{-k}),$$ and $$\label{eq:3.20}
\sqrt{c_k-c_k^2}: =\sqrt{c_k}\sqrt{1-c_k}=
\frac{-i}{4}(\beta^{2k} - \beta^{-2k}).$$ Note that the values of these square roots depend not just on their arguments but on the subscripts $k$ as well. For example, $\sqrt{1-c_1} \ne \sqrt{1-c_{e-1}}$ when $e>2$, even though $c_1 = c_{e-1}$.
\[Lemma 3.1\] For each integer $k$, $$\label{eq:3.21}
Z(k,c_1) = \{c_{k-1}, c_{k+1}\}.$$
By –, $Z(k, c_1)$ consists of the two elements $$c_k+c_1-2c_kc_1 \pm 2\sqrt{c_k-c_k^2}\sqrt{c_1-c_1^2}.$$ Express each of these two elements as a sum of powers of $\beta$ using and . A longish computation (facilitated by a computer algebra program) then shows that these two elements reduce to $c_{k-1}$ and $c_{k+1}$.
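The computation in the proof can be spot-checked numerically. The sketch below uses $q=19$, $c=4$ and $\beta = 4+2i$ (so $\sqrt{1-c}=\sqrt{16}=4$ and $\sqrt{c}=2$), models $\F[i]$ as pairs $(a,b)=a+bi$, and implements the canonical square roots from the two displays preceding the lemma.

```python
# Spot-check Lemma 3.1 over F_19 with c = 4 and beta = 4 + 2i, using the
# canonical square roots defined above (a numeric sketch, not a proof).
q, c = 19, 4

def cmul(x, y):                     # (a+bi)(u+vi) in F_q[i], i^2 = -1
    a, b = x
    u, v = y
    return ((a * u - b * v) % q, (a * v + b * u) % q)

beta = (4, 2)                       # sqrt(1-c) + i sqrt(c) = 4 + 2i
pows = [(1, 0)]
for _ in range(19):
    pows.append(cmul(pows[-1], beta))
assert cmul(pows[19], beta) == (1, 0)        # ord(beta) = 2e = 20

def bp(m):                          # beta^m for any integer m
    return pows[m % 20]

inv4 = pow(4, q - 2, q)

def ck(k):                          # c_k = -(beta^k - beta^{-k})^2 / 4
    d = tuple((x - y) % q for x, y in zip(bp(k), bp(-k)))
    assert d[0] == 0                # the difference is purely imaginary
    return -inv4 * cmul(d, d)[0] % q

def root(k):                        # canonical sqrt(c_k - c_k^2)
    d = tuple((x - y) % q for x, y in zip(bp(2 * k), bp(-2 * k)))
    return inv4 * d[1] % q          # -i/4 times (d1 * i) equals d1/4

for k in range(-12, 13):            # Z(k, c_1) = {c_{k-1}, c_{k+1}}
    base = (ck(k) + ck(1) - 2 * ck(k) * ck(1)) % q
    cross = 2 * root(k) * root(1)
    assert {(base + cross) % q, (base - cross) % q} == {ck(k - 1), ck(k + 1)}
```

For instance, at $k=1$ the two elements of $Z(1,c_1)$ come out as $0$ and $9$, i.e. $\{c_0, c_2\}$, as the lemma asserts.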
Irreducible factors of degree exceeding 2
=========================================
Recall from that $1-s^2 = c=c_1 \in \C$, so that $g_s(y)$ has at least one zero whose degree (over $\F$) exceeds 2. It will be shown below that all the zeros have the same degree. Theorem 4.4 will show that the common degree is $e$, where $e>2$ is the period of the sequence $S(c)$.
Given any zero $x$ of $g_s(y)$, the following lemma gives a formula for $x^q$ in terms of $c_1$ and $x$.
\[Lemma 4.1\] Let $x$ be a zero of $g_s(y)$. Then for some $v = \pm 1$, $$x^q = c_1 + x -2c_1x + 2v \sqrt{c_1-c_1^2}\ \sqrt{x-x^2}.$$
Since $(1-x)^n = s-x^n$, squaring yields $$(1-x)^{q+1} = x^{q+1} + s^2 - 2sx^n.$$ Thus $1 - x - x^q = s^2 - 2sx^n$, which is equivalent to $$\label{eq:4.1}
x^q = 1 - s^2 -x +2sx^n.$$ Multiplication by $x$ gives $$x^{q+1} = (1-s^2)(x-x^2) +2sx^{n+1} - s^2x^2,$$ which simplifies to $$\label{eq:4.2}
(x^n - sx)^2 = (1-s^2) (x-x^2).$$ Note that this shows that $x-x^2$ is a square in $\F [x]$. By , $$x^q = 1-s^2 +x(2s^2-1) +2sx^n - 2s^2x.$$ Then applying , we obtain $$x^q = 1-s^2 + x(2s^2-1) \pm 2s \sqrt{1-s^2}\sqrt{x-x^2},$$ and since $c_1=1-s^2$, this yields the desired formula $$x^q = c_1 + x -2c_1x \pm 2 \sqrt{c_1-c_1^2} \ \sqrt{x-x^2}.$$
We proceed to extend the definitions in –. For $u = \pm 1$, define, for each integer $k$ and each zero $x$ of $g_s(y)$, $$\label{eq:4.3}
A_u(k,x) = c_k+x-2c_kx+2u\sqrt{c_k-c_k^2} \ \sqrt{x-x^2}$$ and $$\label{eq:4.4}
A'_u(k,x) = c_k+x-2c_kx-2u\sqrt{c_k-c_k^2} \ \sqrt{x-x^2}.$$ We drop the subscript $u$ when $u=1$. Define the set $$\label{eq:4.5}
Z(k,x): = \{ A(k,x), A'(k,x)\}.$$ In this notation, Lemma 4.1 states that $$\label{eq:4.6}
x^q \in Z(1,x).$$
The values of the functions $A(k,x), A'(k,x)$ lie in $\F[x]$. We can make these functions single-valued for each zero $x$ of $g_s(y)$, by employing to fix a choice of sign of $\sqrt{x-x^2}$, as follows: $$\label{eq:4.7}
\sqrt{x-x^2} = (x^n - sx)/\sqrt{c_1},$$ where $\sqrt{c_1}$ is specified in . In , when the zero $x$ is replaced by the zero $1-x$, the argument of the square root on the left remains the same, but the sign of the square root is changed. Also, when the zero $x$ is replaced by its conjugate zero $x^q$, we see that $$\label{eq:4.8}
\sqrt{x^q-x^{2q}}=(\sqrt{x-x^2})^q.$$
Observe that $$\label{eq:4.9}
\sqrt{A_u(k,x) - A_u(k,x)^2} =
\pm \Big((2x-1)\sqrt{c_k-c_k^2} +u(2c_k-1)\sqrt{x-x^2} \Big),$$ and the same formula holds when each $x$ is replaced by $c_1$. This is readily verified upon squaring both sides.
Fix a zero $x$ of $g_s(y)$ of smallest degree $d$ over $\F$. In view of , it follows from [@EV Theorem 3.1] that $$\label{eq:4.10}
\{a+x-2ax \pm 2\sqrt{a-a^2}\sqrt{x-x^2}: a \in \{0,1\} \cup \C \}$$ is the set of zeros of $g_s(y)$. Since $x-x^2$ is a square in $\F [x]$, these zeros all lie in $\F [x]$, so their degrees cannot exceed $d$. Then by minimality of $d$, every zero $x$ of $g_s(y)$ has degree $d$. Since at least one zero has degree exceeding 2, we conclude that $d > 2$.
The next lemma shows how the set $Z(k,x^q)$ depends on $c_{k-1}$ and $c_{k+1}$.
\[Lemma 4.2\] Let $x$ be a zero of $g_s(y)$ and let $k$ be an integer. Then for some $\mu, \lambda \in \{\pm 1\}$, $$Z(k, x^q) = \{A_\mu (k-1,x), A_\lambda (k+1,x)\}.$$
For each $ u \in \{\pm 1\}$, $$\label{eq:4.11}
A_u (k, x^q) = c_k +(1-2c_k) x^q +2u \sqrt{c_k-c_k^2} \ \sqrt{x^q -x^{2q}}.$$ By , we can replace $x^q$ by $A_v(1,x)$ for some $v \in \{ \pm 1 \}$, after which we can apply to the rightmost square root in to obtain $$\label{eq:4.12}
\begin{split}
&A_u (k, x^q) = c_1+c_k - 2c_1c_k + \\
& + x(1-2c_1-2c_k+4c_1c_k)
+2v(1-2c_k)\sqrt{c_1-c_1^2} \ \sqrt{x-x^2} \\
& -2w\sqrt{c_k-c_k^2} \ \Big(v(2c_1-1)\sqrt{x-x^2} + (2x-1)\sqrt{c_1-c_1^2} \ \Big),
\end{split}$$ where $w \in \{\pm 1\}$ depends on $u$. Once again applying , we see that reduces to $$\label{eq:4.13}
A_u (k, x^q) =x+(1-2x)A_w(k,c_1)
\pm 2\sqrt{x-x^2}\sqrt{A_w(k,c_1) - A_w(k,c_1)^2}.$$ Repeating the entire argument above with $-u$ in place of $u$, we see that holds with the signs of $u$ and $w$ reversed, i.e., $$\label{eq:4.14}
A'_u (k, x^q) =x+(1-2x)A'_w(k,c_1)
\pm 2\sqrt{x-x^2}\sqrt{A'_w(k,c_1) - A'_w(k,c_1)^2}.$$ By Lemma 3.1, there exists $\epsilon \in \{ \pm 1 \}$ for which $$A_w(k,c_1) = c_{k-\epsilon}, \quad A'_w(k,c_1) = c_{k+\epsilon}.$$ Therefore and yield $$A_u (k, x^q) \in Z(k-\epsilon,x), \quad
A'_u (k, x^q) \in Z(k+\epsilon,x).$$ Since $A_u (k, x^q)$ and $A'_u (k, x^q)$ are the two elements of the set $Z(k, x^q)$, the lemma is proved.
For ease in notation, write $x_k: = x^{q^k}$ for the conjugates of $x$. The following crucial theorem shows that for any integer $k$ with $0 \le k \le d$, the set $\{ x_k, x_{d-k} \}$ is equal to the set $\{ A(k,x), A'(k,x)\}$.
\[Theorem 4.3\] Let $1-s^2 = c = c_1 \in \C$ and let $x$ be a zero of $g_s(y)$ of degree $d >2$ over $\F$. Then $$\label{eq:4.15}
Z(k,x) = \{ x^{q^k}, x^{q^{d-k}} \}, \quad k=0,1,\dots, d.$$
Since $c_0 = 0$ and $x_0 = x_d =x$, holds for $k=0$. Next we prove for $k=1$. Fix $v$ as in Lemma 4.1, so that $x^q=A_v(1,x)$. Then $$Z(1,x^q) = \{A_v(1,x)^q, A'_v(1,x)^q\}
=\{x_2, A'_v(1,x)^q\},$$ where the first equality follows from . On the other hand, by Lemma 4.2 with $k=1$, $$Z(1,x^q) = \{x, A_\lambda(2,x)\}.$$ Since $x$ has degree $d > 2$, we cannot have $x = x_2$. Therefore $x = A'_v(1,x)^q$. Raising both sides to the power $q^{d-1}$, we see that $x_{d-1} = A'_v(1,x)$. Consequently, $$Z(1,x) = \{A_v(1,x), A'_v(1,x)\} = \{x_1, x_{d-1}\},$$ which proves for $k=1$.
Now let $1 \le k < d$ and assume as induction hypothesis that $$\label{eq:4.16}
Z(j,x) = \{ x_j, x_{d-j} \}, \quad 0 \le j \le k.$$ We need to prove $$\label{eq:4.17}
Z(k+1,x) = \{ x_{k+1}, x_{d-k-1} \}.$$ We will first prove when $k=d/2$ for even $d$. After that we give a proof for the other values of $k$. For brevity, write $D = d/2$. We begin by showing that $c_D=1$. By with $j=k = D$, $$Z(D,x) = \{ A(D,x), A'(D,x) \} = \{x_D\}.$$ Thus $A(D,x)=A'(D,x)=x_D$, so that $c_D \in \{0,1\}$. We cannot have $c_D=0$, otherwise $x_D = x$, contradicting the fact that $x$ has degree $d$. Thus $$\label{eq:4.18}
c_D=1.$$ By and with $k=D$, $$c_{D+1} + c_{D-1} = 2-2c, \quad c_{D+1}c_{D-1} = (1-c)^2.$$ Solving this system, we obtain $$\label{eq:4.19}
c_{D+1} = c_{D-1} =1-c.$$ By with $j=D-1$, $$\{ x_{D-1}, x_{D+1} \} = Z(D-1,x) = Z(D+1,x),$$ where the last equality follows from . This completes the proof of when $k=d/2$. We proceed to prove under the assumption that $k \ne d/2$.
By with $j=k$, we have $x_k \in Z(k,x)$, so taking $q$-th powers yields $x_{k+1} \in Z(k, x^q)$. Therefore, by Lemma 4.2, $$\label{eq:4.20}
x_{k+1} \in \{A_\mu (k-1,x), A_\lambda (k+1,x)\}.$$ Suppose for the purpose of contradiction that $x_{k+1} = A_\mu (k-1,x)$. Then $x_{k+1}$ lies in the set $Z(k-1,x) = \{ x_{k-1}, x_{d-k+1} \}$, where the equality follows from with $j = k-1$. We cannot have $x_{k+1} = x_{k-1}$, because $x$ and its conjugates have degree $d > 2$. Nor can we have $x_{k+1} = x_{d-k+1}$, since this would imply that $k=d/2$. Thus we obtain our desired contradiction. In view of , it therefore follows that $x_{k+1} \in Z(k+1,x)$. To prove , it now suffices to prove $$\label{eq:4.21}
x_{d-k-1} \in Z(k+1,x),$$ which is equivalent (via taking the $q$-th power) to $$\label{eq:4.22}
x_{d-k} \in Z(k+1,x^q).$$
From Lemma 4.2, $$\label{eq:4.23}
Z(k+1,x^q)=\{A_\mu (k,x), A_\lambda (k+2,x) \}.$$ By with $j=k$, $$A_\mu (k,x) \in Z(k,x) = \{ x_k, x_{d-k} \}.$$ If $A_\mu (k,x) = x_{d-k}$, then holds by , and the proof is complete. It remains to prove that the alternative $$\label{eq:4.24}
A_\mu (k,x) = x_k$$ is impossible. Thus assume for the purpose of contradiction that holds. Then by , $x_k \in Z(k+1,x^q)$. Raise both sides to the $q^{d-1}$-th power to obtain $x_{k-1} \in Z(k+1, x)$. By with $j=k-1$, we see that $x_{k-1} \in Z(k-1, x)$. It follows that for some choice of the $\pm$ signs, $$\begin{split}
x_{k-1} =&
c_{k-1} + x -2xc_{k-1} \pm 2 \sqrt{c_{k-1} - c_{k-1}^2} \sqrt{x-x^2} \\
=& c_{k+1}+x -2xc_{k+1} \pm 2 \sqrt{c_{k+1} - c_{k+1}^2} \sqrt{x-x^2}.
\end{split}$$ Appealing to the distinctness of the zeros in , we see that this is only possible if $c_{k-1} = c_{k+1}$. By and , $$c_{k-1} + c_{k+1} = 2c + 2c_k -4cc_k, \quad c_{k-1}c_{k+1} = (c - c_k)^2.$$ Thus $$0 = (c_{k-1} - c_{k+1})^2 =
(2c+2c_k-4cc_k)^2 -4(c - c_k)^2 =16(c-c^2)(c_k - c_k^2),$$ which implies that $c_k \in \{ 0, 1 \}$. It follows that $Z(k,x)$ is a singleton set. By with $j=k$, we have $Z(k,x) = \{ x_k, x_{d-k} \}$, so that $x_k = x_{d-k}$. Thus we obtain the contradiction $k = d/2$. As a result, cannot hold, and the proof is complete.
The next theorem shows that $d=e$, where $e$ is the period of $S(c)$.
\[Theorem 4.4\] Let $1-s^2 = c = c_1 \in \C$ and let $x$ be a zero of $g_s(y)$ of degree $d >2$ over $\F$. Then $d=e$.
By Theorem 4.3, $Z(d,x)= \{ x \}$. Hence $A(d,x) = A'(d,x) =x$, so that $c_d=0$. Assume for the purpose of contradiction that $c_k=0$ for some $k$ with $0 < k < d$. Then $A(k,x) = A'(k,x) =x$, so that $Z(k,x)= \{ x \}$. Again by Theorem 4.3, it follows that $x_k = x_{d - k} = x$, which is not possible. This contradiction shows that $d$ equals the smallest positive integer $k$ for which $c_k=0$. Consequently, by , $d=e$.
Define the monic polynomial $N(y) \in \F[y]$ of degree $d$ by $$\label{eq:4.25}
N(y): =
\begin{cases}
y \prod\limits_{k=1}^{(d-1)/2} (y-c_k)^2, & \ \text{ if } \ 2\nmid d \\
(y^2-y) \prod\limits_{k=1}^{(d-2)/2} (y-c_k)^2, & \ \text{ if } \ 2\mid d.
\end{cases}$$ Note that the coefficients of $N(y)$ can be expressed as polynomials in $c$ (and hence in $s$) over the integers. For some small values of $d$, Corollaries 4.6–4.12 below give explicit formulas for the coefficients of $N(y)$.
For a zero $x$ of $g_s(y)$, define the monic polynomial $I_x(y)$ of degree $d$ by $$\label{eq:4.26}
I(y) = I_x(y): = N(y) - N(x).$$ Theorem 4.5 below shows that these $I_x(y)$ are the irreducible factors of $g_s(y)$. Thus the monic irreducible factors of $g_s(y)$ are all identical except for their constant terms, and Theorem 4.5 provides a way of expressing the coefficients of the nonconstant terms as polynomials in $c$ over the integers.
\[Theorem 4.5\] Let $1-s^2 = c = c_1 \in \C$ and let $x$ be any zero of $g_s(y)$ of degree $d >2$ over $\F$. Then $I_x(y)$ is the monic irreducible polynomial of $x$ over $\F$.
Theorem 4.3 shows that $$\label{eq:4.27}
\{ A(k,x), A'(k,x) \} = \{x_k, x_{d-k} \}, \quad 0 \le k \le d/2.$$ When $0 < k < d/2$, we have $A(k,x) A'(k,x) = (x-c_k)^2$. When $k=d/2$ with $d$ even, the proof of shows that $c_{k} =1$, so that $A(k,x) = 1-x$. When $k=0$, we have $c_k=0$, so that $A(k,x) = x$. Thus yields $$\label{eq:4.28}
(-1)^{d-1} \prod\limits_{k=0}^{d-1} x_k = N(x).$$ In particular, $I_x(y) \in \F[y]$, since $N(x)$ equals $(-1)^{d-1}$ times the norm of $x$. As $x$ has degree $d$ and $x$ is a zero of the polynomial $I_x(y)$ of degree $d$, it follows that $I_x(y)$ is the monic irreducible polynomial of $x$ over $\F$.
\[Corollary 4.6\] Let $s \in \{ \pm 1/2 \}$ and suppose that $q \equiv \pm 1 \pmod {12}$, so that $c=1-s^2 = 3/4 \in \C$. Then each irreducible factor $I(y)$ of $g_s(y)$ has the form $$I(y) = y^3 - \frac{3}{2} y^2 + \frac{9}{16} y - m$$ for some $m \in \F$.
Computing the first four terms of the sequence $S(c)$ starting with $c_0$, we find that $c_0=c_3=0, \ c_1=c_2=3/4$. Thus $S(c)$ has period $e=3$, so for any zero $x$ of $g_s(y)$, the irreducible factor $I_x(y)$ has degree $d=3$. The result now follows from , since $$N(y) = y(y-3/4)^2 = y^3 - \frac{3}{2} y^2 + \frac{9}{16} y.$$
\[Corollary 4.7\] Suppose that $q \equiv \pm 1 \pmod {8}$, so that $\rho(2) = 1$. Let $s \in \{ \pm \sqrt{2}/2 \}$, so that $c=1-s^2 = 1/2 \in \C$. Then each irreducible factor $I(y)$ of $g_s(y)$ has the form $$y^4 - 2y^3 + \frac{5}{4} y^2 - \frac{1}{4} y - m$$ for some $m \in \F$.
We have $c_0=c_4=0, \ c_1=c_3=1/2, \ c_2 =1$. Thus $S(c)$ has period $e=4$, so each irreducible factor $I(y)=N(y) - N(x)$ has degree $d=4$. The result now follows, as $$N(y) = (y^2 - y)(y-1/2)^2 = y^4 - 2y^3 + \frac{5}{4} y^2 - \frac{1}{4} y.$$
\[Corollary 4.8\] Suppose that $q \equiv \pm 1 \pmod {20}$, so that $\rho(5)=1$. Set $c=(5 +\sqrt{5})/8$ for either choice of the square root. Then $c \in \C$, and each irreducible factor $I(y)$ of $g_s(y)$ has the form $$y^5 - \frac{5}{2}y^4 + \frac{35}{16} y^3 - \frac{25}{32} y^2 + \frac{25}{256}y - m$$ for some $m \in \F$.
For either choice of sign of $\sqrt{5} \in \F$, the condition on $q$ guarantees the existence of a primitive tenth root of unity $\zeta \in \F[i]$ for which $$-(\zeta -\zeta^{-1})^2/4=(5+\sqrt{5})/8.$$ Thus $c$ is a square in $\F$. Moreover, $1-c$ is a square in $\F$ since $1-c = \Big( (\sqrt{5} -1)/4 \Big)^2$. This proves that $c \in \C$. Define $c':=(5-\sqrt{5})/8$. We have $c_0=c_5=0, \ c_1=c_4 = c, \ c_2=c_3=c'$. As $S(c)$ has period $e=5$, each irreducible factor $I(y)=N(y) - N(x)$ has degree $d=5$. The result now follows, as $N(y) = y(y-c)^2(y-c')^2$.
While the proof above shows that $(5+\sqrt{5})/2$ is a square in $\F$ when $q \equiv \pm 1 \pmod {20}$, it is also true that $(5+\sqrt{5})/2$ is a non-square in $\F$ when $q \equiv \pm 9 \pmod {20}$. To verify this, one can first reduce to the case when $q$ is prime, and then apply the theorem in [@WHF p. 257].
\[Corollary 4.9\] Suppose that $q \equiv \pm 1 \pmod {12}$, and let $s \in \{ \pm \sqrt{3}/2 \}$, so that $c=1-s^2 = 1/4 \in \C$. Then each irreducible factor $I(y)$ of $g_s(y)$ has the form $$y^6 -3y^5 +\frac{27}{8}y^4 -\frac{7}{4}y^3 + \frac{105}{256} y^2 - \frac{9}{256} y - m$$ for some $m \in \F$.
We have $c_0=c_6=0, \ c_1=c_5=1/4, \ c_2 =c_4=3/4, \ c_3 =1$. Thus $S(c)$ has period $e=6$, so each irreducible factor $I(y)$ has degree $d=6$. The result now follows, as $N(y) = (y^2-y)(y-1/4)^2(y-3/4)^2$.
\[Corollary 4.10\] Suppose that $q \equiv \pm 1 \pmod {16}$, so that $\rho(2)=1$, and set $c=(2 + \sqrt{2})/4$ for either choice of the square root. Then $c \in \C$ and each irreducible factor $I(y)$ of $g_s(y)$ has the form $$y^8-4y^7+
\frac{13}{2}y^6-\frac{11}{2}y^5+\frac{165}{64}y^4-\frac{21}{32}y^3
+\frac{21}{256}y^2-\frac{1}{256}y -m$$ for some $m \in \F$.
For either choice of sign of $\sqrt{2} \in \F$, the condition on $q$ guarantees the existence of a primitive sixteenth root of unity $\zeta \in \F[i]$ for which $$-(\zeta -\zeta^{-1})^2/4=(2+\sqrt{2})/4.$$ Thus $c$ and $c':=1-c$ are both squares in $\F$. This proves that $c \in \C$. We have $$c_0=c_8=0, \ c_1 = c_7=c, \ c_2=c_6=1/2, \ c_3= c_5=c', \ c_4 =1.$$ As $S(c)$ has period $e=8$, each irreducible factor $I(y)$ has degree $d=8$. The result now follows, as $N(y) = (y^2-y)(y-c)^2(y-1/2)^2(y-c')^2$.
While the proof above shows that $(2+\sqrt{2})$ is a square in $\F$ when $q \equiv \pm 1 \pmod {16}$, it is also true that $(2+\sqrt{2})$ is a non-square in $\F$ when $q \equiv \pm 7 \pmod {16}$. To verify this, again one can first reduce to the case when $q$ is prime, and then apply the theorem in [@WHF p. 257].
\[Corollary 4.11\] Suppose that $q \equiv \pm 1 \pmod {20}$, so that $\rho(5)=1$. Set $c=(3 - \sqrt{5})/8$ for either choice of the square root. Then $c \in \C$, and each irreducible factor $I(y)$ of $g_s(y)$ has the form $$\begin{split}
&y^{10} - 5y^9 +\frac{85}{8}y^8-\frac{25}{2}y^7+\frac{2275}{256}y^6
-\frac{1001}{256}y^5 +\\
&+ \frac{2145}{2048}y^4 - \frac{165}{1024} y^3 + \frac{825}{65536} y^2
- \frac{25}{65536}y - m
\end{split}$$ for some $m \in \F$.
The proof of Corollary 4.8 shows that $c \in \C$. Define $c':=(3+\sqrt{5})/8$. We have $$c_0=0, \ c_1 = c, \ c_2=c +1/4, \ c_3= c', \ c_4 =c'+1/4, \ c_5=1,$$ and $e=d=10$. The result now follows, as $$N(y) = (y^2-y)(y-c)^2(y-c-1/4)^2(y-c')^2(y-c'-1/4)^2.$$
\[Corollary 4.12\] Suppose that $q \equiv \pm 1 \pmod {24}$, so that $\rho(2)=\rho(3)=1$. Set $c=(2 + \sqrt{3})/4$ for either choice of the square root. Then $c \in \C$, and each irreducible factor $I(y)$ of $g_s(y)$ has the form $$\begin{split}
&y^{12} - 6y^{11}+ \frac{63}{4}y^{10} - \frac{95}{4}y^9 +\frac{2907}{128}y^8
-\frac{459}{32}y^7+
\frac{1547}{256}y^6\\
&-\frac{429}{256}y^5 +
\frac{19305}{65536}y^4 - \frac{1001}{32768} y^3 + \frac{429}{262144} y^2
- \frac{9}{262144}y - m
\end{split}$$ for some $m \in \F$.
We know $c$ is a square in $\F$ because $c = (\sqrt{6} + \sqrt{2})^2 /16$. Similarly, $1-c=c':=(2 -\sqrt{3})/4$ is a square in $\F$. Thus $c \in \C$. We have $$c_0=0, \ c_1 = c, \ c_2=1/4, \ c_3= 1/2, \ c_4 =3/4, \ c_5=c', \ c_6=1,$$ and $e=d=12$. The result now follows, as $$N(y) = (y^2-y)(y-c)^2(y-1/4)^2(y-1/2)^2(y-3/4)^2 (y-c')^2.$$
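The period claims underlying Corollaries 4.6-4.12 can be confirmed by machine at sample primes. The sketch below uses $q=23$ (where one may take $\sqrt3 = 7$), $q=31$ ($\sqrt2 = 8$) and $q=19$ ($\sqrt5 = 9$), each satisfying the relevant congruence condition.

```python
# Check the period e of S(c) (= the common degree d of the irreducible
# factors) for the values of c in Corollaries 4.6-4.12, at sample primes.
def period(c, q):                   # least k > 0 with c_k = 0
    prev, cur, k = 0, c, 1
    while cur != 0:
        prev, cur, k = cur, ((2 - 4 * c) * cur - prev + 2 * c) % q, k + 1
    return k

inv = lambda a, q: pow(a, q - 2, q)

q = 23                              # q = -1 mod 24; sqrt(3) = 7
assert period(3 * inv(4, q) % q, q) == 3         # Cor 4.6,  c = 3/4
assert period(inv(2, q), q) == 4                 # Cor 4.7,  c = 1/2
assert period(inv(4, q), q) == 6                 # Cor 4.9,  c = 1/4
assert period((2 + 7) * inv(4, q) % q, q) == 12  # Cor 4.12, c = (2+sqrt(3))/4

q = 31                              # q = -1 mod 16; sqrt(2) = 8
assert period((2 + 8) * inv(4, q) % q, q) == 8   # Cor 4.10, c = (2+sqrt(2))/4

q = 19                              # q = -1 mod 20; sqrt(5) = 9
assert period((5 + 9) * inv(8, q) % q, q) == 5   # Cor 4.8,  c = (5+sqrt(5))/8
assert period((3 - 9) * inv(8, q) % q, q) == 10  # Cor 4.11, c = (3-sqrt(5))/8
```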
We next give a necessary and sufficient condition on $s$ for the irreducibility of $g_s(y)$. Recall from that $E$ denotes the even integer $(q-\rho(-1))/2$.
\[Corollary 4.13\] The polynomial $g_s(y)$ is irreducible over $\F$ if and only if $s = (B + B^{-1})/2$ for some element $B \in \F[i]$ of order $2E$. Moreover, for such $s$, $g_s(y)$ is the irreducible polynomial in $\F[y]$ given by $$\label{eq:4.29}
g_s(y) = \frac{\tau^2}{2} (y^2-y)\prod\limits_{a \in \C} (y-a)^2 + 1-s.$$
Since $g_s(y)$ has degree $E$, $g_s(y)$ is irreducible if and only if the sequence $S(c)$ has period $E$, i.e., if and only if $$1-s^2 = c = -(B-B^{-1})^2/4$$ for some element $B \in \F[i]$ of order $2E$. Thus $g_s(y)$ is irreducible if and only if $s = (B + B^{-1})/2$ for some element $B \in \F[i]$ of order $2E$. (Note that if $B$ has order $2E$, then so does $-B$.) Since $g_s(y)$ has constant term $1-s$, Theorem 4.5 yields $$g_s(y) = \frac{\tau^2}{2} N(y) + (1-s),$$ so follows from .
We remark that the elements $a$ in can be rapidly calculated, in view of –. To illustrate Corollary 4.13, first consider the case $q=17$ with the choice of primitive root $B=3$ and the choice $s = (B+B^{-1})/2 = 13$. We have in this case $c=2$, $\tau = 1$, $d=E = 8$, and $\C = \{ 2,9,16 \}$. Thus $2g_s(y)$ equals the irreducible monic polynomial in $\F[y]$ given by $$(y^2-y)(y-2)^2(y-9)^2(y-16)^2 -7.$$ Next consider the case $q=19$ with the choice $B=4+2i \in \F[i]$ of order $q+1 =20$ and the choice $s = (B+B^{-1})/2 = 4$. We have in this case $c=4$, $\tau=2$, $d=E=10$, and $\C=\{ 4, 9, 11, 16 \}$. Thus $g_s(y)/2$ equals the monic irreducible polynomial in $\F[y]$ given by $$(y^2-y)(y-4)^2(y-9)^2(y-11)^2(y-16)^2 -11.$$
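Both worked examples are easy to verify by machine. The sketch below assumes the explicit form $g_s(y) = y^n + (1-y)^n - s$ with $n = (q+1)/2$ (as in the proofs of this section) and compares coefficient lists directly.

```python
# Verify the two worked examples of Corollary 4.13 (q = 17, s = 13 and
# q = 19, s = 4), assuming g_s(y) = y^n + (1-y)^n - s with n = (q+1)/2.
def polymul(f, g, q):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % q
    return h

def polypow(f, k, q):
    r = [1]
    for _ in range(k):
        r = polymul(r, f, q)
    return r

def gs(s, q):                        # g_s(y) = y^n + (1-y)^n - s, ascending
    n = (q + 1) // 2
    f = [(a + b) % q for a, b in zip(polypow([0, 1], n, q),
                                     polypow([1, q - 1], n, q))]
    f[0] = (f[0] - s) % q
    while f[-1] == 0:
        f.pop()
    return f

def factored(roots, const, q):       # (y^2 - y) prod (y - a)^2 + const
    f = [0, q - 1, 1]
    for a in roots:
        f = polymul(f, polymul([q - a, 1], [q - a, 1], q), q)
    f[0] = (f[0] + const) % q
    return f

# q = 17: 2 g_13(y) = (y^2-y)(y-2)^2(y-9)^2(y-16)^2 - 7
assert [2 * a % 17 for a in gs(13, 17)] == factored([2, 9, 16], -7, 17)

# q = 19: g_4(y)/2 = (y^2-y)(y-4)^2(y-9)^2(y-11)^2(y-16)^2 - 11
inv2 = pow(2, 17, 19)
assert [inv2 * a % 19 for a in gs(4, 19)] == factored([4, 9, 11, 16], -11, 19)
```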
Norms of the zeros of $g_s(y)$
==============================
Choose an arbitrary element $\zeta \in \F[i]$ of order $2d$, where $d>2$ divides $E=(q-\rho(-1))/2$. Define the set $$\B_d:=\{(\zeta^j +\zeta^{-j})/2: 0 < j < d, (j,2d)=1\},$$ and let $\B'_d$ be the set of negatives of the elements in $\B_d$. If $d$ is even, then $\B_d=\B'_d$, since $\zeta^j +\zeta^{-j}=$ $-(\zeta^{d-j} +\zeta^{j-d})$ and $(d-j,2d)=1$. However, if $d$ is odd, then $\B_d$ and $\B'_d$ are disjoint. To see this, suppose otherwise, so that $\zeta^k + \zeta^j =-\zeta^{-j-k} (\zeta^k + \zeta^j)$ for some $j, k$ with $0 < j < k <d$ and $(jk,2d)=1$. Then either $\zeta^{k-j}$ or $\zeta^{-k-j}$ equals $-1$, which is impossible since $k-j$ and $-k-j$ are both even.
As $\beta$ runs through the elements of $\F[i]$ of order $2d$, $(\beta + \beta^{-1})/2$ runs twice through the elements of $\B_d$, and if moreover $d$ is odd, $-(\beta + \beta^{-1})/2$ runs twice through the elements of $\B'_d$.
In view of , the elements $s \in \B_d \cup \B'_d$ are precisely those $s$ for which $1-s^2=c \in \C$ and the irreducible factors of $g_s(y)$ and $g_{-s}(y)$ have degree $d$. Define the polynomial $G_s(y) \in \F[y]$ of degree $2E$ by $$G_s(y):= 2g_s(y)g_{-s}(y).$$ When $s \in \B_d \cup \B'_d$, the irreducible factors $I_x(y):=N(y) - N(x)$ of $G_s(y)$ are polynomials in $\F[y]$ of degree $d$ that are identical except for their constant terms $-N(x)$. This raises the following natural question: How can the constant terms corresponding to $g_s(y)$ be distinguished from those corresponding to $g_{-s}(y)$? For odd $d$, the answer is given by the following residuacity criterion for the norms $N(x)$ of the zeros $x$ of $G_s(y)$.
\[Theorem 5.1\] If $s \in \B_d$, then the norms of the zeros of $g_s(y)$ are nonsquares in $\F$, and when $d$ is odd, the norms of the zeros of $g_{-s}(y)$ are squares in $\F$.
By , $$G_t(y)/2 = \big(y^n +(1-y)^n\big)^2 - t^2.$$ With the change of variable $y = (1+u)/2$, $$\label{eq:5.1}
\begin{split}
G_t(y)/2 &= \Big(\Big(\frac{1+u}{2}\Big)^n +
\Big(\frac{1-u}{2}\Big)^n\Big)^2 - t^2 \\
&=\frac{1}{4}f(u^2)^2-t^2=\frac{1}{4}\Big(f((2y-1)^2)^2-4t^2\Big),
\end{split}$$ since $4^n=4$ in $\F$. By and , $$\label{eq:5.2}
g_s((2y-1)^2) = G_t(y),$$ where $s = 2t^2-1$.
As $s \in \B_d$, we have $s = (\beta + \beta^{-1})/2$ for some $\beta$ of order $2d$. Let $\delta$ be an element in $\F[i]$ of order $4d$ with $\delta^2 = \beta$. Then holds with $t: = (\delta + \delta^{-1})/2$, since $s = 2t^2 -1$. While each zero $x$ of $g_s$ has degree $d$ over $\F$, each zero $v$ of $G_t$ has degree $2d$ over $\F$, since $t \in \B_{2d}$. In particular, $v \notin \F[x]$ for such $x,v$. As $v$ runs through the $2E$ distinct zeros of $G_t$, $(2v-1)^2$ runs twice through the $E$ distinct zeros of $g_s$. Thus each zero $x$ of $g_s$ has the form $x = (2v-1)^2$. It follows that $x$ must be a nonsquare in $\F[x]$, since $2v-1 \notin \F[x]$. This proves that the norms of the zeros of $g_s(y)$ are nonsquares in $\F$ when $s \in \B_d$.
Finally, let $s \in \B'_d$ with $d$ odd, so that $s = -(\beta + \beta^{-1})/2$ for some $\beta$ of order $2d$. It remains to show that the norms of the zeros of $g_s(y)$ are squares in $\F$. Let $j$ denote the odd integer in the set $\{(d+1)/2, (3d+1)/2 \}$. Then holds with $t: = (\beta^j + \beta^{-j})/2$, since $s = 2t^2 -1$. Each zero $x$ of $g_s$ has degree $d$ over $\F$, and the same is true about each zero $v$ of $G_t$, since $t \in \B_d$. As $v$ runs through the $2E$ distinct zeros of $G_t$, $(2v-1)^2$ runs twice through the $E$ distinct zeros of $g_s$. Thus each zero $x$ of $g_s$ has the form $x = (2v-1)^2$. Since $\F[x] \subset \F[v]$ and both fields have degree $d$ over $\F$, we must have $2v-1 \in \F[x]$. Therefore $x$ is a square in $\F[x]$, which proves that the norms of the zeros of $g_s(y)$ are squares in $\F$ when $s \in \B'_d$.
For example, take $d=5$, $q \equiv \pm 1 \pmod {20}$, and $s = (1+\sqrt{5})/4$ in $\F$, for either choice of the square root. Then $s \in \B_5$, so the norms of the zeros of $g_s(y)$ are nonsquares in $\F$, while the norms of the zeros of $g_{-s}(y)$ are squares in $\F$. To illustrate with $q=19$ and $s \in \{12,17\} \subset \B_5$, we have $$\begin{split}
&g_{12}(y) = 2(N(y)-3)(N(y)-14), \quad g_{-12}(y) = 2(N(y)-1)(N(y)-16),\\
&g_{17}(y) = 2(N(y)-2)(N(y)-15), \quad g_{-17}(y) = 2(N(y)-6)(N(y)-11),
\end{split}$$ where $3,14,2,15$ are nonsquares modulo 19, while $1,16,6,11$ are squares. For an example with $d$ even, take $d=6$, $q=37$, and $s=26$. Then $$2G_s(y)=(N(y)-2)(N(y)-5)(N(y)-14)(N(y)-20)(N(y)-29)(N(y)-32),$$ where $2,5,14,20,29,32$ are all nonsquares modulo 37.
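The $d=5$ case above can be verified directly; the sketch below rebuilds $N(y)$ from the sequence $S(c)$ with $c = 1-s^2 = 9$ over $\F_{19}$, again assuming $g_s(y) = y^n + (1-y)^n - s$ with $n=(q+1)/2$.

```python
# Check g_12(y) = 2 (N(y)-3)(N(y)-14) over F_19, and that 3, 14 are
# nonsquares mod 19, as Theorem 5.1 predicts for s = 12 in B_5.
q, s = 19, 12
c = (1 - s * s) % q                  # c = 9

ck, k = [0, c], 1                    # S(c); its first return to 0 gives e
while ck[k] != 0:
    ck.append(((2 - 4 * c) * ck[k] - ck[k - 1] + 2 * c) % q)
    k += 1
e = k
assert e == 5                        # so the factors have degree d = 5

def polymul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % q
    return h

N = [0, 1]                           # N(y) = y (y-c_1)^2 (y-c_2)^2
for a in ck[1:(e + 1) // 2]:
    N = polymul(N, polymul([q - a, 1], [q - a, 1]))

n = (q + 1) // 2
f = [1]
for _ in range(n):
    f = polymul(f, [1, q - 1])       # (1-y)^n
g = [(a + b) % q for a, b in zip([0] * n + [1], f)]   # y^n + (1-y)^n
g[0] = (g[0] - s) % q

N3, N14 = N[:], N[:]
N3[0], N14[0] = (N[0] - 3) % q, (N[0] - 14) % q
assert g == [2 * a % q for a in polymul(N3, N14)]

rho = lambda a: pow(a % q, (q - 1) // 2, q)
assert rho(3) == rho(14) == q - 1    # the norms 3, 14 are nonsquares
```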
Finally, we return to the case $d=3$ which motivated this paper. Define $$G(y): = g_{-1/2}(y) g_{1/2}(y), \quad V = \{ v(v-3/4)^2 : v \in \F \}.$$ The monic irreducible cubic factors $I_x(y)=N(y)-N(x)$ of $G(y)$ are given in Corollary 4.6. In the next theorem, we characterize the set of constant terms of these irreducible factors. Equivalently, we characterize $U$, where $U$ denotes the set of norms $N(x)$ of the zeros $x$ of $G(y)$.
\[Theorem 5.2\] We have $U=T$, where $T$ is the complement of $V$ in $\F$.
Consider the list $$L:=\langle m(m-3/4)^2: m \in \F \rangle,$$ which has $q$ entries in $\F$. The entry $0$ occurs twice in $L$ (for $m=0$ and $m=3/4$) and the entry $1/16$ also occurs twice (for $m=1/4$ and $m=1$). Solving the equation for $w$, we see that of the remaining entries $m(m-3/4)^2$ in $L$, those with $\rho(m(1-m))=-1$ occur once in $L$, and those with $\rho(m(1-m))=1$ occur thrice. The number of distinct entries in $L$ is therefore $$|V| = 2 +\frac{1}{2}\sum(1-\rho(m(1-m))) + \frac{1}{6}\sum (1+\rho(m(1-m))),$$ where the sums are over all $m \in \F$ except $m=0,1/4,3/4,1$. The first term $2$ on the right side cancels out when the sums are taken over all $m$. By [@BEW Theorem 2.1.2], $$\sum \limits_{m \in \F} \rho(m(1-m)) = -\rho(-1),$$ so that $$|V| = (2q + \rho(-1))/3, \quad |T| = (q-\rho(-1))/3.$$
Write $E:=(q-\rho(-1))/2$, and let $x$ denote any of the $2E$ zeros of $G(y)$. Since the $2E$ zeros are distinct, $G(y)$ has $2E/3$ distinct factors of the form $I_x(y)$. In particular, $|U| = 2E/3 = |T|$. If one of the $2E/3$ values of $N(x) \in U$ were in $V$, i.e., if $N(x) = N(v)$ for some $v \in \F$, then we’d have the contradiction that $I_x(y)$ has a linear factor $y-v$. Thus $U \subset T$. Since $|U|=|T|$, this proves that $U = T$.
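For $q=13$ (so $3/4 = 4$ and $N(y) = y(y-4)^2$), Theorem 5.2 can be confirmed by brute force. The sketch below recovers $U$ as the set of $t$ for which $N(y)-t$ divides $G(y)$, taking $g_s(y) = y^n + (1-y)^n - s$ with $n=(q+1)/2$ as before.

```python
# Brute-force check of Theorem 5.2 over F_13: U (the norms of the zeros
# of G) equals T, the complement of V = {v (v - 3/4)^2 : v in F}.
q = 13
n = (q + 1) // 2
half = pow(2, q - 2, q)             # 1/2 = 7 in F_13

def polymul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % q
    return h

def gs(s):                          # g_s(y) = y^n + (1-y)^n - s
    f = [1]
    for _ in range(n):
        f = polymul(f, [1, q - 1])
    f = [(a + b) % q for a, b in zip([0] * n + [1], f)]
    f[0] = (f[0] - s) % q
    while f[-1] == 0:
        f.pop()
    return f

G = polymul(gs(half), gs(q - half))  # G = g_{1/2} g_{-1/2}
N = polymul([0, 1], polymul([q - 4, 1], [q - 4, 1]))   # y (y-4)^2

def divides(t):                     # does the monic N(y) - t divide G(y)?
    r, d = G[:], N[:]
    d[0] = (d[0] - t) % q
    while len(r) >= len(d):
        lead = r[-1]
        if lead:
            off = len(r) - len(d)
            for i, a in enumerate(d):
                r[off + i] = (r[off + i] - lead * a) % q
        r.pop()
    return not any(r)

V = {v * (v - 4) ** 2 % q for v in range(q)}
T = set(range(q)) - V
U = {t for t in range(q) if divides(t)}
assert U == T == {2, 7, 10, 12}
assert len(V) == (2 * q + 1) // 3   # |V| = (2q + rho(-1))/3 with rho(-1) = 1
```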
[abcd]{}
B. C. Berndt, R. J. Evans, K. S. Williams, Gauss and Jacobi Sums, Wiley-Interscience, New York, 1998.
M. Bhargava, M. Zieve, Factoring Dickson polynomials over finite fields, Finite Fields Appl. 5 (1999) 103–111.
R. Evans, M. Van Veen, Bivariate identities related to Chebyshev and Dickson polynomials over $\F$, Finite Fields Appl. 49 (2018) 143–155.
S. Gao, G. L. Mullen, Dickson polynomials and irreducible polynomials over finite fields, J. Number Theory 49 (1994) 118–132.
X.-D. Hou, G. L. Mullen, J. A. Sellers, J. L. Yucas, Reversed Dickson polynomials over finite fields, Finite Fields Appl. 15 (2009) 748–773.
N. M. Katz, Rigid local systems on $\AA$ with finite monodromy, (to appear). <https://web.math.princeton.edu/~nmk/gpconj106.pdf>
H. Niederreiter, LFSR sequences and maximal period sequences, Handbook of Finite Fields, CRC Press, Eds. G. L. Mullen and D. Panario, 2013, 311–317.
Q. Wang, J. L. Yucas, Dickson polynomials, Handbook of Finite Fields, CRC Press, Eds. G. L. Mullen and D. Panario, 2013, 282–290.
K. S. Williams, K. Hardy, C. Friesen, On the evaluation of the symbol $((A +B\sqrt{m})/p)$, Acta Arith. 45 (1985) 255–272.
---
author:
- 'Jinyong Jeong${}^{1}$, Younggun Cho${}^{1}$, Young-Sik Shin${}^{1}$, Hyunchul Roh${}^{1}$ and Ayoung Kim${}^{1}$[^1]'
bibliography:
- 'string-long.bib'
- 'references.bib'
title: '**Complex Urban LiDAR Data Set** '
---
ACKNOWLEDGMENT {#acknowledgment .unnumbered}
==============
This material is based upon work supported by the , Korea under the Industrial Technology Innovation Program (No. 10051867) and by the \[High-Definition Map Based Precise Vehicle Localization Using Cameras and LIDARs\] project funded by Naver Labs Corporation.
[^1]: $^{1}$J. Jeong, Y. Cho, Y. Shin, H. Roh and A. Kim are with the Department of Civil and Environmental Engineering, KAIST, Daejeon, S. Korea[ @kaist.ac.kr]{}
---
abstract: 'A scenario is presented, in which the presence of a quantum critical point due to formation of incommensurate charge density waves accounts for the basic features of the high temperature superconducting cuprates, both in the normal and in the superconducting states. Specifically, the singular interaction arising close to this charge-driven quantum critical point gives rise to the non-Fermi liquid behavior universally found at optimal doping. This interaction is also responsible for $d$-wave Cooper pair formation with a superconducting critical temperature strongly dependent on doping in the overdoped region and with a plateau in the optimally doped region. In the underdoped region a temperature dependent pairing potential favors local pair formation without superconducting coherence, with a peculiar temperature dependence of the pseudogap and a non-trivial relation between the pairing temperature and the gap itself. This last property is in good qualitative agreement with so far unexplained features of the experiments.'
address: |
Istituto di Fisica della Materia e Dipartimento di Fisica, Università di Roma “La Sapienza”,\
Piazzale A. Moro 2, 00185 Roma, Italy
author:
- 'C. Castellani, C. Di Castro, and M. Grilli'
title: 'Non-Fermi-liquid behavior and $d$-wave superconductivity near the charge-density-wave quantum critical point'
---
INTRODUCTION
============
Together with their large superconducting critical temperatures, the cuprates display several anomalous normal-state properties which cannot be described in terms of the standard Fermi-liquid (FL) theory. One possible explanation for these anomalous properties of the normal phase is that the low dimensionality of these highly anisotropic systems and their correlated nature are at the origin of a breakdown of the FL. FL theory indeed breaks down in a variety of physical situations, for instance in quasi-one-dimensional conductors. The one-dimensional metallic phase is described by the so-called Luttinger-liquid theory, with no quasiparticle weight at the Fermi surface [@sol]. The established breakdown of FL theory in $d=1$ suggested the intriguing theoretical question of a non-FL metallic behavior in two-dimensional electron systems as an extension of the one-dimensional case [@and]. However, it was recently found [@clc] that, for non-singular interactions (involving small momentum transfer), Luttinger behavior is strictly confined to one-dimensional metals. Only sufficiently singular long-range forces (stronger than the bare Coulomb interaction) give rise to non-FL behavior at low energy above one dimension [@mac; @bw]. The anomalous properties would then arise as a consequence of singular scattering processes at low energy.
Singular scattering can be due to gauge field fluctuations [@nagaosa], which arise by implementing the resonating-valence-bond idea in the t-J model. An alternative point of view is related to the existence of a quantum critical point (QCP), where critical fluctuations can mediate singular interactions between the quasiparticles, providing at the same time a strong pairing mechanism. The proximity to the critical point at zero temperature is naturally characterized by the absence of any energy scale besides the temperature itself. This would agree with what can be inferred from the many anomalous features of the normal state at optimal doping. Various proposals have been put forward on the possible nature of the QCP, ranging from a magnetic mechanism [@MBP; @pines; @pinesQCP] to charge-transfer [@varma] or to incommensurate charge-density-wave (ICDW) [@CDG; @PCDG].
In the antiferromagnetic (AF) QCP proposal a difficulty arises. Specifically, the most evident features of non-FL behavior in the normal phase and the largest superconducting critical temperatures occur at optimal doping, which would lie in the quantum disordered region, far away from the AF transition. To explain this behavior a mechanism (substantial vertex corrections) has to be invoked [@chubukov] to suppress the effect of critical fluctuations, which would otherwise be strongest at low doping just above the AF transition. Besides the fact that in the disordered regime the depression of the effective interaction due to vertex corrections is not established, it would be hard to understand why the most pronounced non-FL behavior occurs at optimal doping for all classes of materials. Indeed, within this scenario optimal doping is a quite generic point of the quantum disordered region, with a finite energy scale.
The peculiarities of optimal doping make this point a natural candidate for the location of a QCP relevant to the superconducting and non-FL properties of the cuprates.
As far as the nature of this critical point is concerned, it seems quite likely that charge degrees of freedom should play the major role, since the disordered region of this QCP coincides with the highly metallic overdoped regime. In this context, besides the appealing but rather exotic proposal of a symmetry breaking related to persistent charge-transfer currents [@varma], we believe that an ICDW-QCP has strong support from both the theoretical and the experimental point of view. The existence of an ICDW-QCP is not an alternative to the existence of an AF-QCP: the two QCPs control the behavior of the system at different doping levels. The ICDW-QCP sets the maximum superconducting critical temperature and can constitute the substrate that sustains AF fluctuations far away from the AF-ordered phase, by allowing for hole-rich and hole-poor “stripes”.
After phase separation (PS) was shown to be present in the phase diagram of the t-J model [@emery; @marder], we pointed out that PS commonly occurs in models with short-range interactions [@GRCDK1]-[@BTGD], provided a strong local $e$-$e$ repulsion inhibits the stabilizing role of the kinetic energy. We therefore stressed that PS and superconductivity are related phenomena irrespective of the nature of the short-range interaction (magnetic, excitonic, phononic,...) [@notanumer].
Emery and Kivelson [@emerykivelson] suggested that, although long-range Coulomb (LRC) forces spoil PS as a static thermodynamic phenomenon, the frustrated tendency towards PS may still be important and give rise to large-amplitude collective charge-density fluctuations. Approaching the problem within a coarse-grained model, they suggested that these fluctuations may be responsible for the anomalous behavior of the normal phase. A static pseudospin formulation [@low] of these ideas showed the formation of a phase with hole-rich and hole-poor stripes. This result is along the same lines as the findings of Refs. [@RCGBK; @CDG; @BTGD], where an ICDW phase was shown to arise in models where PS is spoiled by LRC forces. Our finding is then that in all these models there exists a QCP associated with the formation of ICDW. Near this QCP the dynamic effective interaction between quasiparticles has a singular behavior [@CDG], strongly affecting the single-particle properties and the transport scattering time. In the Cooper channel the same singular scattering provides a strong pairing mechanism with an anisotropic order parameter of $d$-wave symmetry [@BTGD; @PCDG].
Several experimental findings provide support for the existence of a QCP at (or near) optimal doping. This is found in recent transport experiments [@boebinger] in ${\mathrm La_{2-x}Sr_xCuO_4}$ (LSCO) with high magnetic fields, which give access to the normal phase hidden by superconductivity and establish the presence of a metal-insulator transition ending at optimal doping at $T=0$. Indications in the same sense are provided by neutron scattering [@aeppli] revealing a huge increase of a magnetic dynamical correlation length in nearly optimally doped LSCO. Qualitative changes of behavior at optimal doping are also detected by optical spectroscopy [@puchkov], NMR [@jullien], susceptibility [@batlogg], neutron scattering [@rossat], photoemission [@marshall; @harris; @campuzano], specific heat [@loram], thermoelectric power [@zhou], Hall coefficient [@hwang], and resistivity [@batlogg; @ito; @boebinger]. It is also suggestive that several quantities (resistivity, Hall number, uniform susceptibility) display a scaling behavior with a typical energy scale, which vanishes at optimal doping [@johnston; @nakano; @wuyts].
Many indications exist that the above QCP involves charge ordering. In Ref. the metal-insulator transition at T=0 occurs with a high value of $k_Fl$ (clean limit), suggesting that some charge ordering underlies the insulating behavior of the underdoped LSCO samples. A direct observation of charge-driven ordering was possible by neutron scattering [@tranquada1; @tranquada2; @tranquada3] in ${\mathrm La_{1.48}Nd_{0.4}Sr_{0.12}CuO_4}$, where the related Bragg peaks were detected. For this specific compound the low-temperature tetragonal lattice structure pins the CDW and gives static order and semiconducting behavior (see also the case of ${\mathrm La_{1.88}Ba_{0.12}CuO_4}$). Increasing the Sr content at fixed Nd concentration reduces the pinning effect, leading to metallic and superconducting behavior. In this latter case, the existence of dynamical ICDW fluctuations is suggested by the presence of dynamical incommensurate spin scattering, although the charge peaks are too weak to be observed. In this regard, ${\mathrm La_{2-x}Sr_xCuO_4}$ is also expected to display dynamical charge fluctuations with doping-dependent spatial modulation, as indeed observed in the magnetic scattering [@yamada]. ICDWs have been proposed on the basis of extended X-ray absorption fine structure (EXAFS) experiments both in optimally doped LSCO [@bianconi1] and in ${\mathrm Bi_2Sr_2CaCu_2O_{8+x}}$ (Bi-2212) [@bianconi2]. Superstructures have also been detected in Bi-2212 by X-ray diffraction [@bianconi3].
In the next section we will consider the singular interactions arising in the proximity of charge instabilities. In particular we will consider an ICDW instability, which is present in strongly correlated models due to the interplay between PS and LRC forces. This instability is found [@CDG; @BTGD] in specific models at T=0, starting from a uniform FL phase describing the low-temperature overdoped phase of the superconducting cuprates. Once the ICDW instability has been shown to occur, the challenging task remains of providing a complete description of the system at finite temperature and in the underdoped phase, where superconductivity, (dynamical) ICDW order, and magnetism interplay. While a full theoretical understanding of this latter phase is still missing, in Section III we will provide a general scenario for the $T$ vs. doping $\delta$ phase diagram of the cuprates, based both on theoretical results and on experimental evidence. Our conclusions are presented in Section IV.
Singular scattering close to charge instabilities
=================================================
The evaluation of the density-density correlation function $$\chi (q,\omega) \equiv \langle n(q,\omega)
n(-q,-\omega)\rangle$$ provides information on the (charge) stability. In particular a divergence in the static correlation function $\chi (q,\omega=0)$ signals the occurrence of PS (at $q \to 0$) or of CDW instabilities (at finite $q$’s). A complete investigation of the static and dynamical properties of the infinite-U Hubbard-Holstein model together with the analysis of its stability was carried out in a previous work [@CDG; @BTGD]. Here we just mention that, within a large-N slave-boson formalism, this model displays a phonon-driven charge instability even for rather small values of the electron-phonon coupling $g$ [@GC]. In the absence of LRC forces the PS instability occurs before any other finite $q$ instability. The introduction of LRC forces eliminates the $q=0$ divergence in the static correlation function, always giving rise to ICDW instead. The finite critical $q_c$, in this case, is not related to any pseudonesting of the Fermi surface. $q_c$ is determined by the momentum dependence of the (divergent) static correlation function with only short-range forces and by the strength $V_C$, which parametrizes the LRC forces according to $$V_{LR}(q)={V_C \over (a_\perp/a_\parallel) \sqrt{\epsilon_\parallel /
\epsilon_\perp}}{1\over q}.$$ The momenta are in units of the inverse planar lattice spacing $a_\parallel$, $a_\perp$ is the interplane distance and $\epsilon_{\parallel} $ and $\epsilon_{\perp}$ are the corresponding dielectric constants.
A divergent scattering amplitude between quasiparticles $\Gamma (q,\omega)$ will follow from a divergent correlation function $\chi$. Indeed the interaction between the quasiparticles is mediated by the exchange of bosonic degrees of freedom, e.g. phonons and slave-boson fields accounting for the strong local repulsion U. These bosonic excitations have a singular propagator, which enters in the expressions of both $\Gamma$ and $\chi$ establishing a clear connection between the charge instability and the singular quasiparticle scattering.
Near the PS instability ($V_C=0$), the anomalous behavior of $\Gamma$ is identified to be of the form [@CDG] $$\Gamma (q,\omega) \approx
\tilde{U} - {V \over q^2 - i\omega {\gamma \over q} +\kappa^2} .
\label{fitgamsr}$$ $\tilde{U}$ describes the (almost momentum-independent, i.e. local) residual repulsion mediated by the slave bosons: within the large-N slave-boson formalism, the infinite-U repulsion between the bare fermions is reduced to a rather weak residual repulsion between the Fermi-liquid quasiparticles. $\gamma$ is a damping coefficient. The mass term $\kappa^2=a(\delta-\delta_c)$ vanishes linearly when, for a given $g$, the instability takes place at the critical doping $\delta_c=\delta_c(g)$. $\kappa$ can be interpreted as the inverse correlation length for the density fluctuations, $\kappa = \xi^{-1}$. It is worth noticing that the singular part of the effective interaction in Eq. (\[fitgamsr\]) has the same functional form as the scattering amplitude mediated by gauge fields [@nagaosa], although its physical origin is obviously different.
The singular behavior of $\Gamma_q=\Gamma(q \to 0,\omega=0)$ at the PS instability is by no means surprising within a FL framework. Indeed, the FL expression for the compressibility is $\chi_n=2\nu^*/\left(1+2\nu^*
\Gamma_\omega \right)$, where $\Gamma_\omega=
\Gamma_q/(1-2\nu^*\Gamma_q)$ is the standard dynamic ($\omega \to 0, q=0$) limit of the scattering amplitude $\Gamma (q, \omega)$ and $\nu^*$ is the quasiparticle density of states at the Fermi level. This indicates that a divergent $\chi_n$, when the quasiparticle mass remains finite ($\nu^*<\infty$), only occurs when $2\nu^*\Gamma_\omega \to -1$ (Pomeranchuk criterion). At the same time $\Gamma_q \to -\infty$. We point out here that the above arguments keep their validity irrespective of the mechanism leading to PS. However, PS is related to a first-order transition, and the need for a Maxwell construction introduces in the phase diagram a coexistence region embedding the spinodal instability line. Except for the critical end point, the enforced distance of the stable region from the instability line may render the above mechanism for obtaining singular scattering non-generic.
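The two FL relations above can be combined into a closed form making the divergence explicit. The following sketch (with an illustrative value $\nu^*=1$, not taken from the paper) checks numerically that $\chi_n = 2\nu^*(1-2\nu^*\Gamma_q)$, so $\chi_n$ diverges exactly when $\Gamma_q \to -\infty$ while $\Gamma_\omega$ saturates at the Pomeranchuk value $-1/(2\nu^*)$.

```python
# Sketch: eliminating Gamma_omega from the quoted FL relations gives
# chi_n = 2 nu* (1 - 2 nu* Gamma_q), so the compressibility diverges
# precisely when Gamma_q -> -infinity, with Gamma_omega staying finite
# at the Pomeranchuk value -1/(2 nu*).  nu_star = 1.0 is illustrative.

def gamma_omega(nu_star, gamma_q):
    return gamma_q / (1.0 - 2.0 * nu_star * gamma_q)

def chi_n(nu_star, gamma_q):
    return 2.0 * nu_star / (1.0 + 2.0 * nu_star * gamma_omega(nu_star, gamma_q))

nu_star = 1.0
for gq in (-1.0, -10.0, -1e6):
    print(gq, chi_n(nu_star, gq), gamma_omega(nu_star, gq))
```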
We now proceed to the more likely situation which originates from the presence of LRC forces. In this case the singular part of $\Gamma$ can be written as $$\Gamma ({\mbox{\boldmath $q$}},\omega) \approx
\tilde{U} - {1 \over 4} \sum_\alpha \frac{V}{\kappa^2+
\omega_{\mbox{\boldmath $q$}}^{\alpha}
- i\gamma \omega}
\label{fitgamlr}$$ where the sum is over the four equivalent vectors of the CDW instability ${\mbox{\boldmath $q$}}^{\alpha}
=(\pm q_c,0),(0,\pm q_c)$ and $\omega_{\mbox{\boldmath $q$}}^{\alpha} =
2(2-\cos(q_x-q_x^{\alpha})-\cos(q_y-q_y^{\alpha}))$. This expression is used to reproduce the behavior $\sim -1/(\kappa^2+
(q_x-q_x^{\alpha})^2+(q_y-q_y^{\alpha})^2)$ for $q\rightarrow q^{\alpha}$ while maintaining the lattice periodicity.
Also in this case a linear behavior of the mass term $\kappa^2=a(\delta-\delta_c)$ was found. For reasonable parameters (e.g., for LSCO systems we considered a first and next nearest neighbor hopping $t=0.5$ eV and $t'/t=-1/6$ respectively, $V_C=0.55$ eV, a dispersionless phonon with frequency $\omega_0=0.04$ eV and electron-phonon coupling $g=0.17$ eV) the instability first occurs at $\delta_c\approx 0.2$ with ${\mbox{\boldmath $q$}}_c\approx(\pm 0.28,\pm 0.86)$, or ${\mbox{\boldmath $q$}}_c\approx(\pm 0.86,\pm 0.28)$. From our analysis of the infinite-U Hubbard-Holstein model and of other models we found that the rather large density of states near the $(\pm \pi,0)$ and $(0,\pm \pi)$ points tends to favor instabilities at or close to the (1,0) or (0,1) directions. However, as shown in Fig. 1, the scattering is quite strong, although non-singular, in all directions for $\vert {\mbox{\boldmath $q$}} \vert \approx \vert
{\mbox{\boldmath $q$}}_c \vert$. The (almost) isotropic contribution to the static scattering amplitude is much less fragile under doping variations than the singular term itself. The imaginary term in the denominators on the r.h.s. of Eqs.(\[fitgamsr\]) and (\[fitgamlr\]) reproduces over a wide range of transferred momenta $q$ the behavior of the imaginary part of the mean-field fermionic polarization bubble $Im\left[ \chi^0({\mbox{\boldmath $q$}},
\omega)\right] \propto
\omega /q$ at small $\omega $. This indicates that, despite the complicated formal structure of the scattering amplitude within the slave-boson formalism, near the instability a simple RPA-like structure results in the final expression. This supports the idea that the forms (\[fitgamsr\]) and (\[fitgamlr\]) are generic for PS or ICDW and not related to the specific mechanism giving rise to the tendency towards phase separation.
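The static ($\omega=0$) limit of Eq. (\[fitgamlr\]) can be evaluated directly to exhibit the divergence of the attraction at ${\mbox{\boldmath $q$}} = {\mbox{\boldmath $q$}}^{\alpha}$ as $\kappa^2 \to 0$. The sketch below uses the lattice form of $\omega_{\mbox{\boldmath $q$}}^{\alpha}$; the values of $\tilde{U}$, $V$ and $q_c$ are illustrative placeholders, not the fitted parameters of the paper.

```python
import numpy as np

# Static limit of Eq. (fitgamlr):
#   Gamma(q) = U_tilde - (1/4) sum_alpha V / (kappa^2 + omega_q^alpha),
# with omega_q^alpha = 2(2 - cos(qx - qx^a) - cos(qy - qy^a)) and the four
# equivalent vectors q^alpha = (+-qc, 0), (0, +-qc).
# U_tilde, V, qc below are illustrative, not fitted values.

def gamma_static(qx, qy, kappa2, U=0.2, V=1.0, qc=0.9):
    q_alphas = [(qc, 0.0), (-qc, 0.0), (0.0, qc), (0.0, -qc)]
    s = 0.0
    for ax, ay in q_alphas:
        w = 2.0 * (2.0 - np.cos(qx - ax) - np.cos(qy - ay))
        s += V / (kappa2 + w)
    return U - 0.25 * s

# Approaching the instability (kappa^2 -> 0), the attraction at q = q_c
# diverges, while Gamma stays moderate far from the critical vectors:
for k2 in (1.0, 0.1, 0.01):
    print(k2, gamma_static(0.9, 0.0, k2))
print("far from q_c:", gamma_static(np.pi, np.pi, 0.01))
```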
\[FIG1\]
The ICDW quantum critical point and the phase diagram
=====================================================
In the previous section an ICDW instability was shown to occur at T=0 by decreasing doping from an overdoped strongly correlated FL system. According to the scenario outlined in Section I, this ICDW-QCP is the crucial ingredient characterizing the physics of the cuprates all over their metallic (under-, optimally, and over-doped) regime. As seen in Sec. II, the ICDW-QCP is characterized by singular interactions, which in the RPA-like treatment of the model assumed the form of Eq.(\[fitgamlr\]). This functional dependence is not strictly related to the specific origin of the QCP, as also witnessed by its similarity to the interactions proposed for systems near the AF-QCP [@MMP], where the instability also occurs at a finite value of $Q_{AF}=(\pi,\pi)$. However, Eq.(\[fitgamlr\]) could depend on our approximate (nearly-mean-field) treatment. Nevertheless the singular nature of interactions mediated by critical fluctuations is a sound consequence. For the sake of definiteness and simplicity, we will assume the form (\[fitgamlr\]) to be generically valid and we will also assume a simple Gaussian behavior of the QCP.
The ICDW-QCP in the absence of pairing
--------------------------------------
From the theory of QCP’s [@QCPH; @QCPAF; @QCP], one can schematically draw Fig. 2, which would be valid [*in the absence of any superconducting pair formation*]{}.
[FIG. 2: Schematic structure of the phase diagram around the ICDW-QCP in the absence of superconducting pairing. On the right: Quantum disordered region \[$\xi^{-2}\approx (\delta-\delta_c)$\]; In the middle: Quantum critical (classical gaussian) region \[$\xi^{-2}\approx T$\]; On the left: Ordered ICDW phase. The dashed line indicates the mean-field critical temperature line. ]{} \[FIG2\]
The overdoped region on the right corresponds to the quantum disordered regime, where $\kappa^2 =\xi^{-2} \sim a(\delta-\delta_c)^{2\nu}$. Increasing the temperature we enter in the so-called classical gaussian regime where $\kappa^2$ becomes a function of $T$, $\kappa^2 \sim b T^{(d+z-2)/z}$. The crossover occurs along the line $\tilde{T}=(a/b)^{z/(d+z-2)}
(\delta-\delta_c)^{2\nu z/(d+z-2)}$, where $d$ is the spatial dimension and $z$ is the dynamical critical index. Roughly we can write $$\label{kmax}
\kappa^2= Max \left[ a(\delta-\delta_c)^{2\nu},b T^{(d+z-2)/z}\right]$$ with $a$ and $b$ model-dependent positive constants, in order to represent the (much more complex) crossover of the actual $\kappa^2(\delta-\delta_c,T)$. The proper $z$ is $z=2$ for CDW as one sees from the fluctuation propagator. In $d=2$ its value is however immaterial, since Eq.(\[kmax\]) reduces to $\kappa^2=Max[a(\delta-\delta_c)^{2\nu},bT]$. As far as the index $\nu$ is concerned, since we are dealing with a QCP within the classical gaussian approximation, we take $\nu=1/2$.
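For $d=2$, $z=2$, $\nu=1/2$ the crossover structure of Eq. (\[kmax\]) reduces to elementary arithmetic, sketched below. The constants $a$, $b$ and $\delta_c$ are illustrative placeholders, not the values fitted later in the paper.

```python
# Sketch of Eq. (kmax) in d = 2, z = 2, nu = 1/2:
#   kappa^2 = max[a (delta - delta_c), b T],
# with the crossover line T_tilde = (a/b)(delta - delta_c).
# a, b, delta_c are illustrative placeholders.

def kappa2(delta, T, a=1.0, b=0.5, delta_c=0.2):
    return max(a * (delta - delta_c), b * T)

def T_tilde(delta, a=1.0, b=0.5, delta_c=0.2):
    return (a / b) * (delta - delta_c)

# Quantum disordered regime (T < T_tilde): kappa^2 set by doping alone.
print(kappa2(0.3, 0.0), kappa2(0.3, 0.5 * T_tilde(0.3)))
# Quantum critical regime (T > T_tilde): kappa^2 = b T, doping-independent.
print(kappa2(0.21, 1.0), kappa2(0.25, 1.0))
```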
The region on the left would generically correspond to the ordered ICDW phase occurring below a critical temperature $T_{CDW}(\delta)$ starting from the QCP at $\delta_c(T=0)$. The true critical line is depressed with respect to its mean-field expression (sketched by the dashed line $T_0$ of Fig. 2). When evaluated within a specific model, $T_0$ is not only determined by the ${\cal O}(T^2)$ mean-field critical temperature for ICDW formation in a metallic FL phase. It should also include the much more important one-loop gaussian corrections (see Ref. ) accounting for the quantum dynamical reduction controlled by the proximity to the QCP. The region between the two curves $T_0$ and $T_{CDW}$ is dominated by strong precursor effects. On the other hand, in $d=2$, the order parameter strictly appears at T=0 only (in the clean limit).
The quantum disordered region on the right corresponds to the overdoped region of the cuprates with FL behavior. On the other hand, the classical gaussian region around optimal doping is characterized by the absence of any energy scale but the temperature. Here the best non-FL behavior is obtained. In particular, with the scattering of the form given in Eq. (\[fitgamlr\]), a linear-in-T resistivity is expected in $d=2$, while for $d=3$, $\rho(T)\sim T^{3/2}$. In Ref. the objection was raised for magnetically mediated scattering, that only a few “hot” points would feel strong scattering contributing to the above behavior. Generically, all other points would contribute to the lower $T^2$ behavior. However, for ICDW, the fact that typical $q_c$ are fairly small and the strong isotropic character of $\Gamma(q)$ shown in Fig. 1 make this objection less relevant.
The scenario presented so far should find a physical correspondence whenever superconducting pair formation is forbidden. Indeed the transport experiments on LSCO under strong magnetic field [@boebinger] display a metal-insulator crossover, ending at T=0 at a QCP at optimal doping (see Fig. 3 in Ref. ). The experimental line separating the planar metal from the insulator would correspond in our picture to the “true” $T_{CDW}$ critical temperature as a function of doping. We find it quite remarkable that for filling close to the “magic” value 1/8, the experimental temperature for the insulator is substantially higher (see Fig. 1(a) of Ref. ) than for nearby filling values. This is consistent with the idea that commensuration effects at this particular doping pin the thermal ICDW fluctuations, leading to a high $T_{CDW}$ and strengthening the indications of a charge-ordering phenomenon.
As far as $T_0$ is concerned, this temperature marks the onset of the ICDW precursors and is characterized by a loss of spectral weight at low energies giving rise to a uniform decrease of the density of states near the Fermi energy. This would show up as the well known decrease of the uniform magnetic susceptibility below a characteristic temperature, which vanishes by approaching from below the optimal doping [@johnston; @nakano; @wuyts]. In underdoped ${\mathrm {YBa_2Cu_3O_{6+x}}}$ (YBCO) compounds, this last temperature has also been put in correspondence [@ito; @batlogg] with the temperature below which the planar resistivity $\rho_{ab}$ deviates from its linear behavior, while the interplane resistivity $\rho_c$ acquires a non metallic behavior [@takenaka; @notarhoc]. This finding is then compatible with the further identification of our $T_0$ line with the second line present in the phase diagram of LSCO in high magnetic field. This second line in Fig. 3 of Ref. separates a region at larger doping where both $\rho_{ab}$ and $\rho_c$ have metallic behavior from a region where $\rho_c$ increases with decreasing temperature. Consistently with our scenario this crossover line also ends at the QCP at optimal doping.
The ICDW-QCP in the presence of pairing
---------------------------------------
The above scenario is drastically modified, once superconducting pairing is considered [@PCDG]. In particular it is found that the singular interaction of Eq.(\[fitgamlr\]) is also present in the particle-particle channel, thus providing a strong pairing mechanism in the proximity of the critical point. Fig. 3 is a schematic representation of the phase diagram near the ICDW-QCP by allowing for superconducting pairing.
[FIG. 3: Schematic structure of the phase diagram around the ICDW-QCP in the presence of superconducting pairing. On the right: Quantum disordered region \[$\xi^{-2}\approx (\delta-\delta_c)$\]; In the middle: Quantum critical (classical gaussian) region \[$\xi^{-2}\approx T$\]; On the left: Dynamical ICDW phase. The heavy line indicates the region of local (pseudogap) or coherent (superconducting) pairing]{} \[FIG3\]
The most apparent and generic feature is that pairing has $d$-wave symmetry and, being mediated by an interaction rapidly varying with $\kappa^2$ \[cf. Eq.(\[fitgamlr\])\], strongly depends on temperature or doping.
We start by discussing the overdoped quantum disordered regime. Even in this uniform FL phase the evaluation of a quantitatively reliable superconducting critical temperature is a difficult task, since the pairing is mediated by singular interactions. Nevertheless we gained insight by solving the standard BCS equation in the linearized form $$\label{bcst}
\Delta({\mbox{\boldmath $k$}})=
-\frac{1}{N_s}\sum_{{\mbox{\boldmath $p$}}}\Gamma (
{\mbox{\boldmath $k$}}-{\mbox{\boldmath $p$}})
\frac{\tanh \frac{\epsilon_{{\mbox{\boldmath $p$}}}}{2T}}
{2\epsilon_{{\mbox{\boldmath $p$}}}}\Delta({\mbox{\boldmath $p$}})$$ and obtained $T_c$ vs $\kappa^2$ in the proximity of the ICDW instability. $\epsilon^2({\mbox{\boldmath $p$}})=
\xi_{{\mbox{\boldmath $p$}}}^{2}+\Delta_{{\mbox{\boldmath $p$}}}^{2}$ with $\xi_{{\mbox{\boldmath $p$}}}$ being the electronic dispersion measured with respect to the Fermi energy $E_F$. $N_s$ is the number of sites. We considered a tight-binding model with hopping up to the fifth nearest neighbors according to Ref. . The parameters used are appropriate for the band structure of the Bi-2212 compounds, giving an open Fermi surface and a van Hove singularity (VHS) slightly below the Fermi level (for electrons). At optimal doping $\delta=0.17$ the value $E_{F}=-0.1305$ eV is fixed to reproduce the proper distance of the Fermi surface from the VHS ($E_F - E_{VHS}=35$ meV as determined experimentally). The full bandwidth is $W=1.4$ eV.
We have verified that the $d$-wave transition always has a substantially larger critical temperature than the $s$-wave one in the proximity of the ICDW-QCP. This is a consequence of the form of Eq. (\[fitgamlr\]), which, together with a constant repulsion $\tilde{U}$, has a strongly $q$-dependent attraction generically peaked at rather small $q$’s. Roughly, the $d$-wave becomes favorable because the average repulsion felt by $s$-wave paired electrons exceeds the loss in condensation energy due to the vanishing of the order parameter along the nodal regions. Among the $d$ waves, the $d_{x^2-y^2}$ is preferred because its nodes occur in regions with small density of states.
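The structural reason why the constant repulsion $\tilde{U}$ penalizes only the $s$-wave channel can be seen in a toy projection of the static interaction onto Fermi-surface harmonics. The sketch below is not the paper's calculation: it replaces the tight-binding band with a circular Fermi surface of radius $k_F=1$ and uses a continuum approximation to Eq. (\[fitgamlr\]) near ${\mbox{\boldmath $q$}} \approx {\mbox{\boldmath $q$}}^{\alpha}$; all parameter values are illustrative.

```python
import numpy as np

# Toy s- vs d-wave channel projection on a circular Fermi surface (k_F = 1).
# The constant repulsion U_tilde averages to zero against the d-wave form
# factor cos(2 theta), so it reduces only the s-wave coupling, while the
# q-dependent attraction survives in the d-wave channel.
# Parameter values (V, kappa2, qc, U) are illustrative.

N = 64
theta = 2 * np.pi * np.arange(N) / N
kx, ky = np.cos(theta), np.sin(theta)

def gamma_matrix(U, V=1.0, kappa2=0.1, qc=0.9):
    qx = kx[:, None] - kx[None, :]
    qy = ky[:, None] - ky[None, :]
    A = np.zeros((N, N))
    for ax, ay in [(qc, 0), (-qc, 0), (0, qc), (0, -qc)]:
        A += 0.25 * V / (kappa2 + (qx - ax) ** 2 + (qy - ay) ** 2)
    return U - A          # Gamma(k - k'), continuum form near q ~ q_c

def couplings(U):
    G = gamma_matrix(U)
    c = np.cos(2 * theta)                      # d-wave form factor
    lam_s = -G.mean()                          # s-wave channel coupling
    lam_d = -2.0 * (G * np.outer(c, c)).mean() # d-wave channel coupling
    return lam_s, lam_d

s0, d0 = couplings(U=0.0)
s1, d1 = couplings(U=0.8)
# lambda_d is blind to U_tilde; lambda_s is lowered by exactly U_tilde.
print(f"lambda_s: {s0:.3f} -> {s1:.3f}, lambda_d: {d0:.3f} -> {d1:.3f}")
```

The design point is that $\sum_k \cos(2\theta_k) = 0$ on the uniform angular grid, so the constant $\tilde{U}$ drops exactly out of the $d$-wave projection while shifting the $s$-wave coupling by $-\tilde{U}$.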
As one can see in the inset of Fig. 4, the BCS superconducting critical temperature shows a strong increase upon decreasing $\kappa^2$. The actual behavior of $T_c$ is then obtained by introducing the doping and temperature dependence of $\kappa^2\equiv \kappa^2(\delta-\delta_c,T)$. An additional (less important) doping dependence is due to the variation of the chemical potential with respect to the VHS. In the quantum disordered phase $T_c$ \[$\simeq T_c(\kappa^2(\delta-\delta_c,T=0))$\] will rapidly increase upon decreasing doping towards $\delta_c$. At a given doping $\tilde{\delta}\gtrsim \delta_c$ the BCS superconducting temperature will reach the crossover line $\tilde{T}$ separating the quantum disordered from the classical gaussian region. In this latter region $\kappa^2$ depends weakly on doping and a plateau in $T_c$ is reached.
To make the analysis of the over- and optimally doped regions more quantitative we proceeded as follows. First the value of the zero-temperature coefficient $a$ in Eq.(\[kmax\]) is extracted from the $\delta$-dependence of $\Gamma(q,\omega=0)$ given in Ref. . For the chosen values of the microscopic parameters an optimal critical temperature $\tilde{T}_c\approx 90$ K was put in correspondence with $\tilde{\kappa}^2 \approx 0.1$ (see inset of Fig. 4), giving a coherence length of about 3 lattice units at the overdoped-optimal doping crossover. It was then possible to estimate the coefficient $b$ in Eq. (\[kmax\]) from the relation $\tilde{\kappa}^2\simeq a(\tilde{\delta}-\delta_c)\simeq b\tilde{T}_c$, which holds on the $\tilde{T}$ crossover line. Once $a$ and $b$ are given, the $T_c$ vs $\kappa^2$ curves at the various fillings and the relation (\[kmax\]) allow a complete determination of $T_c=T_c[\delta,\kappa^2(\delta,T_c)]$ as a function of doping in both the quantum disordered and the quantum critical regions. The result is plotted in Fig. 4.
\[FIG4\]
The maximal critical temperature is obtained at the quantum-disordered/quantum-critical crossover ($\delta_{opt}\approx \tilde{\delta}$) [@notatopt]. The slow decrease of $T_c$ with decreasing doping in the quantum critical region is a consequence of the decrease in the density of states. Of course this is only an estimate, depending on the use of a weak-coupling BCS scheme and of a model-dependent evaluation of $\kappa^2 (\delta-\delta_c, T)$. Nevertheless, we remark that the experimentally observed rather rapid variation of $T_c$ with doping in the overdoped region and the plateau near optimal doping are quite naturally captured by our description.
Notice that in discussing $T_c$ vs doping we have assumed that the main doping dependence is via $\kappa^2$. Indeed we have verified that the variations induced by moving $E_F$ are less relevant and that, calculating $T_c$ at fixed $\kappa^2$, the greatest values are obtained for $E_F\simeq E_{VHS}$. Then a finite $T_c$ would extend from very small (or even negative) doping up to very high doping ($\delta > 0.6$). Strong variations of $T_c$ with doping, like those observed in many cuprates, are hardly obtained in terms of a dependence on band parameters only (specifically, by tuning the VHS). They are instead quite natural in the context of proximity to an instability, where doping controls the effective potential itself and not only the density of states. This agrees with the experimental finding that at the maximum $T_c$ the VHS is not at the Fermi energy but below it [@shenreview].
All the above analysis is confined to the over- and optimally doped regions of the phase diagram, where the fluctuations are not strong enough to destroy the homogeneous character of the system. On the other hand, the region to the left of the mean-field critical curve for the ICDW transition is characterized by strong thermal fluctuations leading to ICDW precursors. The ICDW fluctuations in the underdoped region become critical in the proximity of the line $T_{CDW}(\delta)$ where the ICDW transition would occur in the absence of superconducting pairing. Approaching $T_{CDW}$, the attractive fluctuations would lead to the formation of (local) pairs at the curve $T^*(\delta)$. As a consequence of the strongly paired quasiparticles, pseudogap effects will show up below $T^*(\delta)$, as seen in many experiments (NMR, ARPES, optical conductivity, specific heat, ...). However, despite the strong pairing, the true superconducting critical temperature is lower than $T^*$ and it decreases inside the underdoped pseudogap region. This occurrence is schematically depicted in Fig. 3 by the bifurcation of the heavy line. The idea of locally paired fermions without long-range coherent superconductivity is a long-standing recurrent concept in the context of high-temperature superconductors, where the coherence length is quite small; it has led to numerous investigations of mixed fermion-boson models [@ranninger] and of Bose condensation vs BCS crossovers [@sademelo]. The superconducting pseudogap problem has also been recently analyzed for a single metallic stripe in an AF environment [@EKZ].
A simple model providing local pair formation is the negative-$U$ Hubbard model in the large-$U$ regime, with a critical temperature for coherent superconductivity which decreases with increasing $\vert U\vert$. To apply this model to the underdoped cuprates, one should then assume that the pairing potential $U$ strongly increases with decreasing doping. However, in this case the zero-temperature charge excitation gap would also increase by the same amount upon decreasing the doping (like $T^*$). This is contrary to the observation [@loram; @harris; @campuzano] that the low-temperature gap in the underdoped cuprates depends weakly on doping, while $T^*$ increases rapidly with decreasing $\delta$. We believe that this peculiar behavior requires a remarkable temperature dependence of the pairing potential, as implied by our ICDW scenario in the underdoped region. Although a full theory of this complex phenomenon is far from being available, to gain insight into the physics of the pseudogap phase we introduce in $\kappa^2$ in Eq. (\[fitgamlr\]) a modified temperature dependence \[with respect to Eq.(\[kmax\])\] given by the distance from the critical line $T_{CDW}$, and add a contribution due to the presence of a local superconducting gap: $\kappa^2 \equiv \mathrm{Max} \left[ \vert \Delta_{Max}(T)\vert,
c(T-T_{CDW}) \right]$. In this way we aim to introduce into the pairing potential the stabilizing effect of the local superconducting order on the ICDW instability. $\Delta_{Max}$ is the maximum value of $\Delta({\mbox{\boldmath $k$}})$ in $k$-space. The decrease of the fluctuations above the ICDW critical line is accounted for via $c(T-T_{CDW})$. Of course $\kappa^2$ is self-consistently related to the superconducting gap determined by the potential itself. For simplicity, we again assume the BCS equation to be valid in order to estimate the (local) $d$-wave gap function. Then, inserting the effective interaction (\[fitgamlr\]) into the BCS equation, we obtain the behavior of $\Delta({\mbox{\boldmath $k$}})$ as a function of $T$ [@CDGP]. The result for $\Delta_{Max}$ is reported in Fig. 5. Despite the oversimplified form of $\kappa^2$ that we have been using, the $T$ behavior of $\Delta_{Max}$ bears a striking resemblance to the analogous quantity recently measured with ARPES in underdoped Bi-2212 samples [@harris].
\[FIG5\]
Of course the above BCS treatment only deals with the amplitude of the gap and says nothing about the way a true superconducting phase coherence is established below a critical temperature $T_c<T^*$. For this we have to invoke phase fluctuations, as in the usual (large) negative-$U$ Hubbard model. Here the strongly peaked $q$-dependence of the pairing potential, which only couples few points on the Fermi surface, and the precursors of (1d) stripe formation are expected to give rise to strong phase fluctuations.
It is also important to emphasize that the occurrence of local pairing prevents the actual establishment of ICDW long-range order [@notaCCCDGR], so that $T_{CDW}$ loses its meaning as a true transition line, merely indicating the area where pairing and strong local-dynamical ICDW order self-consistently interplay.
As far as magnetic properties are concerned, we notice that the presence of fluctuating hole-poor and hole-rich stripes favors (locally commensurate) antiferromagnetic fluctuations in the hole-poor regions. This generically explains why, despite the rapid suppression of long-range AF order by doping, magnetic features survive up to much larger (optimal) doping. Irrespective of the dominant mechanism leading to ICDW formation, magnetism also contributes to further expelling charge from the hole-poor stripes, easily leading to non-linear phenomena and higher-harmonics generation. The presence of locally commensurate AF order embedded in incommensurate stripes would also reconcile NMR and neutron scattering experiments [@bpt].
Conclusions
===========
The scenario presented in this work was based on the existence of ICDW instabilities in strongly correlated systems and on generic properties of QCPs in quasi-two-dimensional systems. These properties were exploited to establish some consequences of the ICDW-QCP scenario, such as the marked non-FL character of the optimally doped quantum disordered region, the $d$-wave symmetry of the superconducting order, and the peculiar doping dependence of the superconducting critical temperature in the over- and optimally doped regions. In these homogeneous regions rather simple theoretical approaches put our conclusions on rather firm ground. On the other hand, the theoretical treatment of the underdoped phase is made difficult by the interplay of three different local fields associated with the tendency towards three ordered phases (magnetic, charge-ordered and superconducting). This renders our scenario less established, thus calling for further confirmation. It is however remarkable that the temperature dependence of the pairing potential implied by the ICDW scenario accounts well for the peculiar (so far unexplained) temperature behavior of the local gap and for the non-trivial relation between $T^*$ and $\Delta_{Max}(T=0)$.
Part of this work was carried out with the financial support of the Istituto Nazionale di Fisica della Materia, Progetto di Ricerca Avanzata 1996.
For a review on $1d$ and quasi-$1d$ electronic systems, see J. Sólyom, [Adv. Phys.]{} [**28**]{}, 201 (1979).

P.W. Anderson, [Science]{} [**235**]{}, 1196 (1987); [Phys. Rev. Lett.]{} [**64**]{}, 1839 (1990); [*ibid.*]{} [**65**]{}, 2306 (1990).

C. Castellani, C. Di Castro and W. Metzner, [Phys. Rev. Lett.]{} [**72**]{}, 316 (1994); for a review on Fermi systems with strong forward scattering see W. Metzner, C. Castellani, and C. Di Castro, to appear in Adv. Phys. (1997).

A. Maccarone, thesis, University of Rome, 1994; C. Castellani and C. Di Castro, Physica [**C 235-240**]{}, 99 (1994).

P. Bares and X.G. Wen, [Phys. Rev.]{} B [**48**]{}, 8636 (1993); A. Houghton, H.-J. Kwon, J.B. Marston and R. Shankar, [J. Phys. Condens. Matter]{} (UK) [**6**]{}, 4909 (1994).

N. Nagaosa and P. A. Lee, [Phys. Rev. Lett.]{} [**64**]{}, 2450 (1990); P. A. Lee and N. Nagaosa, [Phys. Rev.]{} B [**46**]{}, 5621 (1992); L.B. Ioffe and A.I. Larkin, Phys. Rev. B [**39**]{}, 8988 (1989).

P. Monthoux, A.V. Balatsky, and D. Pines, Phys. Rev. B [**46**]{}, 14803 (1992); P. Monthoux and D. Pines, Phys. Rev. B [**50**]{}, 16015 (1994).

A. Sokol and D. Pines, Phys. Rev. Lett. [**71**]{}, 2813 (1993).

C. M. Varma, cond-mat 9607105 and references therein.

C. Castellani, C. Di Castro, and M. Grilli, Phys. Rev. Lett. [**75**]{}, 4650 (1995).

A. Perali, C. Castellani, C. Di Castro, and M. Grilli, Phys. Rev. B [**54**]{}, 16216 (1996).

A. V. Chubukov and D. K. Morr, cond-mat 9701196 and references therein.

V. J. Emery, S. A. Kivelson, and H. Q. Lin, [Phys. Rev. Lett.]{} [**64**]{}, 475 (1990).

M. Marder, N. Papanicolau and G. C. Psaltakis, [Phys. Rev.]{} B [**41**]{}, 6920 (1990).

M. Grilli, R. Raimondi, C. Castellani, C. Di Castro, and G. Kotliar, [Phys. Rev. Lett.]{} [**67**]{}, 259 (1991); [Int. J. of Mod. Phys. B]{} [**5**]{}, 309 (1991).

N. Cancrini, S. Caprara, C. Castellani, C. Di Castro, M. Grilli and R. Raimondi, Europhys. Lett. [**14**]{}, 597 (1991).

C. Di Castro and M. Grilli, Phys. Scr. T [**45**]{}, 81 (1992).

R. Raimondi, C. Castellani, M. Grilli, Y. Bang, and G. Kotliar, Phys. Rev. B [**47**]{}, 3331 (1993).

S. Caprara, C. Di Castro and M. Grilli, Phys. Rev. B [**51**]{}, 9286 (1995).

F. Bucci, C. Castellani, C. Di Castro, and M. Grilli, Phys. Rev. B [**52**]{}, 6880 (1995).

M. Grilli and C. Castellani, Phys. Rev. B [**50**]{}, 16880 (1994).

F. Becca, M. Tarquini, M. Grilli, and C. Di Castro, Phys. Rev. B [**56**]{}, 12443 (1996).

This conclusion has been confirmed by numerical analyses in E. Dagotto and J. Riera, Phys. Rev. Lett. [**70**]{}, 682 (1993), and Sandvik and A. Sudbo, Phys. Rev. B [**54**]{}, R3746 (1996).

V.J. Emery and S.A. Kivelson, Physica (Amsterdam) [**209C**]{}, 597 (1993).

U. Löw, V. J. Emery, K. Fabricius, and S. A. Kivelson, Phys. Rev. Lett. [**72**]{}, 1918 (1994).

G. S. Boebinger [*et al.*]{}, Phys. Rev. Lett. [**77**]{}, 5417 (1996).

T. E. Mason [*et al.*]{}, Phys. Rev. Lett. [**77**]{}, 1604 (1996).

For a recent work see A. V. Puchkov, D. N. Basov, and T. Timusk, preprint 1996, cond-mat 9611083 and references therein.

J. Rossat-Mignod [*et al.*]{}, Physica C [**185-189**]{}.

B. Batlogg [*et al.*]{}, Physica C [**235-240**]{}, 130 (1994) and references therein.

For a recent review see C. Berthier [*et al.*]{}, J. de Physique I, December 1996 and references therein.

D. S. Marshall [*et al.*]{}, Phys. Rev. Lett. [**76**]{}, 4841 (1996).

J. M. Harris [*et al.*]{}, preprint 1996, cond-mat 9611010.

H. Ding [*et al.*]{}, Nature [**382**]{}, 51 (1996).

J. W. Loram [*et al.*]{}, Phys. Rev. Lett. [**71**]{}, 1740 (1993); J. W. Loram [*et al.*]{}, Physica C [**235-240**]{}, 134 (1994).

J.-S. Zhou and J. B. Goodenough, Phys. Rev. Lett. [**77**]{}, 151 (1996).

H. Y. Hwang [*et al.*]{}, Phys. Rev. Lett. [**72**]{}, 2636 (1994).

T. Ito [*et al.*]{}, Phys. Rev. Lett. [**70**]{}, 3995 (1993).

D. C. Johnston, Phys. Rev. Lett. [**62**]{}, 957 (1989).

T. Nakano [*et al.*]{}, Phys. Rev. B [**49**]{}, 16000 (1994).

B. Wuyts, V. V. Moshchalkov, and Y. Bruynseraede, Phys. Rev. B [**53**]{}, 9418 (1996).

K. Takenaka [*et al.*]{}, Phys. Rev. B [**50**]{}, 6534 (1994).

The non-metallic behavior of $\rho_c$ could be due to the fact that the ICDW fluctuations in the planes are enhanced below $T_0$, rendering coherent single-particle hopping difficult.

The fact that the highest $T_c$ is obtained in correspondence to the crossover line between the quantum-critical and the quantum-disordered regions is a consequence of the simplified form of $\kappa^2$ in Eq.(\[kmax\]).

J.M. Tranquada, B.J. Sternlieb, J.D. Axe, Y. Nakamura, and S. Uchida, Nature [**375**]{}, 561 (1995).

J.M. Tranquada, J.D. Axe, N. Ichikawa, Y. Nakamura, S. Uchida, and B. Nachumi, Phys. Rev. B [**56**]{}, 7689 (1996).

J.M. Tranquada, J.D. Axe, N. Ichikawa, A. R. Moodenbaugh, Y. Nakamura, S. Uchida, and B. Nachumi, cond-mat 9608048.

K. Yamada [*et al.*]{}, preprint 1996.

A. Bianconi [*et al.*]{}, Phys. Rev. Lett. [**76**]{}, 3412 (1996); A. Bianconi [*et al.*]{}, Phys. Rev. B [**54**]{}, 12018 (1996) and references therein; A. Bianconi [*et al.*]{}, Phys. Rev. B [**54**]{}, 4310 (1996).

S. E. Barnes, J. Phys. [**F6**]{}, 1375 (1976); P. Coleman, Phys. Rev. [**B29**]{}, 3035 (1984); N. Read and D. M. Newns, J. Phys. [**C16**]{}, 3273 (1983).

A. J. Millis, H. Monien and D. Pines, Phys. Rev. B [**42**]{}, 167 (1990).

J. A. Hertz, Phys. Rev. B [**14**]{}, 1165 (1976).

S. Sachdev and J. Ye, Phys. Rev. Lett. [**69**]{}, 2411 (1992).

A. J. Millis, Phys. Rev. B [**48**]{}, 7183 (1993).

R. Hlubina and T. M. Rice, Phys. Rev. B [**51**]{}, 9253 (1995).

M.R. Norman, M. Randeria, H. Ding and J.C. Campuzano, Phys. Rev. B [**52**]{}, 615 (1995).

D. C. Newns, C. C. Tsuei, P. C. Pattnaik, and C. L. Kane, Comm. Cond. Mat. Phys. [**15**]{}, 273 (1992) and references therein.

Z. X. Shen and D. S. Dessau, Phys. Rep. [**253**]{}, 1 (1995).

T. Kostyrko and J. Ranninger, Phys. Rev. B [**54**]{}, 13105 (1996), and references therein.

V. J. Emery, S. A. Kivelson, and O. Zachar, cond-mat 9610094.

C. A. R. Sá de Melo, M. Randeria, and J. R. Engelbrecht, Phys. Rev. Lett. [**71**]{}, 3202 (1993).

Indeed it was shown in a Kondo-lattice-like model that superconductivity stabilizes phase separation already at mean-field level [@CCCDGR].

C. Castellani, C. Di Castro, M. Grilli, and A. Perali, unpublished.

The existence of charge inhomogeneities was already claimed to reconcile NMR and neutron scattering experiments by V. Barzykin, D. Pines, and D. Thelen, Phys. Rev. B [**50**]{}, 16052 (1994).
---
abstract: 'We present a study of the proximity effect and the inverse proximity effect in a superconductor$\mid$ferromagnet bilayer, taking into account several important factors which mostly have been ignored in the literature so far. These include spin-dependent interfacial phase shifts (spin-DIPS) and inhomogeneous textures of the magnetization in the ferromagnetic layer, both of which are expected to be present in real experimental samples. Our approach is numerical, allowing us to access the full proximity effect regime. In Part I of this work, we study the superconducting proximity effect and the resulting local density of states in an inhomogeneous ferromagnet with a non-trivial magnetic texture. Our two main results in Part I are a study of how Bloch and Néel domain walls affect the proximity-induced superconducting correlations and a study of the superconducting proximity effect in a conical ferromagnet. The latter topic should be relevant for the ferromagnet Ho, which was recently used in an experiment to demonstrate the possibility to generate and sustain long-range triplet superconducting correlations. In Part II of this work, we investigate the inverse proximity effect with emphasis on the induced magnetization in the superconducting region as a result of the “leakage” from the ferromagnetic region. It is shown that the presence of spin-DIPS modify conclusions obtained previously in the literature with regard to the induced magnetization in the superconducting region. In particular, we find that the spin-DIPS can trigger an anti-screening effect of the magnetization, leading to an induced magnetization in the superconducting region with *the same sign* as in the proximity ferromagnet.'
author:
- Jacob Linder
- Takehito Yokoyama
- 'Asle Sudb[ø]{}'
date: Received
title: 'Theory of superconducting and magnetic proximity effect in S$\mid$F structures with inhomogeneous magnetization textures and spin-active interfaces'
---
Introduction {#sec:introduction}
============
The interplay between ferromagnetism and superconductivity has over the past decade attracted much interest from the condensed-matter physics community. Research on superconductor$\mid$ferromagnet (S$\mid$F) heterostructures continues to attract great interest, fueled by the exciting phenomena arising from a fundamental-physics point of view in addition to the prospect of functional devices in low-temperature nanotechnology.
There is currently intense activity in this particular research area (see [*e.g.* ]{}Refs. and references therein). The interest in S$\mid$F hybrid structures was boosted at the beginning of this millennium, primarily due to the theoretical proposition of proximity-induced odd-frequency correlations [@bergeret_prl_01] and the experimental observation of 0-$\pi$ oscillations in S$\mid$F$\mid$S Josephson junctions [@ryazanov_prl_01]. A large amount of work has been devoted to odd-frequency pairing (see [*e.g.* ]{}[@volkov_prl_03; @bergeret_prb_03; @eschrig_prl_03; @Braude; @asano_prl_07_1; @Keizer; @fominov_prb_07; @yokoyama_prb_07; @asano_prl_07_2; @halterman_prl_07; @Tanaka; @eschrig_jlow_07; @linder_prb_08; @eschrig_nphys_08; @halterman_prb_08; @linder_prb_08_2; @yada_arxiv_08]) and the physics of 0-$\pi$ oscillations (see [*e.g.* ]{}[@bulaevskii_jetp_77; @buzdin_pisma_82; @koshina_prb_01; @kontos_prl_02; @buzdin_prb_03; @houzet_prb_05; @cottet_prb_05; @robinson_prl_06; @zareyan_prb_06; @yokoyamajos_prb_07; @houzet_prb_07; @crouzy_prb_07; @linder_prl_08; @champel_prl_08; @brydon_prb_08; @sperstad_prb_08; @volkov_prb_08]) in S$\mid$F heterostructures. The concept of odd-frequency pairing dates back to Refs. [@berezinskii_jetp_74; @balatsky_prb_92; @coleman_prb_93; @abrahams_prb_95] and was recently re-examined in Ref. [@solenov_arxiv_08].
So far, the proximity effect has received much more attention than the inverse proximity effect. In S$\mid$F bilayers, the proximity effect causes superconducting correlations to penetrate into the ferromagnetic region [@bergeretrmp]. Similarly, the inverse proximity effect induces ferromagnetic correlations in the superconducting region near the interface.[@Gu; @Sillanpaa; @Bergeret; @Morten] Often, the bulk solution is employed in the superconducting region, such that both the induced magnetic correlations and the self-consistency of the superconducting order parameter are neglected. However, it was shown in Ref. [@bergeret_prb_04] that the induction of an odd-frequency triplet component would lead to a finite magnetization in the superconducting region close to the S$\mid$F interface. Prior to this finding, some experimental groups had reported findings which pointed towards precisely such a phenomenon [@muehge_physicac_98; @garifullin_appl_02]. Very recently, Xia [*et al.*]{}[@xia_arxiv_08] presented an experimental observation of the inverse proximity effect in Al/(Co-Pd) and Pd/Ni bilayers by measuring the magneto-optical Kerr effect. Their data could be roughly fitted to the predictions of Ref. [@bergeret_prb_04], and other experiments [@Gu; @Sillanpaa; @muehge_physicac_98; @garifullin_appl_02; @salikhov_arxiv_08] have also addressed aspects of the inverse proximity effect in S$\mid$F bilayers.
In Ref. [@kharitonov_prb_06], the authors investigated the proximity-induced magnetization in the superconducting region of a S$\mid$F bilayer, and found that the magnetization would oscillate in the clean limit (see also Ref. [@halterman_prb_04]) and decay monotonically in the diffusive limit, with a sign opposite to the magnetization in the bulk of the ferromagnet. The reason for this screening behavior in the superconductor was attributed to a scenario in which the spin-$\uparrow$ electron of a Cooper pair near the interface would prefer to be located in the ferromagnetic region, while its spin-$\downarrow$ partner would remain in the superconducting region, thus creating a magnetization with an opposite sign compared to the ferromagnet. By considering the weak proximity effect regime in the diffusive limit, both Ref. [@bergeret_prb_04] and Ref. [@kharitonov_prb_06] arrived at this conclusion. However, it would be desirable to go beyond the approximation of a weak proximity effect employed in previous work, to investigate whether this may alter how the induced magnetization in the superconducting region behaves.
Moreover, none of the above works on the inverse proximity effect have properly included an important property which is intrinsic to S$\mid$F interfaces, namely the spin-dependent interfacial phase shifts (spin-DIPS) that occur at the interface. The spin-DIPS have been shown to exert an important influence on various experimentally observable quantities in S$\mid$F bilayers [@cottet_prb_05; @huertashernando_prl_02; @cottet_prb_07], and should be taken into account. For instance, the anomalous double-peak structure in the local density of states (LDOS) in a diffusive S$\mid$F bilayer reported very recently by SanGiorgio [*et al.*]{} in Ref. [@sangiorgio_prl_08] was reproduced theoretically in Ref. [@cottet_arxiv_08] by using a numerical solution of the Usadel equation when including the effect of the spin-DIPS.
So far, due to the complexity of the problem, several assumptions have usually been made when treating S$\mid$F hybrid structures. For instance, since the quasiclassical equations become quite complicated for inhomogeneous ferromagnets, they have been linearized in most previous works. At present, however, this research field is moving towards a more realistic description of S$\mid$F structures than the simplified models that have mostly been employed up to now. This is clearly a necessary step in order to reconcile theoretical predictions with experimentally observed data.
Our motivation for this work is to examine the effect of inhomogeneous magnetization textures and spin-DIPS on both the proximity effect and the inverse proximity effect in S$\mid$F bilayers. This is directly relevant to two recent experimental studies [@sosnin_prl_06; @xia_arxiv_08] which studied the superconducting proximity effect in the conical ferromagnet Ho and the inverse proximity effect in the superconducting region of a S$\mid$F bilayer, respectively. As we shall show in this work, non-trivial magnetization textures and spin-DIPS have profound influence on the physical properties of S$\mid$F bilayers, suggesting that their role must be taken seriously.
We divide this work into two parts which are devoted to the proximity effect in the ferromagnetic region (Part I) and the inverse proximity effect in the superconducting region (Part II). In Part I, we present results where we treat the role of magnetic properties at the interface and the possibility of inhomogeneous magnetization thoroughly. We study the proximity-induced density of states (DOS) in a S$\mid$F bilayer which takes into account the presence of spin-DIPS at the interface and also the possibility of having a non-trivial magnetization texture (such as a domain wall) in the ferromagnetic region. In order to access the full proximity effect regime, we do not restrict ourselves to any limiting cases. Rather, we employ a full numerical solution of the DOS by means of the quasiclassical theory of superconductivity. We apply our theory to two cases of ferromagnets with an inhomogeneous magnetic texture, namely on one hand ferromagnets with domain walls and on the other hand conical ferromagnets.
In Part II, we study numerically and self-consistently the inverse proximity effect in a S$\mid$F bilayer of finite size upon taking properly into account the spin-DIPS that occur at the S$\mid$F interface. Our main objective is to study the influence exerted on the inverse proximity effect by the spin-DIPS. Surprisingly, we find that the spin-DIPS may invert the sign of the proximity-induced magnetization in the superconducting layer compared to the predictions of Refs. [@bergeret_prb_04; @kharitonov_prb_06]. Consequently, the spin-DIPS can trigger an anti-screening effect of the magnetization, which suggests that their role must be taken seriously in any attempt to construct a theory for the inverse proximity effect in S$\mid$F bilayers. We also explain the basic mechanism behind the sign-inversion induced by the spin-DIPS.
This paper is organized as follows. In Section \[sec:theoryI\], we present the theoretical framework we use to perform our computations in Part I, namely the quasiclassical theory of superconductivity in the diffusive limit for an inhomogeneous ferromagnet using the Riccati parametrization. In Section \[sec:resultsI\], we present our numerical results for the proximity effect and the local density of states for the two cases of ferromagnets with domain walls and with conical magnetic textures. In Section \[sec:discussionI\], we present a discussion of our results obtained in Part I. Moving on to Part II of this work, we introduce a slightly different notation and parametrization for the Green’s function in Sec. \[sec:theoryII\], which is easier to implement for a homogeneous S$\mid$F bilayer. In Sec. \[sec:resultsII\], we present our results for the inverse proximity effect, manifested through an induced magnetization in the superconducting region, and in particular how it is influenced by the presence of spin-DIPS. The results for Part II are discussed in Sec. \[sec:discussionII\], and we conclude with final remarks in Sec. \[sec:summary\]. Throughout the paper, we will use boldface notation for 3-vectors, $\hat{\ldots}$ for $4\times4$ matrices, and $\underline{\ldots}$ for $2\times2$ matrices.
Proximity effect in a S$\mid$F bilayer with an inhomogeneous magnetization texture {#sec:PartI}
===================================================================================
Theoretical framework {#sec:theoryI}
---------------------
In the first part of our work, we shall consider the proximity effect in the ferromagnetic region of an S$\mid$F bilayer when the magnetization texture is inhomogeneous. This is the case [*e.g.* ]{}in the presence of a domain-wall structure or conical ferromagnetism, which both will be treated below. We will use the quasiclassical theory of superconductivity [@serene], and consider the diffusive limit described by the Usadel equation [@usadel].
### Quasiclassical theory and Green’s functions
To account for an inhomogeneous magnetization in the ferromagnet, it is convenient to parametrize the Green’s function to obtain a simpler set of equations to solve. One possibility is to use a generalized $\theta$-parametrization [@ivanov_prb_06], as follows $$\begin{aligned}
\hat{g} &= \begin{pmatrix}
M_0c\underline{\sigma_0} + (\boldsymbol{M}\cdot\underline{\boldsymbol{\sigma}})s & \underline{\rho}^+ \notag\\
\underline{\rho}^- & -M_0c\underline{\sigma_0} - (\boldsymbol{M}\cdot\underline{\boldsymbol{\sigma}})^*s
\end{pmatrix},\notag\\
\underline{\rho}^\pm &= c[\i(M_z\underline{\sigma_2}-M_y\underline{\sigma_3})\pm M_x\underline{\sigma_0}] \pm M_0\underline{\sigma_1}s,\end{aligned}$$ where $\underline{\sigma_j}$ are the identity $(j=0)$ and Pauli $(j=1,2,3)$ matrices, and $$\begin{aligned}
\underline{\boldsymbol{\sigma}} = (\underline{\sigma_1}, \underline{\sigma_2}, \underline{\sigma_3}).\end{aligned}$$ Also, $s\equiv \sinh(\theta)$ and $c\equiv \cosh(\theta)$. The Green’s function is then completely determined by the complex functions $\theta$, $M_0$, and $\boldsymbol{M}$ with the additional constraint $M_0^2 -\boldsymbol{M}^2=1$ in order to satisfy $\hat{g}^2=\hat{1}$.
However, for our purpose we find it both more convenient and elegant to use a Riccati parametrization of the Green’s function [@schopohl_prb_95; @konstandin_prb_05], as follows $$\begin{aligned}
\label{eq:g}
\hat{g} &= \begin{pmatrix}
{\underline{\mathcal{N}}}(\underline{1}-{\underline{\gamma}}{\underline{\tilde{\gamma}}}) & 2{\underline{\mathcal{N}}}{\underline{\gamma}}\\
2{\underline{\tilde{\mathcal{N}}}}{\underline{\tilde{\gamma}}}& {\underline{\tilde{\mathcal{N}}}}(-\underline{1} + {\underline{\tilde{\gamma}}}{\underline{\gamma}}) \\
\end{pmatrix}.\end{aligned}$$ This parametrization facilitates the numerical computations, and also ensures that $\hat{g}^2=\hat{1}$. The unknown functions ${\underline{\gamma}}$ and ${\underline{\tilde{\gamma}}}$ are key elements in this parametrization of the Green’s function, and will be solved for below. Here, $\underline{\ldots}$ denotes a $2\times2$ matrix and $$\begin{aligned}
{\underline{\mathcal{N}}}=(1+{\underline{\gamma}}{\underline{\tilde{\gamma}}})^{-1},\quad {\underline{\tilde{\mathcal{N}}}}= (1+{\underline{\tilde{\gamma}}}{\underline{\gamma}})^{-1}.\end{aligned}$$
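As a quick sanity check of our own (not taken from the paper), one can verify numerically that the Riccati form of $\hat{g}$ satisfies the normalization $\hat{g}^2=\hat{1}$ for arbitrary complex $2\times2$ matrices ${\underline{\gamma}}$ and ${\underline{\tilde{\gamma}}}$:

```python
import numpy as np

rng = np.random.default_rng(42)

# Random complex 2x2 Riccati amplitudes gamma and gamma-tilde
g_ = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
gt = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
g_ *= 0.3
gt *= 0.3  # keep 1 + gamma @ gamma-tilde safely invertible

I2 = np.eye(2)
N  = np.linalg.inv(I2 + g_ @ gt)    # \mathcal{N}
Nt = np.linalg.inv(I2 + gt @ g_)    # \tilde{\mathcal{N}}

ghat = np.block([[N @ (I2 - g_ @ gt), 2 * N @ g_],
                 [2 * Nt @ gt, Nt @ (-I2 + gt @ g_)]])

assert np.allclose(ghat @ ghat, np.eye(4))  # normalization g^2 = 1 holds
```

The identity rests on ${\underline{\mathcal{N}}}{\underline{\gamma}} = {\underline{\gamma}}{\underline{\tilde{\mathcal{N}}}}$, which the matrix check confirms without any small-amplitude expansion.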
In order to calculate the Green’s function $\hat{g}$, we need to solve the Usadel equation with appropriate boundary conditions at $x=0$ and $x=d_F$. The two natural length scales associated with each of the long-range orders are the superconducting and ferromagnetic coherence lengths $$\begin{aligned}
\xi_S = \sqrt{D_S/\Delta_0},\; \xi_F = \sqrt{D_F/h_0},\end{aligned}$$ where $\Delta_0$ and $h_0$ denote the bulk values of the gap and the exchange field. We set $D_F=D_S=D$ for simplicity. The Usadel equation reads $$\begin{aligned}
\label{eq:usadel}
D\partial(\hat{g}\partial\hat{g}) + \i[\varepsilon\hat{\rho}_3 + \text{diag}[\boldsymbol{h}\cdot\underline{\boldsymbol{\sigma}},(\boldsymbol{h}\cdot\underline{\boldsymbol{\sigma}})^\mathcal{T}], \hat{g}]=0,\end{aligned}$$ and is supplemented with the boundary conditions [@cottet_prb_05; @huertashernando_prl_02] $$\begin{aligned}
2\zeta d_F\hat{g} \partial \hat{g} = [\hat{g}_\text{BCS}, \hat{g}] + \i (G_\phi/G_T) [\text{diag}(\underline{\tau_3}, \underline{\tau_3}), \hat{g}]\end{aligned}$$ at $x=0$ where the interface is spin polarized along the z-axis, and $ \hat{g}\partial\hat{g}=\hat{0}$ at $x=d_F$. Here, $\partial \equiv \frac{\partial}{\partial x}$ and we define $$\begin{aligned}
\zeta=R_B/R_F\end{aligned}$$ as the ratio between the resistance of the barrier region and the resistance in the ferromagnetic film (note that $R_B = G_T^{-1}$). The barrier conductance is given by [@cottet_prb_05] $$\begin{aligned}
G_T = G_Q\sum_n^N T_n,\end{aligned}$$ where $G_Q = e^2/h$ and $T_n$ is the transmission coefficient for channel $n$. The boundary conditions Eqs. (\[eq:bcF\]) and (\[eq:bcS\]) are derived under the assumption that $T_n \ll 1$, but this does not necessarily mean that the barrier conductance is small, since there may be a large total number of channels $N$ through which transport may take place. The parameter $G_\phi$ describes the spin-DIPS taking place at the F side of the interface.[@Brataas] Since its exact value depends on the microscopic properties of the barrier region, it is here treated phenomenologically. We finally underline that the boundary conditions above are valid for planar diffusive contacts.
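To fix the length scales involved, the sketch below evaluates $\xi_S$ and $\xi_F$ for illustrative material parameters (restoring $\hbar$, which is set to unity in the expressions above). The numbers are our own assumptions, roughly Al-like for the superconductor and a weak ferromagnet for F; they are not values used in the paper.

```python
import numpy as np

hbar = 1.0545718e-34          # J*s
meV  = 1.602176634e-22        # J

# Illustrative material parameters (assumptions, not from the paper):
D       = 1.0e-3              # shared diffusion constant D_F = D_S, m^2/s
Delta_0 = 0.2 * meV           # bulk superconducting gap (Al-like)
h_0     = 10.0 * meV          # bulk exchange field (weak ferromagnet)

xi_S = np.sqrt(hbar * D / Delta_0)   # superconducting coherence length
xi_F = np.sqrt(hbar * D / h_0)       # ferromagnetic coherence length
# With these inputs xi_S is tens of nm while xi_F is only a few nm:
# the proximity amplitude decays much faster in F than the gap varies in S.
```

Since $h_0\gg\Delta_0$ in any realistic S$\mid$F bilayer, $\xi_F/\xi_S=\sqrt{\Delta_0/h_0}\ll 1$, which is why thin ferromagnetic layers $d_F\sim\xi_F$ are the experimentally relevant regime.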
Since we employ a numerical solution, we have access to the full proximity effect regime and also to an, in principle, arbitrary spatial modulation $h=h(x)$ of the exchange field. This is desirable in order to clarify effects associated with non-uniform ferromagnets, such as spiral magnetic ordering or the presence of domain walls. Inserting Eq. (\[eq:g\]) into Eq. (\[eq:usadel\]), we obtain the transport equation for the unknown function ${\underline{\gamma}}$ (and hence ${\underline{\tilde{\gamma}}}$) $$\begin{aligned}
D[\partial^2{\underline{\gamma}}+(\partial{\underline{\gamma}})\underline{\tilde{\mathcal{F}}}(\partial{\underline{\gamma}})] + \i[2\varepsilon{\underline{\gamma}}+ \boldsymbol{h}\cdot(\underline{\boldsymbol{\sigma}}{\underline{\gamma}}- {\underline{\gamma}}\underline{\boldsymbol{\sigma}}^*)] = 0,\end{aligned}$$ with $\underline{\tilde{\mathcal{F}}} = -2{\underline{\tilde{\mathcal{N}}}}{\underline{\tilde{\gamma}}}$. The boundary condition at $x=0$ reads $$\begin{aligned}
2\zeta d_F\partial_x{\underline{\gamma}}&= [2c{\underline{\gamma}}-s\i\underline{\tau_2} + {\underline{\gamma}}(s\i\underline{\tau_2}){\underline{\gamma}}] \notag\\
&+ \i (G_\phi/G_T)(\underline{\tau_3}{\underline{\gamma}}- {\underline{\gamma}}\underline{\tau_3}),\end{aligned}$$ while $\partial_x{\underline{\gamma}}= 0$ at $x=d$. For ${\underline{\tilde{\gamma}}}$, we obtain $$\begin{aligned}
D[\partial^2{\underline{\tilde{\gamma}}}+(\partial{\underline{\tilde{\gamma}}})\underline{\mathcal{F}}(\partial{\underline{\tilde{\gamma}}})] + \i[2\varepsilon{\underline{\tilde{\gamma}}}+ \boldsymbol{h}\cdot({\underline{\tilde{\gamma}}}\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}^*{\underline{\tilde{\gamma}}})] = 0,\end{aligned}$$ with the corresponding boundary condition $$\begin{aligned}
2\zeta d_F\partial_x{\underline{\tilde{\gamma}}}&= [2c{\underline{\tilde{\gamma}}}-s\i\underline{\tau_2} + {\underline{\tilde{\gamma}}}(s\i\underline{\tau_2}){\underline{\tilde{\gamma}}}] \notag\\
&- \i (G_\phi/G_T)(\underline{\tau_3}{\underline{\tilde{\gamma}}}- {\underline{\tilde{\gamma}}}\underline{\tau_3}).\end{aligned}$$ We have defined $\underline{\mathcal{F}} = -2{\underline{\mathcal{N}}}{\underline{\gamma}}$. Note that we use the bulk solution in the superconducting region, which is a good approximation when assuming that the superconducting region is much less disordered than the ferromagnet and when the interface transparency is small, as considered here (see detailed discussion in Sec. \[sec:discussionI\]). One finds that $$\begin{aligned}
{\underline{\gamma}}_\text{BCS}={\underline{\tilde{\gamma}}}_\text{BCS} = \begin{pmatrix}
0 & s/(1+c)\\
-s/(1+c) & 0 \\
\end{pmatrix}.\end{aligned}$$ The normalized DOS is finally evaluated by $$\begin{aligned}
N(\varepsilon)/N_0 = \text{Tr}\{\text{Re}[{\underline{\mathcal{N}}}(1-{\underline{\gamma}}{\underline{\tilde{\gamma}}})]\}/2.\end{aligned}$$ In what follows, we will omit the effect of spin-flip and spin-orbit scattering to reduce the number of parameters in the problem. When comparing with real experimental data, however, the effects of these pair-breaking mechanisms are easily included in our framework by adding two terms $\hat{\sigma}_\text{sf}$ and $\hat{\sigma}_\text{so}$ in Eq. (\[eq:usadel\]) (see [*e.g.* ]{}Ref. [@linder_prb_08_2] for a detailed treatment). In this paper, we will focus on the role of the phase-shifts obtained at the interface due to the spin-split bands and the inhomogeneity of the exchange field in the ferromagnet.
### Inhomogeneous magnetization
We will consider three types of inhomogeneous magnetic structures: Bloch walls, Néel walls, and conical ferromagnets (see Fig. \[fig:model\]). An example of the latter is the heavy rare-earth elemental magnet Ho, although we hasten to add that while Ho is a strong ferromagnet, we will here consider the weakly ferromagnetic case. These structures are to be contrasted with the usual assumption of a homogeneous exchange field in the ferromagnetic region. For the first two cases, the domain wall has a width $d_W$ and is taken to be located at the center of the ferromagnetic region $(x=d_F/2)$. The Bloch wall is thus modelled by $$\begin{aligned}
\mathbf{h} = h(\cos\theta\hat{\mathbf{y}}+\sin\theta\hat{\mathbf{z}}),\end{aligned}$$ while $\hat{\mathbf{y}}\to \hat{\mathbf{x}}$ for the Néel wall. Here, we have defined $$\begin{aligned}
\theta=-\arctan[(x-d_F/2)/d_W],\end{aligned}$$ similarly to Ref. [@konstandin_prb_05].
In the case of a conical ferromagnet, cf. Fig. \[fig:model\], the magnetic moment belongs to a cone. In Ho, the opening angle is $\alpha=4\pi/9$ and the magnetic moment then rotates like a helix along the $c$-axis with a turning angle $\theta=\pi/6$ per interatomic layer with distance $a$ (see Ref. [@sosnin_prl_06] for a further discussion). Above 21 K, the conical ferromagnetic structure transforms into a spiral antiferromagnetic structure. Instead of using an abrupt change in the magnetization direction at each interatomic layer, we will model this transition smoothly, since the effective field felt between the layers should be a weighted superposition of the exchange fields from the two closest layers. In the ferromagnetic phase, the spatial variation of the exchange field may thus be written as $$\begin{aligned}
\mathbf{h} = h\Big[&\cos\alpha\hat{\mathbf{x}} + \sin\alpha\Big\{\sin\Big(\frac{\theta x}{a}\Big)\hat{\mathbf{y}} +\cos\Big(\frac{\theta x}{a}\Big)\hat{\mathbf{z}}\Big\}\Big].\end{aligned}$$
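The three textures above can be tabulated directly from their definitions; a minimal sketch (the wall width and grid are illustrative, while $\alpha$, $\theta$, and $a/\xi_S$ are the Ho-like values used in the text):

```python
import numpy as np

d_F, d_W = 1.0, 0.2                  # layer and wall widths (illustrative units)
alpha, theta = 4*np.pi/9, np.pi/6    # cone and turning angles (Ho-like values)
a = 0.0263                           # interatomic-layer spacing in units of xi_S

x = np.linspace(0.0, d_F, 11)

# Bloch wall: rotation in the y-z plane around the wall centre x = d_F/2
ang = -np.arctan((x - d_F/2) / d_W)
h_bloch = np.stack([np.zeros_like(x), np.cos(ang), np.sin(ang)], axis=1)

# Neel wall: identical profile with y -> x (swap the first two components)
h_neel = h_bloch[:, [1, 0, 2]]

# Conical texture: fixed cone angle alpha, turning theta per layer spacing a
h_conical = np.stack([np.full_like(x, np.cos(alpha)),
                      np.sin(alpha) * np.sin(theta * x / a),
                      np.sin(alpha) * np.cos(theta * x / a)], axis=1)

# All three are unit vectors at every position: |h(x)|/h = 1
for h in (h_bloch, h_neel, h_conical):
    assert np.allclose(np.linalg.norm(h, axis=1), 1.0)
```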
Results {#sec:resultsI}
-------
In what follows, we choose the parameters of our model to correspond to a realistic experimental setup, in order to make our study directly relevant for experiments on S$\mid$F bilayers. The numerical treatment makes use of built-in routines in MATLAB for a two-point boundary value problem for an ordinary differential equation. More specifically, we use a finite difference code which implements a three-stage Lobatto IIIa formula. An initial guess for the Riccati matrices is supplied with fixed boundary conditions, and the Usadel equation is then solved in the entire ferromagnetic region.
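An equivalent solver exists outside MATLAB: SciPy's collocation-based `solve_bvp` follows the same workflow (right-hand side, boundary residuals, mesh, initial guess). As a sketch, we apply it to a toy linearized equation $\theta''=\kappa^2\theta$ with a fixed interface value and a vacuum condition $\theta'(d)=0$, standing in for the full Riccati system:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy stand-in for the Riccati system: theta'' = kappa^2 * theta on [0, d],
# theta(0) = theta0 at the interface, theta'(d) = 0 at the vacuum edge.
kappa, d, theta0 = 3.0, 1.0, 0.1

def rhs(x, y):          # y[0] = theta, y[1] = theta'
    return np.vstack([y[1], kappa**2 * y[0]])

def bc(ya, yb):         # boundary residuals at x = 0 and x = d
    return np.array([ya[0] - theta0, yb[1]])

x = np.linspace(0.0, d, 50)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))

# Compare with the closed-form solution of the toy problem
exact = theta0 * np.cosh(kappa * (d - x)) / np.cosh(kappa * d)
assert np.allclose(sol.sol(x)[0], exact, atol=1e-3)
```

The same structure (a stiff right-hand side, mixed Dirichlet/Neumann boundary residuals) carries over to the full matrix Riccati problem, only with a larger state vector.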
In the first part of this section, we will study the effect of domain walls in weak ferromagnets. Weak ferromagnetic alloys such as PdNi or CuNi are commonly employed in experiments, and the corresponding exchange field $h$ depends on the concentration of Ni, reaching up to tens of meV. The modification of the DOS is most dramatic in the case when the energy scales for the superconductivity and the ferromagnetism are of the same order, $h\sim\Delta$. This scenario appears to have been realized in Ref. where Cu$_{1-x}$Ni$_x$ with $x=0.44$ was used. The diffusion constant in the weakly ferromagnetic alloys is usually of order $D \sim 10^{-4}$ m$^2$/s. The superconducting region is considered to act as a reservoir with thickness $d_S\gg\xi_S$, while we fix the thickness of the ferromagnetic region at $d_F/\xi_S = 0.5$. This typically corresponds to a thickness of the ferromagnetic layer $\sim$ 10 nm. The remaining parameters are then the domain wall thickness $d_W$ and the term $G_\phi$ accounting for the spin-dependent phase-shifts at the interface. Below, we will contrast a thin domain wall $(d_W\ll d_F)$ with a thick domain wall $(d_W \simeq d_F)$ and investigate the role of $G_\phi$. In what follows, we choose $\zeta=5$ corresponding to a situation where $R_B\gg R_F$.
In the second part of this section, we will study conical ferromagnetism, of a similar kind to that realized in the heavy rare earth element Holmium (Ho) under certain conditions. Recently, it was strongly suggested by experimental data that a long-range triplet superconducting component was generated and sustained in a superconductor$\mid$Ho proximity structure [@sosnin_prl_06]. The experimental samples used in Ref. [@sosnin_prl_06] did not appear to fall into the diffusive regime, since Ho is a strong ferromagnet. More specifically, it was estimated that $h\tau \simeq 10$ in Ref. [@sosnin_prl_06], suggesting that one would have to resort to the more general Eilenberger equation in order to study the proximity effect in Ho. In this work, we will study a conical ferromagnet under the assumption that the diffusive limit is reached. For the actual structure of the magnetization, we choose the same parameters for Ho as those reported in Ref. [@sosnin_prl_06]: $\alpha=4\pi/9$, $\theta=\pi/6$, and $a=0.526$ nm (see Fig. \[fig:model\]). However, we choose the exchange field much weaker than in Ho, in order to justify the Usadel approach. Thus, our results may not be directly applicable to Ho. While in Ref. [@sosnin_prl_06] it was estimated that $h \sim 1$ eV, corresponding to an exchange field comparable in magnitude with the Fermi energy, we choose $h/\Delta_0=5$ in our study of conical ferromagnetism to ensure the validity of the quasiclassical approach. Assuming that $\xi_S = 20$ nm, which should be reasonable for a moderately disordered conventional superconductor, we obtain $a/\xi_S = 0.0263$.
### Domain wall
Before proceeding to a presentation of our results, it should be noted that we find identical results for the Bloch and Néel wall cases. This seems reasonable, since the only difference between those two cases is that the $y$-component of the magnetization is exchanged with the $x$-component. The long-range triplet component comes about as long as only one of these is non-zero, and it does not matter which one it is. It is also necessary for the magnetization to vary directionally with the $x$-coordinate in order to generate the inhomogeneity required for the long-range triplet component. Note that the $z$-component of the magnetization is the same for the Bloch and Néel walls. In what follows, we only consider the Bloch wall configuration since the results for the Néel wall are identical. We also note that in our study, the magnetization is always inhomogeneous in the direction perpendicular to the interface, i.e. upon penetrating into the ferromagnetic region. In the case where the inhomogeneity of the magnetization is in the transverse direction (parallel to the interface), i.e. there is no variation in the $x$-direction, the proximity effect does not become long-ranged even if equal-spin correlations may be generated [@champel_prl_08]. The general condition for a long-range proximity effect is that there exists a misalignment between the triplet anomalous Green’s function vector and the exchange field.
We first study the thin domain wall case $d_W/d_F=0.2$. To begin with, we shall consider the energy-resolved DOS in the center of the domain wall $(x=d_F/2)$ for several values of the exchange field. This is shown in Fig. \[fig:thinE\]. As seen, the zero-energy DOS is enhanced in all cases due to the presence of odd-frequency correlations.[@asano_prl_07_1; @Braude; @yokoyama_prb_07; @yokoyama_prb_05] The influence of the spin-DIPS $(G_\phi)$ appears to be the induction of additional peak features in the subgap regime. This effect is most pronounced at low exchange fields (in particular $h/\Delta_0=0.5$ in Fig. \[fig:thinE\]). A possible physical explanation for the additional peak features in the LDOS may be the fact that $G_\phi$ acts as an effective exchange field in both the superconducting and ferromagnetic layers.[@huertashernando_prl_02] It thus conspires with the intrinsically existing exchange field in the ferromagnetic layer to yield a modified value of the total exchange field. This explanation is consistent with the fact that the positions of the peaks change upon increasing $G_\phi$. More specifically, the spin-DIPS appear to enhance the exchange field, since the peaks move outwards toward the gap edge.
Next, we investigate the thick domain wall case, and choose $d_W/d_F=0.8$. In Fig. \[fig:thickE\], we again consider the energy-resolved LDOS in the middle of the ferromagnetic layer $(x/d_F=0.5)$ for three different values of the exchange field. Upon comparison with Fig. \[fig:thinE\], it is seen that the general trend upon increasing the domain wall thickness is an overall enhancement of the proximity effect. The qualitative features in Fig. \[fig:thickE\] are very similar to those in the thin domain wall case, but the enhancement at zero-energy tends to be larger, particularly so for large values of $h/\Delta_0$. Again, it is seen that the effect of the spin-DIPS is a modification of the total exchange field, amounting to a double-peak structure at subgap energies in the LDOS.
It is also interesting to consider the spatial dependence of the zero-energy DOS in the ferromagnetic region. By using local STM-techniques, it is possible to probe the DOS at (in principle) any location in the ferromagnetic film. The specific choice of $\varepsilon=0$ is particularly interesting in terms of the DOS, since it is strongly influenced by the presence of odd-frequency correlations. As pointed out in Refs. , the behaviour of the DOS at $\varepsilon=0$ may be interpreted as a competition between spin-singlet even-frequency correlations and spin-triplet odd-frequency correlations. The former tend to produce a minigap in the DOS for subgap energies, while the latter yield a zero-energy peak in the DOS. These two effects thus compete with each other through their destructive interplay. In the present case, one would expect that the domain wall structure should favor the generation of the odd-frequency triplet components, thus enhancing the LDOS. This conjecture is supported by Figs. \[fig:thinE\] and \[fig:thickE\].
In Fig. \[fig:DOS\_x\], we plot the spatially-resolved LDOS at $\varepsilon=0$ for several values of $d_W$ to probe directly how the odd-frequency correlations are affected by the domain wall thickness. In contrast to Figs. \[fig:thinE\] and \[fig:thickE\], we here normalize the LDOS to its value at $x=0$ for easier comparison between different values of $d_W$, and choose $G_\phi=0$. From the plot, it is clear that the thicker the domain wall, the more strongly enhanced the zero-energy DOS. This also supports the notion that the magnetically inhomogeneous structure favors the generation of odd-frequency triplet components. The concomitant enhancement of the DOS may then be seen at increasingly larger penetration depths in the ferromagnet when the domain wall thickness is increased.
### Conical ferromagnetism
We now turn to a study of how the superconducting proximity effect is manifested in a ferromagnet with a conical magnetization such as Ho. We fix the exchange field at $h/\Delta_0=5$ and study how the DOS changes upon increasing the ferromagnetic layer thickness. The motivation for this is to obtain a better understanding of how the DOS changes when only the long-range triplet components are present in the sample. In an inhomogeneous ferromagnet, the singlet component and the $S_z=0$ triplet component are short-ranged, and penetrate only a distance $\xi_F = \sqrt{D/h}$ into the ferromagnet. The $S_z=\pm1$ triplet components, however, are not subject to the pair-breaking effect originating from the Zeeman splitting, and can thus penetrate a much longer distance $\xi_N=\sqrt{D/T}$ into the ferromagnet, where $T$ is temperature. Therefore, by making the ferromagnetic layer thick enough, one can be certain that there is no contribution from either the singlet or $S_z=0$ triplet components. Since we have chosen $h/\Delta_0=5$, we find that the penetration depth of these components in the ferromagnetic layer should be $0.44\xi_S$.
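The quoted penetration depth follows from simple ratios; a quick check, assuming $\xi_S=\sqrt{D/\Delta_0}$ so that $\xi_F/\xi_S=\sqrt{\Delta_0/h}$:

```python
import numpy as np

# Short-range penetration depth: with xi_S = sqrt(D/Delta_0), one has
# xi_F/xi_S = sqrt(Delta_0/h) (this definition of xi_S is an assumption)
h_over_Delta0 = 5.0
xi_F_over_xi_S = np.sqrt(1.0 / h_over_Delta0)
print(xi_F_over_xi_S)      # 0.447..., i.e. the ~0.44 xi_S quoted above

# Lattice-spacing ratio for the conical texture (values from the text)
a_nm, xi_S_nm = 0.526, 20.0
print(a_nm / xi_S_nm)      # 0.0263
```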
We next turn to a study of the proximity-induced LDOS. In Fig. \[fig:conical\_E\], we plot the energy-resolved LDOS for three layer thicknesses: *i)* $d/\xi_S=0.1$, *ii)* $d/\xi_S=0.5$, and *iii)* $d/\xi_S=0.9$. In case *i)*, both short-ranged and long-ranged components should contribute significantly to the LDOS. In case *ii)*, the long-ranged components should dominate over the short-ranged ones, while finally in case *iii)* only long-ranged components remain. This is because we evaluate the energy-resolved DOS at the F$\mid$I interface, $x=d_F$, as was also done in the experiments of Refs. [@kontos_prl_01; @sangiorgio_prl_08].
As seen in cases *ii)* and *iii)*, a pronounced zero-energy peak is present, bearing witness to the odd-frequency correlations in the system. The peak becomes more pronounced with increasing thickness, since the long-range triplet correlations dominate over the even-frequency singlet Green’s function as the thickness increases. However, case *i)* is qualitatively different from the two other thicknesses. In this case, the low-energy LDOS is completely suppressed in the regime $G_\phi/G_T<1$, and suddenly reappears for $G_\phi/G_T>1$. It is very interesting to note that the same effect was recently discovered for an S$\mid$N junction with a magnetically active interface [@linder_submitted_08], but in that case the effect was completely independent of the junction thickness.
In order to investigate this effect further, we focus on the zero-energy LDOS in the thin junction case in Fig. \[fig:conical\_G2\]. As seen, for sufficiently thin layers $d_F/\xi_S \ll 1$, an abrupt crossover takes place at a critical value of $G_\phi/G_T$, qualitatively altering the LDOS at zero-energy. Remarkably, we find that a similar transition takes place upon increasing the ferromagnetic layer thickness. Consider a plot of the zero-energy LDOS in Fig. \[fig:conical\_d\] as a function of $d_F/\xi_S$. As seen, at a critical layer thickness, the zero-energy LDOS rises abruptly from zero and acquires the usual oscillating behavior. To see how the full energy-resolved LDOS evolves with increasing $G_\phi$ for a fixed thickness $d_F/\xi_S = 0.1$, consider Fig. \[fig:conical\_G\]. As seen, the LDOS changes qualitatively above a critical value of $G_\phi/G_T \simeq 1.14$.
To summarize the findings of Figs. \[fig:conical\_G2\], \[fig:conical\_d\], and \[fig:conical\_G\], we have found that there is an abrupt crossover from a fully suppressed LDOS to a finite LDOS which appears at a critical thickness of the ferromagnetic layer, and the particular value of the critical thickness depends on the value of $G_\phi$. In a similar way, we find that there is an abrupt change appearing at a critical value of $G_\phi$ for sufficiently thin layers. The natural question is: what is the reason for these changes? An important clue is found in the fact that when the LDOS is fully suppressed, the odd-frequency correlations must be zero [@yokoyama_prb_07]. The presence of odd-frequency correlations will in general lead to an enhancement of the LDOS at zero-energy, which is currently one of the main suggestions put forth in the literature for obtaining clear experimental signatures of this exotic type of superconducting pairing. Therefore, the abrupt transition from a fully suppressed LDOS to a LDOS which is enhanced even compared to the normal-state value is a strong indicator of a symmetry transition from the usual even-frequency correlations to a state of mixed even- and odd-frequency correlations, or possibly even pure odd-frequency correlations. It is therefore clear that the spin-DIPS occurring at the interface have paramount consequences for the symmetry properties of the induced superconducting correlations in the ferromagnet. Due to the complexity of the problem, it is unfortunately not possible to give an exact analytical treatment of the influence of $G_\phi$ on the symmetry properties of the anomalous Green’s function.
In the remaining part of the discussion of conical ferromagnets, we wish to focus on how the proximity-induced LDOS depends on the structure of the magnetic texture, which is determined by the parameters $\{a,\alpha,\theta\}$ in Fig. \[fig:model\]. We here focus on the role of $\alpha$ and $\theta$, which control respectively the direction and the speed of rotation of the magnetization upon entering the ferromagnetic layer. Thus, we keep $a/\xi_S$ fixed at $a/\xi_S=0.0263$. In Fig. \[fig:conical\_theta\], we present results for the zero-energy LDOS at $x=d_F$ as a function of $\theta$ for several values of $\alpha$. The LDOS displays oscillations as a function of $\theta$, and eventually seems to saturate upon increasing $\theta$. This may be understood microscopically by realizing that when the rotation of the magnetization texture becomes faster, [*i.e.* ]{}increasing $\theta$, the effective magnetization felt by the Cooper pair averages out to zero for the rotating components. For our setup, this would mean that only the $h_x$-component should remain non-zero, while $h_y=h_z=0$. To verify this scenario, we have also plotted the results in the $h_y=h_z=0$ case in Fig. \[fig:conical\_theta\] (dotted lines) for each value of $\alpha$, which is seen to coincide with the limiting behavior in the high-$\theta$ case. It is interesting to note that for $\alpha=\pi/2$, the LDOS vanishes completely above a critical value for $\theta$. This may be understood by noting that $h_x=0$ when $\alpha=\pi/2$. Thus, when $\theta$ increases, we have $\langle h_y \rangle = \langle h_z \rangle = 0$, causing the ferromagnetic layer to act as a normal metal.
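The averaging argument can be verified directly: over a length $\xi_S$, the rotating components $h_y$ and $h_z$ average toward zero once the rotation period $2\pi a/\theta$ is short. A minimal numerical sketch:

```python
import numpy as np

a = 0.0263                         # layer spacing in units of xi_S (from the text)
x = np.linspace(0.0, 1.0, 20001)   # one coherence length xi_S of ferromagnet

# Spatial averages of the rotating field components over a length xi_S
avgs = {theta: (np.mean(np.sin(theta * x / a)), np.mean(np.cos(theta * x / a)))
        for theta in (np.pi / 64, np.pi / 6, np.pi)}

for theta, (hy, hz) in avgs.items():
    print(f"theta = {theta:.3f}: <h_y>/h = {hy:+.3f}, <h_z>/h = {hz:+.3f}")
# Faster rotation (larger theta) drives both rotating components toward zero,
# leaving only the constant h_x = h*cos(alpha) component, as argued above.
```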
Discussion {#sec:discussionI}
----------
The main approximation that we have made in our calculations is to use the bulk solution for the order parameter of the superconductor. Although this approximation is expected to be satisfactory in the regime $d_S\gg\{\xi_S,d_F\}$, such that the superconductor acts as a reservoir, there are two aspects which are lost upon doing so. One aspect is the depletion of the superconducting order parameter near the interface. The depletion may be disregarded in the tunneling limit [@bruder_prb_90] (low barrier transparency), and we do not expect that an inclusion of the spatial profile of the superconducting order parameter near the interface should have any qualitative influence upon our results, as long as the superconducting order parameter is not dramatically reduced at the interface.
The assumption of a step-function superconducting order parameter is commonly employed in the literature, but let us for the sake of clarity examine a bit more carefully under which circumstances it is truly warranted. In the present work, we have considered a superconducting reservoir of size $d_S\gg\xi_S$ and a ferromagnetic film of size $d_F \leq \xi_S$. For the weak ferromagnets considered here, the ferromagnetic coherence length $\xi_F$ is comparable in size to $\xi_S$. Also, we have considered the case where $\zeta=R_B/R_F\gg1$, corresponding to a low barrier transparency, which should be experimentally relevant. To investigate quantitatively how much the superconducting order parameter is suppressed near the interface, let us fix $h/\Delta_0=10$, $d_S/\xi_S=5$, $d_F/\xi_F=1$, and $\zeta=5$. Using a numerical approach for an S$\mid$F bilayer with a homogeneous exchange field, as employed in Part II of our paper, we obtain the gap self-consistently with the result shown in Fig. \[fig:gap\]. It is also necessary to introduce the barrier asymmetry factor $\gamma=\xi_S\sigma_F/(\xi_F\sigma_S)$, where $\sigma_{F(S)}$ is the conductivity in the F (S) layer. Here, we set $\gamma=1$. As seen, the depletion of the gap is quite insensitive to the value of $G_\phi$, and we have verified that the depletion of the gap is virtually the same even up to ferromagnetic layer thicknesses of $d_F/\xi_F=4$. As recently pointed out in Ref. [@cottet_arxiv_08], the step-function approximation breaks down for low values of $\zeta$ and/or high values of $\gamma$, and if the spin-DIPS $G_\phi^S$ induced on the superconducting side are large in magnitude compared to the tunneling conductance $G_T$, the suppression of the gap becomes more pronounced.
The second aspect which is lost is the inverse proximity effect in the superconductor. The inverse proximity effect is, in similarity to the depletion of the order parameter, expected to be small when the interface transparency is low and $d_S \gg d_F$. Nevertheless, the presence of the spin-DIPS at the interface, modelled through the parameter $G_\phi$, could have some non-trivial impact on the correlations in the superconductor. Cottet showed that this may indeed be so in Ref. , at least when the superconducting layer is quite thin. The full effect exerted on the LDOS by the presence of spin-DIPS on both sides of the interfaces was recently investigated numerically in an S$\mid$F bilayer [@cottet_arxiv_08]. However, no study so far has investigated how the proximity-induced magnetization in the superconducting region is affected by spin-DIPS. We will proceed to investigate this particular issue in detail in Part II of this work.
Above, we have considered the diffusive limit $\xi_S/l_\text{imp}\gg1$, where $l_\text{imp}=v_F\tau$ is the mean free path. Although the magnetic texture we have considered in the second part is identical to that of the conical ferromagnet Ho, one important difference is that Ho is a strong ferromagnet, contrary to the case studied here. This means that the diffusive limit condition $h\tau\ll1$ is not fulfilled for Ho, and it was in fact estimated in [@sosnin_prl_06] that $h\tau \simeq 10$. This calls for a treatment with the more general Eilenberger equation, which allows for a study where the energy scale of the Zeeman-splitting is comparable to or larger than the self-energy associated with impurity scattering. A natural continuation of this work would therefore be to study a proximity-structure of a superconductor$\mid$conical ferromagnet for an arbitrary ratio of the parameter $h\tau$. Such an endeavor would nevertheless be quite challenging unless a weak proximity effect is assumed. In the present work, we have not restricted ourselves to any limits with regard to the barrier transparency or the proximity effect. Although the exchange field considered for the conical ferromagnet in this paper is smaller than the one realized in Ho, we expect that our results may be qualitatively relevant for STM-measurements in superconducting junctions with Ho. In general, increasing the exchange field amounts to a quantitative reduction of the magnitude of the proximity effect.
Finally, we show that the zero-energy DOS for the domain wall case exhibits a crossover behavior similar to that of the conical ferromagnetic case upon varying $G_\phi$ and $d_F$ when $d_F/\xi_S \ll 1$. In Fig. \[fig:domainwall\_crossover\], the zero-energy DOS is plotted for the thick domain wall case to illustrate this effect; the results are very similar even for $d_W/d_F\ll1$ when $d_F/\xi_S\ll1$. Once again, it should be noted that a complete suppression of the DOS amounts to pure even-frequency superconducting correlations induced in the ferromagnetic region, since the presence of odd-frequency correlations enhances the zero-energy DOS. The exact microscopic mechanism behind the abrupt crossover occurring at critical values of $G_\phi$ and $d_F$, respectively, remains somewhat unclear. A possible resolution to this behavior is the observation that the spin-DIPS may conspire with the proximity-induced minigap in the ferromagnetic region for sufficiently thin layers ($d_F/\xi_S\ll1$) and yield a zero-energy DOS of the form $N(0) \sim 1/\sqrt{G_\phi^2-G_T^2}$, as noted in Ref. [@huertashernando_prl_02]. In this case, a scenario similar to that of a thin-film superconductor in the presence of an in-plane magnetic field is realized, where the spin-resolved DOS experiences a quasiparticle energy-shift with $\pm h$. Here, the role of the exchange field is played by $G_\phi$ while the role of the superconducting gap is played by $G_T$. We do not observe the effects shown in Fig. \[fig:domainwall\_crossover\] for larger values of $d_F$, which is consistent with the fact that the minigap is completely absent in this regime since the proximity effect becomes weaker.
Inverse proximity effect in a S$\mid$F bilayer with a homogeneous magnetization texture
=======================================================================================
In this part of the paper, we will consider the inverse proximity effect of an S$\mid$F bilayer, where the exchange field is fixed and parallel to the $z$-axis, manifested through an induced magnetization near the interface of the superconducting region. We will again employ the quasiclassical theory of superconductivity [@serene], and consider the diffusive limit described by the Usadel equation [@usadel], as this is experimentally the most relevant case. Our approach will be to solve the Usadel equation and the gap equation for the superconducting order parameter self-consistently everywhere in the system shown in Fig. \[fig:model2\].
Theory {#sec:theoryII}
------
We will use the conventions and notation of Ref. [@linder_prb_08_2], which also allows for an inclusion of magnetic impurities and spin-orbit coupling if desirable. To facilitate the numerical implementation, we employ the following parametrization of the Green’s functions: $$\begin{aligned}
\label{eq:green}
\hat{g}_j = \begin{pmatrix}
c_{\uparrow,j} & 0 & 0 & s_{\uparrow,j}\\
0 & c_{\downarrow,j} & s_{\downarrow,j} & 0 \\
0 & -s_{\downarrow,j} & -c_{\downarrow,j} & 0 \\
-s_{\uparrow,j} & 0 & 0 & -c_{\uparrow,j} \\
\end{pmatrix},\; j=\{S,F\}\end{aligned}$$ where we have introduced $$\begin{aligned}
s_{\sigma,j} = \sinh(\theta_{\sigma,j}),\; c_{\sigma,j} = \cosh(\theta_{\sigma,j}).\end{aligned}$$ Note that $(\hat{g}_j)^2=\hat{1}$ is satisfied. The parameter $\theta_{\sigma,j}$ is a measure of the proximity effect, and obeys the Usadel equation $$\begin{aligned}
\label{eq:usadel}
D_j\partial_x^2\theta_{\sigma,j} &+ 2\i(\varepsilon+\sigma h)\sinh(\theta_{\sigma,j}) \notag\\
&- 2\i\sigma\Delta\cosh(\theta_{\sigma,j}) = 0,\; \sigma=\{\uparrow,\downarrow\}\end{aligned}$$ in the superconducting ($h=0, j=S$) and ferromagnetic ($\Delta=0, j=F$) layer, respectively. Above, $D_S$ and $D_F$ denote the diffusion constants in the two layers, $\varepsilon$ is the quasiparticle energy, $\Delta$ is the pair potential, while $h$ is the exchange field. The latter two are in general subject to a depletion close to the S$\mid$F interface.
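The normalization $(\hat{g}_j)^2=\hat{1}$ of the parametrization above can be checked numerically for arbitrary complex $\theta_{\sigma,j}$; a quick sketch with illustrative values:

```python
import numpy as np

# Arbitrary complex proximity-effect parameters (illustrative values)
t_up, t_dn = 0.3 + 0.7j, -0.2 + 0.1j
c_up, c_dn = np.cosh(t_up), np.cosh(t_dn)
s_up, s_dn = np.sinh(t_up), np.sinh(t_dn)

# Green's function matrix in spin x particle-hole space, Eq. (eq:green)
g = np.array([[ c_up, 0,     0,     s_up],
              [ 0,    c_dn,  s_dn,  0   ],
              [ 0,   -s_dn, -c_dn,  0   ],
              [-s_up, 0,     0,    -c_up]])

assert np.allclose(g @ g, np.eye(4))  # (g)^2 = 1 since cosh^2 - sinh^2 = 1
```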
The boundary condition for the ferromagnetic Green’s function, $\hat{g}_F$, reads [@huertashernando_prl_02] $$\begin{aligned}
\label{eq:bcF}
2\xi_F\hat{g}_F \partial_x \hat{g}_F = \gamma_T[\hat{g}_S, \hat{g}_F] + \i \gamma_{\phi,F} [\hat{\alpha}_3, \hat{g}_F]\end{aligned}$$ at $x=0$, and $ \hat{g}_F\partial_x\hat{g}_F=\hat{0}$ at $x=d_F$. Here, $\hat{\ldots}$ denotes a $4\times4$ matrix in spin$\otimes$particle-hole space. Also, $\hat{\alpha}_3=\text{diag}(1,-1,1,-1)$. For the superconducting Green’s function, $\hat{g}_S$, we have $$\begin{aligned}
\label{eq:bcS}
2(\xi_S/\gamma)\hat{g}_S \partial_x \hat{g}_S = -\gamma_T[\hat{g}_F, \hat{g}_S] - \i \gamma_{\phi,S} [\hat{\alpha}_3, \hat{g}_S]\end{aligned}$$ at $x=0$, and $ \hat{g}_S\partial_x\hat{g}_S=\hat{0}$ at $x=-d_S$. Above, we have defined $$\begin{aligned}
\gamma_T = G_T\xi_F/(A\sigma_F),\; \gamma_{\phi,F(S)} = G_{\phi,F(S)}\xi_F/(A\sigma_F),\end{aligned}$$ and the barrier asymmetry factor $$\begin{aligned}
\gamma = \xi_S\sigma_F/(\xi_F\sigma_S).\end{aligned}$$ Moreover, $A$ is the tunneling contact area, while $\sigma_{F(S)}$ are the normal-state conductivities. Note that $$\begin{aligned}
A\sigma_{F(S)} = d_{F(S)}/R_{F(S)},\end{aligned}$$ where $d_{F(S)}$ is the thickness of the layer and $R_{F(S)}$ is the normal-state resistance.
In total, the interface between the S and F regions is thus characterized by three parameters: the normalized barrier conductance $\gamma_T$, the spin-DIPS $\gamma_{\phi,S}$ and $\gamma_{\phi,F}$ on each side of the interface. In what follows, we will study the mutual influence of superconductivity and ferromagnetism on each other, instead of assuming the bulk solution for $\hat{g}_S$ in the superconducting region, as is usually done in the literature. We solve the Usadel equation self-consistently in both the S and F layer, supplementing it with the gap equation: $$\begin{aligned}
\Delta = \frac{N_F\lambda}{2}\int^\omega_0\text{d}\varepsilon \tanh{(\beta\varepsilon/2)} \sum_\sigma\sigma \text{Re}\{\sinh(\theta_\sigma)\},\end{aligned}$$ where we choose the weak-coupling constant and cut-off energy to be $N_F\lambda=0.2$ and $\omega/\Delta_0 = 75$. Having obtained the Green’s functions, a number of interesting physical quantities may be calculated. For instance, the normalized LDOS is obtained according to $$\begin{aligned}
N(\varepsilon)/N_0 = \text{Re}\{\cosh\theta_\uparrow + \cosh\theta_\downarrow\}/2.\end{aligned}$$ Experimentally, the LDOS may be probed at $x=-d_S$ in the superconducting layer and $x=d_F$ in the ferromagnetic layer by performing tunneling spectroscopy through the insulating layer. In principle, it is also possible to obtain the LDOS at any position $x$ by using spatially-resolved scanning tunneling microscopy.
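As a consistency check on the chosen coupling constant and cutoff, the gap equation above can be iterated to self-consistency in the bulk limit; a sketch at low temperature (the inverse temperature is an assumption):

```python
import numpy as np

N_F_lam = 0.2   # weak-coupling constant N_F*lambda (from the text)
omega = 75.0    # cutoff energy, units of Delta_0 (from the text)
beta = 1e3      # inverse temperature, units of 1/Delta_0 (low T; an assumption)

def gap_rhs(Delta):
    """RHS of the gap equation with the bulk solution, where
    Re{sinh(theta_sigma)} = sigma*Delta/sqrt(eps^2 - Delta^2) for eps > Delta;
    the substitution eps = Delta*cosh(u) removes the square-root singularity."""
    u = np.linspace(0.0, np.arccosh(omega / Delta), 4000)
    f = np.tanh(beta * Delta * np.cosh(u) / 2.0)
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * (u[1] - u[0])  # trapezoid rule
    return N_F_lam * Delta * integral

Delta = 1.0                      # initial guess, in units of Delta_0
for _ in range(300):             # plain fixed-point iteration
    Delta = gap_rhs(Delta)
print(Delta)  # ~1.01: these values of N_F*lambda and omega reproduce Delta_0
```

That the self-consistent gap comes out close to $\Delta_0$ confirms that the quoted $N_F\lambda$ and $\omega$ are mutually consistent via the BCS relation $\Delta \simeq 2\omega\,\e^{-1/N_F\lambda}$.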
The quantity of interest which we shall focus on in this work is the proximity-induced magnetization in the superconducting region. A few words about the sign of the magnetization in the problem are in order. First, recall that the magnetic moment $\boldsymbol{\mu}$ of an electron is directed *opposite* to its spin $\mathbf{S}$, namely $\boldsymbol{\mu} \simeq -(e/m_e)\boldsymbol{S}$, where $e=|e|$ and $m_e$ are the electron charge and mass, respectively. Therefore, if the exchange energy $h$ favors spin-$\uparrow$ electrons energetically, the resulting magnetization $\boldsymbol{M}$ of the ferromagnet will be directed in the opposite direction, $\boldsymbol{M}\parallel (-\boldsymbol{z})$.
In the absence of a proximity effect, we have $\boldsymbol{M}=0$ in the superconducting region and $\boldsymbol{M} = M_0\hat{\boldsymbol{z}}$ in the ferromagnetic region, where $$\begin{aligned}
M_0 \simeq -\mu_BN_0h\end{aligned}$$ in the quasiclassical approximation $h\ll \varepsilon_F$. Now, the change in magnetization due to the proximity effect may be calculated according to $$\begin{aligned}
\delta\boldsymbol{M} = -\mu_B \hat{\boldsymbol{z}}\sum_\sigma \sigma \langle \psi_\sigma^\dag \psi_\sigma \rangle\end{aligned}$$ in both the superconducting and ferromagnetic region. Using a quasiclassical approach, the above expression translates into a normalized change in magnetization $$\begin{aligned}
\label{eq:mag}
\delta M/M_0 = -\int^\infty_0 \frac{\text{d}\varepsilon}{h} \sum_\sigma \sigma \text{Re} \{\cosh\theta_\sigma\}\tanh(\beta \varepsilon/2).\end{aligned}$$ In the ferromagnetic region, the normalized magnetization $M/M_0$ is therefore $1 + \delta M_F/M_0$, while in the superconducting region we have an induced magnetization $\delta M_S/M_0$, where $\delta M_{F(S)}$ is determined by Eq. (\[eq:mag\]) on the ferromagnetic (superconducting) side of the interface.
Although we shall be concerned with a full numerical solution when presenting our results in Sec. \[sec:resultsII\], let us for completeness sketch how an analytical solution may be obtained under the assumption of a weak proximity effect. Including the spin-DIPS, the analytical results obtained here are thus a natural extension of the results in Ref. [@bergeret_prb_04], where the spin-DIPS were neglected. We remind the reader that spin-DIPS occur whenever there is a finite spin-polarization in the ferromagnetic region or when the barrier itself is magnetic.
In the weak-proximity regime, the Usadel equation in the ferromagnetic region becomes $$\begin{aligned}
D_F\partial_x^2\delta\theta_\sigma^F + 2\i(\varepsilon+\sigma h)\delta\theta_\sigma^F = 0,\end{aligned}$$ where the linearization of Eqs. (\[eq:green\]) and (\[eq:usadel\]) amounts to $\theta_{\sigma,F}\to \delta\theta_\sigma^F$ where $|\delta\theta_\sigma^F|\ll1$. The general solution is readily obtained as $$\begin{aligned}
\label{eq:weakF}
\delta\theta_\sigma^F = A_\sigma({\mathrm{e}^{\i k_\sigma x}} + {\mathrm{e}^{-\i k_\sigma x + 2\i k_\sigma d_F}}),\end{aligned}$$ upon taking into account the vacuum boundary condition $\partial_x \delta\theta_\sigma^F=0$ at $x=d_F$, and defining $$\begin{aligned}
k_\sigma^2 &= 2\i(\varepsilon+\sigma h)/D_F.\end{aligned}$$ In the superconducting region, we obtain the Usadel equation $$\begin{aligned}
D_S\partial_x^2\delta\theta_\sigma^S + 2\i(\varepsilon c_\text{BCS} - \Delta s_\text{BCS})\delta\theta_\sigma^S = 0,\end{aligned}$$ under the assumption that the superconducting order parameter is virtually unaltered from the bulk case. This is a valid approximation for $\{\gamma,\gamma_T\}\ll1$ and not too large $\gamma_{\phi,S}$ (typically $\gamma_{\phi,S}<1$), which we have verified by using the full numerical solution. Here, $\delta\theta_\sigma^S$ is the deviation from the bulk BCS solution, i.e. $\theta_{\sigma,S} \to \sigma\theta_\text{BCS} + \delta\theta_\sigma^S$ with $|\delta\theta_\sigma^S|\ll1$ and $$\begin{aligned}
c_\text{BCS} = &\cosh(\theta_\text{BCS}),\; s_\text{BCS} = \sinh(\theta_\text{BCS}),\notag\\
&\theta_\text{BCS} = \text{atanh}(\Delta/\varepsilon).\end{aligned}$$ In this case, the general solution reads $$\begin{aligned}
\label{eq:weakS}
\delta\theta_\sigma^S = B_\sigma({\mathrm{e}^{\i \kappa x}} + {\mathrm{e}^{-\i\kappa x - 2\i \kappa d_S}}),\end{aligned}$$ when incorporating the vacuum boundary condition $\partial_x \delta\theta_\sigma^S=0$ at $x=-d_S$, upon defining $$\begin{aligned}
\kappa^2 = 2\i(\varepsilon c_\text{BCS} - \Delta s_\text{BCS})/D_S.\end{aligned}$$ The remaining task is to determine the unknown coefficients $\{A_\sigma,B_\sigma\}$. Linearizing the boundary conditions Eqs. (\[eq:bcF\]) and (\[eq:bcS\]), we obtain at $x=0$ $$\begin{aligned}
\xi_S \partial_x\delta\theta_\sigma^S/\gamma &= \gamma_T(c\delta\theta_\sigma^F -\sigma s_\text{BCS} - c_\text{BCS}\delta\theta_\sigma^S) \notag\\
&- \sigma\i\gamma_{\phi,S}(\sigma s_\text{BCS} + c_\text{BCS}\delta\theta_\sigma^S),\notag\\
\xi_F\partial_x\delta\theta_\sigma^F &= \gamma_T(c\delta\theta_\sigma^F -\sigma s_\text{BCS} - c_\text{BCS}\delta\theta_\sigma^S)\notag\\
&+\sigma\i\gamma_{\phi,F}\delta\theta_\sigma^F.\end{aligned}$$ From these boundary conditions, one derives that $$\begin{aligned}
\label{eq:coeff}
A_\sigma &= \frac{z_4^\sigma}{z_3^\sigma}\frac{s_\text{BCS}[z_3^\sigma(\gamma_T\sigma + \i\gamma_{\phi,S}) - z_1^\sigma\sigma\gamma_T]}{z_2^\sigma z_3^\sigma - z_1^\sigma z_4^\sigma} - \sigma s_\text{BCS} \gamma_T,\notag\\
B_\sigma &= \frac{s_\text{BCS} [z_1^\sigma \sigma\gamma_T - z_3^\sigma(\gamma_T\sigma + \i\gamma_{\phi,S})]}{z_2^\sigma z_3^\sigma - z_1^\sigma z_4^\sigma}.\end{aligned}$$ Here, we have defined the auxiliary quantities: $$\begin{aligned}
z_1^\sigma &= -\gamma_Tc_\text{BCS}(1+{\mathrm{e}^{2\i k_\sigma d_F}})\notag\\
z_2^\sigma &= \frac{\i \kappa \xi_S(1-{\mathrm{e}^{-2\i \kappa d_S}})}{\gamma} + c_\text{BCS}(\gamma_T + \i\sigma \gamma_{\phi,S})(1+{\mathrm{e}^{-2\i\kappa d_S}})\notag\\
z_3^\sigma &= \i k_\sigma \xi_F(1-{\mathrm{e}^{2\i k_\sigma d_F}}) - (\gamma_Tc_\text{BCS} + \sigma \i\gamma_{\phi,F})(1+{\mathrm{e}^{2\i k_\sigma d_F}})\notag\\
z_4^\sigma &= c_\text{BCS}\gamma_T(1+{\mathrm{e}^{-2\i \kappa d_S}}).\end{aligned}$$ Eqs. (\[eq:weakF\]), (\[eq:weakS\]), and (\[eq:coeff\]) constitute a closed analytical solution for the Green’s functions in the entire S$\mid$F bilayer. To use this analytical solution, one should verify that $|\delta\theta_\sigma^{F,S}|\ll1$ for the relevant parameter regime. Spin-flip and spin-orbit scattering may also be accounted for in the analytical solution of the Green’s function by adding appropriate terms to the Usadel equation. The calculation is then performed along the lines of Refs. [@linder_prb_08_2; @linder_spin].
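As a sanity check, the closed analytical solution can be evaluated numerically for the parameter set used later in Sec. \[sec:resultsII\]. The sketch below is our own illustration: energies are measured in units of $\Delta_0$, we assume the conventions $\xi_j = \sqrt{D_j/\Delta_0}$ and $\kappa^2 = 2\i(\varepsilon c_\text{BCS} - \Delta s_\text{BCS})/D_S$ (the factor $2\i$ mirrors the definition of $k_\sigma^2$), and the spin-DIPS values are illustrative choices rather than values from the paper.

```python
import numpy as np

Delta, h = 1.0, 15.0          # energies in units of Delta_0; h/Delta_0 = 15
dS, dF = 0.2, 1.0             # layer thicknesses in units of xi_S, xi_F
gamma, gamma_T = 0.2, 0.1
gphi_S, gphi_F = 0.3, 0.0     # spin-DIPS parameters (illustrative values)
eta = 0.05                    # eps -> eps + i*eta regularization

def dtheta_S_surface(eps, sigma):
    """delta theta_sigma^S at x = -d_S, from Eqs. (weakS) and (coeff)."""
    e = eps + 1j * eta
    th = np.arctanh(Delta / e)                 # bulk BCS angle
    c, s = np.cosh(th), np.sinh(th)
    k = np.sqrt(2j * (e + sigma * h))          # k_sigma in units of 1/xi_F
    kap = np.sqrt(2j * (e * c - Delta * s))    # kappa in units of 1/xi_S
    eF = np.exp(2j * k * dF)
    eS = np.exp(-2j * kap * dS)
    z1 = -gamma_T * c * (1 + eF)
    z2 = 1j * kap * (1 - eS) / gamma + c * (gamma_T + 1j * sigma * gphi_S) * (1 + eS)
    z3 = 1j * k * (1 - eF) - (gamma_T * c + 1j * sigma * gphi_F) * (1 + eF)
    z4 = c * gamma_T * (1 + eS)
    B = s * (z1 * sigma * gamma_T - z3 * (gamma_T * sigma + 1j * gphi_S)) \
        / (z2 * z3 - z1 * z4)
    # e^{i kap x} + e^{-i kap x - 2i kap dS} evaluated at x = -dS
    return 2 * B * np.exp(-1j * kap * dS)

for sigma in (+1, -1):
    v = dtheta_S_surface(1.5, sigma)
    print(sigma, abs(v))      # well below unity: the weak-proximity regime holds
```

With the small interface parameters $\gamma_T=0.1$, $\gamma=0.2$, the condition $|\delta\theta_\sigma^S|\ll1$ is indeed satisfied, as the analytical treatment requires.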
Results {#sec:resultsII}
-------
We are now in a position to evaluate the proximity-induced magnetization numerically. The full (non-linearized) Usadel equation will be employed, such that we are not restricted to the weak-proximity effect regime. To stabilize the numerical calculations, we add a small imaginary number to the quasiparticle energy, $\varepsilon\to \varepsilon +\i\eta$, with $\eta=0.05\Delta_0$. We focus on the results reported very recently by Xia [*et al.*]{} [@xia_arxiv_08] and take $d_S/\xi_S=0.2$, $d_F/\xi_F=1.0$, and $h/\Delta_0=15$ as a reasonable set of parameters which should be relevant to this experiment. Also, we assume that the junction conductance is low, $\gamma_T=0.1$, and we set the barrier asymmetry factor to $\gamma=0.2$, corresponding to a scenario where the superconducting region is much less disordered than the ferromagnetic one. We will also investigate the case $d_S/\xi_S=1.0$, to see how the properties of the system change when going away from the limit $d_S/\xi_S\ll 1$. We underline that our main objective in this work is to investigate the influence of the spin-DIPS on the proximity-induced magnetization in the system, so we mainly vary $\gamma_{\phi,F}$ and $\gamma_{\phi,S}$ while keeping the other parameters fixed.
Let us first consider the temperature-dependence of the proximity-induced magnetization in the superconducting region in Fig. \[fig:Mag\_T\]. To clarify the role of the spin-DIPS on each side of the interface, we plot $\delta M_S/M_0$ for several values of $\gamma_{\phi,S}$ in Fig. \[fig:Mag\_T\] (a) while keeping $\gamma_{\phi,F}=0$ fixed. Conversely, we plot $\delta M_S/M_0$ for several $\gamma_{\phi,F}$ in Fig. \[fig:Mag\_T\] (b) with $\gamma_{\phi,S}=0$. In both cases, we plot the proximity-induced magnetization at $x=-d_S$. One obvious difference between these two scenarios is that the spin-DIPS on the superconducting side, $\gamma_{\phi,S}$, influences the proximity-induced magnetization much more strongly than $\gamma_{\phi,F}$. The same holds for the influence of the spin-DIPS on the superconducting order parameter: $\gamma_{\phi,S}$ influences the spatial profile of $\Delta$ much more than $\gamma_{\phi,F}$ does. From Fig. \[fig:Mag\_T\], it is clearly seen how the proximity-induced magnetization may switch sign upon increasing the magnitude of the spin-DIPS $\gamma_{\phi,S}$. We have checked numerically that this effect also takes place upon increasing $\gamma_{\phi,F}$ when keeping $\gamma_{\phi,S}=0$. *Thus, increasing either* $\gamma_{\phi,S}$ *or* $\gamma_{\phi,F}$ *can lead to a sign change of the proximity-induced magnetization in the superconducting region.* It is then clear that the conclusion of Ref. [@kharitonov_prb_06] that only spin screening is possible in diffusive S$\mid$F bilayers does not hold in general, since the presence of spin-DIPS alters the screening effect. In what follows, we focus on the role of $\gamma_{\phi,S}$ since its impact on $\delta M_S/M_0$ is much greater than that of $\gamma_{\phi,F}$. In Fig. \[fig:combination\], we consider the case $d_S/\xi_S=1.0$ to show that the sign change of the magnetization persists when going away from the limit $d_S/\xi_S\ll1$.
The spatial profile of the total magnetization in the F and S regions is shown for the case $d_S/\xi_S=0.2$ with $\gamma_{\phi,S}=0.0$ in Fig. \[fig:spatial0.0\] and $\gamma_{\phi,S}=1.0$ in Fig. \[fig:spatial1.0\]. It is seen that the magnetization decreases in a monotonic fashion toward the superconducting region, and reaches its bulk value deep inside the ferromagnetic region. In the superconductor, magnetization is induced near the interface and decays with the distance from the interface.
Discussion {#sec:discussionII}
----------
We propose the following explanation for the anti-screening effect observed upon increasing $\gamma_{\phi,S}$. The effect of the spin-DIPS in the case of a thin superconducting layer $d_S\ll\xi_S$ in Ref. [@cottet_prb_07] was shown to be equivalent to an internal magnetic exchange splitting $h_\text{eff}$ in the superconducting region. Therefore, the magnitude of the magnetization in the superconductor should essentially grow with an increasing value of $\gamma_{\phi,S}$. If this is the case, the proximity-induced magnetization should also be sensitive to *the sign* of $\gamma_{\phi,S}$, as the opposite spin species would be energetically favored when comparing the case $\gamma_{\phi,S}$ with $(-\gamma_{\phi,S})$. To test this hypothesis, we plot in Fig. \[fig:gammaphi\] the proximity-induced magnetization at $T=0$ as a function of the spin-DIPS on the superconducting side, $\gamma_{\phi,S}$ (keeping $\gamma_{\phi,F}=0$). The results confirm our hypothesis – it is seen that $\delta M_S/M_0$ is an antisymmetric function of $\gamma_{\phi,S}$. The influence of $\gamma_{\phi,S}$ can also be seen directly in the LDOS in the superconducting region. For $\gamma_{\phi,S}\neq0$, we obtain a double-peak structure in the LDOS at $x=-d_S$ in agreement with Refs. [@cottet_prb_07; @cottet_arxiv_08], while the superconducting order parameter depletes very little close to the interface for the chosen parameter values. In general, the depletion of the superconducting order parameter is found to be small as long as $\{\gamma_T,\gamma\}\ll1$ and $\gamma_{\phi,S} \simeq 1$ or smaller.
In Ref. [@bergeret_prb_04], the inverse proximity effect of an S$\mid$F bilayer was studied without taking into account the presence of spin-DIPS, with the result that the proximity-induced magnetization in the superconducting region would have the opposite sign of the proximity ferromagnet, i.e. a screening effect. It was proposed in Ref. [@bergeret_prb_04] that this behavior could be understood physically by considering the contribution to the magnetization from Cooper pairs which were close to the interface: the spin-$\uparrow$ electron would prefer to be in the ferromagnetic region due to the exchange energy, while the spin-$\downarrow$ electron remaining in the superconducting region then would give rise to a magnetization in the opposite direction of the proximity ferromagnet. However, it is clear from the present study that this simple picture must be modified when properly considering the spin-DIPS $\gamma_{\phi,S}$ on the superconducting side of the junction, since they act as an effective exchange field inside the superconductor.
In this paper, we have evaluated the proximity-induced magnetization in the vicinity of the interface without taking into account the Meissner response of the superconductor. This should be permissible in a thin-film geometry such as the one employed in Ref. [@xia_arxiv_08], where the screening currents are suppressed. In particular, for a field in the plane of the superconducting film (see Fig. \[fig:model2\]), the Meissner effect should be strongly suppressed [@meservey] for $d_S/\xi_S\ll1$.
Summary {#sec:summary}
=======
In conclusion, we have in Part I of this work investigated the proximity effect in a superconductor$\mid$inhomogeneous ferromagnet junction. Proper boundary conditions which take into account the spin-dependent phase-shifts experienced by the reflected and transmitted quasiparticles were employed. As an application of our model, we have studied the LDOS in the ferromagnet in the presence of domain walls and a conical magnetic structure. We find that the presence of a domain wall enhances the odd-frequency correlations induced in the ferromagnet, manifested through a zero-energy peak in the LDOS. For the conical ferromagnet, we show that the spin-dependent phase shifts originating at the interface have a strong qualitative effect on the LDOS, especially for thin layers. In particular, we find an abrupt crossover from a fully suppressed LDOS to a finite LDOS which appears at a critical thickness of the ferromagnetic layer, and the particular value of the critical thickness depends on the value of $G_\phi$. In a similar way, we find that there is an abrupt change appearing at a critical value of $G_\phi$ for sufficiently thin layers. We speculate that the reason for this could be a symmetry transition from even- to odd-frequency correlations for the proximity amplitudes in the ferromagnetic region. The theory developed in the present paper takes into account both the phase-shifts acquired by scattered quasiparticles at the interface due to the presence of ferromagnetic correlations, and also an arbitrary inhomogeneity of the magnetic texture on the ferromagnetic side. Our results for the conical ferromagnetic structure should be relevant for the material Ho, which was used in Ref. [@sosnin_prl_06] to indicate the presence of long-range superconducting correlations.
In Part II of this work, we have investigated numerically and self-consistently the inverse proximity effect in a superconductor$\mid$ferromagnet (S$\mid$F) bilayer, manifested through an induced magnetization in the superconducting region. We find that the interface properties play a crucial role in this context, as the spin-dependent interfacial phase-shifts (spin-DIPS) may invert the sign of the proximity-induced magnetization. This finding modifies previous conclusions obtained in the literature, and suggests that the influence of the spin-DIPS should be properly accounted for in a theory for the inverse proximity effect in S$\mid$F bilayers.
J.L. acknowledges M. Eschrig and A. Cottet for very useful discussions. J.L. and A.S. were supported by the Research Council of Norway, Grants No. 158518/432 and No. 158547/431 (NANOMAT), and Grant No. 167498/V30 (STORFORSK). T.Y. acknowledges support by JSPS.
Spin-active boundary conditions
===============================
To facilitate and encourage use of the spin-active boundary conditions required for an S$\mid$F interface, we here write down their explicit form in the diffusive limit for the case of a magnetization in the $\boldsymbol{z}$-direction, following Refs. [@huertashernando_prl_02; @cottet_prb_07]. Consider a junction consisting of two regions 1 and 2, as shown in Fig. \[fig:appendix\]. The regions have widths $d_j$ and bulk electrical resistances $R_j$. The matrices used below are $4\times4$ matrices in particle-hole$\otimes$spin space, using a basis $$\begin{aligned}
\psi({\boldsymbol{r}},t) = \begin{pmatrix}
\psi_\uparrow({\boldsymbol{r}},t)\\
\psi_\downarrow({\boldsymbol{r}},t)\\
\psi_\uparrow^\dag({\boldsymbol{r}},t)\\
\psi_\downarrow^\dag({\boldsymbol{r}},t)
\end{pmatrix}.\end{aligned}$$ Introducing $\hat{\alpha} = \text{diag}(1,-1,1,-1) = \text{diag}(\underline{\sigma}_3,\underline{\sigma}_3)$, where $\underline{\sigma}_3$ is the third Pauli matrix in spin-space, we may write the boundary conditions as follows: $$\begin{aligned}
\label{eq:bcspin}
2(d_1/R_1) \hat{g}_1\partial_x \hat{g}_1 = G_T[\hat{g}_1,\hat{g}_2] - \i G_{\phi,1}[\hat{\alpha},\hat{g}_1],\notag\\
2(d_2/R_2) \hat{g}_2 \partial_x \hat{g}_2 = G_T[\hat{g}_1,\hat{g}_2] + \i G_{\phi,2}[\hat{\alpha},\hat{g}_2].\end{aligned}$$
Here, $G_T$ is the conductance of the junction while $G_{\phi,j}$ are the spin-dependent phase-shifts on side $j$ of the interface. The parameters $\{G_T,G_{\phi,j}\}$ may be calculated by relating them to microscopic transmission and reflection probabilities within [*e.g.* ]{}a Blonder-Tinkham-Klapwijk (BTK) [@btk] framework. Explicitly spin-active barriers were considered in ballistic S$\mid$F bilayers using the BTK approach for both $s$-wave [@linder_prb_07] and $d$-wave [@kashiwaya_prb_99] superconductors. In the absence of spin-DIPS $(G_{\phi,j}\to0)$, Eq. (\[eq:bcspin\]) reduces to the Kupriyanov-Lukichev non-magnetic boundary conditions [@kupluk]. Let us make a final remark concerning the treatment of interfaces in the quasiclassical theory of superconductivity. We previously stated that the application of the present theory requires that the characteristic energies of various self-energies and perturbations in the system are much smaller than the Fermi energy $\varepsilon_\text{F}$. At first glance, this might seem to be irreconcilable with the presence of interfaces, which represent strong perturbations varying on atomic length scales. However, this problem may be overcome by including the interfaces as boundary conditions for the Green’s functions rather than directly as self-energies in the Usadel equation.
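The matrix structure of the $G_\phi$ terms can be made explicit with a few lines of linear algebra. Since $\hat{\alpha}$ is diagonal, $[\hat{\alpha},\hat{g}]_{ij} = (\alpha_i - \alpha_j)\hat{g}_{ij}$: spin-diagonal components of $\hat{g}$ drop out, while spin-mixing components are multiplied by $\pm2$, which is the same structure an exchange term proportional to $\underline{\sigma}_3$ would produce in the Usadel equation. The following sketch (our own illustration, with a random matrix standing in for $\hat{g}$) verifies this identity numerically.

```python
import numpy as np

# alpha-hat = diag(1,-1,1,-1) in the basis (psi_up, psi_dn, psi_up^dag, psi_dn^dag)
a = np.array([1, -1, 1, -1])
alpha = np.diag(a).astype(complex)

rng = np.random.default_rng(0)
g = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # stand-in for g-hat

comm = alpha @ g - g @ alpha
# Because alpha is diagonal, the commutator acts element-wise:
# [alpha, g]_{ij} = (alpha_i - alpha_j) * g_{ij}
assert np.allclose(comm, (a[:, None] - a[None, :]) * g)

# Entries with alpha_i = alpha_j (spin-diagonal) vanish; entries mixing
# opposite spins pick up a factor +-2, exactly as an exchange field along z would.
print(comm[0, 2])             # 0: spin-diagonal component untouched
print(comm[0, 1] / g[0, 1])   # 2: spin-mixing component doubled
```

This element-wise picture is what underlies the interpretation of the spin-DIPS as an effective exchange field in a thin superconducting layer.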
[99]{}
F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Rev. Mod. Phys. **77**, 1321 (2005).
A. I. Buzdin, Rev. Mod. Phys. **77**, 935-976 (2005).
F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Phys. Rev. Lett. **86**, 4096 (2001).
V. V. Ryazanov, V. A. Oboznov, A. Yu. Rusanov, A. V. Veretennikov, A. A. Golubov, and J. Aarts, Phys. Rev. Lett. **86**, 2427 (2001).
A. F. Volkov, F. S. Bergeret, and K. B. Efetov, Phys. Rev. Lett. **90**, 117006 (2003).
F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Phys. Rev. B **68**, 064513 (2003).
M. Eschrig, J. Kopu, J. C. Cuevas, and G. Sch[ö]{}n, Phys. Rev. Lett. **90**, 137003 (2003).
V. Braude and Yu. V. Nazarov, Phys. Rev. Lett. [**98**]{}, 077003 (2007).
Y. Asano, Y. Tanaka, and A. A. Golubov, Phys. Rev. Lett. **98**, 107002 (2007); Y. Asano, Y. Sawa, Y. Tanaka, and A. A. Golubov, Phys. Rev. B [**76**]{}, 224525 (2007).
R. S. Keizer, S. T. B. Goennenwein, T. M. Klapwijk, G. Miao, G. Xiao, and A. Gupta, Nature (London) [**439**]{}, 825 (2006).
Ya. V. Fominov, A. F. Volkov, and K. B. Efetov, Phys. Rev. B **75**, 104509 (2007).
T. Yokoyama, Y. Tanaka, and A. A. Golubov, Phys. Rev. B **75**, 134510 (2007).
Y. Asano, Y. Tanaka, A. A. Golubov, and S. Kashiwaya, Phys. Rev. Lett. **99**, 067005 (2007).
K. Halterman, P. H. Barsic, and O. T. Valls, Phys. Rev. Lett. **99**, 127002 (2007).
Y. Tanaka and A. A. Golubov, Phys. Rev. Lett. [**98**]{}, 037003 (2007).
M. Eschrig, T. Lofwander, T. Champel, J. C. Cuevas, J. Kopu, G. Sch[ö]{}n, J. Low Temp. Phys. **147**, 457 (2007).
J. Linder, T. Yokoyama, and A. Sudb[ø]{}, Phys. Rev. B **77**, 174507 (2008); J. Linder, T. Yokoyama, Y. Tanaka, Y. Asano, and A. Sudb[ø]{}, Phys. Rev. B [**77**]{}, 174505 (2008). M. Eschrig and T. L[ö]{}fwander, Nature Phys. [**4**]{}, 138 (2008).
K. Halterman, O. T. Valls, and P. H. Barsic, Phys. Rev. B **77**, 174511 (2008).
J. Linder, T. Yokoyama, and A. Sudb[ø]{}, Phys. Rev. B **77**, 174514 (2008). K. Yada, S. Onari, Y. Tanaka, and K. Miyake, arXiv:0806.4241.
L. N. Bulaevskii, V. V. Kuzii, and A. A. Sobyanin, Pis’ma Zh. Eksp. Teor. Fiz. **25**, 314 (1977) \[JETP Lett. **25**, 290 (1977)\].
A. I. Buzdin, L. N. Bulaevskii and S. V. Panyukov, Pis’ma Zh. Eksp. Teor. Fiz. **35**, 147 (1982).
E. Koshina and V. Krivoruchko, Phys. Rev. B **63**, 224515 (2001).
T. Kontos, M. Aprili, J. Lesueur, F. Genêt, B. Stephanidis, and R. Boursier, Phys. Rev. Lett. **89**, 137007 (2002).
A. Buzdin and A. E. Koshelev, Phys. Rev. B **67**, 220504 (2003).
M. Houzet, V. Vinokur, and F. Pistolesi, Phys. Rev. B **72**, 220506 (2005).
A. Cottet and W. Belzig, Phys. Rev. B **72**, 180503 (2005).
J. W. Robinson, S. Piano, G. Burnell, C. Bell, and M. G. Blamire, Phys. Rev. Lett. **97**, 177003 (2006).
G. Mohammadkhani and M. Zareyan, Phys. Rev. B **73**, 134503 (2006).
T. Yokoyama, Y. Tanaka, and A. A. Golubov, Phys. Rev. B **75**, 094514 (2007).
M. Houzet and A. I. Buzdin, Phys. Rev. B **76**, 060504 (2007).
B. Crouzy, S. Tollis, and D. A. Ivanov, Phys. Rev. B **75**, 054503 (2007); Phys. Rev. B **76**, 134502 (2007).
J. Linder, T. Yokoyama, D. Huertas-Hernando, and Asle Sudb[ø]{}, Phys. Rev. Lett. **100**, 187004 (2008).
T. Champel, M. Eschrig, Phys. Rev. B [**71**]{}, 220506(R) (2005); T. Champel, M. Eschrig, Phys. Rev. B [**72**]{}, 054523 (2005); T. Champel, T. L[ö]{}fwander, and M. Eschrig, Phys. Rev. Lett. **100**, 077003 (2008).
P. M. Brydon, B. Kastening, D. K. Morr, and D. Manske, Phys. Rev. B **77**, 104504 (2008).
I. B. Sperstad, J. Linder, and A. Sudb[ø]{}, Phys. Rev. B **78**, 104509 (2008).
A. F. Volkov and K. B. Efetov, Phys. Rev. B **78**, 024519 (2008). V. L. Berezinskii, JETP Lett. **20**, 287 (1974).
A. Balatsky and E. Abrahams, Phys. Rev. B **45**, 13125 (1992).
P. Coleman, E. Miranda, and A. Tsvelik, Phys. Rev. Lett. **70**, 2960 (1993).
E. Abrahams, A. Balatsky, D. J. Scalapino, and J. R. Schrieffer, Phys. Rev. B **52**, 1271 (1995).
D. Solenov, I. Martin, D. Mozyrsky, arXiv:0812.1055. J. Y. Gu, J. A. Caballero, R. D. Slater, R. Loloee, and W. P. Pratt, Jr., Phys. Rev. B **66**, 140507 (2002).
M. A. Sillanpaa, T. T. Heikkila, R. K. Lindell, and P. J. Hakonen, Europhys. Lett. **56**, 590 (2001).
F. S. Bergeret, A. Levy Yeyati, A. Martin-Rodero, Phys. Rev. B **72**, 064524 (2005).
J. P. Morten, A. Brataas, G. E. W. Bauer, W. Belzig, Y. Tserkovnyak, arXiv:0712.2814v1.
D. Huertas-Hernando, Yu. V. Nazarov, and W. Belzig, Phys. Rev. Lett. **88**, 047003 (2002); D. Huertas-Hernando and Yu. V. Nazarov, Eur. Phys. J. B **44**, 373 (2005).
A. Cottet and J. Linder, arXiv:0810.0904.
P. SanGiorgio, S. Reymond, M. R. Beasley, J. H. Kwon, and K. Char, Phys. Rev. Lett. **100**, 237002 (2008).
See [*e.g.* ]{}J. W. Serene and D. Rainer, Phys. Rep. **101**, 221 (1983).
K. Usadel, Phys. Rev. Lett. **25**, 507 (1970).
D. A. Ivanov and Ya. V. Fominov, Phys. Rev. B **73**, 214524 (2006).
N. Schopohl and K. Maki, Phys. Rev. B **52**, 490 (1995); N. Schopohl, cond-mat/9804064.
A. Konstandin, J. Kopu, and M. Eschrig, Phys. Rev. B **72**, 140501 (2005).
A. Brataas, Yu. V. Nazarov, and G. E. W. Bauer, Phys. Rev. Lett. [**84**]{}, 2481 (2000); Eur. Phys. J. B **22**, 99 (2001); A. Brataas, G. E. W. Bauer, and P. J. Kelly, Phys. Rep. **427**, 157 (2006).
I. Sosnin, H. Cho, V. T. Petrashov, and A. F. Volkov, Phys. Rev. Lett. **96**, 157002 (2006).
T. Yokoyama, Y. Tanaka, and A. A. Golubov, Phys. Rev. B **72**, 052512 (2005); Phys. Rev. B **73**, 094501 (2006).
T. Kontos, M. Aprili, J. Lesueur, and X. Grison, Phys. Rev. Lett. **86**, 304 (2001).
J. Linder, T. Yokoyama, A. Sudb[ø]{}, and M. Eschrig, unpublished.
C. Bruder, Phys. Rev. B **41**, 4017 (1990).
F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Phys. Rev. B **69**, 174504 (2004).
J. Xia, V. Shelukhin, M. Karpovski, A. Kapitulnik, and A. Palevski, arXiv:0810.2605.
Th. Muehge, N. N. Garif’yanov, Yu. V. Goryunov, K. Theis-Br[ö]{}hl, K. Westerholt, I. A. Garifullin, and H. Zabel, Physica C **296**, 325 (1998).
I. A. Garifullin, D. A. Tikhonov, N. N. Garif’yanov, M. Z. Fattakhov, K. Theis-Br[ö]{}hl, K. Westerholt, and H. Zabel, Appl. Magn. Reson. **22**, 439 (2002).
R. I. Salikhov, I. A. Garifullin, N. N. Garif’yanov, L. R. Tagirov, K. Theis-Br[ö]{}hl, K. Westerholt, H. Zabel, arXiv:0806.4104.
M. Yu. Kharitonov, A. F. Volkov, K. B. Efetov, Phys. Rev. B **73**, 054511 (2006).
K. Halterman and O. T. Valls, Phys. Rev. B **69**, 014517 (2004).
A. Cottet, Phys. Rev. B **76**, 224505 (2007).
J. Linder and A. Sudb[ø]{}, Phys. Rev. B **76**, 214508 (2007).
R. Meservey and P. M. Tedrow, Phys. Rep. [**238**]{}, 173 (1994).
G. E. Blonder, M. Tinkham, and T. M. Klapwijk, Phys. Rev. B **25**, 4515 (1982).
J. Linder and A. Sudb[ø]{}, Phys. Rev. B **75**, 134509 (2007).
S. Kashiwaya, Y. Tanaka, N. Yoshida, and M. R. Beasley, Phys. Rev. B **60**, 3572 (1999).
M. Yu. Kupriyanov and V. F. Lukichev, Zh. Eksp. Teor. Fiz. **94**, 139 (1988) \[Sov. Phys. JETP **67**, 1163 (1988)\].
---
abstract: |
In this paper, we investigate information-theoretic scaling laws, independent of communication strategies, for point-to-point molecular communication, in which information-encoded molecules are exchanged between nanomachines. Since the Shannon capacity of this channel is still an open problem, we first derive the asymptotic order in a single coordinate, i.e., i) scaling time with a constant number of molecules $m$ and ii) scaling molecules with constant time $t$. For the single-coordinate case, we show that the asymptotic scaling is logarithmic in either coordinate, i.e., $\Theta(\log t)$ and $\Theta(\log m)$, respectively. We also study the asymptotic behavior of scaling in both time and molecules and show that, if molecules and time are proportional to each other, then the asymptotic scaling is linear, i.e., $\Theta(t)=\Theta(m)$.\
author:
-
-
bibliography:
- 'MolecularInfoTheory.bib'
- 'infotheory.bib'
- 'ref\_Mole\_Chae.bib'
title: Scaling Laws for Molecular Communication
---
Molecular communication, scaling laws, channel capacity.
Introduction
============
In molecular communication, a transmitter expresses a message in molecules, which propagate towards the receiver via Brownian motion, or some similar means [@hiy05]. Molecular communication is found in biological processes such as signal transduction [@ein11; @eck13]; it has also been proposed as an enabling technology for nanoscale systems [@par09]. For this new paradigm of communication, several papers have tried to address the achievable rates (defined as ‘bits per symbol’) of the communication system under theoretical channel and noise assumptions [@JSAC_10; @Kuran_NCN10; @Chae_JSAC_13]. The author in [@JSAC_10] evaluated, using a circuit model, the normalized gain and delay of the system. The authors in [@Kuran_NCN10; @Chae_JSAC_13] studied extensively the basics of molecular communication via diffusion. In [@Kuran_NCN10], they investigated a new energy model to understand how much energy is required to transmit messenger molecules, while [@Chae_JSAC_13] introduced several modulation techniques. The authors in [@Chae_JSAC_13] also compared, by using a simple symmetric channel model, the achievable rates. However, most prior work on molecular communication has focused on proposing and analyzing (practical) transmission strategies with theoretical assumptions to achieve higher achievable rates.
The investigation of fundamental capacity limits of molecular communication is still an open problem in information theory. It nevertheless attracts consistent attention from researchers, since understanding the fundamental limits may provide practical insights. Calculation of mutual information in the molecular communication channel is known to be a hard problem. Say there are $m$ molecules, numbered $\{1,2,\ldots,m\}$, where the $i$th molecule is released at time $x_i$. This molecule takes $n_i$ seconds to propagate to the receiver, and arrives at time $x_i + n_i$. So far this looks like a simple additive noise channel – but the trick is that the molecule arriving at time $x_i+n_i$ might not be the $i$th molecule to arrive. If the molecules are indistinguishable, then the releases and arrivals form an order-statistical distribution, which involves a sum over terms for every possible permutation from inputs to outputs (see, e.g., [@bap89; @eck07]). For these reasons, unlike better-known channels, we know very little about the Shannon capacity of molecular communication. The state of our ignorance about capacity in this channel is such that it is not even clear which units are appropriate for measuring capacity: bits per second? Bits per molecule? Bits per second per molecule?
While transmission strategies are now relatively well understood [@Chae_JSAC_13], knowledge about the information-theoretic performance limits is scarce. An early result on achievable information rates was reported in [@Eckford_08_arxiv], which provided an *upper bound* in terms of mutual information. Other notable recent efforts in this direction include [@rose13], which gave lower bounds by exploiting the symmetry of possible input vectors; and [@nak12], which considered capacity in a simplified discrete-time setting. Thus, to better understand molecular communication, in this paper, we investigate the asymptotic behaviour of the capacity of molecular communication with respect to the number of time intervals and/or the number of molecules. Related work was conducted in [@noel13], which used dimensional analysis to permit arbitrary scaling of their model.
The rest of this paper is organized as follows. Section \[sec:model\] describes the system model under consideration. Section \[sec:Single\] shows scaling results in a single coordinate, i.e., scaling time with constant molecules and scaling molecules with constant time. Scaling in both time and molecules is shown in Section \[sec:both\].
Model and notation {#sec:model}
==================
First, a brief word on notation: vectors will be represented with superscripts, e.g., $x$ is a scalar, while $x^t = [x_1,x_2,\ldots,x_t]$ is a vector. It will be clear from context whether a superscript represents a vector or a scalar exponent. Generally, random variables will be represented by capital letters (e.g., $Y$), and particular values of those random variables by lower case letters (e.g., $y$).
Molecular communication model
-----------------------------
We use the standard assumptions for information-theoretic analysis of molecular communication [@nakano-book]:
1. The transmitter is a point source of molecules at the origin, and is the only source of the molecule species of interest;
2. The receiver is a surface surrounding a connected region of points $\mathcal{P}$, which does not include the origin;
3. Motions of different molecules are independent and identically distributed (i.i.d.), and molecules do not change species or disappear while propagating;
4. There is no interaction between the transmitter and any molecule after release; and
5. The medium is infinite in every direction, with no barrier or obstacle except $\mathcal{P}$.
Some of these assumptions may be physically unrealistic: for example, in signal transduction, the transmitter is a cell, which is not well modelled as a point source. However, these assumptions lend themselves to tractable analysis.
To further simplify our analysis, we restrict ourselves to discrete time: the communication session lasts $t$ time instants, indexed $\{1,2,\ldots,t\}$. Meanwhile, the transmitter has $m > 0$ molecules available. It is important to note that the molecules are [*indistinguishable from each other*]{}.
The transmitter forms the vector $X^t = [X_1,X_2,\ldots,X_t]$, where $X_i$ represents the number of molecules released at discrete time instant $i$. The receiver forms the vector $Y^t = [Y_1,Y_2,\ldots,Y_t]$, where $Y_i$ is the number of molecules that arrive at time $i$, obtained as follows. For a molecule released at time $i$, its first arrival time at the receiver is $i + n$, where $n$ is the outcome of a random variable with distribution $p_N(n)$, the first arrival time distribution of the Brownian motion. Thus, $Y_j$ is the number of molecules such that $i+n=j$, for each possible release time $i$.
Recalling that we restrict ourselves to discrete time, $N$ is supported on $\{0,1,2,\ldots\}$. We further assume, as in [@eck07], that molecules are absorbed on arrival at the receiver; this can be shown to be an information-theoretically ideal assumption [@nakano-book]. Thus, $p_N(n)$ is the only property of Brownian motion we require.
Finally, we require the following conditions on $p_N(n)$ to prove our results:
- $p_N(n) = 0$ for all $n < 0$, i.e., the system is causal.
- Let $F_N(n) = \sum_{i=0}^n p_N(i)$ represent the cdf of the first arrival time distribution; then there must exist constants $c > 0$ and $n_0 < \infty$ such that $F_N(n_0) \geq c$.
Aside from these, we will put no other conditions on the first arrival time distribution $p_N(n)$, so that our results can apply as widely as possible.
Since we are interested in scaling with increasing $t$ and $m$, we do [*not*]{} calculate information rates in this paper; instead, we deal directly with mutual information $I(X^t;Y^t)$. Reflecting this, we use the notation $C(t)$ or $C(m)$ to indicate capacity as a function of either time or molecules, respectively. In either case, capacity is found by maximizing over the input distribution $p_{X^t}(x^t)$.
Scaling notation
----------------
Throughout this paper we use Bachmann-Landau scaling notation. For nonnegative functions $f(n)$ and $g(n)$: $$f(n) = \Omega(g(n)), \nonumber$$ signifies that there exist positive constants $a$ and $n^\prime$ such that $a g(n) \leq f(n)$ for all $n \geq n^\prime$ (i.e., $f(n)$ upper bounds $g(n)$); $$f(n) = O(g(n)), \nonumber$$ signifies that there exist positive constants $b$ and $n^\prime$ such that $f(n) \leq b g(n)$ for all $n \geq n^\prime$ (i.e., $g(n)$ upper bounds $f(n)$); and $$f(n) = \Theta(g(n)), \nonumber$$ signifies that $f(n) = \Omega(g(n))$ and $f(n) = O(g(n))$ (i.e., $g(n)$ is of the same order as $f(n)$).
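As an aside, these definitions are easy to illustrate numerically. The sketch below (our own illustration; the function $f$ and the witness constants are not from the text) exhibits constants $a$, $b$, $n^\prime$ showing that $f(n) = 3\log n + 7$ is $\Theta(\log n)$:

```python
from math import log

# illustrative function; the claim is f(n) = Theta(log n)
def f(n):
    return 3 * log(n) + 7

# witnesses for the Bachmann-Landau definitions: a*g(n) <= f(n) <= b*g(n)
# for all n >= n_prime, with g(n) = log n
a, b, n_prime = 1.0, 15.0, 2

theta_holds = all(a * log(n) <= f(n) <= b * log(n)
                  for n in range(n_prime, 10000))
```

The ratio $f(n)/\log n = 3 + 7/\log n$ decreases monotonically toward 3, so any $b$ above its value at $n^\prime$ works.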
Scaling in a single coordinate {#sec:Single}
==============================
Overview of main results in this section
----------------------------------------
In this section, we consider the scaling of capacity as a function of time, where the number of molecules is held constant, and vice versa. In both cases, we show that the asymptotic scaling is logarithmic in the growing coordinate: in Theorem \[thm:tTheta\], we show that $C(t) = \Theta(\log t)$ for constant $m$, and in Theorem \[thm:mTheta\], we show that $C(m) = \Theta(\log m)$ for constant $t$.
Our approach is to find an upper bound for capacity using a maximum-entropy argument, and a lower bound for capacity using an example communication system. The results follow by observing that the upper and lower bounds have the same asymptotic order.
Scaling time with constant molecules
------------------------------------
Assume that the number of molecules $m$ is fixed, and evaluate the capacity as the number of time intervals $t$ increases.
\[lem:tOmega\] For fixed $m$, $$C(t) = \Omega(\log t) .$$
The proof is found in Appendix \[apx:tOmega\].
\[lem:tO\] For fixed $m$, $$C(t) = O(\log t) .$$
Write mutual information as $$\begin{aligned}
I(X^t;Y^t) &= H(X^t) - H(X^t \given Y^t) \\
\label{eqn:lem:tO1}
&\leq H(X^t) ,
\end{aligned}$$ where (\[eqn:lem:tO1\]) follows from the nonnegativity of conditional entropy. Since the $m$ molecules are indistinguishable, an input $x^t$ is determined by the multiset of release times; allowing molecules to go unsent, there are at most ${t+m \choose m} \leq (t+1)^m$ possible inputs, so $$H(X^t) \leq m \log (t+1) . \nonumber$$ Substituting back into (\[eqn:lem:tO1\]), we have $$I(X^t;Y^t) \leq m \log (t+1). \nonumber$$ Since $m$ is constant (by assumption), and since $C(t) = \max_{p_{X^t}(x^t)} I(X^t;Y^t)$, the lemma follows.
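As a quick numerical sanity check on this counting argument (our own illustration, not part of the proof): with $m$ indistinguishable molecules, $t$ release times, and the option of leaving molecules unsent, the number of distinct inputs is ${t+m \choose m} \leq (t+1)^m$, so the maximum input entropy grows like $m \log t$ for fixed $m$:

```python
from math import comb, log2

def num_inputs(t, m):
    # multisets of size m over t release slots plus an "unsent" bin
    return comb(t + m, m)

def max_entropy_bits(t, m):
    # H(X^t) <= log2 of the number of possible inputs
    return log2(num_inputs(t, m))

# the bound H(X^t) <= m*log2(t+1) holds over the whole grid checked
bound_ok = all(max_entropy_bits(t, m) <= m * log2(t + 1)
               for t in range(1, 60) for m in range(1, 8))
```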
\[thm:tTheta\] For fixed $m$, $$C(t) = \Theta(\log t). \nonumber$$
The theorem follows directly from *Lemmas* \[lem:tOmega\]-\[lem:tO\], and the definition of $\Theta(\log t)$.
Scaling molecules with constant time
------------------------------------
In this section, we assume that the number of time intervals $t$ is fixed, and evaluate the mutual information as the number of molecules $m$ increases.
\[lem:mOmega\] For fixed $t$, $$C(m) = \Omega(\log m) .$$
The proof is found in Appendix \[apx:mOmega\].
\[lem:mO\] For fixed $t$, $$C(m) = O(\log m) .$$
Note that $$\begin{aligned}
\label{eqn:lem:mO1pre}
I(X^t;Y^t) &\leq H(Y^t) \\
\label{eqn:lem:mO1}
&\leq \sum_{i=1}^t H(Y_i) ,
\end{aligned}$$ where (\[eqn:lem:mO1\]) follows from the chain rule of entropy and the properties of conditional entropy. Further, since there are only $m$ molecules in total, $$\label{eqn:lem:mO2}
H(Y_i) \leq \log (m+1) ,$$ since $Y_i$ takes one of the $m+1$ values $\{0,1,\ldots,m\}$. Substituting (\[eqn:lem:mO2\]) into (\[eqn:lem:mO1\]) gives $I(X^t;Y^t) \leq t \log (m+1)$; since $t$ is constant (by assumption), and since $C(m) = \max_{p_{X^t}(x^t)} I(X^t;Y^t)$, the lemma follows.
\[thm:mTheta\] For fixed $t$, $$C(m) = \Theta(\log m). \nonumber$$
The theorem follows directly from *Lemmas* \[lem:mOmega\]-\[lem:mO\], and the definition of $\Theta(\log m)$.
Scaling in both time and molecules {#sec:both}
==================================
The news from Section \[sec:Single\] is grim: a simplistic reading of these results would be that capacity scales only logarithmically in both $t$ and $m$. However, if $m$ is proportional to $t$, the story changes. In this section we restrict ourselves to the natural case where the number of molecules $m$ is upper bounded by $\alpha t$, for some constant $\alpha$. Our main result is to show that $I(X^t;Y^t) = \Theta(m) = \Theta(t)$. Since, as many authors have pointed out, the number of molecules $m$ is proportional to the transmitted energy, a constraint of the form $m \leq \alpha t$ can be interpreted as a power constraint.
Our approach in this section is similar to that in Section \[sec:Single\]: we give a maximum entropy result as the upper bound, and a practical system as the lower bound.
For $0 \leq \lambda \leq 1$, let $\mathcal{H}(\lambda)$ represent the binary entropy function: $$\mathcal{H}(\lambda) = \lambda \log \frac{1}{\lambda} + (1-\lambda) \log \frac{1}{1-\lambda}. \nonumber$$ We make use of the well-known result that $$\label{eqn:BinomialEntropyBound}
\log {n \choose k} \leq n \mathcal{H} \left( \frac{k}{n} \right),$$ and the property that, given $n$ indistinct objects and $k$ distinct bins, the number of ways to assign objects to bins is $$\label{eqn:bins}
{n+k-1 \choose k-1 } .$$
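Both facts are easy to verify numerically; the following sketch (our own check, not part of the argument) tests the entropy bound (\[eqn:BinomialEntropyBound\]) on a grid and confirms the stars-and-bars count (\[eqn:bins\]) by brute-force enumeration:

```python
from itertools import product
from math import comb, log2

def H2(x):
    # binary entropy in bits, with the convention H2(0) = H2(1) = 0
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

# entropy bound: log2 C(n, k) <= n * H2(k/n)
entropy_ok = all(log2(comb(n, k)) <= n * H2(k / n) + 1e-9
                 for n in range(1, 60) for k in range(0, n + 1))

def count_assignments(n, k):
    # brute-force count of ways to put n indistinct objects in k bins
    return sum(1 for bins in product(range(n + 1), repeat=k)
               if sum(bins) == n)

# stars and bars: the count equals C(n+k-1, k-1)
bins_ok = all(count_assignments(n, k) == comb(n + k - 1, k - 1)
              for n in range(0, 6) for k in range(1, 5))
```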
\[lem:mtOmega\] For some constant $\alpha > 0$, suppose $m \leq \alpha t$. Then $C(t) = \Omega(t)$ and $C(m) = \Omega(m)$.
The proof is found in Appendix \[apx:mtOmega\].
\[lem:mtO\] For some constant $\alpha > 0$, suppose $m \leq \alpha t$. Then $C(t) = O(t)$ and $C(m) = O(m)$.
For convenience, assume $\alpha t$ is an integer; we first show that $I(X^t;Y^t) = O(t)$. First, how many ways are there to arrange any $m \leq \alpha t$ molecules in $t$ time slots? This is equivalent to arranging exactly $\alpha t$ indistinct objects in $t+1$ distinct bins: for any such assignment, there are $m \leq \alpha t$ objects in the first $t$ bins, representing molecules assigned to time slots; and $\alpha t - m$ objects in bin $t+1$, representing molecules not sent. From (\[eqn:bins\]), the number of assignments $A$ is given by $$\label{eqn:mtO0}
A = {t + \alpha t \choose t} .$$ Moreover, $$\begin{aligned}
I(X^t;Y^t) &\leq H(X^t) \\
&\leq \log A \\
\label{eqn:mtO1}
&\leq (t + \alpha t) \mathcal{H} \left( \frac{t}{t+\alpha t} \right) \\
\label{eqn:mtO2}
&\leq (1+\alpha)t ,
\end{aligned}$$ where (\[eqn:mtO1\]) follows from (\[eqn:BinomialEntropyBound\]) and (\[eqn:mtO0\]), while (\[eqn:mtO2\]) follows since $\mathcal{H}(\cdot) \leq 1$. Moreover, this expression upper bounds $C(t)$, since it upper bounds the maximum of $I(X^t;Y^t)$. Finally, (\[eqn:mtO2\]) is clearly $O(t)$; and since $m \leq \alpha t$, $C(t) = O(t)$ implies $C(m) = O(m)$ by the definition of $O(\cdot)$, and the lemma follows.
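Numerically (our own check), the chain of inequalities ending in (\[eqn:mtO2\]) can be confirmed directly: $\log_2 {t+\alpha t \choose t} \leq (1+\alpha)t$ whenever $\alpha t$ is an integer, and the bound is tight up to a constant factor:

```python
from math import comb, log2

def log_arrangements(t, alpha):
    # log2 of A = C(t + alpha*t, t), cf. the counting argument above
    return log2(comb(t + alpha * t, t))

# linear-in-t upper bound over a grid of integer alpha
linear_ok = all(log_arrangements(t, a) <= (1 + a) * t
                for t in range(1, 50) for a in (1, 2, 3))

# for alpha = 1 the ratio log2(A)/t approaches 2 = (1+alpha) from below
ratio = log_arrangements(400, 1) / 400
```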
\[thm:mtTheta\] For some constant $\alpha > 0$, suppose $m \leq \alpha t$. Then $C(t) = \Theta(t)$ and $C(m) = \Theta(m)$.
The theorem follows directly from *Lemmas* \[lem:mtOmega\]-\[lem:mtO\], and the definitions of $\Theta(t)$ and $\Theta(m)$.
Proof of Lemma \[lem:tOmega\] {#apx:tOmega}
-----------------------------
Divide the interval $t$ into intervals of length $\tau = \lfloor \sqrt{t} \rfloor$. The number of such intervals $\ell$ is $$\begin{aligned}
\ell &= \left\lfloor t/\tau \right\rfloor \\
& \geq \sqrt{t} -1.\end{aligned}$$ First suppose $m=1$. To transmit data, we select one of the $\ell$ intervals (uniformly at random) and release our one molecule during that interval. Then $$\begin{aligned}
H(X^t) &= \log \ell \\
&\geq \log \left( \sqrt{t} - 1 \right) .\end{aligned}$$
Since $m = 1$, at most one element of $Y^t$ is equal to 1. At the receiver, suppose $U$ is formed from $Y^t$ as follows: if $y_i = 1$, and $(j-1)\tau + 1 \leq i \leq j\tau$, then $U = j$; if all $y_i = 0$, then $U = \ell+1$. Further, the receiver decides that the molecule was transmitted at the beginning of the $U$th interval. Note that there are $\ell+1$ possible outcomes for $U$, and an error occurs if and only if the molecule takes longer than $\tau$ time units to arrive. Thus, the probability of error is $$\label{eqn:tOmegaPerr}
P_e = 1-F_N(\tau) ,$$ where $F_{N}$ represents the CDF of the first arrival time.
Using Fano’s inequality (recalling that $X^t$ takes one of $\ell$ equiprobable values), $$\begin{aligned}
H(X^t \given U)
&\leq (1-F_N(\tau)) \log \ell + \mathcal{H}(1-F_N(\tau)) \\
\label{eqn:tOmegaFano}
&\leq (1-F_N(\tau)) \log \ell + 1 ,\end{aligned}$$ where (\[eqn:tOmegaFano\]) follows from the fact that $\mathcal{H}(\cdot) \leq 1$. Thus $$\begin{aligned}
\nonumber \lefteqn{I(X^t;U)} & \\
&= H(X^t) - H(X^t \given U) \nonumber\\
&\geq \log \ell - (1 - F_N(\tau)) \log \ell - 1 \nonumber\\
\label{eqn:tFinal1}
&\geq F_N(\tau)\log \left( \sqrt{t} - 1\right) - 1 ,\end{aligned}$$ where the last line uses $\ell \geq \sqrt{t}-1$. By the capacity definition and the data processing inequality, $$\begin{aligned}
C(t) &\geq I(X^t;Y^t) \geq I(X^t;U) \\
&\geq F_N(\tau)\log \left( \sqrt{t} - 1\right) - 1 .\end{aligned}$$ Finally, $\log(\sqrt{t}-1) = \Omega(\log(\sqrt{t})) = \Omega(\log t)$.
Finally, we generalize to $m > 1$: suppose the transmitter releases [*all*]{} the molecules at once, and $U$ gives the time of arrival of the [*first*]{} arriving molecule. Then (\[eqn:tOmegaPerr\]) becomes $$P_e = \left( 1-F_N(\tau) \right)^m, \nonumber$$ and (\[eqn:tFinal1\]) becomes $$\begin{aligned}
I(X^t;Y^t) &\geq \log \left( \sqrt{t} - 1 \right) - (1 - F_N(\tau))^m \log \left( \sqrt{t} - 1\right) - 1 \nonumber\\
\nonumber
&\geq \log \left( \sqrt{t} - 1 \right) - (1 - F_N(\tau)) \log \left( \sqrt{t} - 1\right) - 1, \end{aligned}$$ which follows since $(1-F_N(\tau))^m \leq 1-F_N(\tau)$ for $m \geq 1$. The remainder of the derivation is identical.
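To see the $\Omega(\log t)$ behaviour concretely, the final lower bound $F_N(\tau)\log(\sqrt{t}-1) - 1$ can be evaluated for a specific first-arrival distribution; the geometric pmf below is our own illustrative choice (any $p_N$ satisfying the stated conditions would do):

```python
from math import floor, log2, sqrt

def F_geom(n, q=0.5):
    # CDF of an illustrative geometric first-arrival time on {0,1,2,...}
    # with pmf p_N(n) = q*(1-q)^n; it satisfies F_N(n0) >= c with n0=0, c=q
    return 1.0 - (1.0 - q) ** (n + 1)

def capacity_lower_bound(t, q=0.5):
    # C(t) >= F_N(tau) * log2(sqrt(t) - 1) - 1, with tau = floor(sqrt(t))
    tau = floor(sqrt(t))
    return F_geom(tau, q) * log2(sqrt(t) - 1) - 1

# the bound grows without limit, at a rate approaching (1/2) log2 t
ts = [16, 256, 4096, 65536]
bounds = [capacity_lower_bound(t) for t in ts]
rates = [b / log2(t) for b, t in zip(bounds, ts)]
```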
Proof of Lemma \[lem:mOmega\] {#apx:mOmega}
-----------------------------
In this proof, consider a communication scheme that works as follows. Let $\mathcal{W} = \{W_1,W_2,\ldots,W_n\}$ represent the signalling alphabet, where each $W_i$ is an integer number of molecules between 0 and $m$. We form $X^t$ by setting $X_1 = W$ (where $W \in \mathcal{W}$), and $X_2 = X_3 = \ldots = X_t = 0$. That is, all molecules are released in the first time instant. At the receiver, we form $U = \sum_{i=1}^t Y_i$ from $Y^t$.
Let $p = F_N(t)$, and let $q = 1-p$. Chebyshev’s inequality can be rewritten $$ \Pr\left( |U - pW| < k \sqrt{Wpq} \right) \geq 1-\frac{1}{k^2} \nonumber$$ Since $W \leq m$, $$\label{eqn:Chebyshev2}
\Pr\left( |U - pW| < k \sqrt{mpq} \right) \geq 1-\frac{1}{k^2}$$ The event under the probability can be rewritten $$\label{eqn:YRange}
pW - k \sqrt{mpq} < U < pW + k \sqrt{mpq} .$$ For the elements $\{W_1,W_2,\ldots,W_n\}$ of the signalling alphabet, let $$\label{eqn:XSelection}
W_j = 2jk\sqrt{mq/p}.$$ The peak signal is $W_n = m$, so $m = 2nk\sqrt{mq/p}$ and $n = (1/2k)\sqrt{mp/q}$, rounding in each case to the nearest integer as necessary.
Moreover, suppose the elements of $\mathcal{W}$ are uniformly distributed. Then $$\begin{aligned}
H(X^t) &= \log n \nonumber \\
&= \frac{1}{2} \log m + \log \frac{1}{2k}\sqrt{\frac{p}{q}} . \nonumber\end{aligned}$$
Let $D(U)$ represent a decoding function such that $D(U) =~j$ if $$\label{eqn:DecodingRange0}
p2jk\sqrt{\frac{mq}{p}} - k \sqrt{mpq} < U \leq p2jk\sqrt{\frac{mq}{p}} + k \sqrt{mpq} .$$ After some manipulation, (\[eqn:DecodingRange0\]) becomes $$\label{eqn:DecodingRange}
(2j-1)k\sqrt{mpq} < U \leq (2j+1)k\sqrt{mpq} .$$ From (\[eqn:Chebyshev2\])-(\[eqn:XSelection\]), the probability of error using $D(U)$ is at most $1/k^2$. By Fano’s inequality, $$\label{eqn:Fano}
H(X^t \given U) \leq \frac{1}{k^2} \log (n-1) + \mathcal{H}\left(\frac{1}{k^2}\right) ,$$ where $\mathcal{H}$ is the binary entropy function. Since $n \geq 1$, $2n \geq n+1$, so we can relax the bound in (\[eqn:Fano\]) slightly to $$\begin{aligned}
H(X^t \given U) &\leq \frac{1}{k^2} \log (n+1) + \mathcal{H}\left(\frac{1}{k^2}\right) \nonumber \\
&= \frac{1}{k^2} (1 + \log n) + \mathcal{H}\left(\frac{1}{k^2}\right) \nonumber\\
&= \frac{1}{2k^2} \log m + \frac{1}{k^2}\left(1+ \log \frac{1}{2k}\sqrt{\frac{p}{q}}\right)
+ \mathcal{H}\left(\frac{1}{k^2}\right). \nonumber\end{aligned}$$ Finally, $$\begin{aligned}
C(m) &\geq I(X^t;Y^t) \geq I(X^t;U) \nonumber \\
\nonumber
&\geq \frac{1}{2} \log m + \log \frac{1}{2k}\sqrt{\frac{p}{q}} \\
& - \frac{1}{2k^2} \log m - \frac{1}{k^2}\left(1+ \log \frac{1}{2k}\sqrt{\frac{p}{q}}\right)
- \mathcal{H}\left(\frac{1}{k^2}\right) \nonumber \\
& = \frac{1}{2} \left( 1 - \frac{1}{k^2} \right) \log m + K , \nonumber\end{aligned}$$ where $K$ is constant in $m$; this is clearly $\Omega(\log m)$.
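The Chebyshev step (\[eqn:Chebyshev2\]) guarantees a decoding error probability of at most $1/k^2$; for a concrete (purely illustrative) choice of $m$, $k$ and $p$, this can be verified exactly from the binomial law of $U$ given $W = W_j$:

```python
from math import comb, sqrt

def binom_pmf(n, p, u):
    return comb(n, u) * p**u * (1 - p)**(n - u)

def decode_error(m, j, k, p):
    # exact error probability of D(U) for codeword W_j = 2jk*sqrt(m*q/p):
    # decoding succeeds iff (2j-1)k*sqrt(mpq) < U <= (2j+1)k*sqrt(mpq),
    # where U ~ Binomial(W_j, p)
    q = 1.0 - p
    W = round(2 * j * k * sqrt(m * q / p))
    lo = (2 * j - 1) * k * sqrt(m * p * q)
    hi = (2 * j + 1) * k * sqrt(m * p * q)
    return sum(binom_pmf(W, p, u) for u in range(W + 1)
               if not (lo < u <= hi))

# illustrative parameters: m = 10000 molecules, p = F_N(t) = 0.5, k = 3
err = decode_error(m=10000, j=1, k=3, p=0.5)
```

For these parameters the exact error is far below the Chebyshev guarantee $1/k^2 = 1/9$, as expected from the Gaussian-like concentration of the binomial.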
Proof of Lemma \[lem:mtOmega\] {#apx:mtOmega}
------------------------------
We will start by considering the case of $\alpha = 1$, and generalize the result afterward.
Consider the following communication scheme: at each time instant, we release a single molecule with probability $r$, and release no molecule with probability $(1-r)$. Obviously, $m \leq t$. As before, the receiver forms $Y^t$ by counting the number of arrivals at each time instant.
To simplify the proof, however, the receiver will actually observe $W^t$, a processed version of $Y^t$: $$w_i =
\left\{
\begin{array}{cl}
1, & y_i \geq 1 \\
0, & y_i = 0.
\end{array}
\right. \nonumber$$ We now determine $\gamma_0 := \Pr(w_i = 0 \given x_i = 0)$ (the notation $:=$ signifies assignment). First, molecular releases are i.i.d. by assumption. Second, for each $j > 0$, a molecule arrives at time $i$ if and only if one was released at time $i-j$, and its propagation delay was $j$. Thus, $$\gamma_0 = \prod_{j=1}^{i-1} \Big( 1 - rp_N(j) \Big) . \nonumber$$ For $\gamma_1 := \Pr(w_i = 0 \given x_i = 1)$ , $$\gamma_1 = \Big( 1 - p_N(0) \Big)\prod_{j=1}^{i-1} \Big( 1 - rp_N(j) \Big) . \nonumber$$ For $w,x \in \{0,1\}$, define $$g_i(w \given x) :=
\left\{
\begin{array}{cl}
\gamma_x, & w = 0 \\
1-\gamma_x, & w = 1 ,
\end{array}
\right. \nonumber$$ and $$g_i(w) := r g_i(w \given 1) + (1-r) g_i(w \given 0) . \nonumber$$ It should be clear that $g_i(w \given x) = p_{W_i | X_i}(w \given x)$, and $g_i(w) = p_{W_i}(w)$ is the corresponding marginal. Finally, let $$\begin{aligned}
I(W_i;X_i) &= E \left[ \log \frac{p_{W_i | X_i}(w \given x)}{p_{W_i}(w)} \right] \nonumber\\
&= E \left[ \log \frac{g_i(w \given x)}{g_i(w)} \right] ,\nonumber\end{aligned}$$ and let $I_0 = \min_i I(W_i;X_i)$. It is straightforward to show that $I_0 > 0$ so long as $p_N(0) > 0$. Then $$\begin{aligned}
\label{eqn:mtOmega1}
I(Y^t ; X^t) & \geq I(W^t;X^t) \\
\label{eqn:mtOmega2}
&= E \left[ \log \frac{p_{W^t | X^t}(w^t \given x^t)}{p_{W^t}(w^t)} \right] \\
\label{eqn:mtOmega3}
&\geq E \left[ \log \frac{\prod_{i=1}^t g_i (w \given x)}{\prod_{i=1}^t g_i (w)} \right] \\
\label{eqn:mtOmega4}
&= \sum_{i=1}^t E \left[ \log \frac{g_i (w \given x)}{g_i (w)} \right] \\
\label{eqn:mtOmega5}
&= \sum_{i=1}^t I(W_i;X_i) \\
\label{eqn:mtOmega6}
&\geq t I_0 ,\end{aligned}$$ where (\[eqn:mtOmega1\]) follows from the data processing inequality, (\[eqn:mtOmega2\]) follows from the definition of mutual information, and (\[eqn:mtOmega3\]) follows from the auxiliary channel lower bound for mutual information (see [@arn06]). Finally, from the last line, $I(Y^t;X^t) =~\Omega(t)$.
To generalize beyond $\alpha = 1$, clearly if $\alpha > 1$ these arguments still apply, since $m \leq t < \alpha t$. If $\alpha < 1$, we restrict the input to use only a fraction $\alpha$ of the time instants, sending nothing at the remaining times; in this case, the final line (\[eqn:mtOmega6\]) becomes $I(Y^t ; X^t) \geq \alpha t I_0$, which is still $\Omega(t)$.
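The positivity of $I_0$, on which the linear scaling rests, can be cross-checked numerically for a concrete arrival distribution; the geometric pmf and the release probability $r = 1/2$ below are our own illustrative choices, with $p_N(0) > 0$ as the proof requires:

```python
from math import log2, prod

def I_bit(i, r, pN):
    # I(W_i; X_i) for the binary auxiliary channel of the proof:
    # gamma0 = P(w_i=0 | x_i=0), gamma1 = P(w_i=0 | x_i=1)
    g0 = prod(1 - r * pN(j) for j in range(1, i))
    g1 = (1 - pN(0)) * g0
    cond = {(0, 0): g0, (1, 0): 1 - g0, (0, 1): g1, (1, 1): 1 - g1}
    marg = {w: r * cond[(w, 1)] + (1 - r) * cond[(w, 0)] for w in (0, 1)}
    info = 0.0
    for x, px in ((0, 1 - r), (1, r)):
        for w in (0, 1):
            if cond[(w, x)] > 0:
                info += px * cond[(w, x)] * log2(cond[(w, x)] / marg[w])
    return info

pN = lambda n: 0.5 * 0.5 ** n        # illustrative geometric pmf, pN(0) = 1/2
I0 = min(I_bit(i, 0.5, pN) for i in range(1, 50))
```

Because $\gamma_1 < \gamma_0$ strictly whenever $p_N(0) > 0$, the per-instant channel is non-degenerate and the minimum stays bounded away from zero.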
---
abstract: 'As the Hubbard energy at half filling is believed to reproduce at strong coupling (part of) the all loop expansion of the dimensions in the $SU(2)$ sector of the planar ${\cal N}=4$ SYM, we compute an exact non-perturbative expression for it. For this aim, we use the effective and well-known idea in 2D statistical field theory of converting the Bethe Ansatz equations into two coupled non-linear integral equations (NLIEs). We focus our attention on the highest anomalous dimension for fixed bare dimension or length, $L$, analysing the many advantages of this method for extracting exact behaviours as the length and the ’t Hooft coupling, $\lambda$, vary. For instance, we will show that the large $L$ (asymptotic) expansion is exactly reproduced by its analogue in the BDS Bethe Ansatz, though the exact expression clearly differs from the BDS one (by non-analytic terms). Performing the limits on $L$ and $\lambda$ in different orders is also under strict control. Eventually, the precision of the numerical integration of the NLIEs is as impressive as in other easier-looking theories.'
---
LAPTH - 1167/06\
[**Hubbard’s Adventures in ${\cal N}=4$ SYM-land?**]{}\
[**Some non-perturbative considerations on finite length operators.**]{}\
\
$^a$[*LAPTH [^1], 9 Chemin de Bellevue, BP 110, F-74941 Annecy-le-Vieux Cedex, France*]{}\
$^b$[*INFN and Dept. of Physics, University of Bologna, Via Irnerio 46, Bologna, Italy*]{}\
$^c$[*LPTA, Université Montpellier II, Place Eugène Bataillon, 34095 Montpellier, France*]{}
Keywords: Quantum Integrability (Bethe Ansatz); Non-Linear Integral Equation; Hubbard model; Super Yang-Mills theories.
Prologue
========
It is a modern achievement that gauge theories, and in particular supersymmetric gauge theories, [*hide*]{} many realisations of algebraic geometry (cf. [@KW], a recent monumental reference on the [*last*]{} discovered parallel and many other features).
More specifically, the AdS/CFT correspondence [@MWGKP] should provide a general dictionary, which would equate – among other physical objects – energies of string states to anomalous dimensions of local gauge-invariant operators of a dual conformal quantum field theory. Proving or even testing this duality in full generality may be a formidable task, but the integrability properties of the ${\cal N}=4$ super Yang-Mills (SYM) theory have proved to be extremely useful to understand how it may work and to what extent.
The identification [@MZ] of the one-loop dilatation operator of scalar gauge-invariant fields with bare dimension $L$ with an $SO(6)$ integrable chain with $L$ sites, reducing in the $SU(2)$ subspace to the spin $1/2$-XXX Heisenberg chain, allowed, by using the Bethe Ansatz technique [@Bethe], to test the one-loop AdS/CFT duality in many cases [@HLPS; @BTZ; @GK] beyond the BMN conditions [@BMN]. In the aforementioned cases, special emphasis has been placed on the $1/L$ correction, as this would constitute the first quantum correction in string theory; and more generally all the finite size $L$ corrections would have a similar stringy origin and importance. Soon afterwards, integrability of ${\cal N}=4$ SYM at higher loops started to be hinted and hunted [@BKS]. After various attempts and tests (cf. for instance [@SS]), eventually in [@BDS] an all loop asymptotic expression was proposed for the eigenvalues of the dilatation operator in the $SU(2)$ sub-sector, in terms of the solutions of Bethe Ansatz-like equations, derived by assuming BMN scaling and perturbative integrability. Moreover, this proposal (named after them BDS equations) was shown to give the correct (truncated) Bethe equations for the five loop dilatation operator, after deriving the latter as an operator. Nevertheless, the BDS equations are valid only asymptotically, that is for fixed $L$ when the ’t Hooft coupling $\lambda$ is small enough that the ${\cal O} (\lambda^L)$ term ($L$ loops) may be neglected along with the higher order powers. In fact, the higher loops are clearly affected by the chain wrapping problem – namely an interaction range longer than the chain length – which is not taken into account by the BDS proposal. In this respect, important progress was the remark, by Rej, Serban and Staudacher [@RSS], that the $SU(2)$ dilatation operator could be reproduced up to three loops by the strong coupling expansion of the Hamiltonian of the half-filled Hubbard model.
Many tests of the proposal [@RSS] started to be performed (cf. for instance [@MIN]), while it seems now clear that starting from four loops the Hubbard model will reproduce only part of the entire contributions (likely the ’rational’ ones), the string theory/gauge theory discrepancies motivating the introduction of a specific dressing factor [@BES] also in gauge theory. Indeed, although the dressing factor would also account for the large $\lambda$ behaviour, it is for now unclear how to insert it into the two Lieb-Wu Bethe equations for the Hubbard model [@LW]. By contrast, its introduction into the BDS Bethe Ansatz is manifest, and therefore a comparative study of the BDS versus the Hubbard model is one of the motivations of this paper.
In a previous paper [@FFGR] we proposed a description of the highest and immediately lower energy states, for both the $SO(6)$ chain and the BDS model, based on the non-linear integral equation (NLIE). The NLIE was first introduced in [@KP; @DDV; @FMQR] for studying the finite size scaling of the ground state and of the excited states in (critical and off-critical) statistical (lattice) field theories respectively. Although it is equivalent to the set of all the Bethe Ansatz equations, it is often more suitable for numerical and analytical calculations, especially when it is important, like in the present case, to detect how the anomalous dimension (energy) behaves with the length $L$ (especially in large $L$ investigations). In fact, it condenses many algebraic equations into a single (or only very few) integral equation(s). In this respect, we will prove here that it is the right tool to deal with the two possible orderings of the limits of large size $L$ and large coupling $\lambda$. Furthermore, we will find a systematic way to perform the two expansions for small coupling and large coupling at any fixed size.
In this paper we want to introduce the NLIEs as a profitable treatment of the Hubbard model, especially for the understanding of the exact scaling behaviour of the dimension (energy) with $\lambda$ and $L$. We will concentrate on the highest energy (anomalous dimension) state of the half-filled Hubbard model ($SU(2)$ sub-sector of ${\cal N}=4$ SYM). This state is described by two coupled NLIEs which will be written in Section 3. In Section 4 we will give an exact expression for its energy, as a function of the coupling $g$ and the length of the chain $L$, in terms of the solution of the two coupled NLIEs. This peculiar expression for the energy allows a comparison at large length $L$ (Section 5) with the analogous result coming from the BDS chain: we will show that the $1/L$ leading term and all the next finite size corrections (power-like and logarithmic) in fact coincide (i.e. the usual large $L$ asymptotic expansions do coincide), the difference being captured by exponentially small corrections (whose leading contribution we estimate at strong coupling). Moreover, as a consistency check of our findings, in Section 6 the weak and strong coupling limits of the NLIEs on one hand and of the energy on the other hand will be studied and shown to reproduce the known results. The strong coupling limit of the BDS model is also carefully analysed. As a consequence, Section 7 is devoted to the understanding of the ordering of the two distinct limits $\lambda\rightarrow +\infty$ and $L\rightarrow +\infty$, both in the Hubbard and BDS models. Eventually, a detailed numerical analysis is carried out in the last Section 8.
The Hubbard model: a bird’s-eye view
====================================
The Hubbard model was introduced as a simplified model for strongly correlated electrons on a lattice [@hub]. In one dimension, it describes $N_{\text{e}}$ electrons moving on a chain with $L$ sites and interacting via the Hamiltonian $$H=-t \sum _{i=1}^L \sum _{\sigma=\uparrow , \downarrow}\left
(c_{i,\sigma}^{\dagger}
c_{i+1,\sigma}+c_{i+1,\sigma}^{\dagger}c_{i,\sigma}\right) +U \sum
_{i=1}^L
c_{i,\uparrow}^{\dagger}c_{i,\uparrow}c_{i,\downarrow}^{\dagger}c_{i,\downarrow}
\, , \label {Hubbham}$$ where $c_{i,\sigma}^{\dagger}$, $c_{i,\sigma}$ are (fermionic) canonical creation and annihilation operators respectively, $t$ is the strength of the kinetic nearest-neighbour hopping term, $U$ the coupling constant of the density potential and, for our purposes, periodic boundary conditions are assumed, i.e. $c_{i+L,\sigma}=c_{i,\sigma}$, $c_{i+L,\sigma}^{\dagger}=c_{i,\sigma}^{\dagger}$.
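For very small chains, the Hamiltonian (\[Hubbham\]) can be built explicitly as a numerical cross-check. The Jordan-Wigner sketch below for $L=2$, with a single hopping bond per spin species, is our own toy construction (the mode ordering and helper names are illustrative, not from the paper); it verifies hermiticity and particle-number conservation:

```python
import numpy as np

# fermionic modes ordered (site1,up), (site2,up), (site1,down), (site2,down)
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])        # single-mode annihilator

def c(mode, n_modes=4):
    # Jordan-Wigner: a Z-string on all modes preceding `mode`
    ops = [Z] * mode + [a] + [I2] * (n_modes - mode - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def hubbard_l2(t, U):
    cs = [c(m) for m in range(4)]
    n = [ci.T @ ci for ci in cs]              # number operators (real matrices)
    H = np.zeros((16, 16))
    for m1, m2 in ((0, 1), (2, 3)):           # one hopping bond per spin
        H -= t * (cs[m1].T @ cs[m2] + cs[m2].T @ cs[m1])
    H += U * (n[0] @ n[2] + n[1] @ n[3])      # on-site up-down interaction
    return H

H = hubbard_l2(t=1.0, U=4.0)
N = sum(c(m).T @ c(m) for m in range(4))      # total particle number
hermitian = np.allclose(H, H.T)
conserves_n = np.allclose(H @ N, N @ H)
```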
In the relevant paper [@RSS], a precise sub-set of the energies $E$ of (\[Hubbham\]) was conjectured to be proportional to the anomalous contribution $\gamma$ to the conformal dimensions in the $SU(2)$ scalar sector of ${\cal N}=4$ super Yang-Mills theory in the planar limit, $$\gamma = \frac {\lambda }{8 \pi ^2} E \, , \label{andim}$$ provided we restrict ourselves to the half-filling case $N_{\text{e}}=L$ and also equate the length $L$ to the number of constituent operators.
Now, a very important part of the correspondence between an integrable system and a gauge theory is the mapping of the coupling constants, and the latter can be easily argued to be a strong-weak coupling duality for many reasons[^2]. Therefore, one possible choice, reproducing the known results up to three loops, may well be [@RSS] $$t=-\frac {1}{{\sqrt {2}}g}=-\frac {2\pi}{\sqrt {\lambda}} \, ,
\quad U=-\frac1{g^2}= -\frac {8\pi^2}{\lambda} \, , \label{tUg}$$ where $\lambda=Ng_{YM}^2=8\pi ^2 g^2$ is the ’t Hooft coupling of the $SU(N)$ SYM theory in the planar limit ($N\rightarrow
\infty$). But this can be modified by higher order contributions still preserving, of course, the matching outcomes. Actually, to have a loop expansion of (\[andim\]) in $g^2$ with the right [*wrapping phenomenon*]{} occurring at ${\cal O}(g^{2L})$, we need to introduce a constant magnetic flux $\phi$ [@RSS], $$H=\frac {1}{{\sqrt {2}}g} \sum _{i=1}^L \sum _{\sigma=\uparrow ,
\downarrow}\left (e^{i\phi }c_{i,\sigma}^{\dagger}
c_{i+1,\sigma}+e^{-i\phi}c_{i+1,\sigma}^{\dagger}c_{i,\sigma}\right)
-\frac {1}{g^2} \sum_{i=1}^L
c_{i,\uparrow}^{\dagger}c_{i,\uparrow}c_{i,\downarrow}^{\dagger}c_{i,\downarrow}
\, , \label {Hubbham2}$$ distinguishing odd and even lengths: $\phi=0$ when $L$ is odd and $\phi=\frac {\pi}{2L}$ when $L$ is even.
The Hubbard Hamiltonian (\[Hubbham\]) describes an integrable model (infinitely many conserved charges in involution), which was diagonalised via Bethe Ansatz by Lieb and Wu in 1968 [@LW]. The twisted Hamiltonian (\[Hubbham2\]) is still integrable and the Lieb-Wu equations easily generalise. In the half-filling case they read [@RSS] $$\begin{aligned}
e^{i\hat{k}_jL}&=&\prod _{l=1}^M \frac {u_l - \frac {2t}{U}\sin
(\hat{k}_j+\phi)-\frac {i}{2}} {u_l - \frac {2t}{U}\sin
(\hat{k}_j+\phi)+\frac {i}{2}} \nonumber \\
\prod _{j=1}^L \frac {u_l - \frac {2t}{U}\sin
(\hat{k}_j+\phi)+\frac {i}{2}} {u_l - \frac {2t}{U}\sin
(\hat{k}_j+\phi)-\frac {i}{2}}&=& \mathop{\prod _ {m=1}}_{m \not=l}^M \frac
{u_l-u_m+i}{u_l-u_m-i} \, , \label {Beqs}\end{aligned}$$ where $M$ is the number of down spins. The spectrum of the Hamiltonian is then given in terms of the pseudo-momenta $\hat{k}_j$, by the [*free dispersion relation*]{} $$\label{energia}
E=-2 t \sum _{j=1}^{L} \cos (\hat{k}_j+\phi) \,.$$ Starting from here, we will equivalently derive two coupled nonlinear integral equations (NLIEs) for the antiferromagnetic state of the model at any value of $L$, thanks to the methods used in [@FFGR]. For reason of completeness, we point out that the thermodynamics (infinite length $L$, but finite temperature) of the Hubbard model has been studied [@KB; @JKS] by means of three NLIEs (for a summary of the procedure and a complete list of references see [@DEGKKK]). This approach was based on the equivalence of the (quantum) one-dimensional Hubbard model with the (classical) two-dimensional Shastry model. For the gauge theory understanding, we need to obtain energies of the Hubbard model at zero temperature, but at any value of the length $L$. This completely justifies our approach, and [*a fortiori*]{} in the perspective of extending our calculations to excited states (i.e. lower dimension operators in the SYM spectrum).
Two non-linear integral equations (NLIEs)
=========================================
Looking at the Bethe equations (\[Beqs\]), we define the function $$\Phi (x,\xi)=i \ln \frac {i\xi +x}{i\xi -x} \, , \label {Phi}$$ with the branch cut of $\ln(z)$ along the real negative $z$-axis in such a way that $-\pi < \arg z <\pi$. Then, we perform a gauge transformation which amounts to adding the magnetic flux: $$k_j=\hat{k}_j+\phi \,.$$ After choosing the counting functions as $$\begin{aligned}
W(k)&=&L(k-\phi) -\sum _{l=1}^M \Phi \left (u_l-\frac {2t}{U}\sin
k,
\frac{1}{2} \right ) \, , \label {Wdef} \\
Z(u)&=&\sum _{j=1}^L \Phi \left (u - \frac {2t}{U}\sin k_j,
\frac{1}{2} \right ) -\sum _{m=1}^M \Phi \left (u-u_m, 1 \right )
\, , \label {Zdef}\end{aligned}$$ we can rewrite the Bethe equations, by taking their logarithm, in the usual form of quantisation conditions for the Bethe roots $\{k_j,u_l\}$, $$\begin{aligned}
W(k_j)&=&\pi M +2 \pi I^w_j \, , \\
Z(u_l)&=&\pi (M-L+1+2I^z_l) \, .\end{aligned}$$ From now on, we specialise our treatment to the highest energy state, consisting of the maximum number $M=L/2$ of real roots $u_l$ and of $L$ real roots $k_j$. For simplicity reasons, we restrict ourselves to the case $M\in 2{\Bbb N}$ (the remaining case $M\in 2{\Bbb N}+1$ is a simple modification of this case), which obviously implies $L\in 4{\Bbb N}$.
In the definition of the counting functions (\[Wdef\], \[Zdef\]) we have to deal with sums of functions computed on real Bethe roots, $k_j$ and $u_l$. Let us first concentrate on functions of $k_j$. We notice that $k_j$ may run only within the first Brillouin zone $[-\pi,\pi)$ and that the functions of $k_j$ involved are periodic with period $2\pi$. On the other hand, the counting function $W(k)$ is quasi-periodic on that interval and $e^{iW(k)}$ and $W'(k)$ are indeed periodic. Using the Cauchy theorem on a contour encircling the interval $[-\pi,\pi)$ at a small displacement $\epsilon>0$ (this periodic case has been developed in [@FR]), we get $$\begin{aligned}
\label{cauchyW}
\sum _{j=1}^L f(k_j)=&-&\int _{\pi }^{-\pi} \frac{dk}{2\pi
i}f(k+i\epsilon) \frac
{iW^{\prime}(k+i\epsilon)e^{iW(k+i\epsilon)}}{1-e^{iW(k+i\epsilon)}}-
\nonumber \\
&-&\int _{-\pi}^{\pi} \frac{dk}{2\pi i}f(k-i\epsilon) \frac
{iW^{\prime}(k-i\epsilon)e^{iW(k-i\epsilon)}}{1-e^{iW(k-i\epsilon)}}
\, ,\end{aligned}$$ where the two complex integrals along $-\epsilon<{\mbox { Im}}
k<\epsilon$ at ${\mbox {Re}} k=\pm\pi$ have been neglected thanks to the periodicity properties of $f(k)$ and $e^{iW(k)}$. A rule of thumb to understand this [*logarithmic indicator formula*]{} goes as follows: since $W^\prime (k)> 0$ [^3], the first integral is simply the logarithmic derivative in the following formula, but the second one is not because of the non-analyticity of the logarithm [^4]. Nevertheless, the latter can be simply manipulated into a logarithmic derivative of an analytic function plus an extra piece: $$\begin{aligned}
\sum _{j=1}^L f(k_j)&=&-\int _{-\pi}^{\pi} \frac {dk}{2\pi
i}f(k+i\epsilon)
\frac {d}{dk}\ln \left [1-e^{iW(k+i\epsilon)}\right]+ \\
&+& \int _{-\pi}^{\pi} \frac {dk}{2\pi i}f(k-i\epsilon) \frac
{d}{dk}\ln \left [1-e^{-iW(k-i\epsilon)}\right] + \int
_{-\pi}^{\pi} \frac {dk}{2\pi} f(k-i\epsilon) W^\prime
(k-i\epsilon) \, . \nonumber\end{aligned}$$ To make the last term useful, we can compute it along the real axis without any harm, because of the periodicity of $f(k)$ and $W'(k)$; then, after integrating by parts the two integrals before it, we arrive at $$\begin{aligned}
\sum _{j=1}^L f(k_j)&=&\int _{-\pi}^{\pi} \frac {dk}{2\pi
i}f^\prime
(k+i\epsilon) \ln \left [1-e^{iW(k+i\epsilon)}\right]- \label {Wsum0}\\
&-&\int _{-\pi}^{\pi} \frac {dk}{2\pi i}f^\prime (k-i\epsilon) \ln
\left [1-e^{-iW(k-i\epsilon)}\right] +\int _{-\pi}^{\pi} \frac
{dk}{2\pi} f(k)W^\prime (k) \, , \nonumber\end{aligned}$$ because the boundary terms vanish as a consequence of the periodicity of $f(k)$ and $e^{iW(k)}$. Upon integrating by parts the last term, we finally obtain $$\begin{aligned}
\sum _{j=1}^L f(k_j)&=&-\int _{-\pi}^{\pi} \frac {dk}{2\pi}
f^\prime (k)W(k) + {\mbox { Im}} \int _{-\pi}^{\pi} \frac {dk}{\pi
}f^\prime (k+i\epsilon ) \ln \left [
1-e^{iW(k+i\epsilon )}\right] + \nonumber \\
&+& \left [ \frac {f(k)W(k)}{2\pi} \right ]^{\pi}_{-\pi} \, .
\label {Wsumeps}\end{aligned}$$ We will mainly use this formula in the $\epsilon \rightarrow 0^+$ limit, $$\sum _{j=1}^L f(k_j)=-\int _{-\pi}^{\pi} \frac {dk}{2\pi} f^\prime
(k)W(k) + \int _{-\pi}^{\pi} \frac {dk}{\pi
}f^\prime (k) {\mbox { Im}} \ln \left [
1-e^{iW(k+i0)}\right] + \left [ \frac {f(k)W(k)}{2\pi} \right
]^{\pi}_{-\pi} \, , \label {Wsum}$$ where the $\epsilon \rightarrow 0^+$ limit can be taken thanks to the assumed analyticity of $f(k)$ on the real axis.
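The nonlinear terms ${\mbox {Im}}\ln \left [1-e^{iW(k+i0)}\right ]$ have a simple geometric meaning: for real $\theta \notin 2\pi {\Bbb Z}$ one has $\operatorname{Im}\ln (1-e^{i\theta})=\frac{(\theta \bmod 2\pi)-\pi}{2}$, a sawtooth that jumps by $-\pi$ each time $\theta$ crosses a multiple of $2\pi$, i.e. each time the counting function passes a root. This branch identity can be checked numerically; the following Python snippet is an illustrative sketch only (the function names are ours):

```python
import numpy as np

def im_log_indicator(theta):
    """Im ln(1 - e^{i theta}) on the principal branch."""
    return np.angle(1.0 - np.exp(1j * theta))

def sawtooth(theta):
    """Closed form ((theta mod 2*pi) - pi)/2, valid for theta not in 2*pi*Z."""
    return ((theta % (2.0 * np.pi)) - np.pi) / 2.0

# compare the two expressions away from the jump points
theta = np.linspace(0.1, 4.0 * np.pi - 0.1, 1000)
assert np.allclose(im_log_indicator(theta), sawtooth(theta), atol=1e-10)
```

The jumps of this sawtooth are exactly the "logarithmic indicator" events counted by the Cauchy-theorem argument above.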
As for the sum of a generic function $g(x)$ over the roots $u_l$ (which, for the ground state, may in principle lie anywhere on the real axis[^5]), we can follow similar steps and repeat the original procedure for $x\in \mathbb{R}$ [@DDV; @FMQR]. In this case the boundary terms appearing in the computation can be neglected for various reasons. One sufficient set of conditions, which applies to the case $g=\Phi$, relevant for the derivation of the NLIEs for $W$ and $Z$, turns out to be [^6] $$\begin{aligned}
&&Z'(\pm \infty + iy)=0 \, , \, -\epsilon <y<\epsilon \, ; \quad g(+\infty)=-g(-\infty) \, , \quad Z(+\infty)=-Z(-\infty) \, , \nonumber \\
&& \quad Z(+\infty\pm i\epsilon)=-Z(-\infty\mp i\epsilon ) \, ,
\quad g(+\infty\pm i\epsilon)=-g(-\infty\mp i\epsilon ) . \label
{fZcond}\end{aligned}$$ In formulæ we can write $$\sum _{l=1}^M g(u_l)=-\int _{-\infty}^{\infty} \frac
{dx}{2\pi}g^{\prime}(x)Z(x)+{\mbox { Im}} \int _{-\infty}^{\infty}\frac
{dx}{\pi}g^{\prime}(x+i\epsilon ) \ln \left
[1+e^{iZ(x+i\epsilon )}\right ] \, , \label {Zsumeps}$$ or, in the $\epsilon \rightarrow 0^+$ limit, $$\sum _{l=1}^M g(u_l)=-\int _{-\infty}^{\infty} \frac
{dx}{2\pi}g^{\prime}(x)Z(x)+\int _{-\infty}^{\infty}\frac
{dx}{\pi}g^{\prime}(x){\mbox { Im}} \ln \left
[1+e^{iZ(x+i0 )}\right ] \, . \label {Zsum}$$ We thus have our building blocks, formulæ (\[Wsum\]) and (\[Zsum\]), whose right-hand sides are written in terms of the respective counting functions. Let us apply them to the definition of $W(k)$, $$\begin{aligned}
W(k)&=&L(k-\phi)+\int _{-\infty}^{\infty} \frac
{dx}{2\pi}\Phi ^ {\prime} \left ( x-\frac
{2t}{U} \sin k, \frac {1}{2} \right ) Z(x) - \nonumber \\
&-& \int _{-\infty}^{\infty}\frac
{dx}{\pi} \Phi ^{\prime} \left ( x-\frac
{2t}{U} \sin k, \frac {1}{2} \right ) {\mbox {Im}}\ln \left
[1+e^{iZ(x+i0)}\right ] \, , \label {Weq1}\end{aligned}$$ and to the definition of $Z(u)$, $$\begin{aligned}
Z(u)&=&L \Phi \left (u,\frac {1}{2}\right )+ \frac {2t}{U}\int
_{-\pi}^{\pi} \frac {dk}{2\pi}\Phi ^{\prime}
\left( u-\frac {2t}{U}\sin k, \frac {1}{2} \right) \cos k \: W(k)-\nonumber \\
&-&\frac {2t}{U}\int _{-\pi}^{\pi} \frac {dk}{\pi}\Phi
^{\prime}\left ( u-\frac {2t}{U}\sin k, \frac {1}{2} \right ) \cos
k
{\mbox { Im}}\ln \left [1-e^{iW(k+i0)}\right ]- \label {Zeq1} \\
&-& \int _{-\infty}^{\infty}\frac
{dy}{2\pi} \Phi ^{\prime} (u-y,1)Z(y)
+ \int _{-\infty}^{\infty}\frac
{dy}{\pi} \Phi ^{\prime} (u-y,1){\mbox { Im}}\ln \left
[1+e^{iZ(y+i0)}\right ] \, . \nonumber\end{aligned}$$ Inserting in the equation for $Z$ the expression for $W$ coming from (\[Weq1\]), we get $$\begin{aligned}
Z(u)&=&L \Phi \left (u ,\frac {1}{2}\right ) + L \frac {2t}{U}\int
_{-\pi}^{\pi} \frac {dk}{2\pi}\Phi ^{\prime}\left (
u-\frac {2t}{U}\sin k , \frac {1}{2} \right ) \cos k \:(k-\phi) -\nonumber \\
&-& \int _{-\infty}^{\infty}\frac
{dy}{2\pi} \Phi ^{\prime} (u-y,1)\, Z(y)
+ \int _{-\infty}^{\infty}\frac
{dy}{\pi} \Phi ^{\prime} (u-y,1){\mbox { Im}}\ln \left
[1+e^{iZ(y+i0)}\right ] - \nonumber \\
&-&\frac {2t}{U}\int _{-\pi}^{\pi} \frac {dk}{\pi}\Phi
^{\prime}\left ( u-\frac {2t}{U}\sin k , \frac {1}{2} \right )
\cos k {\mbox { Im}} \ln \left [1-e^{iW(k+i0)}\right ] \, ,
\label {Zeq2}\end{aligned}$$ where we used the following cancellation of terms, $$\int _{-\pi}^{\pi} dk~ \Phi ^{\prime}\left ( x-\frac {2t}{U}\sin k
, \frac {1}{2} \right ) \cos k ~ \Phi ^{\prime}\left ( y-\frac
{2t}{U}\sin k , \frac {1}{2} \right ) =0 \, ,$$ which can be easily proven by performing the change of variable $k\rightarrow \pi - k$. We now write the equation for $Z(u)$ (\[Zeq2\]) in terms of Fourier transforms[^7], using $$\hat \Phi (p, \xi)=\frac {2\pi} {i}P\left (\frac {1}{p}\right)
e^{-\xi |p|} \, ,$$ where $P$ indicates the principal value distribution. We obtain the following expression, $$\begin{aligned}
\hat Z(p)&=& L 2 \pi \frac{e^{-\frac {|p|}{2}} }{i}P\left (\frac
{1}{p}\right) + L \frac {2t}{U} \int _{-\pi}^{\pi} dk\ e^{-i\frac
{2 t p}{U}\sin k } e^{-\frac {|p|}{2}}
\cos k ~(k-\phi) - \nonumber \\
&-& e^{-|p|} \hat Z(p) +2 e^{-|p|} \hat L _Z (p) -\frac {4t}{U}
e^{-\frac {|p|}{2}} \int _{-\pi}^{\pi} dk\
e^{-i\frac {2tp}{U}\sin k } \cos k ~ L_W(k) = \nonumber \\
&=& L \frac {2\pi}{i}P\left (\frac {1}{p}\right) e^{-\frac
{|p|}{2}} J_0\left (\frac{2tp}{U}\right )
- e^{-|p|} \hat Z(p)+ 2 e^{-|p|} \hat L _Z (p)- \nonumber \\
&-&\frac {4t}{U} e^{-\frac {|p|}{2}} \int _{-\pi}^{\pi} dk \,
e^{-i\frac {2tp}{U}\sin k } \cos k~ L_W(k) \, ,\end{aligned}$$ where we used the integral definition of the Bessel function $J_0(z)$, $$J_0(z)=\int _{-\pi}^{\pi} \frac {dk}{2\pi}\, e^{i\,z\sin k}\,,$$ and also the following shorthand notations $$L_W(k)= {\mbox {Im}}\ln \left [1-e^{iW(k+i0)}\right ] \, , \quad
L_Z(x)= {\mbox {Im}}\ln \left [1+e^{iZ(x+i0)}\right ] \, .$$ The terms proportional to $\hat Z(p) $ are now collected and reorganized as $$\hat Z(p)= L \frac {\pi}{i}P\left (\frac {1}{p}\right) \frac
{J_0\left (\frac
{2tp}{U}\right )}{\cosh \frac {p}{2}} +
\frac {2}{1+e^{|p|}}\hat L _Z (p) -\frac {2t}{U} \frac {1}{\cosh
\frac {p}{2}} \int _{-\pi}^{\pi} dk~ e^{-i\frac {2tp}{U}\sin k}
\cos k \: L_W(k) \, ,$$ and, coming back to the ’coordinate’ space, we obtain the first of two nonlinear integral equations for our counting functions, $$\begin{aligned}
Z(u)&=&L \int _{-\infty}^{\infty} \frac {dp}{2p} \sin (pu) \frac
{J_0\left ( \frac {2tp}{U}\right )}{\cosh \frac {p}{2}}+2 \int
_{-\infty}^{\infty} dy \ G(u-y) \ {\mbox {Im}}\ln \left
[1+e^{iZ(y+i0)}\right ]- \nonumber \\
&-&\frac {2t}{U}\int _{-\pi}^{\pi} dk \cos k \frac {1}{\cosh \left
( \pi u - \frac {2t\pi}{U}\sin k \right ) } \
{\mbox {Im}}\ln \left [1-e^{iW(k+i0)}\right ] \, , \label {Zeq4}\end{aligned}$$ where $G(x)$ is the same kernel function that appears in the spin $1/2$-XXX chain and in the BDS Bethe Ansatz (eq. 2.24 of [@FFGR]), $$G(x)=\int _{-\infty}^{\infty} \frac {dp}{2\pi} e^{ipx} \frac
{1}{1+e^{|p|}} \, . \label {Gxxx}$$ We notice that the first line of the NLIE for $Z$ (\[Zeq4\]) coincides with the NLIE (eq. 3.15 of [@FFGR]) for the counting function of the highest energy state of the BDS model. The second line of (\[Zeq4\]) is the genuine contribution of the Hubbard model.
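In practice the kernel (\[Gxxx\]) can be evaluated by expanding $\frac{1}{1+e^{|p|}}=\sum_{n\geq 1}(-1)^{n+1}e^{-n|p|}$ term by term, which gives $G(x)=\frac{1}{\pi}\sum_{n\geq 1}(-1)^{n+1}\frac{n}{n^{2}+x^{2}}$ and, in particular, $G(0)=\frac{\ln 2}{\pi}$. A numerical cross-check of this series against the Fourier integral (an illustrative sketch; the function names are ours):

```python
import numpy as np
from scipy.integrate import quad

def G_series(x, terms=100000):
    """G(x) = (1/pi) sum_{n>=1} (-1)^{n+1} n/(n^2+x^2), from expanding 1/(1+e^{|p|})."""
    n = np.arange(1, terms + 1)
    a = (-1.0) ** (n + 1) * n / (n ** 2 + x ** 2)
    s = np.cumsum(a)
    # average the last two partial sums to accelerate the alternating series
    return 0.5 * (s[-1] + s[-2]) / np.pi

def G_fourier(x):
    """G(x) from the Fourier integral (Gxxx), reduced to [0, infinity) by parity."""
    val, _ = quad(lambda p: np.cos(p * x) / (1.0 + np.exp(p)), 0.0, 50.0)
    return val / np.pi

for x in (0.0, 0.7, 2.5):
    assert abs(G_series(x) - G_fourier(x)) < 1e-6
assert abs(G_series(0.0) - np.log(2.0) / np.pi) < 1e-6
```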
We finally remark that NLIE (\[Zeq4\]) can be written in the alternative form $$\begin{aligned}
Z(u)&=&L \int _{-\pi}^{\pi}\frac {dk}{2\pi} {\mbox { gd}}\left (
\pi u - \frac {2t\pi}{U} \sin k \right )+2 \int
_{-\infty}^{\infty} dy \ G(u-y) \ {\mbox {Im}}\ln \left
[1+e^{iZ(y+i0)}\right ]+ \nonumber \\
&+& \int _{-\pi}^{\pi}\frac {dk}{\pi}\ \frac {d}{dk} {\mbox {gd}}
\left ( \pi u - \frac {2t\pi}{U} \sin k \right ){\mbox {Im}}\ln
\left [1-e^{iW(k+i0)}\right ] \, ,\end{aligned}$$ after introducing the hyperbolic amplitude (the Gudermannian) ${\mbox {gd}}(x)$: $${\mbox {gd}}(x)=\int _{0}^{x} \frac {dt}{\cosh t}=2\arctan e^x -
\frac {\pi}{2} \, .$$ On the other hand, starting from (\[Weq1\]) and inserting in it the equation for $Z$ (\[Zeq4\]), we obtain the second of our nonlinear integral equations: $$\begin{aligned}
W(k)&=&L\left[ (k-\phi) + \int _{-\infty}^{\infty} \frac {dp}{p}
\sin \left ({\frac
{2tp}{U}\sin k }\right ) \frac {J_0\left (\frac {2tp}{U}\right )}{1+e^{|p|}}\right]
- \nonumber \\
&-&\int _{-\infty}^{\infty}dx \, \frac {1}{\cosh \left ( \frac
{2t\pi}{U}\sin k-\pi x \right ) } \,
{\mbox {Im}}\ln \left[1+e^{iZ(x+i0)}\right ]- \label {Weq2}\\
&-& \frac {4t}{U} \int _{-\pi}^{\pi} dh \ G \left ( \frac {2t}{U}
\sin h-\frac {2t}{U}\sin k \right ) \cos h \mbox{ Im} \ln
\left[1-e^{iW(h+i0)}\right ] \, . \nonumber\end{aligned}$$ The two equations (\[Zeq4\], \[Weq2\]) are coupled by integral terms and are completely equivalent to the Bethe equations for the highest energy state.
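The Gudermannian identity used in the alternative form above, together with the equivalent closed form ${\mbox {gd}}(x)=\arctan (\sinh x)$, can be verified directly; the following sketch is for illustration only (the function name is ours):

```python
import numpy as np
from scipy.integrate import quad

def gd(x):
    """Gudermannian via its defining integral gd(x) = int_0^x dt / cosh(t)."""
    val, _ = quad(lambda t: 1.0 / np.cosh(t), 0.0, x)
    return val

for x in (-3.0, -0.5, 0.0, 1.2, 4.0):
    closed = 2.0 * np.arctan(np.exp(x)) - np.pi / 2.0
    assert abs(gd(x) - closed) < 1e-10          # matches 2 arctan(e^x) - pi/2
    assert abs(closed - np.arctan(np.sinh(x))) < 1e-12  # equivalent closed form
```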
The energy or anomalous dimension.
==================================
The eigenvalues of the Hamiltonian (\[Hubbham2\]) on the Bethe states are given by (\[energia\]). The highest eigenvalue can be worked out by using (\[Wsum\]). We get: $$E=-2t \left\{ \int _{-\pi}^{\pi} \frac {dk}{2\pi}\sin k ~W(k)
-\int _{-\pi}^{\pi} \frac {dk}{\pi}\sin k {\mbox { Im}}\ln
\left[1-e^{iW(k+i0)}\right ] - L \right\} \,. \label {En1}$$ We now insert the NLIE for $W$ (\[Weq2\]) and observe the cancellation of the first and last terms: $$\int _{-\pi}^{\pi} \frac {dk}{2\pi} \sin k ~(k-\phi) -1=0 \, .$$ Therefore, we are left with $$\begin{aligned}
E&=&-2t \left\{ L \int _{-\pi}^{\pi} \frac {dk}{2\pi}\sin k \int
_{-\infty}^{\infty} \frac {dp}{p}\sin \left (\frac {2tp}{U} \sin k
\right) \frac {J_0\left ( \frac {2tp}{U} \right)}{e^{|p|}+1}
\right.
\nonumber \\
&-& \int _{-\pi}^{\pi} \frac {dk}{2\pi}\sin k \int
_{-\infty}^{\infty}dx \frac {L_Z(x)}{\cosh \left (\frac {2 \pi
t}{U}
\sin k -\pi x \right )} \nonumber \\
&-&\frac{2t}{U} \int _{-\pi}^{\pi} \frac {dk}{\pi}\sin k \int
_{-\pi}^{\pi} dh~ G \left [ \frac {2t}{U} (\sin h-\sin k)
\right ]\cos h ~L_W(h) \nonumber \\
&-&\left . \int _{-\pi}^{\pi} \frac {dk}{\pi}\sin k ~ L_W(k)
\right\} \, .\end{aligned}$$ We recognize the presence of the Bessel function $$J_1(z)=\frac {1}{2\pi i}\int _{-\pi}^{\pi} dk \sin k ~ e^{iz
\sin k} \, , \label {J1}$$ in the first three terms of the right-hand side (in the second and third we have used the Fourier representations, e.g. (\[Gxxx\]) for $G$). We finally obtain that the highest eigenvalue of (\[Hubbham2\]) is expressed in terms of the counting functions $Z$ and $W$ as follows, $$\begin{aligned}
E&=&-2t\left\{ L \int _{-\infty}^{\infty} \frac {dp}{p} \frac
{J_0\left (\frac
{2tp}{U}\right ) J_1\left (\frac {2tp}{U}\right ) }{e^{|p|}+1}
+ \int _{-\infty}^{\infty} dx \left [ \int _{-\infty}^{\infty}
\frac
{dp}{2\pi}\frac {e^{ipx}}{\cosh \frac {p}{2}}i J_1\left (\frac
{2tp}{U}\right ) \right ] L_Z(x) - \right. \nonumber \\
&-& \frac{2t}{U} \left. \int _{-\pi}^{\pi} \frac {dh}{\pi} L_W(h)
\cos h \left [ \int _{-\infty}^{\infty} \frac {dp}{i}
e^{i\frac {2tp}{U}\sin h } \frac {J_1\left (\frac
{2tp}{U}\right ) }{e^{|p|}+1} \right ]-
\int _{-\pi}^{\pi} \frac {dh}{\pi} L_W(h)
\sin h \right\} \nonumber \\
&\equiv& E_L+E_Z+E_{W1}+E_{W2} \,, \quad \text{and}\quad E_W\equiv
E_{W1}+E_{W2}. \label {Eexp}\end{aligned}$$ The first line of (\[Eexp\]), namely $E_L+E_Z$, coincides formally with the expression of the highest energy of the BDS chain as given in equation (3.24) of [@FFGR]. However, we have to remember that for the Hubbard model $Z$ satisfies a NLIE which is different from that of the BDS model. On the other hand, the second line, i.e. $E_W=E_{W1}+E_{W2}$, is a completely new contribution.
The large $L$ expansions of Hubbard and BDS energies in comparison \[comparison\]
=================================================================================
As first noticed in [@RSS], in the $L=\infty$ limit (thermodynamic limit) the leading term of the highest energy $E_{\text{BDS}}$ of the BDS model coincides with the thermodynamic expression of the Hubbard model, i.e. the first contribution in (\[Eexp\]). Since we can provide exact expressions for the energies at any length $L$, we want to extract more information about the difference $E_{\text{BDS}}-E$ when $L$ is large but finite, and we are in a position to obtain this for any value of the coupling constant $g$.
For the highest energy $E_{\text{BDS}}$, many detailed results were given in [@FFGR], where it was expressed as (cf. equation 3.24) $$\begin{aligned}
E_{\text{BDS}}&=&\frac{\sqrt{2}}{g} \left\{ L \int
_{-\infty}^{\infty} \frac {dp}{p} \frac { J_0 ( \sqrt{2}g p ) J_1
(\sqrt{2}g p ) }{e^{|p|}+1} + \right. \nonumber \\
&+& \left. \int _{-\infty}^{\infty} dx \left [ \int
_{-\infty}^{\infty} \frac
{dp}{2\pi}\frac {e^{ipx}}{\cosh \frac {p}{2}}i J_1 (\sqrt{2}g p) \right]
L_{Z_{\text{BDS}}}(x) \right\} \label {EBDS} \, ,\end{aligned}$$ (with the usual shorthand $L_{Z_{\text{BDS}}}(x)= {\mbox {Im}}\ln
[1+e^{iZ_{\text{BDS}}(x+i0)} ]$), in terms of the solution of the NLIE $$Z_{\text{BDS}}(x)=L \int _{-\infty}^{\infty} \frac {dp}{2p} \sin
px \frac {J_0 ( \sqrt{2}g p )}{\cosh \frac {p}{2}}+2 \int
_{-\infty}^{\infty} dy \ G(x-y) \ {\mbox {Im}}\ln \left
[1+e^{iZ_{\text{BDS}}(y+i0)}\right ].$$ In this section we use the parametrization (\[tUg\]) and focus our attention on the energy formula (\[Eexp\]). For our purposes, it is convenient to restore a finite (but small) value of the parameter $\epsilon>0$ used in the treatment of the function $W$.
As an effect of that, the last term of (\[Zeq4\]) becomes $$-\sqrt{2}g \ {\mbox {Im}} \int _{-\pi}^{\pi} dk \cos (k+i\epsilon)
\frac {1}{\cosh \left [ \pi u - \pi \sqrt{2}g \sin (k+i\epsilon)
\right ] } \
\ln \left [1-e^{iW(k+i\epsilon)}\right ] \, . \label{Zeq4last}$$ On the other hand, the last term of the NLIE (\[Weq2\]) for $W$ takes the form $$- 2\sqrt{2}g \mbox{ Im} \int _{-\pi}^{\pi} dh \ G \left [
\sqrt{2}g \sin (h+i\epsilon)-\sqrt{2}g \sin k \right ] \cos
(h+i\epsilon) \ln \left[1-e^{iW(h+i\epsilon)}\right ] \, .$$ Finally, $E_{W_1}$ and $E_{W_2}$ (\[Eexp\]) are rewritten as $$\begin{aligned}
&& E_{W_1}=-2\mbox{ Im} \int _{-\pi}^{\pi} \frac {dh}{\pi} \cos
(h+i\epsilon) \left [ \int _{-\infty}^{\infty} \frac {dp}{i}
e^{i\sqrt{2}g p \sin (h+i\epsilon) } \frac {J_1\left (
\sqrt{2}g p \right ) }{e^{|p|}+1} \right ] \ln \left[1-e^{iW(h+i\epsilon)}\right ] \nonumber \\
&& E_{W_2}=-\frac {\sqrt{2}}{g} \mbox{ Im} \int _{-\pi}^{\pi}
\frac {dh}{\pi} \sin (h+i\epsilon) \ln
\left[1-e^{iW(h+i\epsilon)}\right ] \, . \nonumber\end{aligned}$$ All these formul[æ]{} depend on $L$ through the function $\ln
\left[1-e^{iW(k+i\epsilon)}\right ]$. Therefore, we have to study this function when $L$ is large. Since $\epsilon \ll 1$ (see Footnote 2), we can approximate, to first order, $$\label{appr}
\ln [1-e^{iW(k+i\epsilon)}] = \ln [1-e^{iW(k)-\epsilon
W^{\prime}(k)}]+ {\cal O}(\epsilon ^2) \, .$$ If we suppose also that $$\label{approx}
\epsilon\, W'(k) \gg 1 \qquad \forall ~ k\in [-\pi, \pi] \, ,$$ (this condition will be made more precise in a few lines), then the factor $\exp [-\epsilon \, W'(k)]$ becomes very small and we are led to the final approximation: $$\ln [1-e^{iW(k+i\epsilon)}] \simeq - e^{iW(k)} e^{-\epsilon W'(k)}
\, .$$ On the other hand, when $L\rightarrow \infty$ we can approximate $W(k)$ by its ’forcing term’, $$W(k)\simeq L\left[ k+ \int _{-\infty}^{\infty} \frac {dp}{p} \sin
\left (
\sqrt{2}g p \sin k \right )\frac {J_0\left (\sqrt{2}g p \right )}{1+e^{|p|}} \right]\, ,$$ and, consequently, its derivative by $$W'(k)\simeq L \left[ 1+ {\sqrt{2}} g\cos k \int
_{-\infty}^{\infty} dp ~ \cos \left(\sqrt{2}g p \sin k\right)
\frac {J_0 (\sqrt{2}g p )}{1+e^{ |p|}} \right]\, .$$ The function in the square brackets has a minimum at $k=\pm \pi$, which we call $\omega (g)$: $$\omega (g)= 1- {\sqrt{2}} g \int _{-\infty}^{\infty} dp ~ \frac
{J_0 (\sqrt{2}g p )}{1+e^{ |p|}} = 1-2 \int _{0}^{\infty} dx \frac
{J_0 (x)}{1+e^{\frac {x}{\sqrt {2}g} }}
\, .$$ Expanding the denominator in power series and integrating term by term we get $$\omega (g)= 1- 2 \int _{0}^{\infty} dx J_0 (x)\sum _{n=1}^{\infty}
(-1)^{n+1} e^{-n\frac {x}{\sqrt {2}g} } = 1- 2 \sum
_{n=1}^{\infty} (-1)^{n+1} \frac {1}{\sqrt {1+\frac {n^2}{2g^2}}}
\, .$$ The last expression can be seen as a result of an integration in the complex plane $$\omega (g)= 1 + \int _{\Gamma }\frac {dz}{i} \frac {1}{\sin \pi
z} \frac {1}{\sqrt {1+\frac {z^2}{2g^2}}} \, ,$$ on a curve $\Gamma $ (see Figure 6.4 of Takahashi’s book [@TAK]), which surrounds the poles on the positive real axis of $\frac {1}{\sin \pi z}$, excluding the origin. We can deform the integration contour to the curve consisting of the points $\delta
+ iy $, with $\delta >0$ fixed and $|y|>\rho>0$, and of a semicircle of radius $\rho$ around the origin; then we let $\delta
$ and $\rho$ go to zero. The pole at $z=0$ gives a contribution $-1$ to the integral in the previous formula. The integral over the points $|y|<{\sqrt {2}g}$ vanishes because the integrand is odd. On the other hand, the integrand computed for $\bar y>{\sqrt
{2}g}$ equals the integrand in $-\bar y$, because they contain square roots of complex numbers of the same modulus, but lying just above (for $\bar y>{\sqrt {2}g}$) or just below (for $\bar
y<-{\sqrt {2}g}$) the cut. Therefore we are left with $$\omega (g)= 2\int _{{\sqrt {2}g} }^{\infty} dy \frac {1}{\sinh
\pi y} \frac {1}{\sqrt {\frac {y^2}{2g^2}-1}} \, .$$ From this expression, it easily follows that $\omega (g) >0$. Moreover, one can show that $\omega (0)=1$ and that $$g \rightarrow \infty \, \Rightarrow \omega (g) \simeq g \, {\mbox
{exp}}(-\pi {\sqrt {2}} g) \, .$$ We conclude that $\forall \, g $ the derivative $W'$ is everywhere greater than a positive constant: $W'(k)>L~ \omega (g)>0, \,
\forall ~ k \in [-\pi,\pi]$. As a consequence of this, the assumed conditions on $\epsilon $ and $L$ can be stated as $$\frac {1}{L~\omega (g)} \ll \epsilon \ll 1 \, . \label {epsicond}$$ We now consider the two $W$-dependent terms of (\[Eexp\]), $E_{W1}$ and $E_{W2}$, when (\[epsicond\]) holds. We have the following inequalities: $$\begin{gathered}
|E_{W1}| \leq 2 \int _{-\pi}^{\pi} \frac {dh}{\pi}~|\cos
(h+i\epsilon)|~ e^{-\epsilon W'(h)}~\left |
\int_{-\infty}^{\infty} \frac {dp}{i}
e^{i\sqrt{2}g p \sin (h+i\epsilon)} \frac {J_1\left (\sqrt{2}g p \right ) }{e^{|p|}+1}
\right | \nonumber \\
< 2 e^{-\epsilon L\, \omega (g)}\int _{-\pi}^{\pi} \frac
{dh}{\pi}~|\cos (h+i\epsilon) |~ \left | \int_{-\infty}^{\infty}
\frac {dp}{i} e^{i\sqrt{2}g p \sin (h+i\epsilon)} \frac {J_1\left
(
\sqrt{2}g p \right ) }{e^{|p|}+1} \right | \, . \label {Emaggio}\end{gathered}$$ The integral contained in this last line is finite, provided $\epsilon$ is sufficiently small: one needs $\sinh \epsilon <
\frac {1}{\sqrt{2}g}$ and this condition is always satisfied, as we will show in the following Remark 1. Therefore, we conclude that, in the limit $L\rightarrow \infty$, $$|E_{W_1}|< f_1(g)\, e^{-\epsilon L\, \omega (g)} \, , \label
{ew1}$$ where $f_1$ denotes the function of $g$ (but not of $L$) appearing in (\[Emaggio\]). The same conclusion for $E_{W2}$, $$|E_{W_2}|< f_2(g)\, e^{-\epsilon L\, \omega (g)} \, , \label
{ew2}$$ can be obtained, in the limit $L\rightarrow \infty$, by a similar reasoning.
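The function $\omega (g)$ controlling these bounds can be computed from either representation derived above, the alternating series or the contour-deformed integral; they can be cross-checked numerically, together with the positivity $\omega (g)>0$. An illustrative sketch (function names ours; the integral uses the substitution $y=\sqrt{2}g\cosh u$ to remove the endpoint singularity):

```python
import numpy as np
from scipy.integrate import quad

def omega_series(g, N=200000):
    """omega(g) = 1 - 2 sum_{n>=1} (-1)^{n+1} / sqrt(1 + n^2/(2 g^2))."""
    n = np.arange(1, N + 1)
    a = (-1.0) ** (n + 1) / np.sqrt(1.0 + n ** 2 / (2.0 * g ** 2))
    s = np.cumsum(a)
    # average the last two partial sums to accelerate the alternating series
    return 1.0 - (s[-1] + s[-2])

def omega_integral(g):
    """Contour form: 2 int_{sqrt(2)g}^inf dy / (sinh(pi y) sqrt(y^2/(2g^2)-1)),
    rewritten via y = sqrt(2) g cosh(u)."""
    def f(u):
        x = np.pi * np.sqrt(2.0) * g * np.cosh(u)
        return 0.0 if x > 700.0 else 1.0 / np.sinh(x)  # guard against overflow
    val, _ = quad(f, 0.0, 10.0)
    return 2.0 * np.sqrt(2.0) * g * val

for g in (0.5, 1.0, 2.0):
    assert abs(omega_series(g) - omega_integral(g)) < 1e-6
    assert 0.0 < omega_integral(g) < 1.0
```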
On the other hand, the same procedure can be applied to the third term of the r.h.s. of the NLIE for $Z$ (\[Zeq4\]), which we have rewritten for finite $\epsilon$ in (\[Zeq4last\]). This term marks the difference between the NLIE for $Z$ in the Hubbard model and that for $Z_{\text{BDS}}$ in the BDS Ansatz, and acts as a forcing term in the NLIE for the difference $Z_{\text{BDS}}-Z$. One concludes that, in the limit $L\rightarrow \infty$, such a term is exponentially small and, consequently, that $$|Z_{\text{BDS}}(x)-Z(x)|< f_Z(x, g)\, e^{-\epsilon L\, \omega (g)}
\, , \label {zdiff}$$ with an analogous meaning of the function $f_Z(x,g)$.
Now, we turn to the expression for the highest energy in the Hubbard model (\[Eexp\]) and discuss its relation with the analogous one (\[EBDS\]) in the BDS context, when $L$ is large. We remark that the second term in the r.h.s. of (\[Eexp\]) is formally identical to the second term of (\[EBDS\]), the only difference being that in the latter $Z$ is replaced by $Z_{\text{BDS}}$. However, the result (\[zdiff\]) implies that the difference between these two terms is indeed smaller than $f_{E_Z}(g)\, e^{-\epsilon L\, \omega (g)}$, with $f_{E_Z}$ a positive function of $g$. This finding, together with (\[ew1\], \[ew2\]), allows us to state that, for all finite values of $g$, $$L\rightarrow \infty \quad \Rightarrow \, |E_{\text{BDS}}-E| <
f_E(g)\, e^{-\epsilon L\, \omega (g)} \, , \label {E-EBDS}$$ i.e. the difference between the highest energies in the Hubbard and in the BDS model is exponentially small at large $L$. Therefore, not only their leading terms coincide, but also all the [*power-like and logarithmic finite size corrections*]{}: this is exactly the usual asymptotic expansion for large volume in statistical field theory. As a confirmation of this statement, we observe that the $1/L$ correction to the highest energy of the BDS model, found in [@FFGR] and expressed in terms of the modified Bessel functions $I_0,\,I_1$ by $$\label{fsc}
\frac{\sqrt2}{L\pi g}\ \frac{I_1(\sqrt2 \pi g)}{I_0(\sqrt2 \pi
g)}\ \frac{\pi^2}{6} \, ,$$ exactly matches the same result for the Hubbard model, obtained with different methods by [@woynar; @DEGKKK]. This has been studied numerically in Fig. \[unosul\].
[**Remark 1.**]{} The variable $\epsilon>0$ introduced in (\[cauchyW\]) has to satisfy the condition $\epsilon \ll 1$ (see Footnote 2). In any case, an upper bound for $\epsilon$ comes from the condition that the integration contour of (\[cauchyW\]) contains no singularities of the functions $f(x)$ appearing in the integrand. As far as the NLIE for $W$ is concerned, the function appearing in the integrations is $\Phi$ (\[Wdef\]). Therefore, the singularities come from terms like $$\log \left( \frac{i}{2} \pm (u_l-\frac{2t}{U} \sin k) \right)\,.$$ More precisely, $\bar {k}$ is a singularity if $$\frac{1}2 \pm [-\frac{2t}{U} \mbox{ Im}(\sin \bar{k})] =0\,,
\qquad u_l-\frac{2t}{U} \mbox{ Re}(\sin \bar{k}) =0\,.$$ We concentrate on the first equation that takes the form $$\label{dominata}
|\sinh (\mbox{Im }\bar{k})| = \frac{U}{4t}~ \frac1{|\cos (\mbox{Re
} \bar{k})|}\geq \frac{U}{4t} \,.$$ The upper bound for $\epsilon$, $\epsilon _M>\epsilon $, is given by the smallest value of $\mbox{Im }\bar{k}$, namely $$\label{largeeps}
\epsilon_{M} = \mbox{arcsinh}{\frac{U}{4t}} = \mbox{arcsinh} \frac
{1}{2 \sqrt{2}g}\,.$$
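As a sanity check of (\[largeeps\]), note that at $g=1$ the bound collapses to $\epsilon _M=\mathrm{arcsinh}\frac{1}{2\sqrt{2}}=\ln \sqrt{2}$, while for $g\gg 1$ one recovers $\epsilon _M\simeq \frac{1}{2\sqrt{2}g}$, the value used in Remark 3 below. A small numerical verification (illustrative; the function name is ours):

```python
import numpy as np

def eps_M(g):
    """epsilon_M = arcsinh(1/(2*sqrt(2)*g)) from (largeeps), with U/(4t) = 1/(2*sqrt(2)*g)."""
    return np.arcsinh(1.0 / (2.0 * np.sqrt(2.0) * g))

# at g = 1: arcsinh(x) = ln(x + sqrt(1+x^2)) collapses to ln(sqrt(2)) = (ln 2)/2
assert abs(eps_M(1.0) - 0.5 * np.log(2.0)) < 1e-12
# strong coupling: eps_M ~ 1/(2*sqrt(2)*g), so 2*eps_M*L ~ L/(sqrt(2)*g)
g = 100.0
assert abs(2.0 * np.sqrt(2.0) * g * eps_M(g) - 1.0) < 1e-4
```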
[**Remark 2.**]{} When $g\ll 1$, we already know that $E_{\text{BDS}}-E = {\cal {O}}(g^{2L})= {\cal {O}}(e^{2L\ln g})$. As a consequence, statement (\[E-EBDS\]) was already known to be valid when $g \ll 1$. The results of this section allow us to extend the validity of (\[E-EBDS\]) – for the highest energy state – to the non-perturbative region as well.
[**Remark 3.**]{} On the other hand, when $g\gg 1$, we can give an explicit expression for the estimated difference (\[E-EBDS\]). More precisely, we can exactly evaluate $E-E_{\text{BDS}}$ in the double limit $L\rightarrow \infty$, $g\rightarrow \infty$.
When $g\rightarrow \infty$, we have $\epsilon \leq \epsilon _M
=\frac {1}{2{\sqrt {2}}g}\ll 1$. Performing the $g\rightarrow
\infty$ limit of the $L\rightarrow \infty$ limit of $W(k+i\epsilon)$, we get $$W(k+i\epsilon)\simeq L [ k+i\epsilon +\arcsin \sin k +i\epsilon \,
{\mbox {sgn}} (\cos k )]\, .$$ The choice $\epsilon =\epsilon _M$ allows this expression to be an expansion in powers of $\frac {1}{g}$, exact up to terms ${\cal
O}(1/g)$. In the same limit, the $p$-integral contained in the formula for $E_{W_1}$ becomes $$\int _{-\infty}^{\infty} \frac {dp}{i}
e^{i\sqrt{2}g p \sin (h+i\epsilon) } \frac {J_1\left (
\sqrt{2}g p \right ) }{e^{|p|}+1} \rightarrow \frac {1}{{\sqrt {2}}g} \frac {\sin h }{|\cos h|} \, .$$ It follows that in the double limit $L\rightarrow \infty$, $g\rightarrow \infty$, $$\begin{aligned}
E_{W_1}&=& -\frac {{\sqrt {2}}}{g} \mbox{ Im} \int _{-\pi}^{\pi}
\frac {dh}{\pi} \sin h \, {\mbox {sgn}} (\cos h )
\, \ln \left [ 1-e^{iL (h+\arcsin \sin h )-\epsilon L (1+ {\text {sgn}} \cos h ) }\right ] \, , \nonumber \\
E_{W_2}&=& -\frac {{\sqrt {2}}}{g} \mbox{ Im} \int _{-\pi}^{\pi}
\frac {dh}{\pi} \sin h \, \ln \left [1- e^{iL (h+\arcsin \sin h
)-\epsilon L (1+ {\text {sgn}} \cos h )} \right ] \, . \nonumber\end{aligned}$$ Therefore, $$\begin{aligned}
E_{W_1}+E_{W_2}&=&-2\frac {{\sqrt {2}}}{g} \mbox{ Im} \int
_{-\pi/2}^{\pi/2} \frac {dh}{\pi} \sin h \, \ln \left
[1- e^{2iLh-2\epsilon L} \right ] \simeq \\
&\simeq & 2\frac {{\sqrt {2}}}{g} e^{-2\epsilon L } \int
_{-\pi/2}^{\pi/2} \frac {dh}{\pi} \sin h \,
\sin 2Lh \simeq \nonumber \\
&\simeq & - \frac {2{\sqrt {2}}}{\pi L g} e^{ -\frac {L}{{\sqrt
{2}}g}} \, , \nonumber\end{aligned}$$ where we have kept only the leading term proportional to $1/L$ and we have chosen $\epsilon = \epsilon _M=\frac {1}{2{\sqrt {2}}g}$.
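The last step uses the elementary integral $\int _{-\pi/2}^{\pi/2} \frac {dh}{\pi} \sin h \, \sin 2Lh=-\frac{1}{\pi}\frac{4L}{4L^{2}-1}\simeq -\frac{1}{\pi L}$, valid for even $L$ (recall $L\in 4{\Bbb N}$). A numerical confirmation (illustrative; the function name is ours):

```python
import numpy as np
from scipy.integrate import quad

def osc_integral(L):
    """(1/pi) * int_{-pi/2}^{pi/2} sin(h) sin(2 L h) dh."""
    val, _ = quad(lambda h: np.sin(h) * np.sin(2 * L * h),
                  -np.pi / 2, np.pi / 2, limit=200)
    return val / np.pi

for L in (4, 8, 40):
    exact = -4.0 * L / (4.0 * L ** 2 - 1.0) / np.pi  # closed form for even L
    assert abs(osc_integral(L) - exact) < 1e-9
    # large-L behaviour: approximately -1/(pi*L)
    assert abs(exact + 1.0 / (np.pi * L)) < 1.0 / (np.pi * L ** 2)
```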
On the other hand, the term (\[Zeq4last\]), which marks the difference between $Z$ and $Z_{\text {BDS}}$, in the double limit $L\rightarrow \infty$, $g\rightarrow \infty$ becomes $$\begin{aligned}
&& - \mbox{ Im} \int _{-\pi}^{\pi} dk \cos k \, \delta (\sin k) \,
\ln \left [1-e^{iL (k+\arcsin \sin k )-\epsilon
L (1+{\text {sgn}} \cos k )} \right ] = \nonumber \\
&=& - \mbox{ Im} \left [ \ln \left (1-e^{-2 \epsilon L }\right )-
\ln \left (1-e^{iL\pi} \right ) \right ] =0 \, .\end{aligned}$$ Therefore, in the double limit $L\rightarrow \infty$, $g\rightarrow \infty$ we have $Z=Z_{\text {BDS}}$ and, consequently, $$E-E_{\text {BDS}}=E_{W_1}+E_{W_2}=-\frac {2{\sqrt {2}}}{\pi L g}
e^ {-\frac {L}{{\sqrt {2}}g} } \, . \label {lglimit}$$ This behaviour is typical of the “wrapping effects”. A similar result in the context of string theory was found in [@NZZ].
[**Remark 4.**]{} For intermediate values of $g$ we cannot make a prediction for the “velocity” of the exponential damping at large $L$ of $|E_{\text{BDS}}-E|$. However, the numerical data in Section \[loafg\] are consistent with (\[E-EBDS\]).
Two limiting regimes: strong and weak coupling.
===============================================
In contrast to the previous Section, we now want to explore the Hubbard energy (\[Eexp\]) in two limiting regimes, $\frac{t}{U} \rightarrow 0$ and $\frac{U}{t} \rightarrow 0$, for any fixed $L$. They define, respectively, the strong and the weak coupling regime of the Hubbard model and allow for simplifications and for a comparison between our results and analogous ones obtained by other methods. These computations are also useful as tests of the NLIEs for $W$ (\[Weq2\]) and for $Z$ (\[Zeq4\]). In addition, we analyse the analogous limit $g\rightarrow +\infty$ of the BDS energy for any fixed value of $L$.
Strong coupling limit in the Hubbard model, i.e. weak coupling in SYM: large $\frac{U}{t}$.
-------------------------------------------------------------------------------------------
A well-known result of the perturbative expansion of the Hubbard Hamiltonian around $\frac{t}{U}=0$ at half filling is that the leading term is the Heisenberg spin-$1/2$ XXX chain Hamiltonian [@A]. This Section is devoted to deriving how our formalism consistently reproduces this result and makes a linear expansion beyond this order natural. To this aim it is crucial to observe that the NLIE for $W$ becomes redundant for very small $\frac{t}{U}$. Indeed, the NLIE for $Z(x)$ (\[Zeq4\]) easily reduces to $$\begin{aligned}
Z(x)&=&L \int _{-\infty}^{\infty} \frac {dp}{2p} ~\frac{\sin px}
{\cosh \frac {p}{2}}+2 \int
_{-\infty}^{\infty} dy\ G(x-y) {\mbox { Im}}\ln \left
[1+e^{iZ(y+i0)}\right ] + {\cal O}\left (\frac {t}{U} \right )
\nonumber \\
&=& L \ {\mbox {gd }} \pi x +2 \int
_{-\infty}^{\infty} dy\ G(x-y) {\mbox { Im}}\ln \left
[1+e^{iZ(y+i0)}\right ] + {\cal O}\left (\frac {t}{U} \right )\, ,\end{aligned}$$ and hence it precisely agrees with the single NLIE for the spin $1/2$-XXX chain (equation (2.25) of [@FFGR]) upon neglecting the ${\cal O}\left (\frac {t}{U} \right )$ terms, namely $$Z(x)=Z_{\text{XXX}}(x)+ {\cal O}\left (\frac {t}{U} \right ) \, .
\label {Zlimit}$$ In the same limit we evaluate the terms entering the r.h.s. of the NLIE for $W$ (\[Weq2\]). The integral term on the first line behaves as follows: $$\begin{aligned}
&& L \int _{-\infty}^{\infty} \frac {dp}{p}\ \frac {\sin \left
(\frac
{2tp}{U}\sin k \right )}{1+e^{|p|}} J_0 \left (\frac
{2tp}{U}\right)=\frac {2tL}{U} \sin k\int
_{-\infty}^{\infty}dp \frac {1}{1+e^{|p|}} + {\cal O}\left (\frac
{t^3}{U^3} \right ) \nonumber \\
&=& \frac {4tL}{U} \sin k \ln 2 + {\cal O}\left (\frac
{t^3}{U^3} \right ) \, .\end{aligned}$$ The term on the second line contains $L_Z$ and is (at least) $
{\cal O}\left (\frac
{t}{U}\right)$, since $$\int _{-\infty}^{\infty} dx ~ \frac {L_{Z_{\text{XXX}}}(x) }
{\cosh \pi x} =0 \, .$$ The two terms just computed are enough to determine the leading order of $W$, $$W(k)=L(k-\phi)+ {\cal O}\left (\frac {t}{U} \right ) \, . \label
{leadingW}$$ This result can be used in the NLIE (\[Zeq4\]) for $Z$ to show that the third term of the r.h.s. is $ {\cal O}\left (\frac
{t^2}{U^2}\right)$, because $$\int _{-\pi}^{\pi} dk \cos k {\mbox { Im}} \ln \left
[1-e^{iL(k-\phi+i0 )}\right ]=0 \, .$$ Since also the first term of the r.h.s. of (\[Zeq4\]) is $
{\cal O}\left (\frac
{t^2}{U^2}\right)$, we can correct (\[Zlimit\]) as $$Z(x)=Z_{\text{XXX}}(x)+ {\cal O}\left (\frac {t^2}{U^2} \right )
\, . \label {Zlimit2}$$ Consequently, the term on the second line of (\[Weq2\]) is $$- \int _{-\infty}^{\infty} dx \frac {L_Z(x)}{\cosh \left [ \frac
{2 t\pi}{U}\sin k -\pi x \right ] } = -\frac {2t\pi}{U} \sin k
\int
_{-\infty}^{\infty} dx ~ \frac {L_{Z_{\text{XXX}}}(x) \sinh \pi x}{\cosh ^2 \pi x}
+ {\cal O}\left (\frac {t^2}{U^2} \right )\, .$$ For what concerns the third line of (\[Weq2\]), we use (\[leadingW\]) to get $$\begin{gathered}
- \frac {4t}{U} \int _{-\pi}^{\pi} dh ~G \left [ \frac {2t}{U}
(\sin h-\sin k ) \right] \cos h {\mbox { Im}}\ln \left [1-e^{i W(h+i0)}\right ]= \\
= - \frac {4t}{U}\, G(0)\int _{-\pi}^{\pi} dh\ \cos h {\mbox {
Im}} \ln \left [1-e^{iL(h-\phi+i0)}\right ] + {\cal O}\left (\frac
{t^2}{U^2} \right )= {\cal O}\left (\frac {t^2}{U^2} \right ) \, . \nonumber\end{gathered}$$ The integral in the last line vanishes, as one can see from the following calculation: $$\begin{gathered}
\int_{-\pi}^{\pi}dh~ \cos h ~\ln
\frac{1-e^{iL(h-\phi+i0)}}{1-e^{-iL(h-\phi-i0)}}=
\int_{-\pi}^{\pi}dh~ \cos (h+\phi) ~\ln
\frac{1-e^{iL(h+i0)}}{1-e^{-iL(h-i0)}}=
\nonumber \\
=\int_{-\pi}^{\pi}dh~ (\cos h \cos \phi -\sin h \sin \phi) ~\ln
\frac{1-e^{iL(h+i0)}}{1-e^{-iL(h-i0)}}=0-0=0 \,. \label{integrale}\end{gathered}$$ The summand containing $\cos h \cos \phi$ is odd under the change $h\rightarrow -h$, so its integral is zero. The remaining term is also odd under the change $h\rightarrow \pi-h$, thanks to the parity of $L$. Therefore, we conclude that in the limit $\frac{t}{U}\rightarrow 0$ the solution of (\[Weq2\]) becomes $$W(k)=L(k-\phi) + \frac {4tL}{U} \ \sin k \ln 2 -\frac
{2t\pi}{U}\sin k \int_{-\infty}^{\infty} dx \frac
{L_{Z_{\text{XXX}}}(x) \sinh \pi x}{\cosh ^2 \pi x}
+ {\cal O}\left (\frac {t^2}{U^2} \right ) \, . \label {Wlim1}$$ Curiously enough, we recognize in (\[Wlim1\]) the highest energy $E_{\text{AFX}}$ of the ferromagnetic spin $1/2$-XXX chain (eq. 2.33 of [@FFGR]): $$W(k)=L(k-\phi) +\frac {2t}{U} \sin k \ E_{\text{AFX}} + {\cal
O}\left (\frac
{t^2}{U^2} \right ) \, . \label {Wlim2}$$ We now compute the highest energy (\[Eexp\]) in that limit. Using (\[Zlimit\]), it is easy to see that the first line of (\[Eexp\]), $E_L+E_Z$, is proportional to the highest energy $E_{\text{AFX}}$ of the ferromagnetic spin $1/2$-XXX chain: $$\label{energiaxxx}
E_L+E_Z = -2t \left [ \frac {t}{U} E_{\text{AFX}} + {\cal O}\left (\frac
{t^3}{U^3} \right ) \right ] \,.$$ Among the remaining terms, the first one, $E_{W1}$, is $2t\, {\cal
O}\left (\frac
{t^3}{U^3} \right )$: therefore, it can be neglected, with respect to $E_L+E_Z$, in the same limit.
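The vanishing of integrals of the type (\[integrale\]) for $L\in 4{\Bbb N}$, which is used again below for $E_{W2}$, can also be confirmed numerically by modelling the $+i0$ prescription with a small positive damping. This is only an illustrative sketch; the values of $L$, $\phi$ and the damping below are arbitrary:

```python
import cmath
import math

# Numerical check that int_{-pi}^{pi} dh cos(h) Im ln[1 - e^{iL(h - phi) + i0}]
# vanishes when L is a multiple of 4, for a generic twist phi.  The +i0
# prescription is modelled by a small positive damping eps (illustrative value).
def twisted_integral(L, phi, eps=1e-2, n=40001):
    lo, hi = -math.pi, math.pi
    step = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        h = lo + i * step
        w = 0.5 if i in (0, n - 1) else 1.0
        val = cmath.log(1.0 - cmath.exp(1j * L * (h - phi) - L * eps)).imag
        total += w * math.cos(h) * val
    return total * step

print(twisted_integral(4, 0.3), twisted_integral(8, 0.3))  # both ~ 0
```

Since the integrand is periodic and smooth for $\epsilon>0$, the trapezoidal rule converges rapidly, and the parity cancellations described above hold at any finite damping.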
The last term to evaluate, $E_{W2}$, deserves more attention. We evaluate it using (\[Wlim2\]) and expanding the logarithm where it appears: $$\begin{aligned}
E_{W2}&=& 2t \int _{-\pi}^{\pi} \frac {dh}{\pi}\sin h \,
\frac1{2i}
\ln \frac{1-e^{iW(h+i0)}}{1-e^{-iW(h-i0)}} \nonumber \\
&=& 2t \left [ \int _{-\pi}^{\pi} \frac {dh}{\pi}\sin h \,
\frac1{2i} \ln \frac{ 1-e^{iL(h-\phi+i0)+i \frac {2t}{U} \sin
(h+i0) E_{\text{AFX}}}} { 1-e^{-iL(h-\phi-i0)-i \frac {2t}{U} \sin
(h-i0) E_{\text{AFX}}}}
+{\cal O}\left(\frac{t^2}{U^2}\right) \right ] \nonumber \\
&=&2t \left \{ \int _{-\pi}^{\pi} \frac {dh}{\pi} \ \sin h \,
\frac {1}{2i}\ln \frac{1-e^{iL(h-\phi+i0)}}{1-e^{-iL(h-\phi-i0)}}+\right. \nonumber \\
&+& \left. \frac {t}{U} E_{\text{AFX}} \int _{-\pi}^{\pi} \frac
{dh}{\pi} \ \sin^2 h \left [ \frac {-e^{iL (h-\phi+i0)}}{1-{e^{iL
(h-\phi+i0)}}} -\frac {e^{-iL (h-\phi-i0)}}{1-{e^{-iL
(h-\phi-i0)}}}\right ] +{\cal O}\left(\frac{t^2}{U^2}\right)
\right \} \, . \nonumber\end{aligned}$$ We now shift the integration variable $h\rightarrow h+\phi$: $$\begin{aligned}
E_{W2}&=&2t \left [ \int _{-\pi}^{\pi}\frac {dh}{\pi} \ \sin
\left (h+\phi \right)\frac {1}{2i}\ln \frac
{1-e^{iL(h+i0)}}{1-e^{-iL(h-i0)}}+ \right.\nonumber \\
&+& \left. \frac {t}{U} \frac {E_{\text{AFX}}}{iL} \int
_{-\pi}^{\pi} \frac {dh}{\pi} \ \sin ^2 (h+\phi) \frac
{d}{dh}\ln \frac {1-e^{iL(h+i0)}}{1-e^{-iL(h-i0)}} + {\cal
O}\left(\frac{t^2}{U^2}\right) \right ] \, .\end{aligned}$$ The first integral is very similar to (\[integrale\]) and it vanishes in the same way as (\[integrale\]) does. After an integration by parts, the second integral is brought into the form $$\begin{aligned}
&& -2t \ \frac {t}{U} \frac {E_{\text{AFX}}}{iL} \int
_{-\pi}^{\pi} \frac {dh}{\pi} \ \sin [2(h+\phi)]\ \ln \frac
{1-e^{iL(h+i0)}}{1-e^{-iL(h-i0)}} = \nonumber \\
&& -2t \ \frac {t}{U} \frac {E_{\text{AFX}}}{iL} \int
_{-\pi}^{\pi} \frac {dh}{\pi} (\sin 2h \cos 2\phi +\cos 2h \sin
2\phi ) \ln \frac {1-e^{iL(h+i0)}}{1-e^{-iL(h-i0)}} \, . \nonumber\end{aligned}$$ The second integral is zero by parity; the first one can be shown to vanish by using the change of variable $h \rightarrow \frac
{\pi}{2} - h$, remembering (Section 2) that $L\in 4{\Bbb N}$. We conclude that, when $\frac{t}{U} \rightarrow 0$, $E_{W2}$ is $2t
\, {\cal O}\left(\frac{t^2}{U^2}\right)$: so, it can be neglected with respect to $E_L+E_Z$. Therefore, in the limit $t/U
\rightarrow 0$ the energy (\[Eexp\]) behaves as $$\begin{aligned}
E &=&-2t \left \{ \frac {t}{U} \left [ 2L\ln 2 + \int
_{-\infty}^{\infty }dy \left(\frac {d}{dy} \frac {1}{\cosh {\pi
y}}\right) L_{Z_{\text{XXX}}}(y) \right ]
+ {\cal O}\left(\frac{t^2}{U^2}\right) \right \}= \nonumber \\
&=& -2t \left [ \frac {t}{U} E_{\text{AFX}}+ {\cal
O}\left(\frac{t^2}{U^2}\right) \right ] \, , \label{Exxx}\end{aligned}$$ i.e. the Hubbard energy coincides, up to an overall factor, with the same quantity of the ferromagnetic spin $1/2$-XXX chain (equation 2.33 of [@FFGR]). Moreover, with the parametrization (\[tUg\]) the overall constant $-2t^2/U$ in (\[Exxx\]) equals $1$. Therefore, in that case we have the exact coincidence $$\lim _{\frac {t}{U} \rightarrow 0} E=E_{\text{AFX}} \, ,$$ which actually encodes all the non-linearity of this expansion, the remaining terms, which we omit here, being just linear order-by-order additions to it.
Weak coupling limit in the Hubbard model, i.e. strong coupling in SYM: small $\frac{U}{t}$. {#strHubb}
-------------------------------------------------------------------------------------------
We now turn to the opposite regime and expand the Hubbard energy around $\frac{U}{t}=0$ in a systematic way, stopping at the first perturbative order. The first terms of this expansion were obtained in [@MV] by means of ordinary (time-independent) perturbation theory. Here, instead, we study the NLIE for $W$ (\[Weq2\]) by expanding $$W(k;\frac{U}{t})=W_0(k)+ \frac {U}{t} W_1(k)+
o\left(\frac{U}{t}\right) \, .$$ The second term of the r.h.s. of (\[Weq2\]) is expanded as $$\begin{gathered}
L \int _{-\infty}^{+\infty} \frac {dp}{p} \, \sin (
p\sin k )\frac {J_0\left (
{p}\right )}{1+e^{\frac {U|p|}{2t}}}=L \int _{-\infty}^{+\infty} \frac {dp}{p} \, \sin (
p\sin k )\frac {J_0\left (
{p}\right )}{2}+ o\left(\frac{U}{t}\right) = \nonumber \\
= L \arcsin \sin k + o\left(\frac{U}{t}\right) \, ,\end{gathered}$$ since the order ${\cal O}(U/t)$ contribution vanishes: $$-\frac {UL}{4t} \int _{0}^{+\infty} dp \sin (
p\sin k ) J_0(p) =0 \, .$$ The third term on the right hand side is $$\begin{aligned}
&-&\int _{-\infty}^{+\infty}dx \, \frac {L_Z(x)}{\cosh \left [
\frac {2t\pi}{U}\sin k -\pi x \right ] } =-\frac{U}{2t}\int
_{-\infty}^{+\infty}dx \left [ \int _{-\infty}^{+\infty} \frac
{dp}{2\pi} \frac {e^{ip\left (\sin k -\frac {U}{2t}x
\right)}}{\cosh \frac {pU}{4t}} \right ] L_Z(x)
= \nonumber \\
&=&- \frac {U}{2t} \delta(\sin k) \int _{-\infty}^{+\infty}dx~
L_{Z_0}(x) + {\cal O}\left(\frac{U^2}{t^2}\right) \, ,
\label{zetazero}\end{aligned}$$ where $Z_0$ indicates the order zero of $Z$ in the limit $U/t\rightarrow 0$.
As for the fourth term, we obtain $$\begin{aligned}
&-& \frac {4t}{U} \int _{-\pi}^{\pi} dh \, G \left [ \frac
{2t}{U}\sin
h-\frac {2t}{U}\sin
k \right ] \cos h \, L_W(h) = \nonumber \\
&-& \int _{-\pi}^{+\pi}dh \, \cos h \, \delta (\sin h - \sin k) \,
\left [ L_{W_0}(h) - \frac {U}{t} \, {\mbox {Re}} \, \frac
{e^{iW_0(h+i0)}}{1-e^{iW_0(h+i0)}} \, W_1(h) +
o\left(\frac{U}{t}\right)\right ] \, , \nonumber\end{aligned}$$ since as a distribution $$\lim _{U/t \rightarrow 0} \frac {2t}{U} G\left (\frac {2t}{U} x
\right ) = \frac {1}{2} \delta (x) + {\cal
O}\left(\frac{U^2}{t^2}\right) \, . \label{Gdelta}$$
Therefore, at the order zero in $U/t$ the NLIE for $W$ reduces to $$\begin{aligned}
W_0(k)&=&L(k-\phi) + L \int _{-\infty}^{+\infty} \frac {dp}{2p}
\sin (p\sin k) J_0(p)-\nonumber \\
&-&\int _{-\pi}^{+\pi}dh \cos h \, \delta (\sin h-\sin k) \,
L_{W_0}(h)= \label
{Weqappr} \\
&=& L(k-\phi)+L \, {\mbox {arcsin}} \sin k - {\mbox {sgn}}(\cos k)
\, [ L_{W_0}(k) - L_{W_0}(\pi {\mbox {sgn}}k -k)] \, . \nonumber\end{aligned}$$ In the last step we used formula (6.693.1) from [@GR].
Now, it is not difficult to show that the solution of (\[Weqappr\]) is $$W_0(k)=L(k-\phi)+2\pi N(k) \, , \label {W_0}$$ where the function $N(k)$ takes only integer values. As far as energy calculations (\[Eexp\]) are concerned, the knowledge of $N(k)$ is not required.
Let us now focus on the NLIE for $Z$ (\[Zeq4\]). After the change of variable $p\rightarrow pU/2t$ in the integrand of the forcing term, we can rewrite it as follows: $$\begin{aligned}
Z(x)&=&L \int _{-\infty}^{+\infty} \frac {dp}{2p} \, \sin \frac
{pUx}{2t}\, \frac{J_0\left ( {p} \right )}{\cosh \frac{pU}{4t}
}+2 \int_{-\infty}^{+\infty} dy \, G(x-y) \, {\mbox {Im}}\ln \left
[1+e^{iZ(y+i0)}\right ]-\nonumber \\
&-&\frac{2t}{U}\int _{-\pi}^{\pi} dk \, \cos k \frac{1}{\cosh
\left [ \pi x - \frac{2t\pi}{U}\sin k \right ] }\,
{\mbox {Im}}\ln \left
[1-e^{iW(k+i0)}\right ] \, . \label {Zeq5}\end{aligned}$$ When $\frac{U}{t}\rightarrow 0$ the forcing term is clearly ${\cal
O}(U/t)$. In order to estimate the last term we express the inverse of the $\cosh $ function in terms of its Fourier transform and we get $$\begin{aligned}
&-& \frac{2t}{U}\int _{-\pi}^{\pi} dk \, \cos k \int _{-\infty}^{\infty}\frac{dp}{2\pi}{e^{ip\left (x-\frac {2t}{U}\sin k \right)}} \frac {1}{{\cosh \frac{p}{2}}} L_W(k)= \nonumber \\
&=&- \int _{-\pi}^{\pi} dk \, \cos k \int _{-\infty}^{\infty}\frac {dp}{2\pi} \frac {e^{ip\left (\frac {U}{2t}x-\sin k \right)}} {\cosh \frac {pU}{4t}} L_W(k)= \nonumber \\
&=& - \int _{-\pi}^{\pi} dk \, \cos k \, \delta (\sin k) \, L_{W_0}(k) + {\cal O}\left ( \frac{U}{t}\right)= -L_{W_0}(0)+L_{W_0}(\pi) + {\cal O}\left( \frac {U}{t}\right) \nonumber \\
&=& {\cal O}\left( \frac {U}{t} \right) \, ,\end{aligned}$$ as follows from the form (\[W\_0\]) of $W_0(k)$. This allows us to say that the solution of the NLIE for $Z$ in the limit $U/t
\rightarrow 0$ is ${\cal O}(U/t)$.
Returning to the $U/t$ expansion for $W$, the results on $Z$ allow us to say that the term (\[zetazero\]) is in fact ${\cal
O}(U^2/t^2)$. It follows that, at order $U/t$, the NLIE for $W$ reads $$W_1(k)=\int _{-\pi}^{+\pi}dh \, \cos h \, \delta (\sin h - \sin k)
\, {\mbox {Re}} \frac {e^{iW_0(h+i0)}}{1-e^{iW_0(h+i0)}} \,
W_1(h) \, ,$$ whose solution is $W_1(k)=0$. Therefore, we can write that $$W(k)=L(k-\phi)+2\pi N(k) + o\left(\frac{U}{t}\right) \, . \label
{W2ord}$$
We are now ready to compute the leading term and its first correction for the energy (\[Eexp\]) of the highest energy state in the limit $U/t\rightarrow 0$. As far as $E_L$ is concerned, Economou and Poulopoulos recast it [@EP] as an asymptotic series in powers of $U/t$. We write only the first three terms of this series: $$E_L=-2t \left [ \frac {2L}{\pi} -\frac {UL}{8t}+ \frac {7 L \zeta
(3) U^2}{32 \pi ^3 t^2}+ {\cal O}\left(\frac{U^3}{t^3}\right)
\right ] \, . \label{BDS}$$ We rewrite $E_Z$ after the change of variable $p\rightarrow pU/2t$ as $$E_Z=2t \, \frac {U}{2t} \int _{-\infty}^{\infty} dx \left [ \int
_{-\infty}^{\infty} \frac {dp}{2\pi} \frac {\sin \frac
{pUx}{2t}}{\cosh \frac {pU}{4t}} J_1(p) \right ] L_Z(x) \, .$$ Since $ {\mbox {Im}}\ln \left [1+e^{iZ(x+i0)}\right ]$ is ${\cal
O}(U/t)$ and the $p$-integral is also ${\cal O}(U/t)$, we conclude that $$E_Z=2t \, {\cal O}(U^3/t^3) \, . \label {Ezeta}$$ We are left with the contributions coming from the third and the fourth term of (\[Eexp\]), $E_{W_1}$ and $E_{W_2}$, which we rearrange, by reintroducing the function $G$, as follows $$2t \int _{-\pi}^{\pi} \frac {dh}{\pi}\cos h\, L_W(h)\int
_{-\pi}^{+\pi} dk\ \frac {2t}{U} \sin k\ G\left [ \frac {2t}{U}
(\sin h -\sin k)\right] + 2t \int _{-\pi}^{\pi} \frac
{dh}{\pi}\sin h\ L_W(h) \, .$$ Using (\[Gdelta\]), we get $$E_{W_1}+E_{W_2}=2t \left [ \int _{-\pi}^{\pi} \frac {dh}{\pi} \cos
h\ L_{W_0}(h)\frac {\sin h}{|\cos h|} + \int _{-\pi}^{\pi} \frac
{dh}{\pi}\sin h\ L_{W_0}(h) + o\left(\frac{U}{t}\right) \right ]
\, . \label {extra1}$$ Putting together these two integrals, we are left with the following addition to (\[BDS\]): $$E_{W_1}+E_{W_2}=4t \lim _{\epsilon \rightarrow 0} \left [ \int
_{-\frac {\pi}{2}}^{\frac {\pi}{2}} \frac {dh}{\pi}\sin h \ {\mbox
{Im}}\ln \left [1-e^{iW_0(h+i\epsilon)}\right ]+
o\left(\frac{U}{t}\right) \right ] \, . \label {extra2}$$ After the insertion of (\[W\_0\]) in (\[extra2\]), we are left with the computation of $$\begin{aligned}
&&4t \int _{-\frac {\pi}{2}}^{\frac {\pi}{2}} \frac {dh}{\pi}\sin
h \ {\mbox {Im}}\ln \left
[1-e^{iL(h-\phi+i\epsilon)}\right ]= \\
&&\frac {2t}{i} \int _{-\frac {\pi}{2}}^{\frac {\pi}{2}} \frac
{dh}{\pi}\sin h\ \ln \frac {1-e^{iL(h-\phi)-L\epsilon
}}{1-e^{-iL(h-\phi)-L\epsilon }}
\, .\end{aligned}$$ Integrating by parts gives $$-\frac {2Lt}{\pi} \int _{-\frac {\pi}{2}}^{\frac {\pi}{2}} dh\
\cos h \left [\frac
{e^{iL(h-\phi)-L\epsilon}}{1-e^{iL(h-\phi)-L\epsilon}}+ \frac
{e^{-iL(h-\phi)-L\epsilon}}{1-e^{-iL(h-\phi)-L\epsilon}}\right
]\, .$$ In order to perform these integrations, we expand the ratios involved as geometric series: $$-\frac {2Lt}{\pi} \int _{-\frac {\pi}{2}}^{\frac {\pi}{2}} dh \cos
h \left [ \sum _{n=1}^{\infty}e^{iLn(h-\phi)-Ln\epsilon} +
\sum _{n=1}^{\infty}e^{-iLn(h-\phi)-Ln\epsilon} \right ] \, .$$ Now the integrations can be easily performed, giving $$-\frac {2Lt}{\pi}\sum _{n=1}^{\infty}e^{-Ln\epsilon}\left [ \frac
{2\cos
Ln\phi}{1-Ln}+ \frac {2\cos Ln\phi}{1+Ln} \right ]=
-\frac {8Lt}{\pi}\sum _{n=1}^{\infty}e^{-Ln\epsilon} \cos Ln\phi
\frac
{1}{1-L^2n^2} \, .$$ Going to the limit $\epsilon \rightarrow 0$ and rearranging this series we get $$E_{W_1}+E_{W_2}=2t \left [ \frac {4}{L\pi} \sum _{n=1}^{\infty}
\frac{\cos L n \phi}{n^2-\frac{1}{L^2}} +
o\left(\frac{U}{t}\right) \right] \, .$$ When $0\leq \phi <2\pi$ the sum of this series is (relation 1.445.6 of [@GR]) $$E_{W_1}+E_{W_2}=2t \left [ \frac {2L}{\pi}-2 \frac {\cos \left
(\frac {\pi}{L} - \phi \right)}{\sin \frac {\pi}{L}} +
o\left(\frac{U}{t}\right) \right ] \, . \label{sum}$$ Summing (\[BDS\]) with (\[sum\]), we conclude that in the limit $U/t\rightarrow 0$ the energy of the anti-ferromagnetic state of the twisted Hubbard model behaves as $$E=-2t \left [\frac {2\cos \left (\frac {\pi}{L} - \phi
\right)}{\sin \frac {\pi}{L}} -\frac {UL}{8t} +
o\left(\frac{U}{t}\right) \right ] \, . \label{Efin}$$ When $\phi =0$ we get the highest energy of the Hubbard model at small coupling [@MV], $$E=-2t \left [ 2\, {\mbox {cotan}} \frac {\pi}{L} -\frac {UL}{8t}+
o\left(\frac{U}{t}\right) \right ] \, .$$ On the other hand, according to [@RSS], the Hamiltonian of the twisted Hubbard model makes contact with the dilatation operator of the $SU(2)$ sector of ${\cal N}=4$ SYM if $\phi =\pi
/2L$. In this case we get $$E=-2t \left [ \frac {1}{\sin \frac {\pi}{2L}} -\frac {UL}{8t} +
o\left(\frac{U}{t}\right) \right ] \, .$$ We could go to higher orders, but this result already matches the findings of [@BO] obtained by the usual first order perturbation theory.
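The resummation leading to (\[sum\]) relies on relation 1.445.6 of [@GR]. As an illustrative sanity check (the values of $L$ and $\phi$ below are chosen for definiteness, with $0\leq L\phi <2\pi$ as required), one can compare the truncated series with the closed form:

```python
import math

def series_sum(L, phi, terms=200000):
    # 4/(L*pi) * sum_{n>=1} cos(L*n*phi) / (n^2 - 1/L^2), truncated
    s = sum(math.cos(L * n * phi) / (n * n - 1.0 / (L * L))
            for n in range(1, terms + 1))
    return 4.0 * s / (L * math.pi)

def closed_form(L, phi):
    # relation 1.445.6 of Gradshteyn-Ryzhik, valid for 0 <= L*phi < 2*pi
    return 2.0 * L / math.pi - 2.0 * math.cos(math.pi / L - phi) / math.sin(math.pi / L)

L, phi = 4, math.pi / 8          # phi = pi/(2L), the SYM twist
print(series_sum(L, phi), closed_form(L, phi))   # agree to the truncation error
```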
### On the strong coupling of the BDS Bethe Ansatz {#strBDS}
In [@FFGR] we proposed an NLIE for the BDS Bethe Ansatz and analysed in careful detail the analytic form of the finite size corrections. Because of the asymptotic nature of that Ansatz, we limited our discussion to the case when the limit $L \to \infty$ is taken first, i.e. for finite $g$.
However, in the following we will compare the Hubbard model and the BDS Ansatz over the whole range of $g$, and hence it is interesting to return to the subject and discuss the structure of the finite size corrections in the strong coupling limit of the NLIEs derived in [@FFGR], i.e. the residual $L$ dependence when the $g \to
\infty$ limit is taken first.
In the previous sections we have shown that the BDS Ansatz NLIEs can be formally obtained from those of the Hubbard model by simply setting $W(k)=0$ *tout court*. We can apply the same reasoning here and derive the strong coupling behaviour of the energy for the BDS Ansatz from the computation of the previous section.
Hence, by neglecting all those contributions coming from the counting function $W(k)$ and collecting only those coming from $Z(x)$, i.e. $E_L+E_Z$, we obtain $$g\rightarrow \infty \, \Rightarrow E_{\text {BDS}} = \frac{ 2 L
\sqrt 2}{\pi \, g } \, -\frac{ L}{4 \, g^2 } +\frac{7L\zeta
(3){\sqrt {2}}}{16 \pi ^3 g^3 } + {\cal O}\left (\frac {1}{g^4}
\right ) \, , \quad \forall L \, . \label {EBDSg}$$ Obviously, we have reintroduced the parametrization (\[tUg\]) for the constants $t$ and $U$. It follows that the first three terms of (\[EBDSg\]), all coming from $E_L$, provide, up to order ${\cal O}(g^{-3})$, the exact large $g$ limit of $E_{\text
{BDS}}$ for any $L$.
One can immediately realize that there is a stark difference between the finite length corrections of this expression and those obtained in [@FFGR]. We will return to this point in the next section.
Order of limits analysis
========================
In order to achieve a satisfactory understanding of the behaviour of the anomalous dimension for any value of the coupling constant, it is important to analyse what happens to our equations when the order of the limits $g,L \to \infty$ is changed. Such an aspect can be addressed both in the twisted Hubbard model (with $\phi=\pi/2L$) and in the BDS Ansatz, allowing us to compare them explicitly.
It is important to stress that the NLIEs derived for these models play a crucial role in keeping the sub-leading corrections (in $g$ and $L$) under control. With this piece of information we will be able to infer some interesting properties of the global behaviour of the anomalous dimension.
[**[The Hubbard model.]{}**]{}\
: The analysis of Section \[comparison\] allows us to immediately write down the following expression in the limit $L\to \infty$ and fixed $g$ $$L \to \infty \ \ \ \ \ \ E= \frac{\sqrt{2}\, L}{g} \int
_{-\infty}^{+\infty} \frac {dp}{p} \frac {J_0 (\sqrt2 g \, p ) J_1
(\sqrt2 g \, p ) }{e^{|p|}+1}+ \frac{\sqrt2}{L\pi g}\
\frac{I_1(\sqrt2 \pi g)}{I_0(\sqrt2 \pi g)}\ \frac{\pi^2}{6} \, +
\dots$$ where we have explicit expressions for the $g$-dependent coefficients of the $L$ and $1/L$ terms of the $L \to \infty$ expansion. A further expansion in $g$ gives $$\label{H1} g \to \infty \ \ \ \ \ \ E^{(L,g)} = \frac{ 2 L \sqrt
2}{\pi \, g } +\frac{ \pi \, \sqrt 2}{6 \, L \, g } \, -\frac{
L}{4 \, g^2 }
\, + \dots$$ : Let us repeat the same calculation, but reversing the order of the limits. Exploiting a known result of Section \[strHubb\], we easily conclude $$\label{Hstg} g \to \infty \ \ \ \ \ \ E = \frac{\sqrt 2}{g \;
\sin \frac{\pi}{2L}} \, -\frac{ L}{4 \, g^2 } + \dots$$ which gives the exact $L$-dependence of the $1/g$ term. By expanding in $L$ we have $$\label{H2} L \to \infty \ \ \ \ \ \ E^{(g,L)} = \frac{ 2 L \sqrt
2}{\pi \, g } +\frac12 \,\frac{ \pi \, \sqrt 2}{6 \, L \, g } \,
-\frac{ L}{4 \, g^2 } \, + \dots$$ The conclusion is that, for the Hubbard model, the limits commute *only* at leading order in both $L$ and $g$. The disagreement begins with the first sub-leading correction: it is interesting to remark that when the order of the limits is exchanged, such a correction conserves the same functional form and the only change is in the numerical coefficient in front of it.
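The factor $\frac12$ in front of the $\frac{\pi\sqrt2}{6Lg}$ term of (\[H2\]) can be checked directly: expanding $\sqrt2/\sin\frac{\pi}{2L}$ from (\[Hstg\]) for large $L$ gives $\frac{2L\sqrt2}{\pi}+\frac{\pi\sqrt2}{12L}+O(L^{-3})$ (the common $1/g$ factor aside), i.e. exactly half the coefficient appearing in (\[H1\]). A quick numerical sketch:

```python
import math

# Residual of sqrt(2)/sin(pi/(2L)) after subtracting its leading large-L term
# 2*sqrt(2)*L/pi (the common 1/g factor is scaled out).
def residual(L):
    return math.sqrt(2.0) / math.sin(math.pi / (2.0 * L)) \
        - 2.0 * math.sqrt(2.0) * L / math.pi

for L in (100, 1000):
    predicted = math.pi * math.sqrt(2.0) / (12.0 * L)   # half of pi*sqrt(2)/(6L)
    print(L, residual(L), predicted)
```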
[**[The BDS Bethe Ansatz.]{}**]{}\
: In ref. [@FFGR] we computed explicitly the large $L$ behaviour of the anomalous dimension which turned out to be $$L \to \infty \ \ \ \ \ \ E_{\text {BDS}} =\frac{\sqrt{2}\, L}{g}
\int _{-\infty}^{+\infty} \frac {dp}{p} \frac {J_0 (\sqrt2 g \, p
) J_1 (\sqrt2 g \, p ) }{e^{|p|}+1}+ \frac{\sqrt2}{L\pi g}\
\frac{I_1(\sqrt2 \pi g)}{I_0(\sqrt2 \pi g)}\ \frac{\pi^2}{6} \, +
\dots$$ As explained in Section \[comparison\], when $L\rightarrow
\infty$ the highest energies of BDS and Hubbard models coincide, up to exponentially small terms. Hence, expanding in $g$, we have again $$\label{BDS1} g \to \infty \ \ \ \ \ \ E_{\text {BDS}}^{(L,g)}=
\frac{ 2 L \sqrt 2}{\pi \, g } +\frac{ \pi \, \sqrt 2}{6 \, L \, g
} \,
\, -\frac{ L}{4 \, g^2 } + \dots$$ : This case was discussed in Section \[strBDS\]. The $g\to \infty$ limit gives $$g \to \infty \ \ \ \ \ \ E_{\text {BDS}} = \frac{ 2 L \sqrt
2}{\pi \, g } \, -\frac{ L}{4 \, g^2 } +\frac{7L\zeta (3){\sqrt
{2}}}{16 \pi ^3 g^3 } +
\dots$$ Therefore, we conclude that $$\label{BDS2} L \to \infty \ \ \ \ \ \ E_{\text {BDS}}^{(g,L)}=
\frac{ 2 L \sqrt 2}{\pi \, g } \, -\frac{ L}{4 \, g^2 } +
\frac{7L\zeta (3){\sqrt {2}}}{16 \pi ^3 g^3 } +
\dots$$ As expected, in the BDS Bethe Ansatz the order of the limits commutes only at leading order in $g$ and $L$. It is important to point out that the sub-leading corrections differ also in their functional form, because the term which behaves as $1/ (g \, L)$ is absent.

[**[Remarks.]{}**]{}
1. [The discussion of this section has many points of contact with that of Section 3 of [@BO2]. The main difference is related to the treatment of the sub-leading corrections in $L$ for $ E_{\text {BDS}}^{(g,L)}$. If one takes the $g \to \infty$ limit starting from the NLIEs of [@FFGR], one immediately realizes that the sub-leading correction used in ref. [@BO2] (given by equation (\[BDS1\])) is not the correct one, because in this limit the structure of the counting function changes dramatically giving the structure observed in (\[BDS2\]). In particular our analysis of Section \[strBDS\] shows that sub-leading corrections in $1/L$ will appear only beyond the order $1/g^3$.]{}
2. [As expected from the results of Section \[comparison\], if we take first the limit $L \to \infty$, the Hubbard and BDS Ansatz behave the same way. This is because, in such a limit the charge degrees of freedom (described by the counting function $W$ in the language of the NLIEs) are exponentially depressed.]{}
3. [In the BDS Ansatz it was somehow expected that the limits do not commute, in particular because of the so-called “wrapping problem”. What is more surprising is that the limits do not commute also in the Hubbard model which is believed not to be plagued by such a pathology.]{}
4. [As previously pointed out in [@BO2], the leading strong coupling behaviour at infinite length is the same, no matter what the model and order of limits are considered.]{}
In the following sections about the numerical analysis at fixed $L$ we will use the following expressions for the strong coupling expansion $$\begin{aligned}
\label{strcoup}
E & = & \frac{\sqrt 2}{g \; \sin \frac{\pi}{2L}} \, -\frac{ L}{4 \, g^2 } + \dots \nonumber \\
E_{\text {BDS}} & = & \frac{ 2 L \sqrt 2}{\pi \, g } \, -\frac{
L}{4 \, g^2 } + \dots.\end{aligned}$$
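The $1/g$ coefficients of (\[strcoup\]) are elementary to evaluate at the chain lengths studied below; the following snippet reproduces the numerical values quoted in the next sections:

```python
import math

def hubbard_coeff(L):
    # coefficient of 1/g in E = sqrt(2)/(g sin(pi/(2L))) - L/(4 g^2) + ...
    return math.sqrt(2.0) / math.sin(math.pi / (2.0 * L))

def bds_coeff(L):
    # coefficient of 1/g in E_BDS = 2 L sqrt(2)/(pi g) - L/(4 g^2) + ...
    return 2.0 * L * math.sqrt(2.0) / math.pi

for L in (4, 12):
    print(L, hubbard_coeff(L), bds_coeff(L))
# L = 4  gives approximately 3.6955 and 3.6013 (the Konishi case below);
# L = 12 gives approximately 10.8347 and 10.8038.
```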
Numerical analysis
==================
The equations obtained in the previous sections are suitable for numerical evaluation, and, as in [@FFGR], it is not difficult to solve them by iteration. Our analysis is mainly devoted to investigating the difference between the highest energies (or anomalous dimensions) in the Hubbard model and in the BDS Bethe Ansatz. For this reason, we will use only the coupling $g$, expressing $t$ and $U$ through (\[tUg\]). In particular, we note the value of the ratio $$\frac{t}{U}=\frac{g}{\sqrt{2}} \, .$$ For BDS, calculations were also performed in [@FFGR]. We recall that this model is conjectured to work only when $g\ll 1$ and up to the order $g^{2L-2}$, beyond which the wrapping problem appears. In spite of this, we will compare the Hubbard and BDS predictions outside the strictly perturbative regime. We will begin with the analysis of the Konishi operator ($L=4$); then we will study the behaviour of the highest energy state for chains with an intermediate number of sites. We conclude with a numerical study of the difference between the BDS Ansatz and the Hubbard model for large $L$ and finite $g$, in order to provide numerical support for the analytic results of Section \[comparison\].
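The fixed-point iteration just mentioned can be illustrated on a toy nonlinear integral equation of convolution type. The forcing term, kernel, grid and tolerance below are all made up for illustration; this is a sketch of the iteration scheme only, not of the actual NLIEs (\[Zeq4\]) and (\[Weq2\]):

```python
import math

# Toy nonlinear integral equation of convolution type,
#     f(x) = F(x) + int dy G(x - y) ln(1 + exp(-f(y))),
# solved by direct fixed-point iteration on a uniform grid.  The forcing F,
# the kernel G, the grid and the tolerance are all made up for illustration.
N, XMAX, TOL = 201, 10.0, 1e-9
xs = [-XMAX + 2.0 * XMAX * i / (N - 1) for i in range(N)]
step = xs[1] - xs[0]

force = [x * x / 10.0 for x in xs]                    # made-up forcing term
G = lambda u: 0.25 / math.cosh(math.pi * u / 2.0)     # made-up smooth kernel
# The kernel mass is 1/2 and |d/df ln(1+e^-f)| <= 1, so the map is a
# contraction and the iteration converges geometrically.

f = force[:]                                          # seed with the forcing
for it in range(200):
    new = []
    for i, x in enumerate(xs):
        conv = step * sum(G(x - y) * math.log(1.0 + math.exp(-fy))
                          for y, fy in zip(xs, f))
        new.append(force[i] + conv)
    delta = max(abs(a - b) for a, b in zip(new, f))
    f = new
    if delta < TOL:
        break
print("converged after", it + 1, "iterations, last update", delta)
```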
A test for the NLIEs: the Konishi operator
------------------------------------------
![\[konishi1\] Comparison of the highest Hubbard and BDS energies (anomalous dimensions) for a system with $L=4$ sites, corresponding to the anomalous dimension of the Konishi operator. The curves indicated by “NLIE” are obtained by the non-linear integral equations – (\[Eexp\]) for Hubbard and eq. 3.24 of [@FFGR] for BDS – and those indicated by “power” are obtained with the power expansions (\[powexpH\]) and (\[powexpBDS\]) up to the thirtieth order. The value of $g$ where they separate gives an idea of the convergence radius. The small image is a zoom of the outlined area of the larger one.](konishi1.eps){width="0.95\linewidth"}

![Zoom of the outlined region of Fig. \[konishi1\].](konishi2.eps)
![\[nliemin\] Comparison of numerical NLIE data (the same as in Fig. \[konishi1\]) and the exact albeit implicit solution of [@MIN], from weak to strong coupling. The two curves perfectly overlap in the common interval $g\in[0,1.5]$.](konishi_nlie_minahan.eps){width="0.7\linewidth"}
The highest energy state with $L=4$, corresponding to the Konishi operator, has been extensively studied from the perturbative point of view, both in the context of the BDS Ansatz and within the Hubbard model. Furthermore, for the latter, an exact implicit expression for the anomalous dimension has been found by Minahan [@MIN].
In the present section we will use such known results as a test for the validity of the NLIEs derived in this paper.
Let us begin with the comparison between our NLIE and the perturbative expansions calculated in [@RSS] (for our convenience we pushed the computation up to the thirtieth order by using the routines in Appendix B of [@RSS], see below).
The result of such a comparison is summarized in Fig. \[konishi1\]. Firstly, the behaviour of the perturbative expansion compared to the exact result coming from the NLIEs is striking: the convergence seems to be quite slow and the perturbative window turns out to be very small. We can also observe in Fig. \[konishi1\] that the first curve to separate is the one corresponding to (\[powexpH\]), followed by (\[powexpBDS\]). In the small image we have magnified the region where the perturbative expansions and the exact curves separate.
The emergence of such a behaviour can be explained by the appearance of rapidly growing coefficients in both the Hubbard model and BDS Ansatz cases [^8] $$\begin{aligned}
E &=& 6-12\ g^2 +42\ g^4 - 318\ g^6+ 4524\ g^8-63786\
g^{10}+783924\ g^{12}
-\nonumber \\
&& -8728086\ g^{14} +93893622\ g^{16}-1038217494\ g^{18} +
12181236666\ g^{20}
-\nonumber\\
&&-150141359712\ g^{22} + 1888713236976\ g^{24} - 23751656065164\ g^{26} \nonumber\\
&& +297019282258320\ g^{28}
- 3710023076959086\ g^{30}+\dots \,,
\label{powexpH} \\
E_{\text{BDS}} &=& 6-12\ g^{2} +42\ g^{4} -\frac{705}{4}\
g^{6}+\frac{6627}{8} g^{8} -\frac{67287}{16}\ g^{10}
+\frac{359655}{16}\ g^{12}-\nonumber \\ && -\frac{7964283}{64}\
g^{14} +\frac{22613385}{32}\ g^{16} - \frac{261928101}{64}\ g^{18}
+\frac{6164759913}{256}\ g^{20}- \nonumber \\&& -
\frac{147007778043}{1024}\ g^{22} + \frac{1772167996011}{2048} \
g^{24} -\frac{10781715497325}{2048}\ g^{26}
\nonumber \\
&& +\frac{66122074282395}{2048}\ g^{28}
-\frac{3266715687275811}{16384}\ g^{30}+\dots \, .
\label{powexpBDS}\end{aligned}$$
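The rapid growth of the coefficients can be quantified by a crude ratio test on the Hubbard series (\[powexpH\]); this back-of-the-envelope estimate (illustrative only) is consistent with the small perturbative window seen in Fig. \[konishi1\]:

```python
# Coefficients c_n of E = sum_n c_n g^(2n), read off from (powexpH)
coeffs = [6, -12, 42, -318, 4524, -63786, 783924, -8728086, 93893622,
          -1038217494, 12181236666, -150141359712, 1888713236976,
          -23751656065164, 297019282258320, -3710023076959086]

# Ratio test in the variable g^2: |c_n / c_{n+1}| estimates the radius of
# convergence in g^2, so its square root estimates the radius in g.
ratios = [abs(coeffs[n] / coeffs[n + 1]) for n in range(len(coeffs) - 1)]
radius_g = ratios[-1] ** 0.5
print("last ratios (radius in g^2):", [round(r, 4) for r in ratios[-3:]])
print("rough convergence radius in g:", round(radius_g, 3))   # about 0.28
```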
Another interesting check is given by the comparison of our numerical results with the exact (implicit) form for the anomalous dimension of the Konishi operator given by Minahan in [@MIN]. As shown in Fig. \[nliemin\], we found complete agreement within the numerical precision of our computation. This allowed us to estimate our relative error to be less than $2\cdot 10^{-6}$.
It is interesting to note that, even if $L$ is not at all large, the difference between the exact curves for Hubbard and BDS remains small as $g$ is increased. This fact seems to suggest that the BDS Ansatz can be considered a good approximation of the Hubbard model even when $g$ is large (i.e. non-perturbative) and $L$ is fixed at a small value (and not only in the limit of an infinite chain).
From this point of view it would be nice to study what happens in the strong coupling regime. Unfortunately such a regime is difficult to reach with the numerical integration of our NLIEs, because of the reduced numerical precision at large $g$.
However, it is possible to use the strong coupling expansions eq. (\[strcoup\]) for the present $L=4$ case. We obtain $$\begin{aligned}
E &=& \frac{3.69552}{g}-\frac{1}{g^2}+\dots \\
E_{\text {BDS}} &=& \frac{3.60127}{g}-\frac{1}{g^2}+\dots.\end{aligned}$$ This result is interesting for two reasons. Firstly, it shows that already for $L=4$ the strong coupling prediction for the BDS Ansatz is a good approximation of the corresponding result obtained in the Hubbard model. Moreover, one can see that the strong coupling expansions smoothly join the numerical data in Fig. \[konishi1\]: we will further comment on this issue in the next section, where we will analyse operators of intermediate length.
Intermediate length operators
-----------------------------
Since our equations are suitable for the study of the energy at any $L$, we can use them to analyze operators of intermediate length. This is where the NLIEs are exploited at their best, because in such an intermediate regime it is quite difficult to use the Bethe Ansatz equations directly, due to the large number of terms. In particular, we choose to analyze the highest energy states with length $L=12$ and $L=40$.
![\[antif12\] Comparison of Hubbard and BDS energies (anomalous dimensions) for $L=12$. The two curves are almost indistinguishable in this range of $g$ and start to separate at the right border of the plot.](antif_12_ass.eps){width="0.8\linewidth"}
![\[ws\] The behaviour of the energies $E(g)$ and $E_{\text {BDS}} $ from small to strong coupling is plotted here for a lattice of 12 sites. The left branches of the curves are the same as in Fig. \[antif12\] while the right branches are given by the strong coupling expansions (\[strong\]). In the small picture there is a zoom of the region where the branches overlap.](antif_12_ws.ps){width="0.95\linewidth"}
![\[antif40\] Comparison of Hubbard and BDS energies (anomalous dimensions) for $L=40$. The two curves are hard to distinguish in this range of $g$. The largest absolute difference reached is 0.0038, corresponding to a relative difference of $0.019\%$.](antif_40_ass.eps){width="0.8\linewidth"}
Let us begin with $L=12$. Figure \[antif12\] shows that already at such a small value of $L$, and beyond the perturbative region in the coupling $g$, the curves computed by means of both BDS Ansatz and Hubbard model overlap almost completely. This is an indication that even if we are dealing with a chain which is far from the thermodynamic limit, the predictions of the BDS Ansatz can be considered as a good effective approximation of the Hubbard model behaviour.
Hence, even if the BDS Ansatz is plagued by the wrapping problem, at a quantitative level it is able to reproduce all the significant features of the Hubbard model, beginning from $L
\simeq 12$.
Again, we use eqs. (\[strcoup\]) to describe the strong coupling behaviour $$\begin{aligned}
E & = & \frac{10.8347}{g} - \frac{3}{g^2}
+\dots \nonumber \\
E_{\text {BDS}} & = & \frac{10.8038}{g} - \frac{3}{g^2} +\dots .
\label{strong}\end{aligned}$$ The behaviour from weak to strong coupling for both the Hubbard model and the BDS Ansatz is plotted in Fig. \[ws\]: the left branches are the same as in Fig. \[antif12\] (numerical solution of the NLIEs), while the right branches are given by the equations (\[strong\]) (strong coupling expansions). As we stated at the end of the previous subsection, the left and right branches smoothly join.
Let us remark that the good agreement observed between the Hubbard model and the BDS Ansatz predictions at such a small value of $L$ was quite unexpected. It is also important to stress the crucial role played by the NLIEs in obtaining the exact behaviour of the energy outside the perturbative domain. As shown in the study of the anomalous dimension of the Konishi operator, perturbation theory alone is not enough to reach an overlap with the strong coupling expansion.
Finally, we compared the Hubbard and BDS anomalous dimensions for $L=40$ in the range $g \in [0,3]$: as expected, the agreement between them is further enhanced and the two curves can hardly be distinguished; see Fig. \[antif40\].
The interesting feature of this case is that we were able to explicitly follow the evolution of the relative difference between Hubbard and BDS from weak to strong coupling. We observed that the two curves begin to separate at small $g$, then reach a maximum in the relative difference, and after that start to approach each other again. We believe that such a pattern holds at any $L$, but the reduced numerical precision does not allow us to observe it for smaller values of $L$.
[**Remark**]{}. The fact that $E_{\text {BDS}}>E$ at weak coupling, but $E_{\text {BDS}}<E$ at strong coupling suggests that the curves will cross at some intermediate value of $g$. This is consistent with our numerical observation that they approach each other as $g$ increases. Unfortunately, the reduced precision of our data at large $g$ prevented us from observing such a crossing explicitly.
Large operators at fixed $g$\[loafg\]
-------------------------------------
$$\begin{array}{r|ccc}
g & 1.2 & 1.6 & 2 \\
\hline
a(g) & 0.0076 &0.0011 & 0.00017\\
\omega(g) & 0.0174422 & 0.00342347 & 0.000649349 \\
\epsilon_M & 0.290524 & 0.219211 & 0.175869\\
\epsilon_M \omega(g)& 0.00506738 &0.000750462 & 0.000114200 \\
\frac{a(g)}{\epsilon_M\, \omega(g)} &1.5 &1.5 & 1.5
\end{array}$$
![\[relative\] Difference between the highest Hubbard and BDS energies (anomalous dimensions) at different sizes of the system and at two fixed values of the coupling, $g=1.2$ and $g=2$. A logarithmic scale is used on the vertical axis. The linear behaviour is a clear indication of the exponential damping discussed in Section \[comparison\] and summarized in (\[expdamp2\]). Related numerical data are provided in Table \[tabella\].](antiferr_1.2_logdiff.eps "fig:"){width="0.49\linewidth"} ![\[relative\] Difference between the highest Hubbard and BDS energies (anomalous dimensions) at different sizes of the system and at two fixed values of the coupling, $g=1.2$ and $g=2$. A logarithmic scale is used on the vertical axis. The linear behaviour is a clear indication of the exponential damping discussed in Section \[comparison\] and summarized in (\[expdamp2\]). Related numerical data are provided in Table \[tabella\].](antiferr_2.0_logdiff.eps "fig:"){width="0.49\linewidth"}
Another interesting analysis concerns the difference between the highest energies of Hubbard and BDS as a function of $L$ and at fixed coupling. Our choices were the values $g=1.2,~1.6$ and $g=2$, which lie in an intermediate region for which our asymptotic result (\[lglimit\]) does not apply. The solution of our NLIEs gives the result depicted in Table \[tabella\] and in Fig. \[relative\] on a log diagram: for large $L$ ($L>300$) the behaviour is linear, meaning that the difference between the energies decays exponentially as the length $L$ is increased. This confirms our analytical findings of Section \[comparison\]. Consistently with that Section, we introduce the numerical rate of decay $a(g)$ and write $$\label{expdamp2}
|E_{\text{BDS}}-E| \propto e^{-a(g) L}\,, \qquad a(g)>0 \,.$$ According to our results of Section 5, the inequality $$\label{ineq}
\epsilon\, \omega(g) \leq \epsilon_M \, \omega(g) \leq a(g) \,$$ must hold and in this respect, Table \[tabella\] suggests that (\[ineq\]) is actually correct for the values of $g$ chosen.
The values of $a(g)$ shown in Table \[tabella\] interpolate between the behaviour at small $g$, $a(g)=-2\ln g$, and at large $g$, $a(g)=\frac {1}{\sqrt {2}g}$. Consistently, the values of $a(g)$ in Table \[tabella\] decrease as $g$ increases. However, understanding how $a(g)$ passes from the small coupling to the large coupling behaviour on the basis of numerical data seems difficult. Indeed, the comparison of the two plots in Fig. \[relative\] shows that the exponential behaviour in $L$ of $E-E_{\text {BDS}}$ is strongly dependent on the actual value of the coupling. In addition, it is clear that linearity (i.e. exponential damping) is reached at values of $L$ which rapidly increase with $g$.
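The extraction of $a(g)$ from the log-linear tail of Fig. \[relative\] can be sketched as follows. This is a hypothetical illustration, not the code actually used; the synthetic data below assume the tabulated value $a(2)=0.00017$ and an arbitrary prefactor.

```python
import numpy as np

# Sketch: fit log|E_BDS - E| against L in the large-L regime (L > 300) and
# read off the decay rate a(g) of (expdamp2) as minus the slope.
def fit_decay_rate(L, diff):
    """Least-squares slope of log(diff) vs L; returns a(g) = -slope > 0."""
    slope, _intercept = np.polyfit(L, np.log(diff), 1)
    return -slope

L_vals = np.arange(300, 1001, 50)            # system sizes in the linear regime
diff_vals = 2.5 * np.exp(-0.00017 * L_vals)  # synthetic |E_BDS - E|, assumed law
a_fit = fit_decay_rate(L_vals, diff_vals)
```

On real NLIE data the fit window matters, since linearity sets in only at values of $L$ that grow with $g$.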
![\[unosul\] The finite size correction $1/L$ is compared with the theoretical BDS prediction (\[fsc\]), represented with the dashed straight line. As observed in Fig. \[relative\], for higher values of $g$ linearity is reached at higher values of $L$.](antiferr_1.2_1suL.eps "fig:"){width="0.49\linewidth"} ![\[unosul\] The finite size correction $1/L$ is compared with the theoretical BDS prediction (\[fsc\]), represented with the dashed straight line. As observed in Fig. \[relative\], for higher values of $g$ linearity is reached at higher values of $L$.](antiferr_2.0_1suL.eps "fig:"){width="0.49\linewidth"}
Finally, we recall, as pointed out before equation (\[fsc\]), that the exponential damping of $E-E_{\text {BDS}}$ forces the equality of logarithmic-like and power-like finite size corrections in the Hubbard and BDS models. The most relevant case is the $1/L$ behaviour, which is explicitly plotted for the Hubbard model in Fig. \[unosul\] and compared with the analytical prediction (\[fsc\]) coming from the BDS Ansatz.
Summary and perspective
=======================
In this paper we have derived the NLIEs describing the highest energy state of the half-filled (attractive) Hubbard model with and without a suitable flux, which is responsible for a precise contact with the highest possible anomalous dimension (at fixed bare dimension $L$). In particular, according to the important correspondence pioneered in [@RSS], we computed the energy/anomalous dimension for this state/operator, and exact expressions may be found for the eigenvalues of all the other conserved quantities. The dimension is of course a function of the ’t Hooft coupling $\lambda$ (and of the operator bare dimension $L$), thanks to a duality connection between the latter and the Hubbard coupling. While exact analytical formulæ for the energy/dimension have been extracted only in the strong and weak coupling perturbative regimes, numerical solutions of the NLIEs and the corresponding energy values can be obtained for arbitrary values[^9] of $g$ and $L$. In this respect, we have been able to provide many plots showing clearly the dependence of the highest energy (anomalous dimension) on the coupling $g$. In particular, we have concentrated on the comparison with the highest energy of the (simpler) BDS model. We have shown, first analytically and then numerically, that at large sizes ($L\rightarrow
\infty$) they coincide up to exponentially small corrections (i.e. $o(L^{-\infty})$), which are of non-analytic form. Of course, the finite size corrections are also important in the string theory, since they come out as quantum loop corrections. As regards future perspectives, we would like to stress that the NLIE approach is a very effective method for obtaining the dependence of the observables on the model parameters and size. One of its merits is that the NLIE can be written whenever the Bethe Ansatz equations are available, and even under more general circumstances, thanks to its equivalence to certain functional equations. In particular, the application of the NLIE to the so-called string Bethe Ansatz [@BES] and to larger field sectors is particularly desirable, in view of tests of the AdS/CFT correspondence. For the time being, indeed, in spite of the progress following the discovery of integrability on both sides of the duality, such tests could be performed only for a limited number of cases, because of the technical difficulty in handling the Bethe equations for general length and coupling. In the previous work [@FFGR] and in the present one we have shown that by means of the NLIE framework we are able to overcome this difficulty and provide the exact analytic scaling with the length $L$ and the coupling $\lambda$ (as well as numerical evaluations of the conformal dimensions).
Acknowledgments {#acknowledgments .unnumbered}
===============
We have the pleasure to acknowledge useful discussions with D. Bombardelli, A. Cappelli, A. Doikou, E. Ercolessi, G. Ferretti, A. Montorsi, F. Ravanini, M. Staudacher and K. Zarembo; V. Rittenberg's enthusiastic support was, moreover, invaluable. We are all indebted to EUCLID, the EC FP5 Network with contract number HPRN-CT-2002-00325, which, in particular, has supported the work of PG. GF thanks INFN for a post-doctoral fellowship. MR thanks the INFN and the Department of Physics in Bologna for warm hospitality and support. DF thanks the INFN (especially grant [*Iniziativa specifica TO12*]{}) for travel and invitation support.
[99]{}
A. Kapustin, E. Witten, [*Electric-Magnetic Duality and the Geometric Langlands Program*]{}, hep-th/0604151;
J.M. Maldacena, [*The large N limit of superconformal field theories and supergravity*]{}, Adv. Theor. Math. Phys. [**2**]{} (1998) 231 and hep-th/9711200;\
E. Witten, [*Anti-de Sitter space and holography*]{}, Adv. Theor. Math. Phys. [**2**]{} (1998) 253 and hep-th/9802150;\
S.S. Gubser, I.R. Klebanov, A.M. Polyakov, [*Gauge theory correlators from non-critical string theory*]{}, Phys.Lett. [**B428**]{} (1998) 105 and hep-th/9802109; J.A. Minahan, K. Zarembo, [*The Bethe Ansatz for ${\cal N}=4$ Super Yang-Mills*]{}, JHEP[**03**]{} (2003) 013 and hep-th/0212208; H. Bethe, [*On the theory of metals, 1. Eigenvalues and eigenfunctions for the linear atomic chain*]{}, Z. Phys. [**71**]{} (1931) 205;
R. Hernandez, E. Lopez, A. Perianez, G. Sierra, [*Finite size effects in ferromagnetic spin chains and quantum corrections to classical strings*]{}, JHEP[**06**]{} (2005) 011 and hep-th/0502188;
N. Beisert, A.A. Tseytlin, K. Zarembo, [*Matching quantum strings to quantum spins: one-loop vs. finite size corrections*]{}, Nucl. Phys. [**B715**]{} (2005) 190 and hep-th/0502173;
N. Gromov, V. Kazakov, [*Double scaling and Finite Size Corrections in $sl(2)$ Spin Chain*]{}, Nucl. Phys. [**B736**]{} (2006) 224 and hep-th/0510194;
D. Berenstein, J.M. Maldacena, H. Nastase, [*Strings in flat space and pp waves from ${\cal N}=4$ super Yang-Mills*]{}, JHEP [**04**]{} (2002) 013 and hep-th/0202021; N. Beisert, C. Kristjansen, M. Staudacher, [*The dilatation operator of ${\cal N}=4$ super Yang-Mills theory*]{}, Nucl. Phys. [**B664**]{} (2003) 131 and hep-th/0303060; D. Serban, M. Staudacher, [*Planar ${\cal N}=4$ gauge theory and the Inozemtsev long range spin chain*]{}, JHEP[**06**]{} (2004) 001 and hep-th/0401057; N. Beisert, V. Dippel, M. Staudacher, [*A novel long range spin chain and planar ${\cal N}=4$ super Yang-Mills*]{}, JHEP[**07**]{} (2004) 075 and hep-th/0405001; A. Rej, D. Serban, M. Staudacher, [*Planar ${\cal N}=4$ gauge theory and the Hubbard model*]{}, JHEP[**03**]{} (2006) 018 and hep-th/0512077; J.A. Minahan, [*Strong coupling from the Hubbard model*]{}, J.Phys.A[**39**]{} (2006) 13083 and hep-th/0603175; N. Beisert, B. Eden, M. Staudacher, [*Transcendentality and Crossing*]{}, hep-th/0610251;
E.H. Lieb, F.Y. Wu, [*Absence of Mott-transition in an exact solution of the short-range, one-band model in one dimension*]{}, Phys. Rev. Lett. [**20**]{} (1968) 1445;\
E.H. Lieb, F.Y. Wu, [*The one-dimensional Hubbard model: A reminiscence*]{}, cond-mat/0207529;
G. Feverati, D. Fioravanti, P. Grinza, M. Rossi, [*On the finite size corrections of anti-ferromagnetic anomalous dimensions in ${\cal N}=4$ SYM*]{}, JHEP[**05**]{} (2006) 068 and hep-th/0602189;
P.A. Pearce, A. Klümper, [*Finite-size corrections and scaling dimensions of solvable lattice models: an analytic method*]{}, Phys. Rev. Lett. [**66**]{}, volume 8 (1991) 974;\
A. Klümper, M.T. Batchelor and P.A. Pearce, [*Central charges of the 6- and 19-vertex models with twisted boundary conditions*]{}, J. Phys. [**A24**]{} (1991) 3111;
C. Destri, H.J. de Vega, [*New thermodynamic Bethe Ansatz equations without strings*]{}, Phys. Rev. Lett. [**69**]{} (1992) 2313;\
C. Destri, H.J. de Vega, [*Unified approach to Thermodynamic Bethe Ansatz and finite size corrections for lattice models and field theories*]{}, Nucl. Phys. [**B438**]{} (1995) 413 and hep-th/9407117; D. Fioravanti, A. Mariottini, E. Quattrini, F. Ravanini, [*Excited state Destri-de Vega equation for sine-Gordon and restricted sine-Gordon models*]{}, Phys. Lett. [**B390**]{} (1997) 243 and hep-th/9608091;\
C. Destri, H.J. de Vega, [*Non linear integral equation and excited–states scaling functions in the sine-Gordon model*]{}, Nucl. Phys. [**B504**]{} (1997) 621 and hep-th/9701107;
J. Hubbard, [*Electron correlation in narrow energy bands*]{}, Proc. Roy. Soc. (London) [**A276**]{} (1963) 238;
E. Ercolessi, G. Morandi, A. M. Srivastava, A. P. Balachandran, [*The Hubbard Model and Anyon Superconductivity*]{}, World Scientific;
A. Klümper, R.Z. Bariev, [*Exact thermodynamics of the Hubbard chain: free energy and correlation lengths*]{}, Nucl. Phys. [**B458**]{} (1996) 623;
G. Jüttner, A. Klümper, J. Suzuki, [*The Hubbard chain at finite temperatures: ab initio calculations of Tomonaga-Luttinger liquid properties*]{}, Nucl. Phys. [**B522**]{} (1998) 471 and cond-mat/9711310; T. Deguchi, F. H. L. Essler, F. Göhmann, A. Klümper, V. E. Korepin, K. Kusakabe, [*Thermodynamics and excitations of the one-dimensional Hubbard model*]{}, Phys. Rep. [**331**]{} (2000) 197 and cond-mat/9904398; D. Fioravanti, M. Rossi, [*From finite geometry exact quantities to (elliptic) scattering amplitudes for spin chains: the 1/2-XYZ*]{}, JHEP[**08**]{} (2005) 010 and hep-th/0504122; M. Takahashi, [*Thermodynamics of one-dimensional solvable models*]{}, Cambridge University Press;
F. Woynarovich, H.P. Eckle, [*Finite-size corrections for the low lying states of a half-filled Hubbard chain*]{}, J. Phys. [**A 20**]{} (1987) L443;
S. Schäfer-Nameki, M. Zamaklar, K. Zarembo, [*How accurate is the quantum string Bethe Ansatz*]{}, hep-th/0610250;
P.W. Anderson, [*New Approach to the Theory of Superexchange Interactions*]{}, Phys. Rev. [**115**]{} (1959) 2;
I.S. Gradshteyn, I.M. Ryzhik, [*Table of integrals, series and products*]{}, Academic Press;
E.N. Economou, P.N. Poulopoulos, [*Ground state energy of the half-filled one-dimensional Hubbard model*]{}, Phys. Rev. [**B20**]{} (1979) 4756;
W. Metzner, D. Vollhardt, [*Ground state energy of the $d=1,2,3$ dimensional Hubbard model in the weak-coupling limit*]{}, Phys. Rev. [**B39**]{} (1989) 4462;
M. Beccaria, C. Ortix, [*Strong coupling anomalous dimensions of ${\cal N} = 4$ super Yang-Mills*]{}, JHEP [**0609**]{} (2006) 016 and hep-th/0606138; M. Beccaria, C. Ortix, [*AdS/CFT duality at strong coupling*]{}, hep-th/0610215.
[^1]: UMR 5108 du CNRS, associée à l’Université de Savoie.
[^2]: A simple one is the limiting case of the Hubbard model in strong coupling (and half filling) [@BEM], i.e. the $1/2-$XXX revealed, originally, at one loop by Minahan and Zarembo [@MZ].
[^3]: We have numerical evidence for that; in Section 5, we analytically prove this statement when $L\rightarrow \infty$.
[^4]: We use the approximation $e^{iW(k+i\epsilon)}=e^{iW(k)}e^{-\epsilon W'(k)}$: therefore, we suppose $\epsilon \ll 1$.
[^5]: As it is well-known after [@DDV; @FMQR], for other states complex roots and holes have to be included.
[^6]: Obviously, $\epsilon$ appearing in the equations for $Z$ is different from the homonymous constant related to $W$.
[^7]: We define the Fourier transform $\hat{f}(p)$ of a function $f(x)$ as given by $$\hat{f}(p) = \int_{-\infty}^{\infty} dx ~ e^{-ipx} f(x)\, .$$
[^8]: Note that the Hubbard coefficients are (multiple of $6$) integers and so are the BDS ones if $g$ is properly re-scaled.
[^9]: Despite this, we experienced an increasing numerical error while increasing $g$.
---
abstract: |
[**Abstract**]{}
A new accelerating cosmology driven only by baryons plus cold dark matter (CDM) is proposed in the framework of general relativity. In this model the present accelerating stage of the Universe is powered by the negative pressure describing the gravitationally-induced particle production of cold dark matter particles. This kind of scenario has only one free parameter and the differential equation governing the evolution of the scale factor is exactly the same of the $\Lambda$CDM model. For a spatially flat Universe, as predicted by inflation ($\Omega_{dm}+\Omega_{baryon}=1$), it is found that the effectively observed matter density parameter is $\Omega_{meff} = 1- \alpha$, where $\alpha$ is the constant parameter specifying the CDM particle creation rate. The supernovae test based on the Union data (2008) requires $\alpha\sim 0.71$ so that $\Omega_{meff} \sim 0.29$ as independently derived from weak gravitational lensing, the large scale structure and other complementary observations.
author:
- 'J. A. S. Lima$^{1,2}$'
- 'J. F. Jesus$^{1}$'
- 'F. A. Oliveira$^{1}$'
title: 'CDM Accelerating Cosmology as an Alternative to $\Lambda$CDM model'
---
Introduction
============
It is well known that observations from Supernovae Type Ia (SNeIa) provide strong evidence for an expanding accelerating Universe [@Riess07; @Union08]. In relativistic cosmology, such a phenomenon is usually explained by the existence of a new dark component (in addition to cold dark matter), an exotic fluid endowed with negative pressure [@review].
Many candidates for dark energy have been proposed in the literature [@decaying; @XM; @SF; @CGas], among them: (i) a cosmological constant ($\Lambda$), (ii) a decaying vacuum energy density or $\Lambda(t)$-term, (iii) a relic scalar field slowly rolling down its potential, (iv) “X-matter”, an extra component characterized by the equation of state $p_x=\omega \rho_x$, where $\omega$ may be constant or a redshift dependent function, and (v) a Chaplygin-type gas whose equation of state is $p=-A/\rho^{\gamma}$, where $A$ and $\gamma$ are positive parameters. All these models explain the accelerating stage, and, as such, the parameter space of the basic observational quantities is rather degenerate. Nowadays, the most economical explanation is provided by the flat $\Lambda$CDM model, which has only one dynamical free parameter, namely, the vacuum energy density. It seems to be consistent with all the available observations provided that the vacuum energy density is fine tuned to fit the data ($\Omega_{\Lambda} \sim 0.7$). However, even considering that the addition of these fields explains the late time accelerating stage and other complementary observations [@CMB; @Clusters], the need for a (yet to be observed) dark energy component with unusual properties is certainly a severe hindrance.
In general relativistic cosmology, the presence of a negative pressure is the key ingredient required to accelerate the expansion. In particular, this means that cosmological models dominated by a pressureless fluid, like a CDM component, expand in a decelerating way. However, as first discussed by Prigogine and coworkers [@Prigogine] and somewhat clarified by Calvão and collaborators [@LCW] through a manifestly covariant formulation, the process of matter creation at the expense of the gravitational field is also macroscopically described by a negative pressure. Later on, it was also demonstrated that matter creation is an irreversible process completely different from the bulk viscosity description [@LG92] originally proposed by Zeldovich [@Zeld70] to avoid the singularity, as well as to describe phenomenologically the emergence of particles at the beginning of the Universe's evolution (see also [@SLC02] for a more complete discussion comparing particle creation and bulk viscosity).
Microscopically, the gravitationally induced particle creation mechanism has also been discussed by many authors [@Parker; @BirrellD]. A non-stationary gravitational background influences quantum fields in such a way that the frequency becomes time-dependent. In the case of a flat Friedmann-Robertson-Walker (FRW) spacetime described in conformal time coordinates, the key result is that the scalar field obeys the same equation of motion as a massive scalar field in Minkowski spacetime, except that the effective mass becomes time dependent (the dispersion relation in the FRW metric involves the scale factor and its second derivative). When the field is quantized, this leads to particle creation, with the energy for newly created particles being supplied by the classical, time-varying gravitational background.
In this context, we are proposing here a new flat cosmological scenario where the cosmic acceleration is powered uniquely by the creation of cold dark matter particles. It will be assumed that the CDM particles are described by a real scalar field so that only particle creation takes place because in this case it is its own antiparticle. The model can be seen as a workable alternative to the cosmic concordance cosmology because it has only one free parameter and the equation of motion is exactly the same of the $\Lambda$CDM model. As we shall see, in the case of a spatially flat Universe ($\Omega_{dm}+\Omega_{bar}=1$), the effectively observed matter density parameter is $\Omega_{eff} = 1- \alpha$, where $\alpha$ is the constant parameter defining the creation rate. The supernovae test requires the central value $\alpha\sim 0.71$ so that $\Omega_{eff} \sim 0.29$ in accordance with the large scale structure and other complementary observations.
Cosmic Dynamics with creation of CDM Particles
==============================================
For the sake of generality, let us start with the homogeneous and isotropic FRW line element $$\label{line_elem}
ds^2 = dt^2 - R^{2}(t) \left(\frac{dr^2}{1-k r^2} + r^2 d\theta^2+
r^2{\rm sin}^{2}\theta d \phi^2\right),$$ where $R$ is the scale factor and $k= 0, \pm 1$ is the curvature parameter. Throughout we use units such that $c=1$.
In that background, the nontrivial cosmological equations for the mixture of radiation, baryons and cold dark matter (with creation of dark matter particles), and the energy conservation laws for each component, take the following form [@ZP2; @LG92; @LGA96; @LSS08]
$$\label{fried}
8\pi G (\rho_{rad} + \rho_{bar} + \rho_{dm}) = 3 \frac{\dot{R}^2}{R^2} + 3 \frac{k}{R^2},$$
$$\label{frw_p}
8\pi G (p_{rad} + p_{c}) = -2 \frac{\ddot{R}}{R} - \frac{\dot{R}^2}{R^2} -
\frac{k}{R^2},$$
$$\label{energy}
\frac{\dot{\rho}_{rad}}{{\rho}_{rad}} + 4 \frac{\dot{R}}{R} = 0, \,\,\,\,\,\, \, \,\,\,\,\, \frac{\dot{\rho}_{bar}}{{\rho}_{bar}} + 3 \frac{\dot{R}}{R} = 0,$$
and $$\label{ConsDM}
\frac{\dot{\rho}_{dm}}{{\rho}_{dm}} + 3 \frac{\dot{R}}{R} = \Gamma.$$ In the above expressions, an overdot means time derivative and $\rho_{rad}$, $\rho_{bar}$ and $\rho_{dm}$, are the radiation, baryonic and dark matter energy densities, whereas $p_{rad}$ and $p_{c}$, denote the radiation and creation pressure, respectively. The quantity $\Gamma$ with dimension of $(time)^{-1}$ is the creation rate of the cold dark matter. As should be expected, the creation pressure is defined in terms of the creation rate and other physical quantities. In the case of adiabatic creation of dark matter, it is given by [@Prigogine; @LCW; @LG92; @ZP2; @LGA96; @SLC02; @LSS08] $$\label{CP}
p_{c} = -\frac{\rho_{dm} \Gamma}{3H},$$ where $H = {\dot {R}}/R$ is the Hubble parameter.
The above expressions show how the matter creation rate, $\Gamma$, modifies the evolution of the scale factor and the density of the cold dark matter as compared to the case with no creation. By taking $\Gamma=0$ the above set reduces to the differential equations governing the evolution of radiation plus a pressureless fluid mixture (baryons + CDM), as given by the FRW type cosmologies.
Creation Cold Dark Matter (CCDM) Cosmology
===========================================
Let us now propose a new class of models defined by the choice for the particle creation rate, $\Gamma$. In principle, the most natural choice would be a particle creation rate which favors no epoch in the evolution of the Universe, that is, $\Gamma \propto H$, where $H$ is the Hubble parameter.
Recently, we have investigated CCDM scenarios with $\Gamma=3\beta H$, where $\beta$ is a time-dependent dimensionless parameter [@LSS08; @SSL09]. In the first paper [@LSS08], we demonstrated that CCDM models solve the age problem and are generically capable of accounting for the SNeIa observations. In the subsequent paper [@SSL09], we included baryons and tested the evolution of such models at high redshift using the constraint on $z_{eq}$, the redshift of the epoch of matter-radiation equality, provided by the WMAP constraints on the early Integrated Sachs-Wolfe effect (ISW). Such a comparison revealed a tension between the high redshift CMB constraint on $z_{eq}$ and the one which follows from the low redshift SNeIa data, thereby challenging the viability of that class of models. A minor caveat is the relative mathematical difficulty faced when the baryon component was introduced in the CCDM cosmology proposed by Lima, Silva and Santos [@LSS08]. Actually, in the most interesting scenario discussed by Steigman et al. [@SSL09], the comparison with the observations became possible only after expanding the Hubble parameter at low and high redshifts.
On the other hand, the most essential difficulty of such CCDM models comes from the fact that all of them are flat ($\Omega_{dm} + \Omega_{bar}= 1$), but it is not clear how they can account for the cluster data, which consistently point to $\Omega_{dm}+ \Omega_{bar} \sim 0.3$ from a large set of observations [@Clusters]. In particular, this means that the following question challenges these CCDM scenarios: how does the matter creation rate affect the present amount of matter, so that the effectively measured matter density parameter is close to the one obtained from the available observations?
In what follows, we show that all these shortcomings can be solved at once through a reasonable choice of $\Gamma$. First we observe that the acceleration is a low redshift phenomenon, that is, it must be suppressed during the radiation phase. This may be obtained by considering that the $\beta(t)$ function is inversely proportional to the CDM energy density itself. Since such a quantity is dimensionless it may depend on some ratio involving the dark matter energy density. To be more specific, let us consider the following creation rate: $$\label{Gamma}
\Gamma=3\alpha \left(\frac{\rho_{co}}{\rho_{dm}}\right)H,$$ where $\alpha$ is a constant parameter, ${\rho_{co}}$ is the present day value of the critical density, and the factor 3 has been maintained for mathematical convenience.
Now, by inserting the above expression in the energy conservation for dark matter as given by ($\ref{ConsDM}$) one obtains $$\dot{\rho}_{dm} + 3H\rho_{dm}=\Gamma \rho_{dm}\equiv 3\alpha {\rho_{co}}H,$$ which can be readily integrated to give a solution for $\rho_{dm}$ $$\label{rhodm}
\rho_{dm} = (\rho_{dmo} - \alpha\rho_{co})\left({\frac{R_0}{R}}\right)^{3} + \alpha\rho_{co}$$ or, in terms of the redshift, $1+z=R_0/R$,
$$\rho_{dm} = (\rho_{dmo} - \alpha\rho_{co})(1 + z)^{3} + \alpha\rho_{co}.$$
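As a consistency check, the continuity equation (\[ConsDM\]) with the rate (\[Gamma\]) can be integrated numerically and compared with the closed-form solution above. The sketch below is illustrative only; $\Omega_{dm}=0.95$ and $\alpha=0.71$ are assumed values, and densities are measured in units of $\rho_{co}$.

```python
import numpy as np

# In the variable x = ln(R/R0) the continuity equation becomes the linear ODE
# d(rho)/dx = -3 rho + 3 alpha; integrate backwards from today (x=0, rho=Omega_dm).
OM_DM, ALPHA = 0.95, 0.71   # assumed illustration values

def rho_exact(z):
    """Closed-form solution rho_dm(z) in units of rho_co."""
    return (OM_DM - ALPHA) * (1.0 + z)**3 + ALPHA

def rho_numeric(z, n=4000):
    """Classical RK4 integration of the continuity equation from x=0 to x=-ln(1+z)."""
    f = lambda rho: -3.0 * rho + 3.0 * ALPHA
    h = -np.log(1.0 + z) / n        # negative step: evolve from today into the past
    rho = OM_DM
    for _ in range(n):
        k1 = f(rho)
        k2 = f(rho + 0.5 * h * k1)
        k3 = f(rho + 0.5 * h * k2)
        k4 = f(rho + h * k3)
        rho += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return rho
```

The numerical and closed-form values agree to high accuracy, confirming the integration of the conservation law.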
Since the solutions of the energy conservation laws for radiation and baryons are the usual ones, namely, $\rho_{rad}=\rho_{rad0}(1+z)^4$, $\rho_{bar}=\rho_{bar0}(1+z)^3$, by inserting these expressions into the Friedmann equation (\[fried\]) we arrive at (from now on we neglect the radiation fluid, since in the present paper we test only the matter stage) $$\label{Hz}
\left(\frac{H}{H_0}\right)^2=(\Omega_{m}-\alpha)(1+z)^3+\alpha+(1-\Omega_{m})(1+z)^2,$$ where we have defined $$\Omega_m \equiv \Omega_{dm} + \Omega_{bar},$$ and used the normalization condition to fix $\Omega_k=1-\Omega_{m}$. The similarity of the above expression with the one of the $\Lambda$CDM model is astonishing because it has been obtained by considering just one dark component. Actually, the Hubble parameter for a $\Lambda$CDM model reads: $$\left(\frac {H_{\Lambda CDM}}{H_0}\right)^2 = \Omega_m(1+z)^3 + \Omega_{\Lambda} + (1 -\Omega_m -\Omega_{\Lambda})(1+z)^2.$$ One may see that the models have the same Hubble parameter $H(z)$, with $\alpha$ playing the dynamical role of $\Omega_{\Lambda}$ and $\Omega_m$ now being replaced by $\Omega_m-\alpha$. Such a map becomes even clearer by defining an ‘effective’ matter density parameter, $\Omega_{meff}\equiv\Omega_m-\alpha$, and inserting the result into the expression (\[Hz\]) for the CCDM model.
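The claimed equivalence is easy to verify numerically. The following sketch (with assumed parameter values) checks that the two dimensionless Hubble functions coincide under the map $\Omega_{\Lambda} \to \alpha$, $\Omega_m \to \Omega_m - \alpha$:

```python
import numpy as np

# CCDM dimensionless Hubble function, Eq. (Hz)
def E2_ccdm(z, Om, alpha):
    return (Om - alpha) * (1 + z)**3 + alpha + (1 - Om) * (1 + z)**2

# LambdaCDM dimensionless Hubble function
def E2_lcdm(z, Om, OL):
    return Om * (1 + z)**3 + OL + (1 - Om - OL) * (1 + z)**2

# Assumed illustration values (the curved best fit quoted later in the text)
z = np.linspace(0.0, 5.0, 101)
max_diff = float(np.max(np.abs(
    E2_ccdm(z, 1.34, 0.93) - E2_lcdm(z, 1.34 - 0.93, 0.93))))
```

The difference vanishes to machine precision, since the curvature terms $1-\Omega_m$ and $1-(\Omega_m-\alpha)-\alpha$ are identical.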
This intriguing equivalence can be seen even more directly through the evolution equation for the scale factor. As one may check, by inserting the expression for the creation pressure $p_c$ into the second Friedmann equation we obtain:
$$\label{evolR}
2R{\ddot R}+ {\dot{R}}^2 + k - 3\alpha{H_0}^{2} R^{2} = 0,$$
which should be compared to: $$\label{evolRLCDM}
2R{\ddot R}+ {\dot{R}}^2 + k - {\Lambda}R^{2} = 0,$$ provided by the $\Lambda$CDM model. The above equations imply that the models will have the same dynamic behavior when we identify the creation parameter by the expression $\alpha = {\Lambda}/3{H_0}^{2} \equiv \Omega_{\Lambda}$, which is exactly the same result derived earlier with basis on the Hubble parameter, $H(z)$.
On the other hand, although the CCDM and $\Lambda$CDM models have the same dynamical history, such cosmologies are based on different starting hypotheses, and, as such, they can be distinguished by the present observations. Mathematically, this happens because of the special role played by the $\alpha$ parameter in the basic equations of the CCDM model. In particular, the positivity of the dark matter density (and of the Hubble parameter) at high redshifts implies that the creation parameter must satisfy the constraint $\alpha \leq \Omega_m$, a condition absent in the $\Lambda$CDM case (see Eqs. \[rhodm\] and \[Hz\]). Further, the redshift dependence of the contribution involving the $\alpha$ parameter, namely, $\alpha(1 - (1+z)^{3})$, must slightly modify the predictions involving the evolution of small perturbations and the structure formation problem.
If spatial flatness is imposed, as predicted by inflation and suggested by CMB data, we have $\Omega_m=1$, and Eq. (\[Hz\]) reduces to: $$\label{HzFlat}
\left(\frac{H}{H_0}\right)^2=(1-\alpha)(1+z)^3+\alpha,$$ with $\alpha$ being the only free parameter, besides $H_0$, just as in the standard flat $\Lambda$CDM model. Note that now $\Omega_{meff}= 1 - \alpha$.
Transition redshift and Supernova bounds
========================================
To begin with, we first observe that by combining Eqs. (\[fried\]) and (\[frw\_p\]), we have $$\frac{\ddot{R}}{R}=-\frac{4\pi G}{3}(\rho_{bar} + \rho_{dm} +3p_c).$$
Given that $p_c=-\alpha\rho_{c0}$, one may find: $$\frac{\ddot{R}}{R}=-\frac{4\pi G\rho_{c0}}{3}\left[(\Omega_{bar} + \Omega_{dm}- \alpha)(1+z)^3-2\alpha\right].$$
When this expression vanishes, one may find the following expression for the transition redshift: $$\label{zt}
z_t=\left(\frac{2\alpha}{\Omega_m-\alpha}\right)^{1/3}-1.$$ Naturally, in order to estimate the transition redshift it is necessary to constrain the value of $\alpha$ from observations.
Let us now discuss the constraints from distant type Ia SNe data on the class of CCDM accelerating cosmologies proposed here. In what follows we consider both the curved and flat scenarios. In principle, since $H_0$ can be determined from the Hubble law, the model has only two independent parameters, namely, $\alpha$ and $\Omega_m$ or, equivalently, $\Omega_{meff}$ (see Eq. (\[Hz\]) for $H(z)$). However, in the following analyses we marginalize over the Hubble parameter. The predicted distance modulus for a supernova at redshift $z$, given a set of parameters $\mathbf{s}$, is $$\label{dm}
\mu_p(z|\mathbf{s}) = m - M = 5\,\mbox{log} d_L + 25,$$ where $m$ and $M$ are, respectively, the apparent and absolute magnitudes, the complete set of parameters is $\mathbf{s} \equiv
(H_0, \alpha, \Omega_m)$, and $d_L$ stands for the luminosity distance (in units of megaparsecs), $$d_L = c(1 + z)\int_{0}^{z} {\frac{dz'}{{H}(z';\mathbf{s})}},$$ with $z'$ being a convenient integration variable, and ${H}(z; \mathbf{s})$ the expression given by Eq. (\[Hz\]).
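For concreteness, the luminosity distance and distance modulus above can be sketched as follows for the flat case (\[HzFlat\]). This is a minimal illustration; $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, the quadrature grid, and the use of the flat best-fit $\alpha=0.713$ quoted below are assumptions.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def E(z, alpha=0.713):
    """Dimensionless Hubble function of the flat CCDM model, Eq. (HzFlat)."""
    return np.sqrt((1.0 - alpha) * (1.0 + z)**3 + alpha)

def d_L(z, H0=70.0, alpha=0.713):
    """Luminosity distance in Mpc via trapezoidal quadrature of c/H(z')."""
    zp = np.linspace(0.0, z, 2001)
    integrand = C_KM_S / (H0 * E(zp, alpha))
    comoving = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(zp)) / 2.0)
    return (1.0 + z) * comoving

def mu(z, **kw):
    """Distance modulus of Eq. (dm)."""
    return 5.0 * np.log10(d_L(z, **kw)) + 25.0
```

A fit then compares $\mu(z_i)$ with the observed moduli at the supernova redshifts.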
In order to constrain the free parameters of the model, we now consider the recent sample (Union 2008), containing 307 supernovae, as published by Kowalski and coworkers [@Union08]. The best fit to the set of parameters $\mathbf{s}$ can be estimated by using a $\chi^{2}$ statistics with $$\chi^{2} = \sum_{i=1}^{N}{\frac{\left[\mu_p^{i}(z|\mathbf{s}) -
\mu_o^{i}(z)\right]^{2}}{\sigma_i^{2}}},$$ where $\mu_p^{i}(z|\mathbf{s})$ is given by Eq. (\[dm\]), $\mu_o^{i}(z)$ is the extinction corrected distance modulus for a given SN Ia at $z_i$, and $\sigma_i$ is the uncertainty in the individual distance moduli. In the joint analysis, by marginalizing over the nuisance parameter $h$ ($H_0 = 100h$ km s$^{-1}$ Mpc$^{-1}$) we find $\alpha=0.93^{+0.22+0.35+0.46}_{-0.26-0.44-0.63} $ and $\Omega_m = 1.34^{+0.34+0.54+0.72}_{-0.40-0.68-0.98}$ at the $68.3\%$, $95.4\%$ and $99.7\%$ confidence levels, respectively, with $\chi^{2}_{min}=310.23$ and $\nu=305$ degrees of freedom. The reduced value is $\chi^{2}_{r}=\chi^{2}_{min}/\nu=1.017$, thereby showing that the model provides a very good fit and that a closed Universe dominated only by CDM and baryons is favored by these data.
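The marginalization over the nuisance parameter $h$ amounts to minimizing $\chi^2$ over a constant offset in the distance modulus, which can be done analytically. A schematic implementation (with placeholder data, not the Union compilation) is:

```python
import numpy as np

# chi^2 minimized analytically over an additive offset b in mu: with weights
# w = 1/sigma^2 and residuals d, min_b sum w (d - b)^2 = A - B^2/C.
def chi2_offset_marginalized(mu_model, mu_obs, sigma):
    w = 1.0 / sigma**2
    d = mu_obs - mu_model
    A = np.sum(d**2 * w)
    B = np.sum(d * w)
    C = np.sum(w)
    return float(A - B**2 / C)

# Sanity check: data equal to the model up to a constant offset give chi^2 = 0
mu_model = np.array([40.0, 41.2, 42.3, 43.1])   # placeholder moduli
sigma = np.array([0.15, 0.20, 0.18, 0.25])      # placeholder uncertainties
chi2_zero = chi2_offset_marginalized(mu_model, mu_model + 0.37, sigma)
```

In a real fit this quantity is minimized over $\alpha$ (and $\Omega_m$ in the curved case) with the observed Union moduli.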
In Figure 1a we display the $\Omega_{meff} - \alpha$ parameter space. Note that the best fit for such a quantity (the effectively measured density parameter) is $\Omega_{meff} = \Omega_m - \alpha = 0.41^{+0.13+0.21+0.29}_{-0.15-0.26-0.37}$.
In the flat case, the only free parameter is $\alpha$, and, as should be expected, the model also provides a good fit to the SN data. In Figure 1b, we show the likelihood function for $\alpha$, given by $L\propto e^{-\chi^2/2}$. In this analysis, we find $\alpha=0.713^{+0.027+0.052+0.077}_{-0.028-0.058-0.089}$, with $\chi^2_{min}=311.94$ and $\chi^2_r=1.019$ for 306 degrees of freedom. This is also an extremely good fit, thereby showing that we can fit the SNe Ia data with only pressureless matter and the creation of CDM particles.
The value of the transition redshift depends implicitly on the curvature parameter. For the curved CCDM model, inserting the best-fit values of $\alpha$ and $\Omega_m$ into $(\ref{zt})$ one obtains the central value $z_t = 0.65$, whereas for the flat scenario the transition redshift is slightly higher ($z_t = 0.71$), in accordance with the fact that there is more matter in the former case. In Figure 2 we display the likelihood for the flat case.
Conclusion
==========
A new creation cold dark matter (CCDM) cosmology has been proposed. In this late time CDM dominated model, the vacuum energy density parameter is $\Omega_{\Lambda} = 0$, and, therefore, the so-called cosmological constant problem [@weinb; @decaying] is absent. The late time acceleration is powered here by an irreversible creation of CDM particles and the value of $H_0$ does not need to be small in order to solve the age problem.
  ------------------------------------------ ------------------------------------------
  $\Lambda$CDM                               CCDM
  $\Omega_\Lambda$                           $\alpha$
  $\Omega_m$                                 $\Omega_{meff}\equiv \Omega_m - \alpha$
  Vacuum DE                                  Creation of CDM
  Acceleration ($z_t \approx 0.71$, $k=0$)   Acceleration ($z_t \approx 0.71$, $k=0$)
  ------------------------------------------ ------------------------------------------

  : $\Lambda$CDM vs. CCDM

\[tab1\]
It is worth noticing the existence of a dynamic equivalence between CCDM and $\Lambda$CDM cosmologies at the level of the background equations. Actually, the CCDM scenario can formally be interpreted as a two component fluid mixture: a pressureless matter with density parameter $\Omega_{meff} = \Omega_m -\alpha$, plus a “vacuum fluid” with $\rho_v=-p_v = \alpha \rho_{co}$, where $\rho_{co}$ is the critical density. A simple qualitative comparison between both approaches is summarized in Table 1. Note that for nonflat CCDM, there are two dynamic free parameters, namely, $\alpha$ and $\Omega_m$ (or $\Omega_{meff}$), similarly to nonflat $\Lambda$CDM, whose dynamic free parameters are $\Omega_{\Lambda}$ and $\Omega_m$. For the flat CCDM case, there is just one dynamic free parameter (as in the flat $\Lambda$CDM model), namely, $\alpha$. This formal equivalence explains why CCDM scenarios provide an excellent fit to the observed dimming of distant type Ia supernovae (see text and Figs. 1a and 1b). As it appears, one may say that $\Lambda$CDM cosmology is one of the possible effective descriptions of cold dark matter creation scenarios.
On the other hand, since the creation mechanism adopted here is classically described as an irreversible process [@Prigogine; @LCW], the basic problem with this new cosmology is the absence of a consistent approach based on quantum field theory in curved spacetimes. However, the last 30 years of thinking about the cosmological constant problem suggest that the difficulties in searching for a more rigorous quantum approach to matter creation in the expanding Universe are much smaller than those of the so-called $\Lambda$-problem. Indeed, the basic tools have already been discussed long ago [@Parker; @BirrellD], and, as such, the problem is now reduced to properly taking into account the entropy production rate present in the creation mechanism and the associated creation pressure.
Finally, for those believing that $\Lambda$CDM contains all the physics that will be needed to confront the next generation of cosmological tests, we call attention to the model proposed here. It is as simple as $\Lambda$CDM, has the same dynamics, and, more importantly, it is based only on cold dark matter, whose physical status is firmer than that of any kind of dark energy. Naturally, new constraints on the relevant parameters ($\alpha$ and $\Omega_m$) from complementary observations need to be investigated in order to see whether the CCDM model proposed here provides a realistic description of the observed Universe. In principle, additional tests measuring the matter power spectrum, the weak gravitational lensing distortion by foreground galaxies and the cluster mass function may decide between $\Lambda$CDM and CCDM cosmologies. New bounds on the CCDM parameters coming from the background and perturbed cosmological equations will be discussed in a forthcoming communication.
JASL would like to thank Gary Steigman for helpful discussions and for the warm hospitality during his visit to CCAPP at the Ohio State University. JFJ is supported by CNPq, FAO is supported by CNPq (Brazilian Research Agencies), and JASL is partially supported by CNPq and FAPESP under grants 304792/2003-9 and 04/13668-0, respectively.
[99]{}
P. Astier et al., Astron. Astrophys. [**447**]{}, 31 (2006); A. G. Riess et al., Astrop. J. [**659**]{}, 98 (2007).
M. Kowalski [*et al.*]{}, Astrophys. J. [**[686]{}**]{}, 749 (2008), \[arXiv:0804.4142\]. (Union 2008).
P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. [**[75]{}**]{}, 559 (2003); T. Padmanabhan, Phys. Rept. [**[380]{}**]{}, 235 (2003); J. A. S. Lima, Braz. Journ. Phys. [**34**]{}, 194 (2004), \[astro-ph/0402109\]; E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. [**[D15]{}**]{}, 1753 (2006); J. A. Frieman, M. S. Turner and D. Huterer, Ann. Rev. Astron. & Astrophys. [**46**]{}, 385 (2008).
M. Özer and M. O. Taha, Phys. Lett. B [**171**]{}, 363 (1986); W. Chen and Y-S. Wu, Phys. Rev. D [**41**]{}, 695 (1990); J. C. Carvalho, J. A. S. Lima and I. Waga, Phys. Rev. D [**[46]{}**]{}, 2404 (1992); J. A. S. Lima and J. M. F. Maia, Phys. Rev D [**49**]{}, 5597 (1994); I. L. Shapiro, J. Sola and H. Stefancic, JCAP [**[0501]{}**]{}, 012 (2005); J. S. Alcaniz and J. A. S. Lima, Phys. Rev. D [**72**]{}, 063516 (2005), \[astro-ph/0507372\], H. A. Borges, S. Carneiro, J. C. Fabris and C. Pigozzo Phys. Rev. D [**78**]{}, 123522 (2008); S. Basilakos, arXiv:0903.0452 \[astro-ph.CO\] (2009); E. M. C. Abreu, L. P. G. De Assis, C. M. L. dos Reis, arXiv:0904.0953 \[gr-qc\] (2009); F. E. M. Costa and J. S. Alcaniz, arXiv:0908.4251 \[astro-ph.CO\] (2009).
M. S. Turner and M. J. White, Phys. Rev. D [**56**]{}, R4439 (1997); T. Chiba, N. Sugiyama and T. Nakamura, MNRAS [**289**]{}, L5 (1997); J. A. S. Lima and J. A. S. Alcaniz, MNRAS [**317**]{}, 893 (2000) \[astro-ph/0005441\]; S. Nesseris and L. Perivolaropoulos, Phys. Rev. D [**[70]{}**]{}, 043531 (2004); R. C. Santos, J. V. Cunha and J. A. S. Lima, Phys. Rev. D77, 023519 (2008), arXiv:0709.3679 \[astro-ph\]; J. A. S. Lima, J. F. Jesus and J. V. Cunha, Astrophys. J. Lett. [**690**]{}, L85 (2009), arXiv:0709.2195 \[astro-ph\]; L. Samushia, A. Dev, D. Jain, B. Ratra, arXiv:0906.2734 \[astro-ph.CO\] (2009).
P. J. E. Peebles, Astrophys. J. Lett. [**325**]{}, L17 (1988); B. Ratra and P. J. E. Peebles, Phys. Rev D[**37**]{}, 3406 (1988); C. Wetterich, Nucl. Phys. B [**302**]{}, 668 (1988); R. Caldwell, R. Dave, P. Steinhardt, Phys. Rev. Lett. D 59, 123504 (1999); R. Caldwell, Braz. J. Phys. [**[30]{}**]{}, 215 (2000); J. M. F. Maia and J. A. S. Lima, Phys. Rev. D [**65**]{}, 083513 (2002), \[astro-ph/0112091\]; P. T. Silva and O. Bertolami, Astrophys. J. [**[599]{}**]{}, 829 (2003); C. Rubano, P. Scudellaro, E. Piedipalumbo, S. Capozziello, M. Capone, Phys. Rev. D[**69**]{}, 103510 (2004); F. C. Carvalho, J. S. Alcaniz, J. A. S. Lima, R. Silva, Phys. Rev. Lett. [**97**]{}, 081301 (2006), \[astro-ph/0608439\]; V. Faraoni and M. N. Jensen, Class. Quant. Grav. [**23**]{}, 3005 (2006); J. S. Alcaniz et al., Eur. Phys. Lett. [**83**]{}, 29001 (2008).
A. Kameneschik, U. Moschella and V. Pasquier, Phys. Lett. B [**511**]{}, 265 (2001); R. Colistete Jr. and J. C. Fabris, Class. Quant. Grav. [**[22]{}**]{}, 2813 (2005); J. V. Cunha, J. S. Alcaniz and J. A. S. Lima, Phys. Rev. D [**69**]{}, 083501 (2004) \[astro-ph/0306319\]; B. Wang, C. Y. Lin, and E. Abdalla, Phys. Lett. [**B637**]{}, 357 (2006); T. Clifton and J. D. Barrow, Phys. Rev. D [**73**]{}, 104022 (2006); J. V. Cunha, L. Marassi and R. C. Santos, IJMP D [**16**]{}, 403 (2007); M. Bouhmadi-Lopez and R. Lazkoz, Phys. Lett. B [**654**]{}, 51 (2007); J. A. S. Lima, J. V. Cunha and J. S. Alcaniz, Astropart. Phys. [**31**]{}, 233 (2009), astro-ph/0611007; [*ibidem*]{}, Astropart. Phys. [**30**]{}, 196 (2008), astro-ph/0608469; Z. Li, P. Wu, H. W. Yu, JCAP 0909:017 (2009).
D. N. Spergel et al., Astrophys. J. Suppl. Ser. [**170**]{}, 377 (2007); E. Komatsu [*et al.*]{}, Astrophys. J. Suppl. [**180**]{}, 330 (2009).
S. W. Allen, (2002); J. A. S. Lima, J. V. Cunha and J. S. Alcaniz, Phys. Rev. D [**68**]{}, 023510 (2003); D. Rapetti, S. W. Allen, and A. Mantz, MNRAS [**388**]{}, 1265 (2008); A. Vikhlinin [*et al.*]{}, arXiv:0903.5320v1 (2009).
I. Prigogine [*et al.*]{}, Gen. Rel. Grav., [**21**]{}, 767 (1989).
J. A. S. Lima, M. O. Calvão, I. Waga, “Cosmology, Thermodynamics and Matter Creation”, [*Frontier Physics, Essays in Honor of Jayme Tiomno*]{}, World Scientific, Singapore (1990), \[arXiv:0708.3397\]; M. O. Calvão, J. A. S. Lima, I. Waga, Phys. Lett. [**A162**]{}, 223 (1992).
J. A. S. Lima, A. S. M. Germano, Phys. Lett. A [**170**]{}, 373 (1992).
Ya. B. Zeldovich, JETP Lett. [**12**]{}, 307 (1970).
R. Silva, J. A. S. Lima and M. O. Calvão, Gen. Rel. Grav. [**34**]{}, 865 (2002), gr-qc/0201048.
L. Parker, [ Phys. Rev. Lett.]{} [**21**]{}, 562 (1968); [*Phys. Rev.*]{} [**183**]{}, 1057 (1969); S. A. Fulling, L. Parker and B. L. Hu, [Phys. Rev.]{} [**10**]{}, 3905, (1974).
N. D. Birrell and P. C. Davies, [*Quantum Fields in Curved Space*]{}, Cambridge Univ. Press, Cambridge, (1982); A. A. Grib, S. G. Mamayev and V. M. Mostepanenko, [*Vacuum Quantum Effects in Strong Fields*]{}, Friedmann Laboratory Publishing, St. Petersburg (1994); V. F. Mukhanov and S. Winitzki, [*Introduction to Quantum Effects in Gravity*]{}, Cambridge UP, Cambridge (2007).
W. Zimdahl and D. Pavón, Phys. Lett. A [**176**]{}, 57 (1993); R. A. Susmann, Class. Q. Grav. [**11**]{}, 1445 (1994); W. Zimdahl and D. Pavón, Mon. Not. R. Astr. Soc. [**266**]{}, 872 (1994); W. Zimdahl and D. Pavón, GRG [**26**]{}, 1259 (1994); J. Gariel and G. Le Denmat, Phys. Lett. A [**200**]{} 11 (1995).
J. A. S. Lima, A. S. M. Germano and L. R. W. Abramo, Phys. Rev. D [**53**]{}, 4287 (1996) \[gr-qc/9511006\]; L. R. W. Abramo and J. A. S. Lima, Class. Quant. Grav. [**13**]{}, 2953 (1996), \[gr-qc/9606064\]. J. A. S. Lima and J. S. Alcaniz, Astron. Astrophys. [**348**]{}, 1 (1999), \[astro-ph/9902337\]; W. Zimdahl, D. J. Schwarz, A. B. Balakin and D. Pavón, Phys. Rev. D [**64**]{}, 063501 (2001); M. de Campos and J. A. Souza, Astron. Astrophys. [**422**]{}, 401 (2004); Y. Quiang, T-J. Zhang and Z-L. Yi, Astrop. Space Sci. [**311**]{}, 407 (2007).
J. A. S. Lima, F. E. Silva and R. C. Santos, Class. Quant. Grav. [**25**]{}, 205006 (2008), arXiv:0807.3379 \[astro-ph\].
G. Steigman, R. C. Santos and J. A. S. Lima, JCAP [**06**]{}, 033 (2009), arXiv:0812.3912 \[astro-ph\].
Ya. B. Zeldovich, Usp. Phys. Nauk [**94**]{}, 2009 (1968) \[translated by Sov. Phys. Usp. [**11**]{}, 381 (1968)\]; S. Weinberg, Rev. Mod. Phys., [**61**]{}, 1 (1989).
---
abstract: |
A theorem of Hoffman gives an upper bound on the independence ratio of regular graphs in terms of the minimum $\lam$ of the spectrum of the adjacency matrix. To complement this result we use random eigenvectors to gain lower bounds in the vertex-transitive case. For example, we prove that the independence ratio of a $3$-regular transitive graph is at least $$q = \frac{1}{2} - \frac{3}{4 \pi} \arccos \left( \frac{1-\lam}{4} \right) .$$ The same bound holds for infinite transitive graphs: we construct factor of i.i.d. independent sets for which the probability that any given vertex is in the set is at least $q-o(1)$.
We also show that the set of the distributions of factor of i.i.d. processes is not closed provided that the spectrum of the graph is uncountable.
address:
- 'Department of Mathematics, University of Toronto'
- 'Department of Mathematics, University of Toronto'
author:
- Viktor Harangi
- Bálint Virág
bibliography:
- 'refs.bib'
title: Independence ratio and random eigenvectors in transitive graphs
---
[^1]
Introduction
============
The independence ratio and the minimum eigenvalue
-------------------------------------------------
An *independent set* is a set of vertices in a graph, no two of which are adjacent. The *independence ratio* of a graph $G$ is the size of its largest independent set divided by the total number of vertices. If $G$ is regular, then the independence ratio is at most $1/2$, and it is equal to $1/2$ if and only if $G$ is bipartite.
The adjacency matrix of a $d$-regular graph has real eigenvalues between $-d$ and $d$. The least eigenvalue $\lam$ is at least $-d$, and it is equal to $-d$ if and only if the graph is bipartite.
So the distance of the independence ratio from $1/2$ and the distance of $\lam$ from $-d$ both measure how far a $d$-regular graph is from being bipartite. The following natural question arises: what kind of connection is there between these two graph parameters?
A theorem of Hoffman [@hoffman1] gives a partial answer to this question. It says that the independence ratio of a $d$-regular graph is at most $$\label{eq:hoffman}
\frac{-\lam}{d-\lam}
= \frac{1}{2} - \frac{\frac{1}{2}(\lam + d)}{2d - ( \lam + d )} .$$ (For a simple proof see [@ellis1 Theorem 11]. Also see [@lyons_nazarov Section 4] for certain improvements.)
Hoffman’s bound implies that $\lam \to -d$ as the independence ratio tends to $1/2$. The converse statement is not true in general: it is easy to construct $d$-regular graphs with $\lam$ arbitrarily close to $-d$ and the independence ratio separated from $1/2$. However, for transitive graphs the converse is also true. A graph $G$ is said to be *vertex-transitive* (or *transitive* in short) if its automorphism group $\operatorname{Aut}(G)$ acts transitively on the vertex set $V(G)$.
\[thm:crude\] Let $G$ be a finite, $d$-regular, vertex-transitive graph with least eigenvalue $\lam$. Then the independence ratio of $G$ is at least $$\frac{1}{2} - \frac{1}{3} \sqrt{ d(\lam + d) } .$$ In particular, if $\lam \to -d$, then the independence ratio converges to $1/2$.
The idea behind the proof is to consider random eigenvectors with eigenvalue $\lam$. Let $\la$ be an arbitrary eigenvalue of the adjacency matrix of some transitive graph $G$ and let $E_\la$ denote the eigenspace corresponding to $\la$, that is, the space of eigenvectors with eigenvalue $\la$. (Note that $E_\la$ is typically more than one dimensional, since $G$ is transitive.) Furthermore, let $S_\la$ be the unit sphere in $E_\la$. Now we pick a uniform random vector from $S_\la$. Note that $S_\la$ is $\operatorname{Aut}(G)$-invariant, therefore the distribution of this random vector is $\operatorname{Aut}(G)$-invariant, too. Let us choose the vertices $v$ with the property that the value of the eigenvector at $v$ is larger than at each neighbor of $v$. (If $\la$ is negative, then we expect many of the vertices with positive value to have this property.) Clearly, these vertices form an independent set. Since our random vector is invariant, the probability $q$ that a given vertex is chosen is the same for all vertices. Therefore the expected size of this random independent set is $q |V(G)|$, and consequently, the independence ratio of $G$ is at least $q$. An estimate of $q$ yields Theorem \[thm:crude\] above. In many cases we obtain much sharper bounds.
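The argument of this paragraph can be simulated directly. The sketch below (graph choice and trial count are illustrative) uses the Petersen graph, which is $3$-regular, vertex-transitive, and has $\lam = -2$: sample a random vector from the $\lam$-eigenspace, select the vertices whose value exceeds that of every neighbor, and average the density of the resulting independent set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Petersen graph: outer 5-cycle, spokes, inner pentagram
edges = [(i, (i + 1) % 5) for i in range(5)]
edges += [(i, i + 5) for i in range(5)]
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
A = np.zeros((10, 10))
for u, w in edges:
    A[u, w] = A[w, u] = 1

evals, evecs = np.linalg.eigh(A)
lam_min = evals[0]                              # -2 for the Petersen graph
basis = evecs[:, np.isclose(evals, lam_min)]    # orthonormal basis of E_lam

densities = []
for _ in range(2000):
    f = basis @ rng.standard_normal(basis.shape[1])  # random eigenvector
    I_plus = [v for v in range(10)
              if all(f[v] > f[u] for u in np.flatnonzero(A[v]))]
    # vertices that beat all their neighbors form an independent set
    assert not any(A[u, w] for u in I_plus for w in I_plus)
    densities.append(len(I_plus) / 10)

q_emp = float(np.mean(densities))
# Petersen is cherry-transitive, so P(v in I_+) = q_3(-2)
q3 = 0.5 - 3 / (4 * np.pi) * np.arccos((1 - lam_min) / 4)
```

The empirical density hovers around $q_3(-2) \approx 0.3275$, while the true independence ratio of the Petersen graph is $4/10$, consistent with $q$ being only a lower bound.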
When the graph has a lot of symmetry (for example, when any pair of neighbors of a fixed vertex can be mapped to any other pair by a suitable graph automorphism), then the probability $q$ defined above is actually determined by $\la$. In this case it equals $q_d(\la)$, the relative volume of the $d-1$-dimensional regular spherical simplex defined by normal vectors with pairwise scalar product $\frac{d-2-\la}{2(d-1)}$ (see Definition \[def:qd\]). There is a simple formula for $q_3(\la)$, see Theorem \[thm:3reg\_main\]. We conjecture that $q \geq q_d(\la)$ for arbitrary transitive graphs (provided that $\la$ is sufficiently small). In other words, the worst-case scenario is when the graph has a lot of symmetry. Of course, this would yield a lower bound $q_d(\lam)$ for the independence ratio. We managed to prove this conjecture for $3$-regular transitive graphs and $4$-regular arc-transitive graphs. We also showed that a well-known conjecture in geometry would imply the $d$-regular, arc-transitive case. (A graph is said to be *arc-transitive* or *symmetric* if for any two pairs of adjacent vertices $(u_1,v_1)$ and $(u_2,v_2)$, there is an automorphism of the graph mapping $u_1$ to $u_2$ and $v_1$ to $v_2$.) The following theorems were obtained.
\[thm:arc\_tr\] Suppose that $G$ is a finite, $d$-regular, arc-transitive graph with least eigenvalue $\lam$. Then the independence ratio of $G$ is at least $$\frac{1}{2} - \frac{1}{3} \sqrt{\lam+d} .$$ In fact, a well-known conjecture in geometry (see Conjecture \[conj:geom\]) would imply that the independence ratio is at least $q_d(\lam)$. This has been proven in the case $d=4$: the independence ratio of a finite, $4$-regular, arc-transitive graph is at least $$\label{eq:q4}
q_4(\lam) \geq \frac{1}{2} - \frac{1}{4} \sqrt{\lam+4} .$$
\[thm:3reg\_main\] Suppose that $G$ is a finite, $3$-regular, vertex-transitive graph with minimum eigenvalue $\lam$. Then the independence ratio of $G$ is at least $$q_3(\lam) = \frac{1}{8} + \frac{3}{4 \pi} \arcsin \left( \frac{1-\lam}{4} \right) =
\frac{1}{2} - \frac{3}{4 \pi} \arccos \left( \frac{1-\lam}{4} \right) .$$ In fact, the following stronger statement holds: $G$ contains two disjoint independent sets $I_1, I_2$ with total size $|I_1 \cup I_2| \geq 2 q_3(\lam) |V(G)|$. This means that the induced subgraph $G[I_1 \cup I_2]$ is bipartite and has at least $2 q_3(\lam) |V(G)|$ vertices.
See Figure \[fig:compare\] to compare the lower bound given in Theorem \[thm:3reg\_main\] to Hoffman’s upper bound (\[eq:hoffman\]). Note that $-3 \leq \lam \leq -2$ for any $3$-regular transitive graph with the only exception of the complete graph $K_4$, for which $\lam = -1$. (See Proposition \[prop:la\_min\_3reg\] in the Appendix.)
![Hoffman’s upper bound and the lower bound of Theorem \[thm:3reg\_main\] for $\lam \in [-3,-1]$[]{data-label="fig:compare"}](fig){width="15cm"}
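The two curves of the figure are easy to reproduce numerically; a quick check that the arcsin and arccos forms of $q_3$ agree, and that the lower bound stays below Hoffman's upper bound on $[-3,-1]$:

```python
import numpy as np

lam = np.linspace(-3.0, -1.0, 201)
hoffman = -lam / (3 - lam)                                 # upper bound, d = 3
q3_arcsin = 1/8 + 3/(4*np.pi) * np.arcsin((1 - lam) / 4)   # lower bound (arcsin form)
q3_arccos = 1/2 - 3/(4*np.pi) * np.arccos((1 - lam) / 4)   # lower bound (arccos form)

agree = np.allclose(q3_arcsin, q3_arccos)  # arcsin(x) + arccos(x) = pi/2
gap = hoffman - q3_arccos                  # nonnegative on [-3, -1]
```

The curves coincide at the endpoints: both equal $1/2$ at $\lam = -3$ (the bipartite case) and $1/4$ at $\lam = -1$ (attained by $K_4$), with a positive gap in between.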
Random wave functions on infinite transitive graphs
---------------------------------------------------
In order to generalize the above theorems we define random wave functions on infinite transitive graphs $G$. A *wave function* with eigenvalue $\la$ on $G$ is a function $f \colon V(G) \to \IR$ such that $$\sum_{u \in N(v)} f(u) = \la f(v) \mbox{ for each vertex } v \in V(G) ,$$ where $N(v)$ denotes the set of neighbors of $v$ in $G$. So a wave function is basically an eigenvector of the adjacency operator of $G$, except that it does not need to be in $\ell_2(V(G))$.
These random wave functions will also let us answer an open question concerning factor of i.i.d. processes. Suppose that we have independent standard normal random variables $Z_u$ assigned to each vertex $u$ of an infinite transitive graph $G$. By a *factor of i.i.d. process* on $G$ we mean random variables $X_v$, $v \in V(G)$ that are all obtained as measurable functions of the random variables $Z_u$, $u \in V(G)$ and that are $\operatorname{Aut}(G)$-equivariant (i.e., they commute with the natural action of $\operatorname{Aut}(G)$). It is easy to see that for any factor of i.i.d. process the correlation of $X_v$ and $X_{v'}$ converges to $0$ as the distance of $v$ and $v'$ goes to infinity. So a random process that is $0$ everywhere with probability $1/2$ and $1$ everywhere with probability $1/2$ cannot be a factor of i.i.d. However, it can be seen easily that this process can be approximated by factor of i.i.d. processes provided that $G$ is amenable. So the space of factor of i.i.d. processes is not closed, that is, the distributions of these processes do not form a closed set w.r.t. the weak topology. It has been an open question whether the same is true on non-amenable graphs, for example, on the $d$-regular tree, see [@birs_report Section 4, Question 4]. We will show that the space of factor of i.i.d. processes is not closed provided that the spectrum of $G$ is uncountable.
We say that a factor of i.i.d. process $X_v$, $v \in V(G)$ is a *linear factor of i.i.d.* if each $X_v$ is obtained as a (possibly infinite) linear combination of $Z_u$, $u \in V(G)$. Note that linear factors have the following properties.
\[def:gaussian\_process\] We call a collection of random variables $X_v$, $v \in V(G)$ a *Gaussian process* on $G$ if they are jointly Gaussian and each $X_v$ is centered (i.e., has mean $0$). (Random variables are jointly Gaussian if any finite linear combination of them is Gaussian.) We say that a Gaussian process $X_v$ is $\operatorname{Aut}(G)$-invariant (or simply invariant) if for any $\Phi \in \operatorname{Aut}(G)$ the joint distribution of the Gaussian process $X_{\Phi(v)}$ is the same as that of the original process.
We will prove that the adjacency operator $A_G$ has approximate eigenvectors (satisfying a certain invariance property) for any $\la$ in the spectrum $\la \in \si(A_G)$. Then we will use these approximate eigenvectors as coefficients to define linear factor of i.i.d. processes converging in distribution to an invariant Gaussian process $X_v$ that satisfies the eigenvector equation at each vertex.
\[thm:gaussian\_ev\] Let $G$ be an infinite vertex-transitive graph with adjacency operator $A_G$. Then for each point $\la$ of the spectrum $\si(A_G)$ there exists a nontrivial invariant Gaussian process $X_v$, $v \in V(G)$ such that $$\label{eq:eigen}
\sum_{u \in N(v)} X_u = \la X_v \mbox{ for each vertex } v \in V(G) ,$$ where $N(v)$ denotes the set of neighbors of $v$ in $G$. Furthermore, the process $X_v$ can be approximated (in distribution) by linear factor of i.i.d. processes. Clearly, we can assume that these approximating linear factors have only finitely many nonzero coefficients.
An invariant Gaussian process satisfying (\[eq:eigen\]) will be called a *Gaussian wave function* with eigenvalue $\la$. If the spectrum of $G$ is not countable, then we can conclude that some of these Gaussian wave functions cannot be obtained as factor of i.i.d. processes.
\[thm:not\_closed\] Let $G$ be an infinite transitive graph such that the spectrum of the adjacency operator $A_G$ is not countable. Then there exist (linear) factor of i.i.d. processes on $G$ with the property that the weak limit of their distributions cannot be obtained as the distribution of a factor of i.i.d. process.
We can say more for Cayley graphs.
\[thm:cayley\] Suppose that $G$ is the Cayley graph of a finitely generated infinite group. Then a Gaussian wave function with eigenvalue $\lama {\stackrel{\textrm{\scriptsize def}}{=}}\sup \si(A_G)$ can never be obtained as the distribution of a factor of i.i.d. process.
In view of Theorems \[thm:gaussian\_ev\] and \[thm:cayley\] there exists a Gaussian wave function with eigenvalue $\lama$ that can be approximated by factor of i.i.d. processes but cannot be obtained as one. An independent and different proof of this result was given by Russell Lyons in the special case when $G$ is a regular tree (personal communication).
Factor of i.i.d. independent sets
---------------------------------
Let $X_v$, $v \in V(G)$ be a random process on our infinite transitive graph $G$. As in the finite setting, $I_{+} {\stackrel{\textrm{\scriptsize def}}{=}}\left\{ v \, : \, X_v > X_u, \forall u \in N(v) \right\}$ is a random independent set. If our process is invariant, then the probability that $v \in I_+$ is the same for each vertex $v$, and thus this probability can be used to measure the size of $I_+$. If our process is a factor of some i.i.d. process $Z_v$, then the resulting independent set is also a factor of $Z_v$.
In the infinite setting let $\lam$ denote the minimum of the spectrum $\si(A_G)$ and let $X_v$ be a linear factor of $Z_v$ approximating the Gaussian eigenvector with eigenvalue $\lam$ (see Theorem \[thm:gaussian\_ev\]). As the process $X_v$ converges in distribution to the Gaussian eigenvector, the probability $P( v \in I_+ )$ approaches the corresponding probability for the Gaussian eigenvector process, which, as we will see, can be computed the exact same way as in the finite case.
\[thm:inf\] Theorems \[thm:crude\], \[thm:arc\_tr\] and \[thm:3reg\_main\] give lower bounds $q$ (in terms of $\lam$) for the independence ratio of finite transitive graphs with least eigenvalue $\lam$. These bounds remain true in the following framework. Let $\lam$ denote the minimum of the spectrum of an infinite transitive graph $G$. Then for any $\eps > 0$ there exists a factor of i.i.d. independent set on $G$ such that the probability that any given vertex is in the set is at least $q-\eps$.
A special case of this infinite setting was investigated in [@csghv]. When $G$ is the $d$-regular tree $T_d$, then any factor of i.i.d. independent set on $G$ automatically gives a lower bound for the independence ratio of $d$-regular finite graphs with sufficiently large girth. In particular, for the $3$-regular tree $T_3$ one has $\lam = -2\sqrt{2}$. Therefore the infinite version of Theorem \[thm:3reg\_main\] tells us that there exists a factor of i.i.d. independent set in $T_3$ with density $$\frac{1}{2} - \frac{3}{4\pi} \arccos\left(
\frac{1+2\sqrt{2}}{4} \right) \approx 0.4298 .$$ In [@csghv] the somewhat better bound $0.4361$ was obtained, which is the current best. In fact, [@csghv] was the starting point for the work in the present paper. For previous results on the independence ratio of large-girth graphs see [@Bo_ind_set; @mckay; @shearer; @shearer2; @lauer_wormald; @cubic].
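The $\approx 0.4298$ figure follows from the formula of Theorem \[thm:3reg\_main\] evaluated at $\lam = -2\sqrt{2}$; a one-line numerical check:

```python
import numpy as np

# least spectral point of the 3-regular tree T_3
lam_min = -2 * np.sqrt(2)
density = 0.5 - 3 / (4 * np.pi) * np.arccos((1 - lam_min) / 4)
# evaluates to ~0.4298, below the 0.4361 bound of [csghv]
```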
Acknowledgments {#acknowledgments .unnumbered}
---------------
The authors are grateful to Péter Csikvári for the elegant proof of Proposition \[prop:la\_min\_3reg\], and to Gergely Ambrus, Károly Böröczky, Gábor Fejes Tóth, and Endre Makai for their remarks on Conjecture \[conj:geom\].
Finite vertex-transitive graphs {#sec:2}
===============================
Throughout this section $G$ will denote a vertex-transitive, finite graph with degree $d$ for some positive integer $d \geq 3$. The least eigenvalue of its adjacency matrix $A_G$ will be denoted by $\lam$. For now let $\la$ be an arbitrary eigenvalue of $A_G$. Eventually, we will choose $\la$ as the minimum eigenvalue. First we define what we mean by a random eigenvector.
\[def:random\_ev\] Let $E_\la$ be the eigenspace corresponding to $\la$, that is, $$E_\la {\stackrel{\textrm{\scriptsize def}}{=}}\left\{ x \in \ell_2( V(G) ) \ : \ A_G x = \la x \right\} .$$ We fix some orthonormal basis $e_1, \ldots, e_l$ in $E_\la$, and take independent standard normal random variables $\ga_1, \ldots, \ga_l$. We call $\sum_{i=1}^l \ga_i e_i$ the *random eigenvector with eigenvalue $\la$*.
The (distribution of the) random eigenvector is clearly independent of the choice of the basis $e_1, \ldots, e_l$, so it is well defined. It also follows that the distribution of the random eigenvector is $\operatorname{Aut}(G)$-invariant. (Note that in the introduction we defined the random eigenvector differently: a uniform random vector on the unit sphere of $E_\la$, which is just the normalized version of the random eigenvector of Definition \[def:random\_ev\].)
We will think of this random eigenvector as a collection of real-valued random variables $X_v$, $v \in V(G)$ with the property that they are jointly Gaussian and $\operatorname{Aut}(G)$-invariant, each $X_v$ is centered, and $$\sum_{u \in N(v)} X_u = \la X_v \mbox{ for each vertex } v,$$ where $N(v)$ denotes the set of neighbors of $v$ in $G$. Since $G$ is transitive, each $X_v$ has the same variance. After multiplying these random variables by a suitable positive constant we may assume that $\operatorname{var}(X_v) = 1$ for each vertex $v$. Next we define random independent sets by means of these random eigenvectors.
\[def:ind\_set\] Let $$\begin{aligned}
I_{+} &= I^\la_{+} {\stackrel{\textrm{\scriptsize def}}{=}}\left\{ v\in V(G) \ : \ X_v > X_u \mbox{ for each } u \in N(v) \right\} \mbox{, and}\\
I_{-} &= I^\la_{-} {\stackrel{\textrm{\scriptsize def}}{=}}\left\{ v\in V(G) \ : \ X_v < X_u \mbox{ for each } u \in N(v) \right\} .\end{aligned}$$ Clearly, $I_{+}$ and $I_{-}$ are disjoint (random) independent sets in $G$.
The $\operatorname{Aut}(G)$-invariance implies that the probability of the event $v \in I_{+}$ is the same for all vertices $v$. So from now on, we will focus on a fixed vertex $v$ (that we will call the root) and its neighbors $u_1, \ldots, u_d$. For $X_v$ and $X_{u_i}$ we will simply write $X$ and $Y_i$, respectively. Therefore we have $$\label{eq:ev}
\sum_{i=1}^d Y_i = \la X .$$ Let us denote the covariance $\operatorname{cov}(Y_i,Y_j)$ by $c_{i,j}$. It follows from (\[eq:ev\]) that $$\label{eq:cov_sum}
\la^2 = \operatorname{cov}(\la X, \la X) = \sum_{i,j} c_{i,j} =
d + 2 \sum_{i<j} c_{i,j} \mbox{, thus }
\sum_{i<j} c_{i,j} = \frac{\la^2 - d}{2} .$$ Setting $ U_i {\stackrel{\textrm{\scriptsize def}}{=}}X - Y_i$ we have $$P(v \in I_{+}) = P( U_i > 0, 1 \leq i \leq d ) .$$ As we will see, this probability can be expressed as the volume of a certain spherical simplex.
Let $S^{d-1}$ denote the unit sphere in $\IR^d$. A half-space is said to be *homogeneous* if the defining hyperplane (i.e., the boundary of the half-space) passes through the origin. A vector $n$ orthogonal to the defining hyperplane and “pointing outward” is called an *outer normal vector*. Then the given (open) half-space consists of those $x \in \IR^d$ for which the inner product $n \cdot x$ is negative.
A $d-1$-dimensional *spherical simplex* is the intersection of $S^{d-1}$ and $d$ homogeneous half-spaces in $\IR^d$. Up to congruence, a spherical simplex is determined by the $\binom{d}{2}$ pairwise angles enclosed by the outer normal vectors of the $d$ half-spaces. If these $\binom{d}{2}$ angles are all equal, then we say that the spherical simplex is *regular*.
Since $Y_1, \ldots, Y_d$ are centered and jointly Gaussian, they can be written as the linear combinations of independent standard normal variables: there exist independent standard Gaussians $Z_1, \ldots, Z_d$ and (deterministic) vectors $y_1, \ldots, y_d \in \IR^d$ such that $Y_i$ is the inner product of $y_i$ and $Z=(Z_1,\ldots,Z_d)$. Setting $x = (y_1 + \cdots + y_d)/\la$ and $u_i = x - y_i$ we have $$Y_i = y_i \cdot Z; X = x \cdot Z; U_i = u_i \cdot Z .$$ It is easy to see that for any deterministic vectors $a,b \in \IR^d$ the covariance $\operatorname{cov}(a \cdot Z, b \cdot Z)$ is equal to the inner product $a \cdot b$. In particular, $$\label{eq:cov_inner_pr}
x \cdot x = \operatorname{var}(X) = 1; \ y_i \cdot y_j = \operatorname{cov}(Y_i, Y_j) = c_{i,j}; \
u_i \cdot u_j = \operatorname{cov}(U_i, U_j) .$$ In this formulation the event $U_i > 0$ is that the random point $Z$ lies in the homogeneous open half-space with outer normal vector $-u_i$. So the probability in question is equal to the measure of the intersection of the homogeneous half-spaces with outer normal vectors $-u_i$ with respect to the standard multivariate Gaussian measure on $\IR^d$. This is simply the volume of the corresponding $d-1$-dimensional spherical simplex divided by the volume $\operatorname{vol}(S^{d-1})$ of the unit sphere $S^{d-1}$, which is determined by the pairwise angles $$\label{eq:phi}
\varphi_{i,j} {\stackrel{\textrm{\scriptsize def}}{=}}\angle(u_i,u_j) =
\arccos\left( \frac{u_i \cdot u_j}{\| u_i \| \| u_j \|} \right) ,$$ which, in turn, can be expressed using the inner products $y_i \cdot y_j = c_{i,j}$. The probability $P(v \in I_{+})$ seems to be the smallest when $G$ has a lot of symmetry. To make this more precise, we first define what we mean by a “lot of symmetry”.
We say that $G$ is *cherry-transitive* if any cherry (path of length $2$) in $G$ can be mapped to any other cherry using a suitable graph automorphism of $G$.
\[prop:ch\_tr\] If $G$ is cherry-transitive, then $$c_{i,j} = \frac{\la^2 - d}{d(d-1)} \mbox{ for all } i \neq j ,$$ and, consequently, the pairwise angles $\varphi_{i,j}$ are all equal to $$\label{eq:angle}
\arccos\left( \frac{d-2-\la}{2(d-1)} \right) .$$
If $G$ is cherry-transitive, then for any $i_1 \neq j_1$ and $i_2 \neq j_2$ there exists an automorphism $\Phi \in \operatorname{Aut}(G)$ such that $\Phi$ fixes the root $v$ and takes the unordered pair $u_{i_1}, u_{j_1}$ to $u_{i_2}, u_{j_2}$, that is, $$\Phi v = v, \ \Phi u_{i_1} = u_{i_2}, \ \Phi u_{j_1} = u_{j_2} \mbox{ or }
\Phi v = v, \ \Phi u_{i_1} = u_{j_2}, \ \Phi u_{j_1} = u_{i_2} .$$ Together with the $\operatorname{Aut}(G)$-invariance of the random eigenvector this implies that $c_{i_1,j_1} = c_{i_2,j_2}$. Since this holds for any two pairs of indices, it follows that all $c_{i,j}$, $i \neq j$ are the same. Using \[eq:cov\_inner\_pr\] we conclude that for $i \neq j$ $$c_{i,j} = \frac{\la^2 - d}{d(d-1)} .$$ Then an easy calculation shows (using notations introduced earlier) that $$u_i \cdot u_j = \frac{(d-\la)(d-2-\la)}{d(d-1)} \mbox{ and }
\| u_i \|^2 = \| u_j \|^2 = \frac{2(d-\la)}{d} .$$ Plugging this into \[eq:phi\] gives $$\varphi_{i,j} = \arccos\left( \frac{d-2-\la}{2(d-1)} \right) .$$
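The angle formula is easy to probe numerically. The following sketch (illustration only, not part of the proof) recomputes $\cos \varphi_{i,j}$ directly from the inner products $y_i \cdot y_j = c_{i,j}$, using only the relations $x = (y_1 + \cdots + y_d)/\la$ and $u_i = x - y_i$, and compares it with the closed form:

```python
import math

# Illustration only: recompute the angle phi_{i,j} of Proposition [prop:ch_tr]
# numerically, using nothing but the inner products y_i . y_j = c_{i,j}.
def cos_phi(d, la):
    c = (la ** 2 - d) / (d * (d - 1))           # c_{i,j} for i != j
    y = lambda i, j: 1.0 if i == j else c       # y_i . y_j
    xy = lambda i: sum(y(i, j) for j in range(d)) / la  # x . y_i, since x = (sum y_j)/la
    xx = sum(xy(i) for i in range(d)) / la      # x . x, equals 1
    u = lambda i, j: xx - xy(i) - xy(j) + y(i, j)       # u_i . u_j with u_i = x - y_i
    return u(0, 1) / math.sqrt(u(0, 0) * u(1, 1))

# closed-form value cos(phi_{i,j}) = (d-2-la)/(2(d-1))
claimed = lambda d, la: (d - 2 - la) / (2 * (d - 1))
```

For instance, both expressions give $3/4$ when $d=3$, $\la=-2$.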
We are now in a position to define the functions $q_d(\la)$.
\[def:qd\] For $-d \leq \la \leq d$ let $q_d(\la)$ denote the volume of the $(d-1)$-dimensional regular spherical simplex corresponding to the angle \[eq:angle\] divided by $\operatorname{vol}(S^{d-1})$. Then $P(v \in I_{+}) = q_d(\la)$ for any cherry-transitive $G$. In particular, the independence ratio of any cherry-transitive graph $G$ is at least $q_d(\lam)$.
So $P(v \in I_{+}) = q_d(\la)$ provided that $G$ has enough symmetry. The following conjecture says that in the general (i.e., vertex-transitive) case the probability should be larger than that.
\[conj:q\_d\] For any transitive graph $G$ it holds that $$P(v \in I_{+}) \geq q_d(\la)$$ for any $\la$, or at least for sufficiently small $\la$: $\la \leq \la_0$ for some $\la_0$.
This would, of course, imply that the independence ratio of $G$ is at least $q_d(\lam)$ provided that $\lam \leq \la_0$.
We will prove this conjecture for $d=3$ and $\la_0=-2$ in Section \[sec:3reg\]. The conjecture might be true for arbitrary $\la$, but proving it for $\la \leq \la_0 = -2$ will be sufficient for our purposes, because $\lam \leq -2$ for any $3$-regular transitive graph except $K_4$.
A few properties of the functions $q_d(\la)$ are collected in the next proposition.
\[prop:q\_d\_prop\] For any $d \geq 3$, $q_d$ is a monotone decreasing continuous function on $[-d,-1]$ with $$q_d(-d) = \frac{1}{2} \mbox{ and } q_d(-1) = \frac{1}{d+1} .$$ As for the behavior of $q_d$ around $-d$ we have $$q_d(\la) \geq \frac{1}{2} - \frac{ \pi \operatorname{vol}(S^{d-2}) }{ 4 \operatorname{vol}(S^{d-1}) }
\sqrt{\frac{\la+d}{d}} \geq \frac{1}{2} - \frac{1}{3} \sqrt{\la+d} .$$
Monotonicity and continuity follow readily from the definition of $q_d$.
For $\la = -d$ the angles $\varphi_{i,j}$ are $0$, so the corresponding (degenerate) spherical simplex is a hemisphere, thus $q_d(-d) = 1/2$ as claimed.
For $\la = -1$ the angles $\varphi_{i,j}$ are $\pi/3$. It is not hard to see that the vertices of our spherical simplex in that case will be the $d$ vertices of a face of a regular (Euclidean) simplex in $\IR^d$. Then each of the $d+1$ spherical simplices belonging to the $d+1$ faces has volume $\operatorname{vol}(S^{d-1})/(d+1)$. (We could also argue that for $G = K_{d+1}$ and $\la = -1$ we have $P( v \in I_{+} ) = 1/(d+1)$, and since $K_{d+1}$ is cherry-transitive, $P( v \in I_{+} ) = q_d(-1)$.)
See Section \[sec:near\_neg\_d\] for a proof of the claimed behavior around $-d$.
The $3$-regular, vertex-transitive case {#sec:3reg}
---------------------------------------
Now we turn to the proof of Theorem \[thm:3reg\_main\] that gives a lower bound for the independence ratio of $3$-regular transitive graphs. We will basically show that Conjecture \[conj:q\_d\] is true when $d=3$ and $\la_0 = -2$.
For $d=3$ the surface area of the unit sphere $S^{d-1} = S^2$ is $4\pi$ and the area of a spherical triangle is $\al + \be + \ga - \pi$, where $\al, \be, \ga$ are the angles enclosed by the sides of the spherical triangle. As we have seen, the probability $P(v \in I_{+})$ equals the area of a certain spherical triangle divided by $4 \pi$. The angles of the spherical triangle in question are $\pi - \varphi_{1,2}$, $\pi - \varphi_{1,3}$ and $\pi - \varphi_{2,3}$. Therefore $$\begin{gathered}
\label{eq:prob_3reg}
P(v \in I_{+}) = \frac{1}{4\pi} \left( \sum_{1\leq i<j \leq 3} (\pi - \varphi_{i,j} ) - \pi \right) =
\frac{1}{4\pi} \left(\frac{\pi}{2} + \sum_{1\leq i<j \leq 3} (\frac{\pi}{2} - \varphi_{i,j} ) \right) = \\
\frac{1}{4\pi} \left( \frac{\pi}{2} + \sum_{1\leq i<j \leq 3}
\arcsin\left( \frac{u_i \cdot u_j}{\| u_i \| \| u_j \|} \right) \right) .\end{gathered}$$ By Proposition \[prop:ch\_tr\] we have $c_{i,j} = (\la^2 - 3)/6$ and $\varphi_{i,j} = \arccos( (1-\la)/4 )$ in the cherry-transitive case, thus $$\label{eq:q_3}
q_3(\la) = \frac{1}{8} + \frac{3}{4 \pi} \arcsin \left( \frac{1-\la}{4} \right)
= \frac{1}{2} - \frac{3}{4 \pi} \arccos \left( \frac{1-\la}{4} \right) .$$
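As a quick sanity check (illustration only), the two displayed forms of $q_3$ agree numerically and give the boundary values $q_3(-3) = 1/2$ and $q_3(-1) = 1/4$ stated in Proposition \[prop:q\_d\_prop\]:

```python
import math

# Illustration only: the arcsin and arccos forms of q_3 agree on [-3, -1],
# and the boundary values match Proposition [prop:q_d_prop] for d = 3.
def q3_arcsin(la):
    return 1 / 8 + (3 / (4 * math.pi)) * math.asin((1 - la) / 4)

def q3_arccos(la):
    return 1 / 2 - (3 / (4 * math.pi)) * math.acos((1 - la) / 4)

samples = [-3 + 2 * k / 100 for k in range(101)]    # grid on [-3, -1]
max_diff = max(abs(q3_arcsin(t) - q3_arccos(t)) for t in samples)
```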
The statement of the theorem is true for the complete graph $K_4$ as the independence ratio is $1/4$ and the minimum eigenvalue is $-1$ in that case. For any other $3$-regular transitive graph $G$ we have $\lam \leq -2$. (See Proposition \[prop:la\_min\_3reg\] in the Appendix.) Therefore it suffices to prove that $P(v \in I_{+}) \geq q_3(\la)$, whenever $\la \leq -2$.
Recall that $Y_1, Y_2, Y_3$ are standard Gaussians with pairwise covariances $c_{i,j}$. Therefore the matrix $$\begin{pmatrix}
1 & c_{1,2} & c_{1,3}\\
c_{1,2} & 1 & c_{2,3} \\
c_{1,3} & c_{2,3} & 1
\end{pmatrix}$$ is positive semidefinite. In particular, its determinant is nonnegative: $$1 + 2 c_{1,2} c_{1,3} c_{2,3} - c_{1,2}^2 - c_{1,3}^2 - c_{2,3}^2 \geq 0 .$$ Furthermore, according to \[eq:cov\_inner\_pr\] we have $c_{1,2}+c_{1,3}+c_{2,3} = (\la^2-3)/2 \geq 1/2$, because $\la \leq -2$. It follows that each $c_{i,j}$ must be between $-1/2$ and $1$.
Indeed, let $x,y,z$ be real numbers between $-1$ and $1$ with $x+y+z \geq 1/2$ and $1+2xyz-x^2-y^2-z^2 \geq 0$. Assume that $z < -1/2$. Then $$\begin{gathered}
0 \leq 1+2xyz-x^2-y^2-z^2 = 1 + 2(z+1)xy - (x+y)^2 - z^2 \leq \\
1 + 2(z+1)\left( \frac{x+y}{2} \right)^2 - (x+y)^2 - z^2 =
1 + \frac{z-1}{2} (x+y)^2 - z^2 \leq
1 + \frac{z-1}{2} \left(\frac{1}{2}-z\right)^2 - z^2 < 0 ,\end{gathered}$$ a contradiction. Therefore $z \geq -1/2$. Similarly, $x,y \geq -1/2$, too.
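The conclusion can also be confirmed by brute force. The sketch below (illustration only) scans a grid of triples $(x,y,z) \in [-1,1]^3$ and verifies that no feasible triple (sum at least $1/2$, nonnegative determinant) has an entry below $-1/2$:

```python
import itertools

# Illustration only: grid search confirming that x+y+z >= 1/2 together with
# 1 + 2xyz - x^2 - y^2 - z^2 >= 0 forces min(x, y, z) >= -1/2.
def feasible(x, y, z):
    return x + y + z >= 0.5 and 1 + 2 * x * y * z - x * x - y * y - z * z >= 0

grid = [i / 25 for i in range(-25, 26)]             # step 0.04 in [-1, 1]
violations = [t for t in itertools.product(grid, repeat=3)
              if feasible(*t) and min(t) < -0.5]
```

The boundary value $-1/2$ is attained, e.g. by $(1/2, 1/2, -1/2)$, which has determinant exactly $0$.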
Next we bound $u_i \cdot u_j/(\| u_i \| \| u_j \|)$ from below. Using \[eq:cov\_inner\_pr\], $x = (y_1+y_2+y_3)/\la$ and $c_{1,2}+c_{1,3}+c_{2,3} = (\la^2-3)/2$: $$\begin{aligned}
x \cdot y_1 &= \frac{1}{\la}( 1 + c_{1,2} + c_{1,3} ) =
\frac{1}{\la} \left( 1 + \frac{\la^2-3}{2} - c_{2,3} \right) =
\frac{\la}{2} - \frac{1}{2\la}-\frac{1}{\la} c_{2,3}, \\
\|u_1\|^2 &= \| x - y_1 \|^2 = 2 - 2 x \cdot y_1 =
2 - \la + \frac{1}{\la} + \frac{2}{\la} c_{2,3} .\end{aligned}$$ Similar formulas hold for $x \cdot y_i$ and $\|u_i\|$, $i=2,3$. By the inequality of arithmetic and geometric means it follows that $$\| u_1 \| \| u_2 \| \leq \frac{\|u_1\|^2 + \|u_2\|^2}{2} =
2 - \la + \frac{1}{\la} + \frac{1}{\la}( c_{1,3} + c_{2,3} ) =
\frac{-1}{\la} \left( \frac{1}{2} - 2\la + \frac{\la^2}{2} + c_{1,2} \right) .$$ Note that this holds with equality when all $c_{i,j}$ are equal. Furthermore, $$\begin{gathered}
u_1 \cdot u_2 = (x-y_1) \cdot (x-y_2) = 1 + c_{1,2} - x \cdot (y_1+y_2) =
1 + c_{1,2} + x \cdot (y_3 - \la x) = \\
1 + c_{1,2} + \left( \frac{\la}{2} - \frac{1}{2\la}-\frac{1}{\la} c_{1,2} \right) - \la =
\frac{-1}{\la} \left( \frac{1}{2} - \la + \frac{\la^2}{2} + (1-\la) c_{1,2} \right) .\end{gathered}$$ It follows that $$\frac{u_1 \cdot u_2}{\|u_1\| \|u_2\|} \geq
\frac{ \frac{1}{2} - \la + \frac{\la^2}{2} + (1-\la) c_{1,2} }
{ \frac{1}{2} - 2\la + \frac{\la^2}{2} + c_{1,2} } ,$$ because the numerator is positive (note that $-3 \leq \la \leq -2$ and $c_{1,2} \geq -1/2$). The analogous inequality holds for any other pair of indices $i,j$. Since $\arcsin$ is a monotone increasing function, \[eq:prob\_3reg\] yields that $$P(v \in I_{+}) \geq
\frac{1}{8} + \frac{1}{4\pi} \sum_{1\leq i < j \leq 3}
\arcsin\left( \frac{ \frac{1}{2} - \la + \frac{\la^2}{2} + (1-\la) c_{i,j} }
{ \frac{1}{2} - 2\la + \frac{\la^2}{2} + c_{i,j} } \right) .$$ Setting $$f(t) {\stackrel{\textrm{\scriptsize def}}{=}}\arcsin\left( \frac{ \frac{1}{2} - \la + \frac{\la^2}{2} + (1-\la) t }
{ \frac{1}{2} - 2\la + \frac{\la^2}{2} + t } \right) ,$$ we have $$\label{eq:proved_ineq}
P(v \in I_{+}) \geq
\frac{1}{8} + \frac{1}{4\pi} \sum_{1\leq i < j \leq 3} f( c_{i,j} ) .$$ On the other hand, $$\label{eq:q3_with_f}
q_3(\la) = \frac{1}{8} + \frac{3}{4\pi} f\left( \frac{\la^2-3}{6} \right) ,$$ which follows from \[eq:q\_3\] and the definition of $f$. (It also follows from the fact that when each $c_{i,j}$ is equal to $(\la^2-3)/6$, then \[eq:proved\_ineq\] should hold with equality.) In view of \[eq:proved\_ineq\] and \[eq:q3\_with\_f\] we need to show that $$\label{eq:ineq_f}
\frac{1}{3} \sum_{1\leq i < j \leq 3} f( c_{i,j} )
\geq f\left( \frac{\la^2-3}{6} \right) ,$$ where each $c_{i,j}$ is between $-1/2$ and $1$ and their average is $(\la^2-3)/6$. This, of course, would follow from the convexity of $f$. Unfortunately, $f$ is not convex on the entire interval $[-1/2,1]$. We claim, however, that the tangent line to $f$ at $t_0 = (\la^2-3)/6$ is below $f$ on the entire interval $[-1/2,1]$, which still implies \[eq:ineq\_f\]. The rather technical proof of this claim can be found in the Appendix (Lemma \[lem:tangent\_line\]).
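For a concrete value of $\la$ the tangent-line claim is easy to probe numerically. The sketch below (illustration only, with $\la = -2$ and a numerical derivative) samples the gap between $f$ and its tangent at $t_0$ over $[-1/2, 1]$:

```python
import math

# Illustration only: for la = -2, sample f minus its tangent line at
# t0 = (la^2 - 3)/6 over [-1/2, 1]; the tangent-line claim says this is >= 0.
la = -2.0

def f(t):
    num = 0.5 - la + la * la / 2 + (1 - la) * t
    den = 0.5 - 2 * la + la * la / 2 + t
    return math.asin(max(-1.0, min(1.0, num / den)))   # clamp against rounding

t0 = (la * la - 3) / 6
h = 1e-6
slope = (f(t0 + h) - f(t0 - h)) / (2 * h)              # numerical f'(t0)
tangent = lambda t: f(t0) + slope * (t - t0)

ts = [-0.5 + 1.5 * k / 300 for k in range(301)]
min_gap = min(f(t) - tangent(t) for t in ts)
```

Note that $f$ reaches $\arcsin(1) = \pi/2$ at $t = 1$, so the gap grows large near the right endpoint; the delicate region is around $t_0$.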
Now let $\la = \lam \leq -2$; then $P(v \in I_{+}) \geq q_3(\lam)$. So the expected size of the random independent set $I_{+}$ is at least $q_3(\lam) |V(G)|$, thus the independence ratio of $G$ is at least $q_3(\lam)$.
To prove the second part of the statement we notice that the random independent set $I_{-}$ (see Definition \[def:ind\_set\]) has the same expected size. Indeed, if we replace $X_v$, $v \in V(G)$ with $X'_v = - X_v$, then $X'_v$, $v \in V(G)$ have the same joint distribution and the roles of $I_{+}$ and $I_{-}$ interchange. Since $I_{+}$ and $I_{-}$ are always disjoint, the expected size of their union $I_{+} \cup I_{-}$ is at least $2 q_3(\lam) |V(G)|$. Consequently, there must exist disjoint independent sets $I_1,I_2$ in $G$ with $|I_1 \cup I_2| \geq 2 q_3(\lam) |V(G)|$.
For graphs with very large odd-girth Theorem \[thm:3reg\_other\] of the Appendix gives a slightly better bound. The proof is based on the same random eigenvector but uses a different method to find large independent sets.
The arc-transitive case {#subsec:edge_tr}
-----------------------
The following innocent-looking, and very plausible, conjecture is open in dimension $n \geq 4$.
\[conj:geom\] Let $S$ be a sphere in the $n$-dimensional Euclidean space $\IR^n$. We have $n+1$ spherical caps with the same given radius on $S$. We want to find the configuration for which the volume of the union of the caps is maximal. It is conjectured that this optimal configuration is always the one where the $n+1$ centers are the vertices of a regular simplex in $\IR^n$.
The statement of the conjecture is trivial for $n=2$, while the $n=3$ case follows from the so-called Moment Theorem of L. Fejes Tóth [@fejes_toth Theorem 2].
In what follows we will explain how the case $n=d-1$ of the above conjecture implies that $P( v \in I_{+} ) \geq q_d(\la)$ holds for every $d$-regular arc-transitive graph $G$, and consequently the independence ratio of $G$ is at least $q_d(\lam)$. In particular, the $d=4$ case follows from the $n=3$ case of the conjecture, which is known to be true; see Theorem \[thm:arc\_tr\]. Using our previous notations, $P( v \in I_{+} )$ is the volume of the spherical simplex $T$ determined by the half-spaces with outer normal vectors $-u_i$, $i=1, \ldots, d$, while $q_d(\la)$ is the volume of the same simplex in the case when all the angles $\varphi_{i,j} = \angle(u_i,u_j)$, $i \neq j$ are the same. In other words, we need to show that the volume of the spherical simplex $T$ is minimal when the angles $\angle(u_i,u_j)$ are the same.
If $G$ is arc-transitive, then the covariances $\operatorname{cov}(X,Y_i) = x \cdot y_i$ are all equal. Since $$x \cdot y_1 + \cdots + x \cdot y_d = x \cdot (y_1+ \cdots + y_d) = x \cdot (\la x) = \la ,$$ we get that $x \cdot y_i = \la/d$ for each $i$. It follows that the angle enclosed by $x$ and $u_i$ satisfies $$\label{eq:delta}
\angle(x,u_i) = \de {\stackrel{\textrm{\scriptsize def}}{=}}\frac{ \pi - \arccos( \la / d) }{2} =
\frac{ \arccos( - \la/d ) }{2} \mbox{ for each } i.$$ Now let $S_l$ be the set of points on $S^{d-1}$ at some fixed distance $l$ from $x$; thus $S_l$ is a $(d-2)$-dimensional sphere for any $l$. The intersection of $S_l$ and the half-space with outer normal vector $u_i$ is a spherical cap of radius depending only on $l$ and $\la$. So the intersection of $S_l$ and our spherical simplex $T$ can be obtained by removing $d$ spherical caps of the same given radius from $S_l$. If Conjecture \[conj:geom\] is true for $n=d-1$, then the total volume of the removed area is maximal for the “regular configuration” when each $\angle(u_i,u_j)$ is the same. Therefore the $(d-2)$-dimensional volume of $T \cap S_l$ is minimal for the regular configuration for any $l$. It follows that the $(d-1)$-dimensional volume of $T$ is also minimal for the regular configuration, and this is what we wanted to prove.
Bounds near $-d$ {#sec:near_neg_d}
----------------
Even if Conjecture \[conj:geom\] is not assumed to be true, the above observations yield a lower bound for the independence ratio of $d$-regular arc-transitive graphs in the case when the least eigenvalue is close to $-d$. As we have seen in \[eq:delta\], $\angle(x,u_i) = \de$ for each $i$, which means that each point of $S^{d-1}$ at (spherical) distance less than $\pi/2 - \de$ from $x$ is contained in our spherical simplex $T$. These points form a spherical cap with center $x$ and radius $\pi/2 - \de$. (In fact, this spherical cap is the “inscribed ball” of $T$.) Using \[eq:delta\] and the inequality $\arccos(t) \leq (\pi/2) \sqrt{1-t}$ for any $t \in [0,1]$, we get $$\de = \frac{ \arccos( - \la/d ) }{2} \leq \frac{\pi}{4} \sqrt{1+\la/d}$$ provided that $\la \leq 0$.
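The elementary inequality $\arccos(t) \leq (\pi/2)\sqrt{1-t}$ used here (and again below) can be checked numerically (illustration only):

```python
import math

# Illustration only: arccos(t) <= (pi/2) * sqrt(1 - t) on [0, 1], with
# equality exactly at the endpoints t = 0 and t = 1.
def gap(t):
    return (math.pi / 2) * math.sqrt(1 - t) - math.acos(t)

ts = [k / 1000 for k in range(1001)]
min_gap = min(gap(t) for t in ts)
```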
This spherical cap can be obtained by taking the hemisphere (around $x$) and removing a strip of “width” $\de$ (in spherical distance). The volume of this strip is clearly at most $\de \operatorname{vol}(S^{d-2})$, therefore the volume of the spherical cap is at least $ \operatorname{vol}(S^{d-1})/2 - \de \operatorname{vol}(S^{d-2})$, whence $$P( v \in I_{+} ) \geq \frac{ \operatorname{vol}(S^{d-1})/2 - \de \operatorname{vol}(S^{d-2}) }{ \operatorname{vol}(S^{d-1}) } =
\frac{1}{2} - \frac{ \pi \operatorname{vol}(S^{d-2}) }{ 4 \operatorname{vol}(S^{d-1}) } \sqrt{\frac{\la+d}{d}} .$$ For $d=4$ we have $\operatorname{vol}(S^{2}) / \operatorname{vol}(S^{3}) = (4 \pi) / (2 \pi^2) = 2/\pi$, so the bound is $$\frac{1}{2} - \frac{1}{4} \sqrt{\la+4} .$$ For general $d$, we use the estimate $\operatorname{vol}(S^{d-2}) / \operatorname{vol}(S^{d-1}) \leq \sqrt{d}/\sqrt{2\pi}$ (see Lemma \[lem:vol\_ratio\] of the Appendix) to obtain the following bound $$\frac{1}{2} - \frac{\sqrt{\pi}}{4\sqrt{2}} \sqrt{\la+d} >
\frac{1}{2} - \frac{1}{3} \sqrt{\la+d} .$$ These are lower bounds for the probability $P( v \in I_{+} )$, in particular, for $q_d(\la)$. Thus the first part of Theorem \[thm:arc\_tr\] follows, as well as the estimate for $q_4(\la)$ and the last statement of Proposition \[prop:q\_d\_prop\].
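The volume-ratio estimate $\operatorname{vol}(S^{d-2})/\operatorname{vol}(S^{d-1}) \leq \sqrt{d/(2\pi)}$ of Lemma \[lem:vol\_ratio\] and the $d=4$ value $2/\pi$ can be checked numerically via $\operatorname{vol}(S^{n-1}) = 2\pi^{n/2}/\Gamma(n/2)$ (illustration only):

```python
import math

# Illustration only: vol(S^{n-1}) = 2 * pi^(n/2) / Gamma(n/2); check the
# ratio bound vol(S^{d-2})/vol(S^{d-1}) <= sqrt(d/(2*pi)) for small d.
def vol_sphere(dim):
    # surface volume of the dim-dimensional unit sphere S^dim in R^(dim+1)
    n = dim + 1
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

ratios = {d: vol_sphere(d - 2) / vol_sphere(d - 1) for d in range(3, 51)}
bound_ok = all(r <= math.sqrt(d / (2 * math.pi)) for d, r in ratios.items())
```

For $d=3$ the ratio is $2\pi/(4\pi) = 1/2$, and for $d=4$ it is $4\pi/(2\pi^2) = 2/\pi$, as used above.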
We can even say something in the general (vertex-transitive) case. Using $ x \cdot y_1 + \cdots + x \cdot y_d = \la $ and $x \cdot y_j \geq -1$: $$x \cdot y_i \leq \la + d - 1 \mbox{ for each } 1 \leq i \leq d .$$ Therefore the angle $\angle(x,y_i)$ is at least $\arccos( \la + d - 1 )$. Using that $\arccos(t) \leq \pi/2 \sqrt{1-t}$ for any $t \in [0,1]$, it follows that $$\angle(x,u_i) \leq \de' {\stackrel{\textrm{\scriptsize def}}{=}}\frac{ \pi - \arccos( \la + d - 1 ) }{2} =
\frac{ \arccos( 1 - \la - d ) }{2} \leq \frac{\pi}{4} \sqrt{\la+d}$$ provided that $\la \leq -d+1$. This means that our spherical simplex $T$ contains the spherical cap with center $x$ and radius $\pi/2 - \de'$. Therefore $$P( v \in I_{+} ) \geq \frac{ \operatorname{vol}(S^{d-1})/2 - \de' \operatorname{vol}(S^{d-2}) }{ \operatorname{vol}(S^{d-1}) } =
\frac{1}{2} - \frac{ \pi \operatorname{vol}(S^{d-2}) }{ 4 \operatorname{vol}(S^{d-1}) } \sqrt{\la+d}
\geq \frac{1}{2} - \frac{\sqrt{\pi}}{4\sqrt{2}} \sqrt{d(\la+d)} .$$ Since $\sqrt{\pi}/(4\sqrt{2}) < 1/3$, Theorem \[thm:crude\] follows.
Infinite transitive graphs {#sec:3}
==========================
Random wave functions
---------------------
Our goal now is to generalize the random eigenvectors we introduced in Section \[sec:2\] for infinite transitive graphs $G$. For an infinite graph $G$ the adjacency operator $A_G: \ell_2( V(G) ) \to \ell_2( V(G) )$ might not have any eigenvectors (i.e., the point spectrum might be empty). So the approach we used in the finite setting will not work here. Instead, we will define random wave functions as limits of linear factor of i.i.d. processes. The coefficients of these linear factors will be approximate eigenvectors of $A_G$ that are invariant under automorphisms fixing some root $x \in V(G)$. We start by proving that such approximate eigenvectors exist for any $\la$ in the spectrum $\si(A_G)$. Let $\operatorname{Stab}_x(G)$ denote the *stabilizer subgroup*, that is, the group of automorphisms fixing $x$.
\[thm:inv\_appr\_ev\] Let $G$ be an infinite vertex-transitive graph with adjacency operator $A_G$ and with some fixed root $x$. Then for any $\eps >0$ and any $\la$ in the spectrum $\si(A_G)$ there exists a $\operatorname{Stab}_x(G)$-invariant vector $\al \in \ell_2( V(G) )$ such that $$\| \al \| =1 \mbox{ and } \|A_G \al - \la \al \| \leq \eps .$$
Consider the projection-valued measure $P_\la$ corresponding to the self-adjoint operator $A_G$. This “measure” assigns an orthogonal projection $P_S$ to each Borel set $S \subseteq \IR$. According to spectral theory, one can integrate with respect to this measure. For instance, the following formula holds: $$A_G = \int_\IR \la \, \mathrm{d} P_\la .$$
Furthermore, the projections $P_S$ have the property that if an operator $T$ commutes with $A_G$, then it also commutes with each projection $P_S$. There is a unitary operator $U_\Phi$ corresponding to each $\Phi \in \operatorname{Aut}(G)$ (the one that permutes the coordinates of $\ell_2( V(G) )$ according to $\Phi$). Since $U_\Phi$ commutes with $A_G$, it also commutes with the projections $P_S$.
Now let $\la_0$ be an arbitrary element of the spectrum $\si(A_G)$ and set $S=[\la_0-\eps, \la_0+\eps]$. We define $\al$ as the image of the indicator function $\ind_x$ under the projection $P_S$: $$\al {\stackrel{\textrm{\scriptsize def}}{=}}P_{[\la_0-\eps, \la_0+\eps]} \ind_x .$$ Note that $\ind_x$ is a fixed point of $U_\Phi$ for any $\Phi \in \operatorname{Stab}_x(G)$, therefore $$U_\Phi \al = U_\Phi P_{[\la_0-\eps, \la_0+\eps]} \ind_x =
P_{[\la_0-\eps, \la_0+\eps]} U_\Phi \ind_x = P_{[\la_0-\eps, \la_0+\eps]} \ind_x = \al ,$$ thus $\al$ is $\operatorname{Stab}_x(G)$-invariant. On the other hand, since $P_S P_{\IR \sm S} = 0$, we have $$A_G \al - \la_0 \al = \left( \int_{\IR} (\la-\la_0) \, \mathrm{d} P_\la \right) \al =
\left( \int_{[\la_0-\eps, \la_0+\eps]} (\la-\la_0) \, \mathrm{d} P_\la \right) \al ,$$ which clearly implies that $$\| A_G \al - \la_0 \al \| \leq \eps \| \al \| .$$ It remains to show that $\al = P_{[\la_0-\eps, \la_0+\eps]} \ind_x \neq 0 $. Assume that $P_{[\la_0-\eps, \la_0+\eps]} \ind_x = 0$. It follows that $P_{[\la_0-\eps, \la_0+\eps]} \ind_v = 0$ for every vertex $v \in V(G)$. Indeed, let $\Phi \in \operatorname{Aut}(G)$ be such that $\Phi x = v$. Then $U_\Phi \ind_x = \ind_v$ and $$P_{[\la_0-\eps, \la_0+\eps]} \ind_v = P_{[\la_0-\eps, \la_0+\eps]} U_\Phi \ind_x =
U_\Phi P_{[\la_0-\eps, \la_0+\eps]} \ind_x = 0 .$$ This holds for each vertex $v$, which clearly implies that $P_{[\la_0-\eps, \la_0+\eps]} = 0$. Then the operator $$B = \int_{\IR \sm [\la_0-\eps, \la_0+\eps]}
\frac{1}{\la-\la_0} \, \mathrm{d} P_\la$$ would be the inverse of $A_G - \la_0 I$, contradicting our assumption that $\la_0 \in \si(A_G)$.
There is a general theorem for Hilbert spaces saying that every point of the spectrum of a self-adjoint operator is an approximate eigenvalue [@sunder Corollary 4.1.3]. So the real content of the above theorem is that one can find approximate eigenvectors that are $\operatorname{Stab}_x(G)$-invariant. This invariance will be crucial for us later on, when we will use these approximate eigenvectors as coefficients to define linear factor of i.i.d. processes.
Suppose now that we have an i.i.d. process on $G$: independent standard normal random variables $Z_u$ assigned to each vertex $u$. We will consider processes $X_v$, $v \in V(G)$, where each $X_v$ is a (possibly infinite) linear combination of $Z_u$, $u \in V(G)$. We collected some obvious properties of such processes in the next proposition.
\[prop:lin\_factor\] Let $\be_{v,u}$, $v,u \in V(G)$ be real numbers and let $$\label{eq:lin}
X_v = \sum_{u \in V(G)} \be_{v,u} Z_u .$$ The infinite sum in \[eq:lin\] converges almost surely if and only if $$\label{eq:square_sum}
\sum_{u \in V(G)} \be_{v,u}^2 < \infty .$$ If \[eq:square\_sum\] is satisfied, then $X_v$ is a centered Gaussian with variance $\operatorname{var}(X_v) = \sum_{u \in V(G)} \be_{v,u}^2$.
The process $X_v$, $v \in V(G)$ is $\operatorname{Aut}(G)$-invariant if and only if $$\label{eq:inv}
\be_{v,u} = \be_{\Phi v, \Phi u} \mbox{ for all } \Phi \in \operatorname{Aut}(G) .$$
Now we are in a position to formally define linear factor of i.i.d. processes.
\[def:lin\_factor\] We say that a process $X_v$, $v \in V(G)$ is a *linear factor* of the i.i.d. process $Z_u$ if it can be written as in \[eq:lin\] for some real numbers $\be_{v,u}$, $v,u \in V(G)$ satisfying \[eq:square\_sum\] and \[eq:inv\].
\[rm:lin\_factor\] Let us fix a root $x \in V(G)$. For a linear factor the coefficients $\al_u {\stackrel{\textrm{\scriptsize def}}{=}}\be_{x,u}$ clearly determine each $\be_{v,u}$. Here $\al=(\al_u)_{u \in V(G)}$ can be any $\operatorname{Stab}_x(G)$-invariant vector in $\ell_2( V(G) )$. So there is a one-to-one correspondence between linear factor of i.i.d. processes on $G$ and $\operatorname{Stab}_x(G)$-invariant vectors $\al \in \ell_2( V(G) )$. Also, by Proposition \[prop:lin\_factor\] we have $\operatorname{var}(X_v) = \| \al \|^2$.
Recall Definition \[def:gaussian\_process\] of invariant Gaussian processes.
\[def:gaussian\_ev\] We call an invariant Gaussian process $X_v$, $v \in V(G)$ a *Gaussian wave function* with eigenvalue $\la$ if $$\sum_{u \in N(v)} X_u = \la X_v \mbox{ for each vertex } v \in V(G) ,$$ where $N(v)$ denotes the set of neighbors of $v$ in $G$.
It was shown in [@csghv] that for the $d$-regular tree $T_d$ there exists an essentially unique Gaussian wave function for each $\la \in [-d,d]$. Furthermore, this Gaussian wave function can be approximated by factor of i.i.d. processes provided that $\la$ is in the spectrum $\si(A_{T_d}) = [-2\sqrt{d-1}, 2\sqrt{d-1}]$.
In general, it is not clear for which $\la$ such Gaussian wave functions exist and whether they are unique.
For a transitive graph $G$ we call the closed set $$\widetilde{\si}(G) {\stackrel{\textrm{\scriptsize def}}{=}}\left\{ \la \, : \,
\mbox{there exists a Gaussian wave function on $G$ with eigenvalue $\la$} \right\}$$ the *Gaussian spectrum* of $G$.
Theorem \[thm:gaussian\_ev\] claims that for any $\la \in \si(A_G)$ there exists a Gaussian wave function on $G$, which can be approximated by linear factor of i.i.d. processes. Therefore $\widetilde{\si}(G) \supseteq \si(A_G)$.
We use the $\operatorname{Stab}_x(G)$-invariant approximate eigenvectors of Theorem \[thm:inv\_appr\_ev\] to define linear factor of i.i.d. processes. So for each $\eps > 0$ let $\al^\eps$ be a $\operatorname{Stab}_x(G)$-invariant vector with $\| \al^\eps \| = 1$ and $\| A_G \al^\eps - \la \al^\eps \| \leq \eps$. By Remark \[rm:lin\_factor\] for each $\al^\eps$ there is a corresponding linear factor $X_v^\eps$, $v \in V(G)$. Note that the process $$Y_v^\eps {\stackrel{\textrm{\scriptsize def}}{=}}\sum_{u \in N(v)} X_u^\eps - \la X_v^\eps$$ is also a linear factor; the corresponding coefficient vector is $\de^\eps {\stackrel{\textrm{\scriptsize def}}{=}}A_G \al^\eps - \la \al^\eps$. Therefore $X_v^\eps$ is an invariant Gaussian process with $\operatorname{var}(X_v^\eps) = \| \al^\eps \|^2 = 1$ and $$\operatorname{var}\left( \sum_{u \in N(v)} X_u^\eps - \la X_v^\eps \right) =
\operatorname{var}\left( Y_v^\eps \right) = \| \de^\eps \|^2 =
\| A_G \al^\eps - \la \al^\eps \|^2 \leq \eps^2 .$$ Since the space of invariant Gaussian processes with variance $1$ is compact, it follows that there exists a sequence $\eps_n$ converging to $0$ such that the processes $X_v^{\eps_n}$ converge in distribution. The limit process will be a nontrivial invariant Gaussian process $X_v$ that satisfies the eigenvector equation at each vertex.
Factor of i.i.d. processes
--------------------------
For a graph $G$ we defined an i.i.d. process on $G$ as independent standard normal random variables $Z_v$, $v \in V(G)$. In other words, $Z = \left( Z_v \right)_{v \in V(G)}$ is a random point in the measure space $(\Omega, \mu)$, where $\Omega$ is $\IR^{V(G)}$ with the product topology and $\mu$ is the product of standard Gaussian measures (one on each copy of $\IR$). The natural action of $\operatorname{Aut}(G)$ on $V(G)$ gives rise to an action of $\operatorname{Aut}(G)$ on $\Omega$: for $\Phi \in \operatorname{Aut}(G)$ and $\omega = \left( \omega_v \right)_{v \in V(G)} \in \Omega$ let $$\left( \Phi \cdot \omega \right)_v {\stackrel{\textrm{\scriptsize def}}{=}}\omega_{ \Phi^{-1} v } .$$ Let $G$ be an infinite transitive graph and suppose that $F$ is a measurable $\Omega \to \Omega$ function that is $\operatorname{Aut}(G)$-equivariant (i.e., commutes with the $\operatorname{Aut}(G)$-action). Then $X = F(Z)$ is an invariant process on $G$. Such a process $X = \left( X_v \right)_{v \in V(G)}$ is called a *factor* of the i.i.d. process $Z$.
An $\operatorname{Aut}(G)$-equivariant function $F \colon \Omega \to \Omega$ is determined by $f = \pi_x \circ F$, where $\pi_x \colon \Omega \to \IR$ is the projection corresponding to the coordinate of some fixed root $x$. Here $f$ can be any measurable $\operatorname{Stab}_x(G)$-invariant function $\Omega \to \IR$. So factor of i.i.d. processes can be identified with measurable, $\operatorname{Stab}_x(G)$-invariant functions $f \colon \Omega \to \IR$.
Next we will prove Theorem \[thm:not\_closed\] and Theorem \[thm:cayley\] by showing that certain Gaussian wave functions $X_v$, $v \in V(G)$ cannot be obtained as factor of i.i.d. processes. Since $X_v$ has finite variance in that case, we can restrict ourselves to functions $f \in L_2(\Omega, \mu)$. Let $\Hi \subset L_2(\Omega, \mu)$ be the subspace containing those $f \in L_2(\Omega, \mu)$ that are $\operatorname{Stab}_x(G)$-invariant. There is a natural way to define an adjacency operator $\IA$ on the Hilbert space $\Hi$. Let $$\left( \IA f \right) (\omega) {\stackrel{\textrm{\scriptsize def}}{=}}\sum_{y \in N(x)} f\left( \Phi_{y \to x} \cdot \omega \right) ,$$ where $\Phi_{y \to x}$ is an (arbitrary) automorphism of $G$ taking $y$ to $x$. Since $f$ is $\operatorname{Stab}_x(G)$-invariant, $\IA$ is well defined.
Suppose now that we have a Gaussian wave function with eigenvalue $\la$ that can be obtained as a factor of i.i.d. process. Then the corresponding $f$ satisfies the eigenvector equation $\IA f = \la f$. In particular, $\la$ needs to be in the point spectrum of $\IA$. (Note that an eigenvector $f$ of $\IA$ does not necessarily give us a Gaussian wave function: although the corresponding factor of i.i.d. process will satisfy the eigenvector equation at each vertex, $f(Z)$ might not have a Gaussian distribution.)
Since $L_2(\Omega, \mu)$ is a separable Hilbert space, so is $\Hi$, and consequently the point spectrum of $\IA : \Hi \to \Hi$ is countable.
Therefore only for countably many $\la$’s can we have a Gaussian wave function on $G$ that can be obtained as a factor of i.i.d. process. However, if $\si(A_G)$ is uncountable, then by Theorem \[thm:gaussian\_ev\] $G$ has Gaussian wave functions for uncountably many different eigenvalues $\la$; moreover, they can all be approximated by linear factor of i.i.d. processes.
We will use two basic facts about the point spectra of the adjacency operators $A_G$ and $\IA$. First, $\lama$ is never in the point spectrum $\si_p(A_G)$ (we will give a short proof for this in the Appendix, see Lemma \[lem:lama\]). Second, $\si_p(\IA) \subseteq \si_p(A_G) \cup \{d\}$ for Cayley graphs (this will be explained after the proof). Therefore $\lama$ is not in the point spectrum of $\IA$ provided that $\lama <d$, and consequently, a Gaussian wave function with eigenvalue $\lama$ cannot be obtained as a factor of i.i.d. process.
In the case $\lama = d$ the Gaussian wave function has to be constant, that is, $X_u = X_v$ for any two vertices $u,v$. However, for a factor of i.i.d. process the correlation between $X_u$ and $X_v$ should tend to $0$ as the distance of $u$ and $v$ goes to infinity.
Next we will explain the relation between the adjacency operators $A_G$ and $\IA$. This can be found in [@kechris_tsankov Section 3] in a more general setting; see also [@lyons_nazarov Theorem 2.1 and Corollary 2.2]. Let $\nu$ denote the standard Gaussian measure. Since $L_2(\IR, \nu)$ is a separable Hilbert space, it has a countable orthonormal basis: $g_0, g_1, g_2, \ldots$, where $g_0$ will be assumed to be the constant $1$ function. Let $\II$ denote the set of finitely supported $V(G) \to \{0,1,2,\ldots\}$ functions. For each $q \in \II$ we define an $\Omega \to \IR$ function: $$W_q(\omega) {\stackrel{\textrm{\scriptsize def}}{=}}\prod_{v \in V(G)} g_{q(v)} \left( \omega_v \right) .$$ Note that this is actually a finite product, since all but finitely many terms are equal to $g_0 \equiv 1$. According to [@kechris_tsankov Lemma 3.1] the functions $W_q$, $q\in \II$ form an orthonormal basis of $L_2(\Omega, \mu)$. It follows that $L_2(\Omega, \mu)$ is separable, a fact that was used in the proof of Theorem \[thm:not\_closed\].
We defined the operator $\IA$ on the space $\Hi \subset L_2(\Omega, \mu)$ containing $\operatorname{Stab}_x(G)$-invariant functions. When $G$ is a Cayley graph, there is a natural way to extend $\IA$ to an adjacency operator over the whole space $L_2(\Omega, \mu)$. Suppose that $\Ga$ is a finitely generated infinite group. Let $S$ be a finite, symmetric set of generators and let $G$ be the corresponding Cayley graph, that is, $V(G) = \Ga$ and the vertex $v \in \Ga$ is adjacent to the vertices $\ga v$, $\ga \in S$. The natural action of $\Ga$ on itself gives rise to the following $\Ga$-action on $\Omega$: $$\left( \ga \cdot \omega \right)_v {\stackrel{\textrm{\scriptsize def}}{=}}\left( \omega \right)_{\ga^{-1} v} .$$ (This is often called the *generalized Bernoulli shift*.) Then for $f \in L_2(\Omega, \mu)$ let $$\left( \IA f \right) (\omega) {\stackrel{\textrm{\scriptsize def}}{=}}\sum_{\ga \in S} f\left( \ga \cdot \omega \right) .$$ This clearly extends our earlier definition of $\IA$.
There is a natural $\Ga$-action on $\II$ as well: for $q \in \II$ $$\left( \ga \cdot q \right)(v) {\stackrel{\textrm{\scriptsize def}}{=}}q \left( \ga^{-1} v \right) .$$ It is compatible with the $\Ga$-action on $\Omega$ in the following sense: $$W_{\ga \cdot q}(\omega) = W_q \left( \ga^{-1} \cdot \omega \right) .$$ It means that $$\label{eq:Aop}
\IA W_q = \sum_{\ga \in S} W_{\ga \cdot q} .$$ We now consider the orbit $\{ \ga \cdot p \, : \, \ga \in \Ga \}$ of a given element $p \in \II$ and the closure of the space spanned by the corresponding functions $W_{\ga \cdot p}$: $$H_p {\stackrel{\textrm{\scriptsize def}}{=}}\operatorname{cl}\left( \operatorname{span}\left\{ W_{\ga \cdot p} \, : \, \ga \in \Ga \right\} \right)
\subset L_2(\Omega, \mu) .$$ It is clear from \eqref{eq:Aop} that $H_p$ is $\IA$-invariant. If $p \equiv 0$, then $H_p$ consists of the constant functions on $\Omega$ and both the point spectrum and the spectrum of $ \IA \left|_{H_p} \right.$ are equal to $\{ d \}$. Otherwise the stabilizer $\Ga_p$ of $p$ is a finite subgroup of $\Ga$, and $ \IA \left|_{H_p} \right.$ is closely related to the original adjacency operator $A_G$. Indeed, let $T_p \colon H_p \to \ell_2( V(G) ) \cong \ell_2(\Ga)$ be the operator defined by $$T_p \colon W_q \mapsto \ind_{ \{\ga \in \Ga \, : \, \ga \cdot p = q\} } ,$$ where $q$ is in the orbit of $p$. It is easy to see that $T_p$ is a bounded operator for which $T_p \IA \left|_{H_p} \right. = A_G T_p$. Since $T_p$ is also bounded below, it follows that $$\si\left( \IA \left|_{H_p} \right. \right) \subseteq \si(A_G) \mbox{ and }
\si_p\left( \IA \left|_{H_p} \right. \right) \subseteq \si_p(A_G)$$ with equality when the stabilizer $\Ga_p$ is trivial.
Therefore for Cayley graphs the operators $A_G \colon \ell_2( V(G) ) \to \ell_2( V(G) )$ and $\IA \colon L_2(\Omega, \mu) \to L_2(\Omega, \mu)$ have the same spectra and point spectra with the possible exception of the point $d$: $$\si(\IA) = \si(A_G) \cup \{d\} \mbox{ and }
\si_p(\IA) = \si_p(A_G) \cup \{d\} .$$ Consequently, $$\si_p\left( \IA \left|_{\Hi} \right. \right) \subseteq
\si_p(\IA) = \si_p(A_G) \cup \{d\} ,$$ which we used in the proof of Theorem \[thm:cayley\].
Independent sets
----------------
Let $G$ be an infinite transitive graph and $\lam$ be the minimum of its spectrum $\si(A_G)$. Consider linear factor of i.i.d. processes $X_v^n$ converging in distribution to a Gaussian wave function $X_v$ with eigenvalue $\lam$ as $n \to \infty$ as in Theorem \[thm:gaussian\_ev\]. We define the following independent sets on $G$: $$I_{+} {\stackrel{\textrm{\scriptsize def}}{=}}\left\{ v \, : \, X_v > X_u, \forall u \in N(v) \right\} \mbox{ and }
I_{+}^n {\stackrel{\textrm{\scriptsize def}}{=}}\left\{ v \, : \, X_v^n > X_u^n, \forall u \in N(v) \right\} .$$ Then for each $n$ the independent set $I_{+}^n$ is a factor of the i.i.d. process $Z_v$ (i.e., it is obtained as a measurable function of $Z_v$, $v \in V(G)$, that commutes with the natural action of $\operatorname{Aut}(G)$). Furthermore, since the event $v \in I_+$ corresponds to an open set, we have $$\liminf_{n \to \infty} P( v \in I_+^n ) \geq P( v \in I_+ ) .$$ Therefore whenever we have a lower bound $q$ for $P( v \in I_+ )$, it yields that for any $\eps>0$ there exists a factor of i.i.d. independent set with “size” greater than $q - \eps$.
Bounding $P( v \in I_+ )$, however, leads us to the same optimization problem as in the finite case. We need to estimate the volume of the same spherical simplex with the exact same constraints. (Of course, there might be a difference between the finite and infinite setting in terms of what covariances $c_{i,j}$ can actually arise, but our proofs used only the trivial constraints that they form a positive semidefinite matrix and their sum is $(\lam^2-d)/2$, which are true in the infinite case, too.) Thus we obtain the exact same bounds and Theorem \[thm:inf\] follows.
Actually, in Theorem \[thm:3reg\_main\] we proved the bound only for graphs with $\lam \leq -2$ and argued that the only finite, $3$-regular, transitive graph for which this does not hold is the complete graph $K_4$. For infinite transitive graphs $\lam \leq -2$ holds with no exception. This follows from the fact that they contain arbitrarily long paths as induced subgraphs.
Appendix {#sec:app}
========
\[thm:3reg\_other\] Suppose that $G$ is a finite, $3$-regular, vertex-transitive graph with minimum eigenvalue $\lam$ and odd-girth $g$. Then the independence ratio of $G$ is at least $$\frac{5g-3}{16g} + \frac{g+1}{2g} \frac{3}{4 \pi} \arcsin \left( \frac{\lam^2-3}{6} \right) \geq
\frac{5}{16} + \frac{3}{8 \pi} \arcsin \left( \frac{\lam^2-3}{6} \right) - \frac{3}{16g} .$$ In fact, there exist two disjoint independent sets in $G$ such that their average size divided by $|V(G)|$ is not less than the above bound.
It is easy to check the statement for $K_4$. According to Proposition \[prop:la\_min\_3reg\] $\lam \leq -2$ holds for any other finite, $3$-regular, transitive graph $G$. Let $X_v$, $v \in V(G)$ be the random eigenvector corresponding to $\lam$. Let $V_{+}$ denote the set of “positive vertices”, that is, $$V_{+} {\stackrel{\textrm{\scriptsize def}}{=}}\left\{ v\in V(G) \ : \ X_v > 0 \right\} .$$ The expected size of $V_{+}$ is $|V(G)|/2$.
Since $\lam$ is negative, a vertex and its three neighbors cannot all be positive. Therefore each vertex has degree at most two in the induced subgraph $G[V_{+}]$. Thus each connected component of this subgraph is a path or a cycle. We want to choose an independent set from each component. We can choose at least half the vertices from paths and even cycles. From an odd cycle of length $l \geq g$ we can choose $(l-1)/2$ vertices, which is at least a $(g-1)/(2g)$ proportion of all vertices in that component. (Recall that $g$ denotes the odd-girth of $G$, that is, the length of the shortest odd cycle in $G$.)
We need one more observation, namely, that many of the components actually contain only one vertex. Using our earlier notations, let $v$ be an arbitrary vertex with neighbors $u_1,u_2,u_3$, the corresponding random variables are $X$ and $Y_1,Y_2,Y_3$. Note that $Y_1<0$, $Y_2<0$ and $Y_3<0$ imply that $X > 0$. Therefore the probability $p$ that $v$ is an isolated vertex in $G[V_{+}]$ is $$\begin{gathered}
\label{eq:prob_iso}
p {\stackrel{\textrm{\scriptsize def}}{=}}P \left( X > 0; Y_1<0; Y_2<0; Y_3<0 \right) =
P \left( Y_1<0; Y_2<0; Y_3<0 \right) = \\
P \left( y_i \cdot Z < 0; i=1,2,3 \right) =
\frac{1}{2} - \frac{1}{4\pi} \sum_{1\leq i<j\leq 3} \arccos (c_{i,j}) =
\frac{1}{8} + \frac{1}{4\pi} \sum_{1\leq i<j\leq 3} \arcsin (c_{i,j}).\end{gathered}$$ Note that $\arcsin$ is a monotone increasing odd function on $[-1,1]$, which is convex on $[0,1]$. Furthermore, the average of $c_{i,j}$ is $(\lam^2-3)/6 \geq (2^2-3)/6 > 0$. It is easy to see that these imply that the right hand side of \eqref{eq:prob_iso} does not increase if we replace each $c_{i,j}$ with their average $(\lam^2-3)/6$. Thus $$\label{eq:prob_iso_bound}
p \geq \frac{1}{8} + \frac{3}{4\pi} \arcsin \left( \frac{\lam^2-3}{6} \right) .$$ Our independent set will contain all isolated vertices and at least a $(g-1)/(2g)$ proportion of all the other vertices in $V_{+}$. This yields the following lower bound for the independence ratio of $G$: $$p + \frac{g-1}{2g} \left( \frac{1}{2} - p \right) =
\frac{g-1}{4g} + \frac{g+1}{2g} p .$$ Combining this with \eqref{eq:prob_iso_bound} yields the desired bound.
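The orthant probability computed above can be checked by simulation: for a centered Gaussian vector $(Y_1, Y_2, Y_3)$ with equal pairwise correlations $c = (\lam^2-3)/6$, a Monte Carlo estimate should match $\frac{1}{8} + \frac{3}{4\pi} \arcsin(c)$. A minimal sketch (NumPy assumed; $\lam = -2$ for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = -2.0
c = (lam**2 - 3) / 6                  # common correlation, here 1/6

# Closed form for the orthant probability with equal correlations:
# P(Y1 < 0, Y2 < 0, Y3 < 0) = 1/8 + (1/4pi) * sum_{i<j} arcsin(c)
p_exact = 1 / 8 + 3 * np.arcsin(c) / (4 * np.pi)

# Monte Carlo estimate of the same probability
cov = np.full((3, 3), c)
np.fill_diagonal(cov, 1.0)
samples = rng.multivariate_normal(np.zeros(3), cov, size=400_000)
p_mc = np.mean(np.all(samples < 0, axis=1))
```

With $400{,}000$ samples the estimate agrees with the closed form to well within sampling error.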
We can choose an independent set with the same expected size from the “negative vertices”: $$V_{-} {\stackrel{\textrm{\scriptsize def}}{=}}\left\{ v\in V(G) \ : \ X_v < 0 \right\} .$$ This implies the second part of the theorem.
We mention that the proof also works in the infinite setting, so there is an analogous theorem for infinite transitive graphs (as in Theorem \[thm:inf\]).
Any non-trivial lower bound for the density of components of size $3,5, \ldots$ in $G[V_+]$ would immediately yield an improvement in the above theorem. In [@csghv] such non-trivial bounds were obtained for the $3$-regular tree $T_3$.
\[prop:la\_min\_3reg\] Suppose that $G$ is a finite, connected, $3$-regular, vertex-transitive graph. Then either $G$ is isomorphic to the complete graph $K_4$, or the least eigenvalue $\lam$ of its adjacency matrix is at most $-2$.
The proof below is due to Péter Csikvári.
Let $G$ be a connected, $3$-regular, vertex-transitive graph with $\lam(G) > -2$. We need to show that $G$ must be the complete graph $K_4$.
Cauchy’s interlacing theorem implies that $\lam(G) \leq \lam(H)$ whenever $H$ is an induced subgraph of $G$. Therefore $\lam(H) > -2$ must hold for any induced subgraph. Let $T$ denote the tree shown in Figure \[fig:gr\]. It is easy to see that the smallest eigenvalue of $T$ is $-2$. We also have $\lam(C_{2k}) = -2$ for the cycle of length $2k$ for any $k \geq 2$. Therefore $G$ can contain neither $T$, nor $C_{2k}$ as an induced subgraph.
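Both spectral facts are easy to verify numerically. A minimal Python sketch (assuming, as in Case 1 below, that $T$ is the $6$-vertex tree with two adjacent degree-$3$ vertices, each carrying two leaves):

```python
import numpy as np

# T: two adjacent centers 0-1, with leaves 2, 3 on vertex 0 and 4, 5 on vertex 1
T = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (0, 3), (1, 4), (1, 5)]:
    T[i, j] = T[j, i] = 1.0
lam_T = float(np.linalg.eigvalsh(T).min())   # smallest adjacency eigenvalue of T

def cycle_min_eig(n):
    """Smallest adjacency eigenvalue of the cycle C_n."""
    C = np.zeros((n, n))
    for i in range(n):
        C[i, (i + 1) % n] = C[(i + 1) % n, i] = 1.0
    return float(np.linalg.eigvalsh(C).min())
```

For even cycles the eigenvalues are $2\cos(2\pi j/n)$, so $j = n/2$ gives $-2$ exactly, and the computed minimum eigenvalue of $T$ is likewise $-2$.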
We will distinguish three cases.
**Case 1.** *$G$ does not contain triangles.*\
Let $u,v$ be two neighboring vertices, and let $u_1,u_2$ and $v_1,v_2$ denote the remaining two neighbors of $u$ and $v$, respectively. Since $G$ contains no triangles, $u_1$,$u_2$,$v_1$,$v_2$ are pairwise distinct vertices. The induced subgraph on the set $\{u,u_1,u_2,v,v_1,v_2\}$ must be isomorphic to $T$ (the graph shown in Figure \[fig:gr\]), otherwise $G$ would contain a triangle or an induced $C_4$. Since $G$ cannot contain $T$ as an induced subgraph, this is a contradiction.
**Case 2.** *$G$ contains triangles but no two share a common edge.*\
Since $G$ is vertex-transitive, there must be at least one triangle through every vertex. We claim that any two triangles must be disjoint. If they had two common vertices, then they would share an edge, and if they had exactly one common vertex, then that vertex would have degree at least $4$.
So we have disjoint triangles in $G$, exactly one through every vertex. We claim that there can be at most one edge between two triangles (with one endpoint in one triangle and one in the other). Indeed, otherwise we would either have an induced $C_4$ or a vertex with degree at least $4$.
Let us consider the following graph $G^\ast$. To each triangle in $G$ corresponds a vertex in $G^\ast$, and we join two such vertices with an edge if there is an edge between the corresponding triangles. It is easy to see that $G^\ast$ will be $3$-regular as well. Take a cycle in $G^\ast$ with minimum length $g \geq 3$. There is a corresponding cycle of length $2g$ in the original graph $G$. It is easy to see that this must be an induced cycle, contradiction.
**Case 3.** *$G$ contains two triangles sharing an edge.*\
Let $xy$ be an edge shared by triangles $xyu$ and $xyv$ (see the figure below).
Then $x$ and $y$ already have degree $3$, while $u$ and $v$ still need an edge. We claim that $uv$ must be an edge. Otherwise $v$ would have a neighbour $z$ different from $x,y,u$. Since $z$ cannot be adjacent to $x$ and $y$, there is only one triangle through $v$, while there are two triangles through $x$, contradicting the transitivity of $G$. So $uv$ is an edge, therefore each of $x,y,u,v$ has degree $3$. Since $G$ is connected, $G$ cannot have any other vertices and is thus isomorphic to $K_4$.
The following lemma is probably known, but we did not find an explicit reference, so we give a short proof.
\[lem:lama\] If $G$ is an infinite transitive graph, then the maximum $\lama$ of the spectrum of $A_G$ is never in the point spectrum of $A_G$.
In the case $\lama=d$, the equation $A_G f= d f$ means that the vector $f$ is harmonic. However, the maximum principle implies that there are no $\ell^2$ harmonic functions. Thus there is no eigenvector for $\lama$, which is equivalent to saying that $\lama$ is not in the point spectrum of $A_G$.
For the non-amenable case (i.e. $\lama<d$), Theorem II.7.8 in [@woess_book] implies that for any vertex $v$ $$\sum_{n=0}^\infty \lama^{-2n} \langle 1_v,A_G^{2n} 1_v \rangle <\infty ,$$ where the left hand side can be written in terms of the spectral measure $\mu_G$ as $$\sum_{n=0}^\infty \lama^{-2n} \int x^{2n} d\mu_G(x) \ge
\sum_{n=0}^\infty \lama^{-2n} \lama^{2n} \mu_G(\{\lama\}) .$$ This forces $\mu_G(\{\lama\})=0$, which means that $\lama$ is not in the point spectrum of $A_G$.
\[lem:vol\_ratio\] $$\frac{\operatorname{vol}(S^{d-2})}{\operatorname{vol}(S^{d-1})} < \frac{\sqrt{d}}{\sqrt{2\pi}} .$$
Using the formula $$\operatorname{vol}( S^{n-1} ) = \frac{2 \pi^{n/2}}{\Gamma(n/2)}$$ we need to show that $$\frac{ \Gamma\left( \frac{d-1}{2} \right) }{ \Gamma\left( \frac{d-2}{2} \right) } <
\sqrt{ \frac{d}{2} } .$$ Since $\Gamma$ is log-convex, the increments of its logarithm over intervals of length, say, $1/2$ are increasing. Thus $$\frac{\Gamma(\frac{d-1}2)}{\Gamma(\frac{d-2}2)} \leq
\frac{\Gamma(\frac{d}2)}{\Gamma(\frac{d-1}2)}$$ and multiplying both sides by the left hand side, we get $$\left(\frac{\Gamma(\frac{d-1}2)}{\Gamma(\frac{d-2}2)}\right)^2 \le
\frac{\Gamma(\frac{d}2)}{\Gamma(\frac{d-2}2)}= \frac{d-2}{2} < \frac{d}{2}$$ as required.
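As a quick numerical sanity check of the stated inequality (not part of the proof), the Gamma ratio can be evaluated over a range of $d$, using log-Gamma for stability:

```python
import math

def gamma_ratio(d):
    # Gamma((d-1)/2) / Gamma((d-2)/2), computed via log-gamma for stability
    return math.exp(math.lgamma((d - 1) / 2) - math.lgamma((d - 2) / 2))

# Check Gamma((d-1)/2) / Gamma((d-2)/2) < sqrt(d/2) for d = 3, ..., 200
ok = all(gamma_ratio(d) < math.sqrt(d / 2) for d in range(3, 201))
```

For instance at $d=4$ the ratio equals $\Gamma(3/2)/\Gamma(1) = \sqrt{\pi}/2 \approx 0.886 < \sqrt{2}$.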
\[lem:tangent\_line\] Let $\la \in [-3,-2]$ and $$f(t) {\stackrel{\textrm{\scriptsize def}}{=}}\arcsin\left( \frac{ \frac{1}{2} - \la + \frac{\la^2}{2} + (1-\la) t }
{ \frac{1}{2} - 2\la + \frac{\la^2}{2} + t } \right) .$$ Then the tangent line to $f$ at $t_0 = (\la^2-3)/6$ is below $f$ on the entire interval $[-0.5,1]$.
We need to prove that $$f(t) - f'(t_0) t$$ takes its minimum value at $t_0$ on the interval $[-0.5,1]$. This will follow from the fact that $f'(t) < f'(t_0)$ for $-0.5 \leq t < t_0$ and $f'(t) > f'(t_0)$ for $t_0 < t < 1$.
In order to make calculations easier we will use the following notations: $$a = \frac{1}{2} - \la + \frac{\la^2}{2} \geq 4.5; \hspace{4mm}
b = 1-\la \geq 3; \hspace{4mm}
c = a+b-1 \geq 6.5 ;$$ then $$f(t) = \arcsin\left( \frac{ a + b t }{ c + t } \right) .$$ It is easy to see that $ 0 < a+bt < c+t $ for $t \in [-0.5,1)$. Therefore we have $$f'(t) = \left( 1 - \left( \frac{ a + b t }{ c + t } \right)^2 \right)^{-\frac{1}{2}}
\frac{b(c+t) - (a+bt)}{(c+t)^2} =
\frac{bc - a}{(c+t) \sqrt{(c+t)^2 - (a+bt)^2}}.$$ Since $bc-a > 0$ it follows that $f'$ is positive on $[-0.5, 1)$ and thus $f$ is monotone increasing. Next we study the intervals of monotonicity of $f'$. First we note that $$(c+t)^2 - (a+bt)^2 = \left( c+a + (1+b)t \right)\left(c-a+ (1-b)t \right) .$$ Using $c-a = b-1$ we get that $$(c+t)^2 - (a+bt)^2 = (b^2-1) (t+d)(1-t) ,$$ where $$d = \frac{c+a}{b+1} = \frac{1-3\la+\la^2}{2-\la} =
1-\la - \frac{1}{2-\la} \geq \frac{11}{4} .$$ It follows that $$\frac{1}{(f'(t))^2} = \frac{b^2-1}{(bc-a)^2} (t+c)^2(t+d)(1-t) .$$ If we restrict ourselves to the interval $[-0.5,1)$ (where $f'$ is positive), then it suffices to examine the function $$g(t) = (t+c)^2(t + d)(1-t) .$$ Wherever $g$ is monotone increasing, $f'$ is monotone decreasing, and vice versa.
So we have a fourth-degree polynomial $g$ with leading coefficient $-1$, whose roots are $-c$ (with multiplicity $2$), $-d$, and $1$. Consequently, the derivative $g'$ is a third-degree polynomial with negative leading coefficient and with roots $-c$, $u$, $v$, where $-c < u < -d < v < 1$. We distinguish the following two cases. *Case 1:* $v \leq -0.5$. Then $g$ is monotone decreasing on $[-0.5, \infty)$, therefore $f'$ is monotone increasing on $[-0.5,1)$, and thus $f$ is convex on the whole interval, which clearly implies the statement of the lemma.
*Case 2:* $v > -0.5$. Since the other two roots of $g'$ are less than $-d < -0.5$, we know that $g$ is monotone increasing on $[-0.5,v]$ and monotone decreasing on $[v,1)$. We claim that $$\label{eq:g}
g\left( -\frac{1}{2} \right) > g\left( \frac{1}{6} \right) .$$ This would yield that $v<1/6$. Since $1/6 \leq t_0 = (\la^2-3)/6$, we have $g(-1/2) > g(1/6) > g( t_0 )$. This means that $g(t) > g(t_0)$ for $-0.5 \leq t < t_0$ and $g(t) < g(t_0)$ for $t_0 < t < 1$. As for $f'$, $f'(t) < f'(t_0)$ for $-0.5 \leq t < t_0$ and $f'(t) > f'(t_0)$ for $t_0 < t < 1$, and the statement of the lemma clearly follows.
It remains to show \eqref{eq:g}. Let $-1/2 = t_2 < t_1 = 1/6$. Then $t_1 - t_2 = 2/3$; $t_2+c \geq 6$ and $t_2+d \geq 9/4$, and consequently $$\begin{gathered}
\frac{ g(t_1) }{ g(t_2) } =
\frac{1-t_1}{1-t_2} \left( 1 + \frac{t_1-t_2}{t_2+c} \right)^2
\left( 1 + \frac{t_1-t_2}{t_2+d} \right) \leq \\
\frac{5/6}{3/2}
\left( 1 + \frac{2/3}{6} \right)^2
\left( 1 + \frac{2/3}{9/4} \right) =
\frac{5}{9} \left( \frac{10}{9} \right)^2 \frac{35}{27} =
\frac{17500}{19683} < 1.\end{gathered}$$
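The conclusion of the lemma can also be spot-checked numerically (a sanity check, not a proof): for sample values of $\la$, the tangent line at $t_0$ should lie below $f$ on the interval. The endpoint $\la = -3$, where $t_0 = 1$ is a boundary point with infinite slope, is excluded, and the derivative at $t_0$ is approximated by a central difference:

```python
import numpy as np

def f(t, lam):
    # f(t) = arcsin((a + b t) / (c + t)) with a, b, c as in the lemma's notation
    a = 0.5 - lam + lam**2 / 2
    b = 1 - lam
    c = a + b - 1
    return np.arcsin((a + b * t) / (c + t))

def tangent_below(lam, tol=1e-7):
    """Check that the tangent to f at t0 stays below f on [-0.5, 1)."""
    t0 = (lam**2 - 3) / 6
    h = 1e-6
    slope = (f(t0 + h, lam) - f(t0 - h, lam)) / (2 * h)   # central difference
    ts = np.linspace(-0.5, 0.999, 2001)                    # stop short of t = 1
    return bool(np.all(f(ts, lam) + tol >= f(t0, lam) + slope * (ts - t0)))

ok = all(tangent_below(lam) for lam in np.linspace(-2.95, -2.0, 10))
```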
[^1]: The first author was supported by MTA Rényi “Lendület” Groups and Graphs Research Group. The second author was supported by the Canada Research Chair and NSERC DAS programs.
---
abstract: 'Image Landmark Recognition has been one of the most sought-after classification challenges in the field of vision and perception. After many years of generic classification of buildings and monuments from images, people are now focussing upon fine-grained problems - recognizing the category of each building or monument. We propose an ensemble network for the classification of Indian landmark images. To this end, our method gives robust classification by ensembling the predictions from a Graph-Based Visual Saliency (GBVS) network along with supervised feature-based classification algorithms such as kNN and Random Forest. The final architecture is an adaptive learning of all the mentioned networks. The proposed network produces a reliable score to eliminate false category cases. Evaluation of our model was done on a new dataset, which involves challenges such as landmark clutter, variable scaling, partial occlusion, etc.'
author:
- Akash Kumar
- Sagnik Bhowmick
- 'N. Jayanthi'
- 'S. Indu'
title: Improving Landmark Recognition using Saliency detection and Feature classification
---
[^1]
Introduction
============
The United Nations Educational, Scientific and Cultural Organization (UNESCO) World Heritage Center recognizes over 1500 monuments and landmarks as World Heritage sites. Apart from these, there are over 10K monuments and landmarks spread over the globe which serve as local tourist attractions and contribute hugely to the history and culture of their locations. However, it is impossible for humans to individually recognize and classify all monuments according to history and architecture. Technologies like Computer Vision and Deep Learning play a pivotal role in overcoming this challenge.
Many Convolutional Neural Network (CNN) based deep learning frameworks prove handy in such a scenario, where classes have different features. Every landmark architecture style has features that distinguish it from other forms of architecture. These features play a pivotal role in the recognition of such landmark architectures. India, one of the most diverse countries in the world, is home to varied architectures. We propose a framework to classify these landmarks based on the era in which they were constructed. These varied architectural features make classification of Indian monuments a daunting task. Moreover, these historic buildings are useful references for architects designing contemporary architecture; thus information about the architectural styles of these monuments is necessary.
In this paper, we employ CNN to address the problem of Landmark Recognition. Our main contributions are:
1. We propose an end-to-end architecture to classify the Indian monuments in an image. Experiments show that our model surpasses the existing baseline on the dataset.
2. We employ convolutional architectures to learn the intra-class variations between different landmarks. The final prediction is an average ensemble of three networks, consisting of salient region detection, kNN and Random Forest supervised classification algorithms.
Related Work
============
Several recent papers address the problem of Landmark Recognition [@sift], [@ref_1], [@ref_2]; most of them are based on deep learning, except [@ref_7], which classifies landmarks using visual features such as HoG [@hog], SIFT [@sift] and SURF [@surf]. While landmark recognition can be considered as descriptor matching, our work relates to [@ref_1] in that we employ a visual saliency algorithm to focus on the most noticeable region and extract those features for classification.
Landmark Recognition using CNN presents a competitive research area, as there is substantial intra-class variation [@ref_url1]. [@ref_2] employed a multi-scale feature embedding to generate condition- and viewpoint-invariant features for specific place recognition. [@ref_10] uses local binary patterns and a Gray-level co-occurrence matrix to match pairs using pixel-wise information. [@ref_11] devised an architecture using visual descriptors and Bag-of-Words for image-based monument classification. [@ref_1] uses AlexNet to extract features and classify landmarks using supervised feature learning. Much work has been done on specific place recognition, but the area of using fine-grained features to recognize Indian landmarks has not been explored yet.
Problem Formulation
===================
Landmark Recognition in the Indian scenario is very different from its European and American counterparts, due to the extreme variety within each region and the diversified architecture. Approaches like bag of words, HoG and SIFT are constrained by database size. Other approaches based on deep learning frameworks [@ref_8] face challenges in identifying diverse images in the same class (see Fig. \[fig1\]). Among all these methods, we need a more robust and dynamic framework that can learn these intra-class variations. Hence, our architecture focusses on these explicit and implicit features of the images.
![Same class with varied architecture styles[]{data-label="fig1"}](mughal.png){width="\textwidth"}
Dataset
=======
The manually collected Indian monument dataset consists of monument images of 4 major classes based on architecture type, i.e. Buddhist, Dravidian, Kalinga and Mughal architectures.
[Class Label]{} [Train Set]{} [Validation Set]{} [Test Set]{}
----------------------- --------------- -------------------- --------------
[**I. Buddhist**]{} 647 81 81
[**II. Dravidian**]{} 657 83 82
[**III. Kalinga**]{} 881 111 110
[**IV. Mughal**]{} 624 79 78
: Categorical distribution of data among 4 classes[]{data-label="tab1"}
The total of 3514 dataset images has been divided in the ratio 80:10:10 into training, validation and testing images respectively. An overview of the dataset is shown in Fig. \[fig2\].
Proposed Approaches
===================
In this section, we discuss the proposed framework used for landmark classification. We devised two architectures to solve the problem of monument classification. These methods are described as follows:
Graph-based Visual Saliency (GBVS)
----------------------------------
Image saliency captures what stands out in an image and how quickly an observer can focus on its most relevant parts. In the case of landmarks, the less salient regions are common backgrounds, such as blue sky; the architectural design of the monuments is what differentiates between the classes. GBVS [@ref_11] [@ref_13] first finds feature maps and then applies non-linear activation maps to highlight “significant” locations in the image. We used GBVS to detect 5 important locations per training image. Those images were used for multi-stage training, which helped to improve our accuracy by 3-4%. An example of salient regions detected using the GBVS algorithm is shown in Fig. \[fig3\].
![Salient Region Detection using Graph-based Visual Saliency[]{data-label="fig3"}](gbvs.png){width="\textwidth"}
Supervised Feature Classification
---------------------------------
In this approach, we used *fc* layer features of ImageNet models to train supervised machine learning models such as kNN and Random Forest classifiers. Among all the ImageNet models, Inception ResNet V2 performed best for landmark classification. Therefore, we extracted the representation from Inception ResNet V2, of dimension 2048 $\times$ 1.
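A minimal sketch of this step, with random vectors standing in for the 2048-dimensional Inception ResNet V2 features and a NumPy-only nearest-neighbour classifier in place of a library implementation (all data here are hypothetical):

```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feats, k=5):
    """Classify each test vector by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    preds = []
    for x in test_feats:
        d = np.linalg.norm(train_feats - x, axis=1)
        nearest = train_labels[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Synthetic stand-in for the 2048-dim fc-layer features of two classes
rng = np.random.default_rng(1)
train_feats = np.vstack([rng.normal(0.0, 1.0, (40, 2048)),   # class 0
                         rng.normal(1.0, 1.0, (40, 2048))])  # class 1
train_labels = np.array([0] * 40 + [1] * 40)
test_feats = np.vstack([rng.normal(0.0, 1.0, (5, 2048)),
                        rng.normal(1.0, 1.0, (5, 2048))])
preds = knn_predict(train_feats, train_labels, test_feats, k=5)
```

In practice a library classifier (e.g. scikit-learn's kNN and Random Forest) would replace this hand-rolled version; the sketch only illustrates how extracted features feed a supervised classifier.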
Ensemble Model Architecture
---------------------------
![Proposed Architecture[]{data-label="fig4"}](LandmarkFramework.png){width="\textwidth"}
Our final architecture comprises averaging-based ensemble [@ref_9] methods. The test image is first passed through the GBVS algorithm to create a batch of 5 images. The batch prediction is done using Inception ResNet V2 [@ref_3]. Similarly, the test image is also passed through the Inception ResNet model for feature extraction. These features are used to learn and predict classes using kNN and Random Forest classifiers. The final prediction is done by averaging the predictions from the three models described above. Ensemble learning boosted the accuracy by approximately 2-3%. The final architecture is diagrammatically explained in Fig. \[fig4\].
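The averaging step can be sketched as follows; the per-class probability vectors below are toy values for two test images over the four classes, not actual model outputs:

```python
import numpy as np

def average_ensemble(prob_list):
    """Average the per-class probability vectors from several models
    and return the argmax class for each test image."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Hypothetical per-class probabilities from the three models
p_gbvs = np.array([[0.7, 0.1, 0.1, 0.1], [0.1, 0.2, 0.6, 0.1]])
p_knn  = np.array([[0.6, 0.2, 0.1, 0.1], [0.2, 0.2, 0.5, 0.1]])
p_rf   = np.array([[0.2, 0.5, 0.2, 0.1], [0.1, 0.1, 0.7, 0.1]])
classes = average_ensemble([p_gbvs, p_knn, p_rf])
```

Averaging smooths out a single model's mistake (here the Random Forest's vote for class 1 on the first image is outweighed by the other two models).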
Experiments
===========
### Improved Feature Learning using Multi-stage Training
We trained our model first on original images that were resized to 416 $\times$ 416, and then on high-saliency regions extracted using the Graph-based Visual Saliency algorithm. We used ImageNet pretrained weights to train the Inception ResNet V2 [@ref_3] architecture on original and salient images. The salient images helped us to learn discriminative features between various classes. The original images assisted in learning global spatial features.
### Parameters
In our model, we used ADAM (*lr*= 0.0001) [@ref_6] optimizer and ReLU[@ref_5] activation function. The model was trained for 7 epochs using the pretrained ImageNet weights as initialization for Transfer Learning.
Results
=======
The experimental results on the landmarks dataset are presented in Table \[tab2\]. The scores obtained are from different architectures trained on salient crops and original images during Multi-stage Training.
**Model Architecture** **Data Subset** **Train** **Validation** **Test**
------------------------------ -------------------- ----------- ---------------- ----------
Inception V3 [@ref_4] Original Images 90.1 77.23 75.42
Original + Salient 91.81 80.3 78.91
Inception Resnet V2 [@ref_3] Original Images 91.76 77 76.35
Original + Salient 92.29 81 80
: Accuracy during Multi-Stage Training on Inception V3 and Inception ResNet V2 models[]{data-label="tab2"}
Table-\[tab3\] compares the accuracy scores for all the models on train, validation and testing dataset. The final prediction is done by average ensembling of three models to get the final architecture with low variance and low bias.
**Model Architecture** **Train** **Validation** **Test**
----------------------------------- ----------- ---------------- -----------
GBVS + InceptionResNetV2 92.61 89.65 86.18
InceptionResnetV2 + kNN 93.62 90.72 86.94
InceptionResNetV2 + Random Forest 91.58 89.8 88
Average Ensemble **94.58** **93.8** **90.08**
: Evaluation comparison (in %) of different models[]{data-label="tab3"}
Table \[tab4\] compares the results on the existing dataset with our new proposed approach. It is clear that our approach outperforms the existing models by 8%.
**Framework** **Test**
----------------------------------- ----------
SIFT + BoVW 51%
Gabor Transform + Radon Barcode 70%
Radon Barcode 75%
CNN 82%
**Our Method (Average Ensemble)** **90%**
: Comparison of our best model with competing methods[@ref_8]
\[tab4\]
Conclusion and Future Work
==========================
This paper presented two approaches on which extensive experiments were done to classify Indian architectural styles. The landmark recognition problem presents noteworthy challenges, as no training data are available for less popular landmarks. Our solution focusses on the most noticeable region of the image to classify landmarks accurately. Our approach targets the fine-grained features as well as the global features of monuments. Previous works lack the attention mechanism to differentiate models on the basis of fine-grained features. Our model outperforms the existing approach by 8%.
In future, the authors aim to improve the model by using DELF features and a visual attention mechanism, to further increase its accuracy as well as to learn more substantial features.
[8]{}
D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
P. Shukla, B. Rautela and A. Mittal, “A Computer Vision Framework for Automatic Description of Indian Monuments,” 2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Jaipur, 2017, pp. 116-122.
Z. Chen, A. Jacobson, N. Sunderhauf, B. Upcroft, L. Liu, C. Shen, I. Reid, M. Milford, Deep Learning Features at Scale for Visual Place Recognition, 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore.
Szegedy, Christian et al. “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning.” AAAI (2017).
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, United States, 2016, pp. 2818-2826.
Vinod Nair and Geoffrey E. Hinton. “Rectified Linear Units Improve Restricted Boltzmann Machines”. In Proceedings of the 27th International Conference on International Conference on Machine Learning. ICML’10. Haifa, Israel: Omnipress, 2010, pp.807–814. ISBN : 978-1-60558-907-7. URL : http://dl.acm.org/citation.cfm?id=3104322.3104425.
Diederik P. Kingma and Jimmy Ba. “Adam: A Method for Stochastic Optimization”. In: CoRR abs/1412.6980 (2014). arXiv: 1412.6980. URL:http://arxiv.org/abs/1412.6980.
G. Amato and F. Falchi and P. Bolettieri, “Recognizing Landmarks Using Automated Classification Techniques: Evaluation of Various Visual Features,”. 2010 Second International Conferences on Advances in Multimedia. 78-83
Sharma S., Aggarwal P., Bhattacharyya A.N., Indu S. (2018) Classification of Indian Monuments into Architectural Styles. NCVPRIPG 2017. Communications in Computer and Information Science, vol 841. Springer, Singapore.
Sainin, Mohd & Alfred, Rayner & Adnan, Fairuz & Ahmad, Faudziah. (2018). Combining Sampling and Ensemble Classifier for Multiclass Imbalance Data Learning. 262-272.
Bay, Herbert and Ess, Andreas and Tuytelaars, Tinne and Van Gool, Luc, “Speeded-Up Robust Features (SURF),”. Comput. Vis. Image Underst. Journal. June, 2008. Vol. 110. 346–359.
Dalal, Navneet and Triggs, Bill, “Histograms of Oriented Gradients for Human Detection,”. Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05) - Volume 1 - Volume 01. 886–893.
A. Saini, T. Gupta, R. Kumar, A. K. Gupta, M. Panwar and A. Mittal, “Image based Indian monument recognition using convoluted neural networks,” 2017 International Conference on Big Data, IoT and Data Science (BID), Pune, 2017, pp. 138-142.
TRIANTAFYLLIDIS, Georgios; KALLIATAKIS, Gregory. Image based Monument Recognition using Graph based Visual Saliency. ELCVIA Electronic Letters on Computer Vision and Image Analysis, \[S.l.\], v. 12, n. 2, p. 88-97, apr. 2013. ISSN 1577-5097.
B. Ghildiyal, A. Singh and H. S. Bhadauria, “Image-based monument classification using bag-of-word architecture,” 2017 3rd International Conference on Advances in Computing,Communication & Automation (ICACCA) (Fall), Dehradun, 2017, pp. 1-5.
Google AI Blog:Google-Landmarks: A New Dataset and Challenge for Landmark Recognition, <https://ai.googleblog.com/2018/03/google-landmarks-new-dataset-and.html>.
Harel, Jonathan and Koch, Christof and Perona, Pietro (2007) Graph-Based Visual Saliency. In: Advances in Neural Information Processing Systems 19 (NIPS 2006). Advances in Neural Information Processing Systems. No.19. MIT Press , Cambridge, MA, pp. 545-552. ISBN 0-262-19568-2.
[^1]: \* Equal Contribution
---
abstract: 'The typical view in evolutionary biology is that mutation rates are minimised. Contrary to that view, studies in combinatorial optimisation and search have shown a clear advantage of using variable mutation rates as a control parameter to optimise the performance of evolutionary algorithms. Ronald Fisher’s work is the basis of much biological theory in this area. He used Euclidean geometry of continuous, infinite phenotypic spaces to study the relation between mutation size and expected fitness of the offspring. Here we develop a general theory of optimal mutation rate control that is based on the alternative geometry of discrete and finite spaces of DNA sequences. We define the monotonic properties of fitness landscapes, which allows us to relate fitness to the topology of genotypes and mutation size. First, we consider the case of a perfectly monotonic fitness landscape, in which the optimal mutation rate control functions can be derived exactly or approximately depending on additional constraints of the problem. Then we consider the general case of non-monotonic landscapes. We use the ideas of local and weak monotonicity to show that optimal mutation rate control functions exist in any such landscape and that they resemble control functions in a monotonic landscape at least in some neighbourhood of a fitness maximum. Generally, optimal mutation rates increase when fitness decreases, and the increase of mutation rate is more rapid in landscapes that are less monotonic (more rugged). We demonstrate these relationships by obtaining and analysing approximately optimal mutation rate control functions in 115 complete landscapes of binding scores between DNA sequences and transcription factors. We discuss the relevance of these findings to living organisms, including the phenomenon of stress-induced mutagenesis.'
address:
- 'School of Engineering and Information Sciences, Middlesex University, London NW4 4BT, UK'
- 'Research Institute for the Environment, Physical Sciences and Applied Mathematics, Keele University, ST5 5BG, UK'
- 'Department of Statistics, University of Warwick, CV4 7AL, UK'
- 'Faculty of Life Sciences, University of Manchester, M13 9PT, UK'
author:
- 'Roman V. Belavkin'
- Alastair Channon
- Elizabeth Aston
- John Aston
- 'Rok Kra[š]{}ovec'
- 'Christopher G. Knight'
bibliography:
- 'rvb.bib'
- 'nn.bib'
- 'other.bib'
- 'newbib.bib'
- 'ica.bib'
- 'evolution.bib'
- 'eids-pointmutation.bib'
title: Monotonicity of Fitness Landscapes and Mutation Rate Control
---
Keywords: Adaptation, Fitness landscape, Mutation rate, Population genetics, Phenotypic plasticity
Introduction {#sec:intro}
============
Mutation is one of the most important biological processes that influence evolutionary dynamics. During replication, mutation leads to a loss of information between the offspring and its parent, but it also allows the offspring to acquire new features. These features are likely to be deleterious, but have the potential to be beneficial for adaptation. Thus mutation can be seen as a process of innovation, which is particularly important as the number of all living organisms is tiny relative to the number of all possible organisms. A question that naturally arises with regard to mutation is whether there is an optimal balance between the amount of information lost and the potential fitness gained.
The seminal mathematical work to investigate biological mutation is by Ronald Fisher [@Fisher30], who considered mutation as a random motion in Euclidean space, the points of which are vectors representing collections of phenotypic traits of organisms. Using the geometry of Euclidean space, Fisher showed that the probability of adaptation decreases exponentially as a function of mutation size (defined using the ratio of mutation radius and distance to the optimum), and concluded therefore that adaptation is more likely to occur by small mutations. Several studies, however, suggested that large mutations can be quite frequent in nature, thereby prompting re-examination of the theory [@Orr05]. Thus, Kimura [@Kimura80] extended the theory to take into account differences in probabilities of fixation for mutations of small and large size. Subsequently, Orr [@Orr98] considered the effect of mutation across several replications. Interestingly, while he had a critical role in developing mathematical theory around discrete alleles, Fisher in his geometric model used Euclidean space, which is uncountably infinite and unbounded. That this is an important issue became apparent only after the realisation that biological evolution occurs in a countable or even finite space of discrete molecular sequences [@Smith70]. However, subsequent geometric models based on Fisher’s, while explicitly modelling discrete mutational steps (e.g. [@Orr02]), continue to assume that they occur within the same infinite Euclidean space. This issue may contribute to the fact that the predictions of such models have at best only been partially verified in actual biological systems [@McDonald11; @Bataillon11; @Kassen06; @Rokyta08]. One of the contributions of the current work is that we consider mutation using the geometry of other spaces, and in particular the geometry of a Hamming space, which is finite and leads to a radically different view about the role of large mutations.
Mutation size as considered by Fisher is closely related to mutation frequency, measured in biology in terms of the number of mutations per replication per DNA base. Mutation rates in biology vary over several orders of magnitude [@Drake98]. Nonetheless, the mutation rate for any particular species is typically believed to be minimised, within bounds set by physiology [@Drake91], or more likely population genetics [@Lynch10]. Despite this, mutation rates are known to vary within and among populations of a single species [@Bjedov03], and recently population-genetic models have been developed proposing that variable mutation rates may in fact be adaptive in biology [@Ram12].
Independent of such biological concerns, researchers in evolutionary computation and operations research have a longer history of considering variable mutation rates in genetic algorithms (GAs) (e.g. see [@Eiben_etal99; @Ochoa02; @Falco_etal02; @Cervantes-Stephens06; @Vafaee_etal10] for reviews). In particular, Ackley suggested in [@Ackley87] that mutation probability is analogous to temperature in simulated annealing, decreasing with time as the optimisation proceeds. A gradual reduction of mutation rate was also proposed by Fogarty [@Fogarty89]. In a pioneering work, Yanagiya [@Yanagiya93] used Markov chain analysis of GAs to show that a sequence of optimal mutation rates maximising the probability of obtaining the global solution exists in any problem. A significant contribution to the field was made by Thomas B[ä]{}ck [@Back93], who studied the probability of adaptation in the space of binary sequences and suggested that mutation rate should depend on fitness values rather than time. More recently, numerical methods have been used to optimise a mutation operator [@Vafaee_etal10] that was based on the Markov chain model of GA by Nix and Vose [@Nix-Vose92]. The complexity of this model, however, restricted the application of this method to small spaces and populations. It is these insights regarding mutation rate variation from evolutionary computation and operations research that we develop here towards the particular issues presented by biological systems.
We develop theory in the following directions:
1. Generalise Fisher’s geometric model of adaptation for metric spaces, and in particular for discrete spaces of sequences, such as the Hamming spaces with arbitrary alphabets.
2. Define problems of optimal mutation rate control within such spaces, and study how different problem formulations (e.g. time horizon, objective function) affect the solutions.
3. Extend the theory to more biologically realistic (i.e. rugged) fitness landscapes.
Some relevant results have already been reported. For example, results for general Hamming spaces were first reported in [@Belavkin_etal11:_ecal11; @Belavkin11:_itw11]. We develop these results towards biology in Section \[sec:geometry\]. Various optimisation problems were considered in [@Belavkin11:_dyninf; @Belavkin11:_qbic11], deriving theoretical optimal mutation rate control functions. We address how such control functions may also be obtained numerically in Section \[sec:meta-ga\]. In Section \[sec:monotonic\], we develop theory to consider a fitness landscape as a memoryless communication channel between fitness values and distance from an optimal sequence. We introduce the ideas of local and weak monotonicity of a landscape. This allows us to formulate hypotheses about monotonicity and mutation rate control in biological fitness landscapes. We test these hypotheses by numerically obtaining optimal mutation rate control functions for 115 published complete landscapes of transcription factor binding [@Badis09]. Our results presented in Section \[sec:TF-landscapes\] show that all the optimal mutation rate control functions in these biological landscapes do indeed converge to non-trivial forms consistent with the theory developed here. We also observe differences among optimal mutation rate control functions, variation that relates to variation in the landscapes’ monotonic properties. We conclude in Section \[sec:discussion\] by discussing how mutation rate control as considered here may be manifested in living organisms.
A Generalisation of Fisher’s Geometric Model of Adaptation {#sec:geometry}
==========================================================
In this section, we consider an abstract problem, in which organisms are viewed as points in some metric space and adaptation as a motion in this space towards some target point (an optimal organism). In such a formulation, maximisation of biological fitness corresponds to a minimisation of distance to the target, and the geometry of the metric space allows us to solve the optimisation problem precisely. These abstract results will be used in the following sections to develop the theory further, bringing it closer to biology.
Representation and assumptions {#sec:representation}
------------------------------
Let $\Omega$ be a set of all possible organisms. Environment defines a preference relation $\lesssim$ on $\Omega$ (a total pre-order), so that $a\lesssim b$ means $b$ is better adapted to or has a higher replication rate in a particular environment than $a$. Throughout this paper we shall consider only the case of countable or even finite $\Omega$, although the theory can be easily extended with certain care to the uncountable case. It is well-known from game theory (e.g. [@Neumann-Morgenstern]) that in the countable case the preference relation always has a utility representation: there exists a real function $f:\Omega\rightarrow{\mathbb{R}}$ such that $a\lesssim b$ if and only if $f(a)\leq f(b)$. In the biological context, the utility function is called [*fitness*]{}, and it is usually defined to have non-negative values (i.e. if $f(\omega)$ is the replication rate of $\omega$). Having positive fitness values is not essential, because the preference relation does not change under a strictly increasing transformation of $f$, such as adding a constant $\varepsilon\in{\mathbb{R}}$ to $f$ or multiplying it by a positive number (i.e. representation $f(\omega)$ is equivalent to $\lambda f(\omega)+\varepsilon$ for any $\lambda>0$ and $\varepsilon\in{\mathbb{R}}$). Thus, our interpretation of fitness simply as a numerical representation of a preference relation on organisms is distinct from population genetic definitions of fitness (e.g. see [@Orr09]). We shall assume also that there exists a top (optimal) element $\top\in\Omega$ such that $f(\top)=\sup f(\omega)$, which is the most adapted and most quickly replicating species in the current environment. Note that a finite set $\Omega$ always contains at least one top (optimal) element $\top$ as well as at least one bottom element $\bot$.
Generally, one can also consider the set $\Theta$ of all environments (including other organisms), because different environments $\theta\in\Theta$ impose different preference relations $\lesssim_\theta$ on $\Omega$, which have to be represented by different fitness functions $f_\theta(\omega):=f(\omega,\theta)$. In this paper, however, we shall assume that a particular environment has been fixed, and we therefore consider only one preference relation and one fitness function.
During replication, organism $a$ can mutate into $b$ with probability $P(b\mid a)$, and the products $P(b\mid a)\cdot f(a)$ define the [*selection-mutation*]{} matrix — the infinitesimal generator of the replicator-mutator dynamics (generally non-linear Markov evolution). Mutation can have different effects on the fitness of the offspring. Mutation $a\mapsto b$ can be deleterious, if $f(a)>f(b)$, neutral, if $f(a)=f(b)$, or beneficial, if $f(a)<f(b)$. We shall analyse how the probability of beneficial mutation can be related to the ‘geometry’ of mutation.
Fitness is defined by the interaction of an organism with its environment, and therefore it is a property of a *phenotype*. Thus the set $\Omega$, which is the domain of the fitness function, can be thought of as not just the set of all organisms, but the set of all possible phenotypes. Reproduction of organisms, however, involves passing of information about the phenotypes in the form of codes, which can be elements of some other set. Consider a representation of phenotypes $\omega\in\Omega$ by points of a topological vector space ${\mathcal{H}}$ (e.g. a space of traits, a space of DNA sequences and so on). In information theory, a mapping $\kappa:\Omega\to{\mathcal{H}}$ is called a [*code*]{}, and we shall assume here that it is uniquely decodable: $\kappa(a)=\kappa(b)$ implies $a=b$. That is, $\omega\mapsto\kappa(\omega)$ is an injection of $\Omega$ into a possibly larger space ${\mathcal{H}}$. In biological terms, each genotype has either one or no phenotype, and each phenotype has precisely one genotype. In addition, we shall assume that the image of $\kappa$ is closed under the operation of addition in ${\mathcal{H}}$, which implies that for all $a$, $b\in\Omega$, there exists $c\in\Omega$ such that $\kappa(a)+\kappa(c)=\kappa(b)$. Thus, mutation $a\mapsto b$ in $\Omega$ can be represented in ${\mathcal{H}}$ by addition of codes $\kappa(a)$ and $\kappa(c)=\kappa(b)-\kappa(a)$, as shown on the following diagram: $$\xymatrix{\Omega\ni a\ar[rr]^{\txt{Mutation}}\ar[d]_\kappa&&b\in\Omega\\
{\mathcal{H}}\ni\kappa(a)\ar[rr]^{+\kappa(c)}&&\kappa(b)\in{\mathcal{H}}\ar[u]_{\kappa^{-1}}}$$
We shall assume that the topology in ${\mathcal{H}}$ is defined by a metric $d:{\mathcal{H}}\times{\mathcal{H}}\to[0,\infty)$ (i.e. ${\mathcal{H}}$ is a metric vector space). Under a uniquely decodable mapping $\kappa$, the metric on ${\mathcal{H}}$ induces an equivalent metric on $\Omega$ representing ‘dissimilarity’ of two phenotypes. Thus, abusing notation, we shall identify phenotypes $\omega$ with their codes $\kappa(\omega)$ and write $d(a,b)$ and $b=a+c$ instead of $d(\kappa(a),\kappa(b))$ and $\kappa(b)=\kappa(a)+\kappa(c)$. A sphere and a ball of radius $r\in[0,\infty)$ around every point $a\in\Omega$ is defined as usual: $$S(a,r):=\{b\in\Omega:d(a,b)=r\}\,,\quad
B(a,r):=\bigcup_{n\in[0,r]}S(a,n)$$ If $a$ mutates into $b$, then we call $r=d(a,b)$ a [*mutation radius*]{}.
More generally, a representation may be non-uniquely decodable or even stochastic, in which case $\Omega$ is not a metric space, but this will not be considered here. Thus we consider a simplified picture of uniquely decodable genotypes. The motivation for distinguishing genotype and phenotype, however, will become apparent in Section \[sec:monotonic\] when we define the monotonic properties of general fitness landscapes. In particular, the radius $r$ is the dissimilarity of the codes (e.g. genotypes) $\kappa(a)$ and $\kappa(b)$, and it depends on the choice of a representation space ${\mathcal{H}}$, its metric and the encoding-decoding schemes $\kappa$ and $\kappa^{-1}$, all of which may influence landscape monotonicity.
Fisher’s representation in Euclidean space
------------------------------------------
In this section, we identify fitness $f(\omega)$ with the negative distance $-d(\top,\omega)$ from the top element, but later we shall generalise the relation between fitness and the topology of a representation space. Thus, adaptation (beneficial mutation) corresponds to a transition from a sphere of radius $n=d(\top,a)$ into a sphere of a smaller radius $m=d(\top,b)$, which is depicted in Figure \[fig:mutation\].
[Figure \[fig:mutation\]: parent $a$ on the sphere $S(\top,n)$ mutates by radius $r$ into offspring $b$ on the sphere $S(\top,m)$ of smaller radius around the optimum $\top$.]
This geometric view of mutation and adaptation is based on Ronald Fisher’s idea [@Fisher30], which was, perhaps, the earliest mathematical work on the role of mutation in adaptation. Fisher represented phenotypes by points of Euclidean space ${\mathcal{H}}\equiv{\mathbb{R}}^l$ of $l\in{\mathbb{N}}$ traits, and therefore equipped $\Omega$ with the Euclidean metric $d_E(a,b)=\|a-b\|_2$ (here $\|\cdot\|_2$ denotes the standard $\ell_2$-norm in ${\mathbb{R}}^l$). The top element $\top$ was identified with the origin in ${\mathbb{R}}^l$, and fitness $f(\omega)$ with the negative distance $-d_E(\top,\omega)$. Fisher then used the geometry of Euclidean space to show that the probability of beneficial mutation decreases exponentially as the mutation radius increases, and therefore mutations of small radii are more likely to be beneficial. Despite subsequent development of the theory [@Orr05], the use of Euclidean space for representation was not revised.
Euclidean space is infinite, and the interior of any ball always has a smaller volume than its exterior. Therefore, assuming mutation in random directions, an organism on the surface of a ball around an optimum is always more likely to mutate into the exterior than the interior of this ball. This obvious and simple property is key to the conclusion that adaptation is more likely to occur by small mutations. Recently, we showed that the geometry of a finite space, such as the Hamming space of sequences, implies a different relation between the radius of mutation and adaptation [@Belavkin_etal11:_ecal11; @Belavkin11:_itw11]. In particular, the mutation radius maximising the probability of adaptation varies as a function of the distance to the optimum.
Probability of adaptation and representation in a Hamming space
---------------------------------------------------------------
One of the most common examples of a finite metric space is a Hamming space of sequences. Let us denote by ${\mathcal{H}}_\alpha^l:=\{1,\ldots,\alpha\}^l$ the set of all sequences of letters from a finite alphabet $\{1,\ldots,\alpha\}$ and length $l$. The alphabet can be equipped with operations of addition and multiplication such that it becomes a finite field $GF(\alpha)$ (a Galois field; this requires $\alpha$ to be a prime power), and ${\mathcal{H}}_\alpha^l$ becomes a linear algebra over $GF(\alpha)$. A linear algebra is also a vector space, and ${\mathcal{H}}_\alpha^l$ is an example of a finite vector space (there are $\alpha^l$ points in ${\mathcal{H}}_\alpha^l$). The space ${\mathcal{H}}_\alpha^l$ can be equipped with the Hamming metric $d_H(a,b):=|\{i:a_i\neq b_i\}|$ counting the number of differing letters. The Hamming metric can also be defined as $d_H(a,b):=\|a-b\|_H$, where $\|\cdot\|_H:{\mathcal{H}}_\alpha^l\rightarrow\{0,1,\ldots,l\}$ is the Hamming weight counting the number of letters in a sequence not equal to the additive unit of the field $GF(\alpha)$ (zero of the field). Thus, addition of sequences results in a substitution of some letters, which corresponds to a simple mutation, and the Hamming distance counts the number of substitutions.
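To make the definitions concrete, the Hamming metric, the Hamming weight and their relation $d_H(a,b)=\|a-b\|_H$ can be sketched in Python (an illustration of ours, not part of the original model; letter arithmetic is taken modulo $\alpha$, which realises $GF(\alpha)$ when $\alpha$ is prime):

```python
from itertools import product

def hamming_distance(a, b):
    """d_H(a, b): number of positions at which two sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def hamming_weight(a):
    """||a||_H: number of letters different from the additive unit 0."""
    return sum(x != 0 for x in a)

# Check d_H(a, b) = ||a - b||_H in H_3^4, with letter-wise subtraction
# modulo alpha (alpha = 3 is prime, so this realises GF(3)).
alpha, l = 3, 4
for a in product(range(alpha), repeat=l):
    for b in product(range(alpha), repeat=l):
        diff = tuple((x - y) % alpha for x, y in zip(a, b))
        assert hamming_distance(a, b) == hamming_weight(diff)
```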
Consider mutation of sequence $a\in S(\top,n)$ into sequence $b\in S(\top,m)$ by radius $r=d_H(a,b)$, as shown on Figure \[fig:mutation\]. Assuming equal probabilities for all sequences in the sphere $S(a,r)$, the probability that the offspring sequence is in the sphere $S(\top,m)$ is given by the number of elements in the intersection of spheres $S(\top,m)$ and $S(a,r)$: $$P(m\mid n,r)=\frac{|S(\top,m)\cap S(a,r)|_{d(\top,a)=n}}{|S(a,r)|}
\label{eq:p-intersection}$$ where $|\cdot|$ denotes cardinality of a set (the number of its elements). The cardinality of the intersection $S(\top,m)\cap S(a,r)$ with condition $d(\top,a)=n$ is computed as follows $$\begin{aligned}
\lefteqn{\left|S(\top,m)\cap S(a,r)\right|_{d(\top,a)=n}}\nonumber\\
&=&\sum_{\substack{r_0+r_-+r_+=\min\{r,m\}\\r_+-r_-=n-\max\{r,m\}}} (\alpha-2)^{r_0}{n-r_+\choose r_0}(\alpha-1)^{r_-}{l-n\choose r_-}{n\choose r_+}\label{eq:h-intersection}\end{aligned}$$ The summation runs over indexes $r_0$, $r_-$ and $r_+$ satisfying conditions $r_0+r_-+r_+=\min\{r,m\}$ and $r_+-r_-=n-\max\{r,m\}$. These conditions follow from the triangle inequalities for $r$, $m$ and $n$, such as $$|n-m|\leq r\leq n+m$$ When $r\leq m$ then $r_0$, $r_-$ and $r_+$ count respectively the numbers of neutral, deleterious and beneficial substitutions in $r\in[0,l]$. They also satisfy the following constraints $r_-\in[0,\lfloor (r+m-n)/2\rfloor]$ and $r_+\in[0,\lfloor (n-|r-m|)/2\rfloor]$, where $\lfloor\cdot\rfloor$ denotes the floor operation.
The cardinality of sphere $S(a,r)\subset{\mathcal{H}}_\alpha^l$ is $$|S(a,r)|=(\alpha-1)^r{l\choose r}
\label{eq:h-sphere}$$ Equations (\[eq:p-intersection\])-(\[eq:h-sphere\]) allow us to compute the probability of adaptation in the Hamming space ${\mathcal{H}}_\alpha^l$, which is the probability that the offspring is in the interior of ball $B(\top,n)$: $$P(m<n\mid n,r)=\sum_{m=0}^{n-1}P(m\mid n,r)
\label{eq:p-adaptation}$$
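The counting arguments above can be implemented directly. The following Python sketch (function names are our own) computes the intersection cardinality, the sphere cardinality and the probabilities of equations (\[eq:p-intersection\]) and (\[eq:p-adaptation\]); index combinations violating the triangle inequalities contribute nothing because out-of-range binomial coefficients vanish:

```python
from math import comb

def sphere_size(l, alpha, r):
    """|S(a, r)| = (alpha - 1)^r C(l, r) in the Hamming space H_alpha^l."""
    return (alpha - 1) ** r * comb(l, r)

def intersection_size(l, alpha, n, m, r):
    """|S(top, m) intersect S(a, r)| given d(top, a) = n."""
    lo, hi = min(r, m), max(r, m)
    total = 0
    for r_plus in range(n + 1):
        r_minus = r_plus - (n - hi)     # from r_+ - r_- = n - max{r, m}
        r_zero = lo - r_plus - r_minus  # from r_0 + r_- + r_+ = min{r, m}
        if r_minus < 0 or r_zero < 0:
            continue
        total += ((alpha - 2) ** r_zero * comb(n - r_plus, r_zero)
                  * (alpha - 1) ** r_minus * comb(l - n, r_minus)
                  * comb(n, r_plus))
    return total

def p_m_given_n_r(l, alpha, m, n, r):
    """P(m | n, r): offspring lands on S(top, m) after mutation by radius r."""
    return intersection_size(l, alpha, n, m, r) / sphere_size(l, alpha, r)

def p_adaptation(l, alpha, n, r):
    """P(m < n | n, r): offspring strictly closer to the optimum."""
    return sum(p_m_given_n_r(l, alpha, m, n, r) for m in range(n))

# For fixed n and r, P(m | n, r) is a probability distribution over m.
assert abs(sum(p_m_given_n_r(6, 4, m, 3, 2) for m in range(7)) - 1.0) < 1e-12
```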
Figure \[fig:fisher-r\] shows the probability of adaptation in Hamming space ${\mathcal{H}}_4^{100}$ (i.e. alphabet of size $\alpha=4$ and length $l=100$) as a function of mutation radius $r$ for different values of $n=d(\top,a)$. One can see that when $n<75$ (more generally, when $n<l(1-1/\alpha)$), the probabilities of adaptation decrease with $r$, similar to Fisher’s conclusion for the Euclidean space. However, for $n=75$ there is no such decrease, and when $n>75$ (i.e. for $n>l(1-1/\alpha)$), the probability of adaptation actually increases with $r$. This is due to the fact that, unlike Euclidean space, Hamming space is finite, and the interior of ball $B(\top,n)$ can be larger than its exterior. The geometry of a Hamming space has a number of interesting properties [@Ahlswede-Katona77]. For example, every point $\omega$ has $(\alpha-1)^l$ diametrically opposite points $\neg\omega$, such that $d_H(\omega,\neg\omega)=l$, and the complement of a ball $B(\omega,r)$ in ${\mathcal{H}}_\alpha^l$ is the union of $(\alpha-1)^l$ balls $B(\neg\omega,l-r-1)$.
Using arbitrary alphabets is important not only because DNA molecules are sequences with $\alpha=4$ bases, but also because it allows us to consider different representations, where the letters of the alphabet may correspond not to DNA base-pairs, but to higher-level structures such as triplets of DNA bases (encoding amino acids) or genes. Changing the representation by considering subsequences of a sequence as letters from an alphabet of a larger size $\alpha$ corresponds to decreasing the length $l$ of the sequence. The Hamming metric, measuring the distance between sequences, takes values in $\{0,\ldots,l\}$, and changing the alphabet and length changes the geometry of the representation space ${\mathcal{H}}_\alpha^l$. \[rm:alphabets\]
Hamming metric compares sequences of the same length, and it counts the least number of substitutions between a pair of sequences, which is the main mutation mechanism that we consider here. Variable length sequences can be compared using, for example, the Levenshtein metric, which counts the least number of substitutions, insertions and deletions. The space of all variable length finite alphabet sequences is countably infinite, and it can be considered as a vector space over an extended Galois field [@Sanchez-Grau07]. Hamming spaces are finite subspaces of this space, and one can consider the set of nested Hamming spaces, where increasing complexity corresponds to an increasing sequence length [@Belavkin11:_itw11]. We note also that every such finite subspace has a top element $\top$, but the whole space of variable length sequences may fail to have one. \[rm:levenshtain\]
Random mutation {#sec:random-mutation}
---------------
By mutation of parent sequence $a$ into $b$ we understand a random process, so that the mutation radius is a random variable. The simplest form of mutation, called [*point mutation*]{}, is the random process of independently substituting each letter in $a$ to any of the other $\alpha-1$ letters with probability $\mu$. At its simplest, with one parameter, there is an equal probability $\mu/(\alpha-1)$ of mutating to each other base. Such mutation corresponds also to additive noise: $b=a+c$, where $c$ is a sequence obtained by point mutation of the origin in ${\mathcal{H}}_\alpha^l$ (the sequence with all letters equal to zero — the additive unit of the field $GF(\alpha)$). The parameter $\mu$ is called [*mutation rate*]{}. For point mutation, the probability of mutating by radius $r\in[0,l]$ is given by the binomial distribution: $$P_\mu(r\mid n)={l\choose r}\,\mu(n)^r(1-\mu(n))^{l-r}
\label{eq:p-radius}$$ The expected value and variance of the mutation radius are respectively ${\mathbb{E}}_\mu\{r\}=l\mu$ and $\sigma^2(r)=l\mu(1-\mu)$. Note that in the equation above we assume that the mutation rate $\mu$ may depend on the distance $n=d_H(\top,a)$ from the top sequence.
Optimisation of the mutation rate requires knowledge of the probability $P_\mu(m\mid n)$ that the offspring sequence $b$ is in the sphere $S(\top,m)$ that can be expressed as follows: $$P_\mu(m\mid n)=\sum_{r=0}^l P(m\mid n,r)\,P_\mu(r\mid n)
\label{eq:p-transition}$$ Equations (\[eq:p-intersection\])–(\[eq:h-sphere\]) and (\[eq:p-radius\]) can be substituted into equation (\[eq:p-transition\]) to obtain the precise expression for transition probability $P_\mu(m\mid n)$ in ${\mathcal{H}}_\alpha^l$.
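As a sanity check of equations (\[eq:p-radius\]) and (\[eq:p-transition\]), the radius distribution of point mutation can be recovered by brute force from the per-letter definition (a Python sketch of ours, with the space kept small enough for full enumeration):

```python
from itertools import product
from math import comb

alpha, l, mu = 4, 5, 0.3

def point_mutation_prob(a, b):
    """P_mu(b | a): independent per-letter substitution at rate mu."""
    p = 1.0
    for x, y in zip(a, b):
        p *= (1 - mu) if x == y else mu / (alpha - 1)
    return p

# Group offspring of a fixed parent by mutation radius r = d_H(a, b).
a = (0,) * l
radius_dist = [0.0] * (l + 1)
for b in product(range(alpha), repeat=l):
    r = sum(x != y for x, y in zip(a, b))
    radius_dist[r] += point_mutation_prob(a, b)

# The result matches the binomial distribution of the radius equation.
for r in range(l + 1):
    binomial = comb(l, r) * mu**r * (1 - mu) ** (l - r)
    assert abs(radius_dist[r] - binomial) < 1e-12
```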
Mutation in biology is much more complex than described above, and its precise mathematical modelling involves many parameters. One-parameter point mutation, however, is optimal in a certain sense: it is the solution of a specific variational problem, the minimisation of the expected distance between points $a$ and $b$ in a Hamming space subject to a constraint on mutual information between $a$ and $b$. We define and solve this problem in \[sec:variational\]. The optimal solutions are conditional probabilities having exponential form $P_\beta(b\mid a)\propto\exp[-\beta\,d_H(a,b)]$, where parameter $\beta>0$, called the inverse temperature, is related to the constraint on mutual information. Because the Hamming metric $d_H(a,b)=\|a-b\|_H$ is computed as the sum $\sum_{i=1}^l\delta_{a_i}(b_i)$ of elementary distances $\delta_{a_i}(b_i)$ between letters $a_i$ and $b_i$ in the $i$th position in the sequence, and the values of $\delta_{a_i}(b_i)$ do not depend on the position $i$, the exponential conditional probability factorises into the product $P_\beta(b\mid a)\propto\prod_{i=1}^l e^{-\beta\,\delta_{a_i}(b_i)}$ corresponding to independent substitution of letters $a_i$ into $b_i$ with equal probabilities $\mu/(\alpha-1)$, where $\mu$ is related to the inverse temperature $\beta$. Changing the representation space ${\mathcal{H}}$ and its metric will result in a different optimal mutation operation. For example, if ${\mathcal{H}}$ is the space of variable length sequences with the Levenshtein metric, then optimal mutation $P_\beta(b\mid a)$ will involve independent substitutions, insertions and deletions. If elementary distances $\delta_{a_i}(b_i)$ are different between different pairs of letters, then there will be different parameters for different pairs.
If elementary distances depend on the position $i$ in a sequence or the metric $d(a,b)$ is not the sum of elementary distances, then the optimal mutation is a more complex process with non-independent substitutions, insertions or deletions, the phenomenon known in biology as epistasis.
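The factorisation of the exponential solution into independent per-letter substitutions can be checked numerically. In the sketch below (our illustration; the algebra follows from normalising the exponential family letter by letter), the per-letter normaliser is $Z=1+(\alpha-1)e^{-\beta}$, which gives $\mu=(\alpha-1)e^{-\beta}/Z$:

```python
from itertools import product
from math import exp

alpha, l, beta = 4, 3, 1.2
Z = 1 + (alpha - 1) * exp(-beta)    # per-letter normaliser
mu = (alpha - 1) * exp(-beta) / Z   # mutation rate induced by beta

def point_mutation_prob(a, b):
    """Product of per-letter point-mutation probabilities."""
    p = 1.0
    for x, y in zip(a, b):
        p *= (1 - mu) if x == y else mu / (alpha - 1)
    return p

def exponential_prob(a, b):
    """Normalised exponential family: exp(-beta d_H(a, b)) / Z^l."""
    d = sum(x != y for x, y in zip(a, b))
    return exp(-beta * d) / Z**l

# The two definitions coincide for every pair of sequences in H_4^3.
for a in product(range(alpha), repeat=l):
    for b in product(range(alpha), repeat=l):
        assert abs(point_mutation_prob(a, b) - exponential_prob(a, b)) < 1e-12
```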
Optimal control of mutation rates {#sec:theory-optimal-control}
---------------------------------
We have shown above that the probability of adaptation depends on the mutation rate, which introduces the possibility of organisms maximising the expected fitness of their offspring by controlling the mutation rate. The exact form of the optimal mutation rate control functions depends on a number of factors, such as the time horizon. Here we cover the principal elements required, developed in [@Belavkin11:_dyninf].
Let $P_t(a)$ be the distribution of parent sequences in ${\mathcal{H}}_\alpha^l$ at time $t$, and let $P_t(n)=\sum_{a:d(\top,a)=n}P_t(a)$ be the distribution of their distances $n=d_H(\top,a)$ from the optimum. Transition probabilities $P(m\mid n)$ define a linear transformation of $P_t(n)$ into the distribution $P_{t+1}(m)$ of distances $m=d_H(\top,b)$ of their offspring from the optimum: $$P_{t+1}(m)=\sum_{n=0}^lP(m\mid n)P_t(n)$$ If this linear transformation $T(\cdot):=\sum_{n=0}^lP(m\mid n)(\cdot)$ does not change with time, and assuming that distance to the optimum has the Markov property (i.e. distance at $t+1$ depends only on distance at $t$, but not at $t-1$, $t-2$, etc.), then the distribution $P_{t+s}(m)$ after $s$ generations is defined by $T^s(\cdot)$, the $s$th power of $T(\cdot)$.
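These dynamics can be illustrated by brute force in a small space. The Python sketch below (names of our own choosing) builds the transition matrix $T$ exactly from point mutation in ${\mathcal{H}}_2^4$ and iterates it over generations:

```python
from itertools import product

alpha, l, mu = 2, 4, 0.25
top = (0,) * l
seqs = list(product(range(alpha), repeat=l))

def d(a, b):
    return sum(x != y for x, y in zip(a, b))

def point_mutation_prob(a, b):
    p = 1.0
    for x, y in zip(a, b):
        p *= (1 - mu) if x == y else mu / (alpha - 1)
    return p

# T[m][n] = P(m | n), computed exactly from any parent at distance n.
T = [[0.0] * (l + 1) for _ in range(l + 1)]
for n in range(l + 1):
    a = tuple([1] * n + [0] * (l - n))
    for b in seqs:
        T[d(top, b)][n] += point_mutation_prob(a, b)

def step(P):
    """One generation: P_{t+1}(m) = sum_n P(m | n) P_t(n)."""
    return [sum(T[m][n] * P[n] for n in range(l + 1)) for m in range(l + 1)]

P = [0.0] * (l + 1)
P[l] = 1.0            # population starts as far from the optimum as possible
for _ in range(3):    # three generations: applying T three times gives T^3
    P = step(P)
assert abs(sum(P) - 1.0) < 1e-12
```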
According to equation (\[eq:p-transition\]) transition probabilities $P_\mu(m\mid n)$ from sphere $S(\top,n)$ to $S(\top,m)$ depend on the mutation rate parameter $\mu$ for each distance $n$ from top sequence $\top$, and we call the collection of pairs $(n,\mu)$ the mutation rate [*control function*]{} $\mu(n)$. The expressions for the transition probabilities $P_\mu(m\mid n)$ between spheres around optimal element $\top\in{\mathcal{H}}_\alpha^l$ can be used to optimise this function. This optimisation, however, can be done with respect to different criteria leading to different optimal functions. For example, after one replication, the conditional expected distance to the optimum ${\mathbb{E}}\{m\mid n\}=\sum_{m=0}^lmP_\mu(m\mid n)$ is minimised if the mutation rate $\mu$ depends on $n$ according to the following [*step function*]{}: $$\mu(n)=\left\{\begin{array}{ll}
0&\mbox{ if $n<l(1-1/\alpha)$}\\
\frac12&\mbox{ if $n=l(1-1/\alpha)$}\\
1&\mbox{ otherwise}
\end{array}\right.
\label{eq:step}$$ This function is shown on Figure \[fig:mutation-rate-functions\] for Hamming space ${\mathcal{H}}_4^{10}$. The sudden change of the optimal mutation rate from $\mu=0$ to $\mu=1$ at $n=l(1-1/\alpha)$ corresponds to the sudden change of the effect of the mutation radius on the probability of adaptation shown on Figure \[fig:fisher-r\]. If parent sequences are uniformly distributed $P_t(a)=\alpha^{-l}$ in ${\mathcal{H}}_\alpha^l$, then mutation of sequences with this control function achieves the greatest decrease ${\mathbb{E}}\{n\}-{\mathbb{E}}\{m\}=\sum_{n=0}^l nP_t(n)-\sum_{m=0}^l mP_{t+1}(m)$ of the expected distance to the optimum. Note, however, that sequences with $n=d_H(\top,a)<l(1-1/\alpha)$ do not mutate. Therefore, if after several generations all sequences are closer to $\top$ than $l(1-1/\alpha)$, then their offspring cannot get closer to $\top$. In the space of binary sequences ($\alpha=2$) this occurs after only one replication. For this reason, the control of mutation by the step function is not optimal for adaptation in more than one generation.
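The threshold $l(1-1/\alpha)$ in the step function can be recovered from a simple per-letter expectation (our own derivation, consistent with the point-mutation model above): each of the $n$ mismatched letters is corrected with probability $\mu/(\alpha-1)$ and each of the $l-n$ matched letters is broken with probability $\mu$, so ${\mathbb{E}}\{m\mid n\}=n(1-\mu/(\alpha-1))+(l-n)\mu$ is linear in $\mu$ with a slope that changes sign exactly at $n=l(1-1/\alpha)$:

```python
def expected_offspring_distance(n, mu, l, alpha):
    """E{m | n} under one-parameter point mutation (linear in mu)."""
    return n * (1 - mu / (alpha - 1)) + (l - n) * mu

l, alpha = 10, 4
threshold = l * (1 - 1 / alpha)    # 7.5 for H_4^10
grid = [i / 100 for i in range(101)]
for n in range(l + 1):
    best = min(grid, key=lambda mu: expected_offspring_distance(n, mu, l, alpha))
    if n < threshold:
        assert best == 0.0   # do not mutate when closer than l(1 - 1/alpha)
    else:
        assert best == 1.0   # mutate maximally when further away
```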
Deriving a mutation rate control function minimising the expected distance to the optimum after several generations is not a trivial task. However, for a sufficiently large number of generations this problem is equivalent to minimising the expected time at which individuals achieve maximum fitness. The expected convergence times can be computed using techniques for absorbing Markov chains, and numerical methods show that the optimal mutation rate control changes in this case from a step to a smoother, sigmoid-like function [@Belavkin11:_dyninf].
A simpler but closely related problem is maximisation of probability $P_\mu(b=\top\mid a)$ of mutating directly to the optimum, or maximisation of the probability $P_\mu(m=0\mid n)$, which has the following expression: $$P_\mu(m=0\mid n)=(\alpha-1)^{-n}\mu^n(1-\mu)^{l-n}
\label{eq:top-sequence}$$ Conditions $dP_\mu/d\mu=0$ and $d^2P_\mu/d\mu^2\leq0$ defining the mutation rate maximising this probability lead to the equation $n-l\mu=0$ and the following linear mutation rate control function shown on Figure \[fig:mutation-rate-functions\] for ${\mathcal{H}}_4^{10}$: $$\mu(n)=\frac{n}{l}
\label{eq:linear}$$ This variation of optimal control functions illustrates the importance of the number of generations (time horizon) for which the expected fitness is maximised, as pointed out previously by Orr [@Orr98].
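The linear rule above is easy to verify numerically: a grid search over $\mu$ locates the maximum of $P_\mu(m=0\mid n)$ at $\mu=n/l$ for every $n$ (a short Python sketch of ours):

```python
def p_hit_top(mu, n, l, alpha):
    """P_mu(m = 0 | n) = (alpha - 1)^(-n) mu^n (1 - mu)^(l - n)."""
    return (alpha - 1) ** (-n) * mu**n * (1 - mu) ** (l - n)

l, alpha = 10, 4
grid = [i / 1000 for i in range(1001)]
for n in range(1, l):
    best = max(grid, key=lambda mu: p_hit_top(mu, n, l, alpha))
    assert abs(best - n / l) < 1e-9   # maximum at mu(n) = n / l
```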
Another approach to mutation rate control is to maximise the probability of adaptation: $$P_\mu(m<n\mid n)=\sum_{m=0}^{n-1}P_\mu(m\mid n)$$ Bäck obtained the mutation rate function $\mu(n)$ maximising this probability (which he called the probability of *success*) in the space ${\mathcal{H}}_2^l$ of binary sequences [@Back93]. The expressions from the previous section allow us to obtain similar functions for general Hamming spaces ${\mathcal{H}}_\alpha^l$. Figure \[fig:mutation-rate-functions\] shows this function for ${\mathcal{H}}_4^{10}$. We note that the comparison $m<n$ used in the probability of adaptation and its maximisation effectively changes fitness from being absolute (i.e. depending only on an individual) to relative (e.g. depending also on the parent of an individual). Indeed, maximisation of $P(m<n\mid n)$ is equivalent to maximisation of the expected value ${\mathbb{E}}\{f_2(m,n)\mid n\}$ of a two-valued relative fitness function $f_2(m,n)=1$ if $m<n$; $f_2(m,n)=0$ otherwise.
Another approach that we pursued elsewhere is based on information theory [@Belavkin11:_itw11; @Belavkin11:_qbic11]. In brief, the optimisation of expected fitness is performed subject to constraints on information divergence of distribution $P_{t+1}(m)$ from distribution $P_t(n)$. The resulting optimal mutation rates $\mu(n)$ correspond to cumulative probabilities $P_0(m<n)=\sum_{m=0}^{n-1}P_0(m)$, where $P_0(m)$ is the distribution of $m=d(\top,a)$ assuming uniform distribution $P(\omega)=\alpha^{-l}$ of sequences in ${\mathcal{H}}_\alpha^l$. Figure \[fig:mutation-rate-functions\] shows this function for ${\mathcal{H}}_4^{10}$. We point out that this control not only achieves a very fast decrease of the expected distance ${\mathbb{E}}\{m\}$ to the optimum, but the resulting populations also have the smallest variance $\sigma^2(m)$ of the distances.
There are other optimisation criteria, such as maximisation of cumulative expected fitness, that may lead to different optimal control functions. Thus, Figure \[fig:mutation-rate-functions\] and this discussion illustrate the fact that there is no single optimal mutation rate control function, but a variety of functions, each of which solves a specific optimisation problem. However, it is also evident from Figure \[fig:mutation-rate-functions\] that all these control functions have a common property: the mutation rate increases monotonically with the distance of the parent sequence from the optimum. Where an evolutionary system optimises a particular criterion, such as one of those considered in this section, on a monotonic landscape, the optimal mutation rate control function will be the corresponding derived function. In Section \[sec:meta-ga\] we shall consider an approximation technique applicable to a more general class of problems, including cases where derivation is impractical. In Section \[sec:monotonic\] we relax the assumption of a monotonic landscape.
Evolutionary Optimisation of Mutation Rate Control Functions {#sec:meta-ga}
============================================================
Analytical approaches cannot always be applied to derive optimal mutation rate control functions due to high problem complexity. Another approach is to use numerical optimisation or evolutionary techniques to obtain approximately optimal solutions. In this section, we introduce such an evolutionary technique that uses two genetic algorithms. The first, which we refer to as the Inner-GA, evolves sequences with the mutation rate controlled by some function that maps fitness to mutation rate. The second, which we refer to as the Meta-GA, evolves a population of such mutation rate control functions for better performance of the Inner-GA. Below we describe the details of these algorithms and report the results of experiments. The Inner-GA can use any fitness function. First, we shall apply the technique to the case where the fitness of an individual is its negative distance from a selected point in a Hamming space. Later we shall apply the technique to more general non-monotonic fitness landscapes.
Inner-GA
--------
The Inner-GA is a simple generational genetic algorithm that uses no selection and no recombination. Each genotype in the Inner-GA is a sequence $\omega\in{\mathcal{H}}_\alpha^l$, and we used populations of 100 individuals. The initial population had equal numbers of individuals at each fitness value, and they were evolved by the Inner-GA for 500 generations using simple point mutation, according to a mutation rate control function specified by the Meta-GA. The fitness can be defined by an arbitrary real function $y=f(\omega)$, and the average fitness $\bar y(t)$ of the population is calculated at each generation, in order that expected fitness ${\mathbb{E}}\{y\}(t)$ may be maximised by the Meta-GA.
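A minimal sketch of such an Inner-GA follows (function names, the random initialisation and the toy control function are our own simplifications; the paper initialises the population with equal numbers of individuals at each fitness value):

```python
import random

def inner_ga(fitness, mu_of_y, l, alpha=4, pop_size=100,
             generations=500, rng=random):
    # generational GA with point mutation only: no selection, no recombination
    pop = [[rng.randrange(alpha) for _ in range(l)] for _ in range(pop_size)]
    for _ in range(generations):
        for seq in pop:
            mu = mu_of_y(fitness(seq))    # rate set by the parent's fitness
            for i in range(l):
                if rng.random() < mu:
                    # point mutation: replace by one of the alpha-1 others
                    seq[i] = (seq[i] + rng.randrange(1, alpha)) % alpha
    return sum(fitness(s) for s in pop) / pop_size  # average fitness

# toy run in H_4^10: fitness is negative distance to the all-zero sequence,
# controlled by the linear function mu(n) = n/l, i.e. mu(y) = -y/l
f = lambda s: -sum(c != 0 for c in s)
avg = inner_ga(f, lambda y: -y / 10, l=10, pop_size=20, generations=100)
```

The returned average fitness $\bar y(t)$ of the final generation is the quantity the Meta-GA maximises.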
Meta-GA
-------
The Meta-GA is a simple generational genetic algorithm that uses tournament selection (a good choice when little is known or assumed about the structure of the landscape). Each genotype in the Meta-GA is a mutation rate function $\mu(y)$ of fitness values $y$. The domain of $\mu(y)$ is an ordered partition of the range $\{y:f(\omega)=y,\ \omega\in{\mathcal{H}}_\alpha^l\}$ of the Inner-GA fitness function. Thus, individuals in the Meta-GA are sequences of real values $\mu\in[0,1]$ representing probabilities of mutation at different fitnesses, as used in the Inner-GA.
At each generation of the Meta-GA, multiple copies of the Inner-GA were evolved for 500 generations, with the mutation rate in each copy controlled by a different function $\mu(y)$ taken from the Meta-GA population. We used populations of 100 individual functions, which were initialised to $\mu(y)=0$. All runs within the same Meta-GA generation were seeded with the same initial population of the Inner-GA. The Meta-GA evolved functions $\mu(y)$ for $5\cdot10^5$ generations to maximise the average fitness $\bar y(t)\approx{\mathbb{E}}\{y\}(t)$ in the final generation of the Inner-GA.
The Meta-GA used the following selection, recombination and mutation:
- Randomly select three individuals from the population and replace the least fit of these with a mutated crossover of the other two; repeat with the remaining individuals until all individuals from the population have been selected or fewer than three remain.
- Crossover recombines the start of the numerical sequence representing one mutation rate function with the end of another, using a single cut point chosen at random from the interior positions (excluding either end) so that no clones are produced.
- Mutation adds a uniform-random number $\Delta\mu\in[-.1,.1]$ to one randomly selected value $\mu$ (mutation rate) on the individual mutation rate function but then bounds that value to be within $[0,1]$.
The Meta-GA returns the fittest mutation rate function $\mu(y)$.
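The three operators above can be sketched as a single Meta-GA generation (a simplified illustration; the helper name and the dummy score standing in for a full Inner-GA run are our own):

```python
import random

def meta_ga_generation(population, score, rng=random):
    # one Meta-GA generation: random trios; the least fit of each trio is
    # replaced by a mutated single-point crossover of the other two
    pool = population[:]
    rng.shuffle(pool)
    out = []
    while len(pool) >= 3:
        trio = sorted([pool.pop() for _ in range(3)], key=score)
        p1, p2 = trio[1], trio[2]           # the two fitter functions survive
        cut = rng.randrange(1, len(p1))     # cut point excludes either end
        child = p1[:cut] + p2[cut:]         # single-point crossover
        i = rng.randrange(len(child))       # perturb one mutation-rate value
        child[i] = min(1.0, max(0.0, child[i] + rng.uniform(-0.1, 0.1)))
        out.extend([p1, p2, child])
    return out + pool                       # fewer than three remain untouched

# toy usage: the score of a control function would normally be the final
# average fitness of an Inner-GA run; a dummy score is used here
random.seed(0)
pop = [[0.5] * 8 for _ in range(9)]
pop = meta_ga_generation(pop, score=sum)
```

In the full algorithm, `score` would launch an Inner-GA seeded identically for every member of the Meta-GA population.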
Evolved control functions
-------------------------
The kind of mutation rate control function the Meta-GA evolves depends greatly on properties of the fitness landscape used in the Inner-GA. In Section \[sec:theory-optimal-control\] we showed theoretically that for $f(\omega)$ corresponding to negative distance to optimum $-d_H(\top,\omega)$, the optimal mutation rate increases with $n=d_H(\top,\omega)$. Therefore, the population of mutation rate functions in the Meta-GA should evolve the same characteristics in such a landscape. Figure \[fig:mutation-10-4-d\] shows the averages and standard deviations of the fittest control functions evolved in 20 runs of the Meta-GA using Inner-GAs with individuals in ${\mathcal{H}}_4^{10}$ (i.e. $\alpha=4$, $l=10$) and fitness defined by $f(\omega)=-d_H(\top,\omega)$. As predicted, the mutation rate increases with $n=d_H(\top,\omega)$. We shall now consider more complex landscapes.
Locally and Weakly Monotonic Fitness Landscapes {#sec:monotonic}
===============================================
The logic behind the variation and optimal control of mutation rates described in the previous section was based on the assumption that fitness $f(\omega)$ is isomorphic to negative Hamming distance $-d_H(\top,\omega)$ from the top sequence, which allowed us to derive optimal control functions using the geometry of the space of sequences. As detailed below, this assumption implies global monotonicity of the fitness landscape, which is highly unlikely to hold in real biological landscapes, as these can be rugged [@Lobkovsky11]. In this section, we define the concepts of local and weak monotonicity relative to a chosen metric and show that all landscapes are weakly monotonic at least in some small but non-trivial neighbourhood of the top sequence. This relation between fitness and distance allows one to implement a control of mutation rate using feedback from fitness values. We then consider how monotonicity of different landscapes may influence these fitness-based optimal control functions.
Memoryless communication between fitness and distance
-----------------------------------------------------
If fitness $y=f(\omega)$ is not isomorphic to the negative distance $n=d_H(\top,\omega)$ from the optimum, then fitness values of the sequences do not provide full information about their distances. Thus, in order to employ the optimal control $\mu(n)$ of mutation rate based on the distance from the top sequence, one has to estimate the distance from fitness values. The estimation of an unobserved random variable $n_t=d_H(\top,\omega_t)$ at time $t$ from a sequence $y_t,y_{t-1},\dots,y_0$ of observed random variables is known as the [*filtering*]{} problem [@Stratonovich59:_nonlinear]. Note that generally the observed process $\{y_t\}_{t\geq0}$ is not Markov (i.e. $P(y_{t+1}\mid y_t,\ldots,y_0)\neq P(y_{t+1}\mid y_t)$), even if the unobserved process $\{n_t\}_{t\geq0}$ and the joint process $\{(n_t,y_t)\}_{t\geq0}$ are. For this reason, the optimal control of mutation rate should be a function $\mu(y_t,\ldots,y_0)$ of the entire history of observations. It seems unlikely, however, that such a control has biological relevance, as its implementation would require information about fitness values in all previous generations. Instead, we shall consider a control based only on the current fitness value $y_t$. Our analysis will focus on monotonic properties of the fitness landscape that will allow us to relate the transition probability $P_\mu(y_{t+1}\mid y_t)$ between fitness values of the parent and offspring with the probability $P_\mu(m\mid n)$ of transitions between spheres of different radii around the optimum. We shall demonstrate that the ‘similarity’ between these transition probabilities increases as sequences evolve closer to the optimum, and for this reason the optimal control function $\mu(y_t)$ based on the current fitness values should closely resemble the distance-based optimal control function $\mu(n)$ in some neighbourhood of the optimum.
By a *fitness landscape* we mean the graph of a function $f\circ\kappa^{-1}:{\mathcal{H}}_\alpha^l\rightarrow{\mathbb{R}}$ which associates representations $\kappa(\omega)\in{\mathcal{H}}$ (codes) of individuals with their fitness values $y=f(\omega)$. The landscape defines a joint distribution $P(y,n)$ of the fitness values $y=f(\omega)$ and distances $n=d_H(\top,\omega)$ from the nearest global optimum. This joint distribution defines conditional probabilities $P(n\mid y)$ and $P(y\mid n)$. Let us consider mutation of sequence $a$ into sequence $b$, and let us denote by $n=d_H(\top,a)$ and $m=d_H(\top,b)$ their distances from the nearest optimum and by $y_t=f(a)$ and $y_{t+1}=f(b)$ their fitness values. Thus, given sequence $b$, its fitness and distance values $y_{t+1}$ and $m$ are independent of the parent sequence $a$. We shall assume further that given distance $m$, the fitness value $y_{t+1}$ is also independent of distance $n$: $P(y_{t+1}\mid m,n)=P(y_{t+1}\mid m)$. One can show that this is equivalent to conditional independence of $y_{t+1}$ and $y_t$ given distances $m$ and $n$: $P(y_{t+1},y_t\mid m,n)=P(y_{t+1}\mid m)\,P(y_t\mid n)$. The transition probability $P_\mu(y_{t+1}\mid y_t)$ in this case can be expressed as a composition of transition probabilities $P(n\mid y_t)$, $P_\mu(m\mid n)$ and $P(y_{t+1}\mid m)$ in the following way: $$P_\mu(y_{t+1}\mid y_t)=\sum_{m=0}^l\sum_{n=0}^l P(y_{t+1}\mid m)P_\mu(m\mid n)P(n\mid y_t)$$ Thus, we assume that the fitness landscape acts as a memoryless communication channel between distances of individuals to the nearest global optimum and their fitness values. The amount of information communicated through this channel defines how ‘similar’ the conditional probabilities $P_\mu(y_{t+1}\mid y_t)$ and $P_\mu(m\mid n)$ are and how effective a mutation control function $\mu(y)$ is.
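The channel composition above is simply a product of stochastic matrices. A toy sketch (the matrices below are randomly generated placeholders, not derived from any particular landscape):

```python
import numpy as np

def fitness_transition(P_y_next_given_m, P_mu_m_given_n, P_n_given_y):
    # P_mu(y_{t+1} | y_t) = sum_{m,n} P(y_{t+1}|m) P_mu(m|n) P(n|y_t):
    # a composition of three column-stochastic matrices
    return P_y_next_given_m @ P_mu_m_given_n @ P_n_given_y

rng = np.random.default_rng(0)

def column_stochastic(rows, cols):
    M = rng.random((rows, cols))
    return M / M.sum(axis=0)          # each column sums to one

# toy sizes: 3 distinct fitness values, spheres of radius 0..3 (4 distances)
Pym = column_stochastic(3, 4)         # P(y_{t+1} | m)
Pmn = column_stochastic(4, 4)         # P_mu(m | n)
Pny = column_stochastic(4, 3)         # P(n | y_t)
Pyy = fitness_transition(Pym, Pmn, Pny)
assert np.allclose(Pyy.sum(axis=0), 1.0)   # the channel is again stochastic
```

The two extreme cases discussed next (independence and one-to-one correspondence) are the cases where this product collapses to a constant-column matrix or to a permuted copy of $P_\mu(m\mid n)$, respectively.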
If fitness values $y=f(\omega)$ of sequences and their distances $n=d_H(\top,\omega)$ from the nearest global optimum are statistically independent, then $P(n\mid y_t)=P(n)$, $P(y_{t+1}\mid m)=P(y_{t+1})$ and therefore $P_\mu(y_{t+1}\mid y_t)=P(y_{t+1})$. This means that fitness $y_{t+1}$ of the offspring is independent of fitness $y_t$ of its parent, and therefore a control of mutation rate will have [*no*]{} effect on fitness of the offspring. On the other hand, if there is a one-to-one correspondence between the fitness values $y=f(\omega)$ and distances $n=d_H(\top,\omega)$ (i.e. there is a bijection $g:{\mathbb{R}}\rightarrow{\mathbb{R}}$ such that $f(\omega)=g\circ d_H(\top,\omega)$ and $g^{-1}\circ f(\omega)=d_H(\top,\omega)$), then $P_\mu(y_{t+1}\mid y_t)=P_\mu(m=g^{-1}(y_{t+1})\mid n=g^{-1}(y_t))$, and the optimal mutation rate control function is $\mu\circ g^{-1}(y)$, where $\mu(n)$ is an optimal control function obtained using $P_\mu(m\mid n)$. In particular, the identity $f(\omega)=-d_H(\top,\omega)$ used in the previous section is established by $g(\cdot)=-1\times(\cdot)$. In \[sec:memoryless\] the relationship between transition probabilities $P(y_{t+1}\mid y_t)$ and $P(m\mid n)$ is explained in more detail.
Monotonicity of fitness landscapes
----------------------------------
Let us consider landscapes in which fitness and distance to nearest global optimum are not isomorphic but there is a deterministic mapping between them. Moreover, we shall consider monotonic properties of these mappings, which allow us to clarify notions of ‘smooth’ or ‘rugged’ fitness landscapes, used in biological literature. Note that these monotonic properties are relative to (i.e. depend on) the choice of a representation space, its metric $d$ and encoding-decoding scheme. Below we introduce the definitions of various monotonic properties of landscapes which later allow us to analyse rugged biological landscapes and address optimal control of mutation rate in such landscapes.
Let $(\Omega,d)$ be a metric space, and let $f:\Omega\rightarrow{\mathbb{R}}$ be a real function. Then, if all $a$ and $b$ inside some ball $B(\omega,\delta)$ satisfy the properties below, we say that:
- $d$ is *locally monotonic* relative to $f$ at $\omega$ if: $$-d(\omega,a)\leq -d(\omega,b)\quad\Longleftarrow\quad f(a)\leq f(b)$$
- $f$ is *locally monotonic* relative to metric $d$ at $\omega$ if: $$-d(\omega,a)\leq -d(\omega,b)\quad\Longrightarrow\quad f(a)\leq f(b)$$
- $f$ and $d$ are *locally isomorphic* at $\omega$ if: $$-d(\omega,a)\leq -d(\omega,b)\quad\iff\quad f(a)\leq f(b)$$
- We say that $d$ or $f$ are *globally monotonic* (*isomorphic*) at $\top$ relative to each other if the relevant property holds over $B(\top,\delta)\equiv\Omega$.
\[def:monotonicity\]
The three monotonic relations between fitness and distance defined above are illustrated in Figure \[fig:monotonicity\]. These cases represent idealised situations, but they help in understanding the properties of real, biologically relevant landscapes. Let us first consider global monotonicity, that is, when the monotonic properties hold on the whole of $\Omega$.
[Figure \[fig:monotonicity\]: schematic profiles of fitness against distance for the three cases: a) metric $d$ monotonic relative to $f$, b) $f$ monotonic relative to $d$ (permitting fitness plateaus), c) $f$ and $d$ isomorphic.]
The monotonic relationships between distance $d(\top,\omega)$ and fitness $f(\omega)$ can be represented by real monotonic functions $h:{\mathbb{R}}\to{\mathbb{R}}$ and $g:{\mathbb{R}}\to{\mathbb{R}}$ such that $h\circ f(\omega)=d(\top,\omega)$ and $g\circ d(\top,\omega)=f(\omega)$. These mappings are shown in the commutative diagrams in Figure \[fig:commutative\]. It is clear from the diagrams that mappings $h$ and $g$ are adjoint to encoding $\kappa$ and decoding $\kappa^{-1}$ schemes. Thus, for these diagrams to commute, these mappings as well as the representation space with its topology must satisfy certain properties. This represents the fact that monotonicity of fitness and distance (i.e. monotonicity of $h$ and $g$) is relative to the choice of a representation space, its metric $d$ and encoding-decoding scheme.
$$\xymatrix{(\Omega,\lesssim)\ar[d]_{\kappa}&&({\mathbb{R}},\leq)\ar[d]^h\ar[ll]_{f^{-1}}\\
({\mathcal{H}}_\alpha^l,\lesssim)\ar[rr]^{-d(\top,\cdot)}&&({\mathbb{R}},\leq)}
\qquad
\xymatrix{(\Omega,\lesssim)\ar[rr]^f&&({\mathbb{R}},\leq)\\
({\mathcal{H}}_\alpha^l,\lesssim)\ar[u]^{\kappa^{-1}}&&({\mathbb{R}},\leq)\ar[ll]_{-d^{-1}(\top,\cdot)}\ar[u]_g}$$
If the metric $d$ is monotonic relative to fitness $f$, then the distance to optimum is overdetermined, because there are generally more fitness values $f(\omega)$ than spheres $S(\top,n)$ around the optimum (see Figure \[fig:monotonicity\]a). This follows directly from the fact that in this case sequences with the same fitness must have the same distances to the optimum, but not necessarily vice versa (see Proposition \[pr:same-fitness-distance\] in \[sec:same-fitness-distance\]). Transition probabilities $P(y_{t+1}\mid y_t)$ between fitness values are easily determined by transition probabilities $P(m\mid n)$ between spheres around $\top$ and monotonic function $h\circ f(\omega)=d(\top,\omega)$ (see Proposition \[pr:induced-kernel\] in \[sec:memoryless\]): $$P(y_{t+1}\mid y_t)=\frac{1}{|h^{-1}\circ h(y_{t+1})|}\,P(m=h(y_{t+1})\mid n=h(y_t))$$ where $h^{-1}(y):=\{x:h(x)=y\}$ is the pre-image of $y$, and cardinality $|h^{-1}\circ h(y)|\geq1$ represents degeneracy of the mapping $h$ (i.e. the number of fitness values at the same distance from $\top$ as $y$). Thus, generally $P(y_{t+1}\mid y_t)\leq P(m=h(y_{t+1})\mid n=h(y_t))$, when distance to optimum is monotonic. In addition, it is easy to show that in the case of a globally monotonic metric there can be only one optimal element. Indeed, applying the definition to $\top_1$ and $\top_2$ we have: $$f(\top_1)=f(\top_2)\ \Longrightarrow\ d(\top_2,\top_1)=d(\top_2,\top_2)=0\ \iff\ \top_1=\top_2$$
The case of distance being overdetermined has little practical interest for our theory. In addition, this property does not allow for fitness plateaus as can be seen from Figure \[fig:monotonicity\]a. Such plateaus may be important in biology [@Wagner08]. It is therefore particularly interesting to look at the case where $f$ is monotonic to $d$, which allows for plateaus. In this case distance to optimum is underdetermined, because there can be fewer fitness values $f(\omega)$ than spheres $S(\top,n)$ around the optimum (see Figure \[fig:monotonicity\]b). It follows directly from the fact that in this case sequences with the same distance from the optimum must have the same fitness values, but not necessarily vice versa (see Proposition \[pr:same-fitness-distance\] in \[sec:same-fitness-distance\]). Transition probabilities $P(y_{t+1}\mid y_t)$ between fitness values can be computed from transition probabilities $P(m\mid n)$ between spheres around $\top$ and monotonic function $g\circ d(\top,\omega)=f(\omega)$ (see Proposition \[pr:induced-kernel\] in \[sec:memoryless\]): $$P(y_{t+1}\mid y_t)=\frac{1}{|g^{-1}(y_t)|}\,\sum_{m\in g^{-1}(y_{t+1})}\sum_{n\in g^{-1}(y_t)}P(m\mid n)$$ One can see that the relation between two transition probabilities is more complicated than in the previous case, and captures a model of ‘noisy’ communication between fitness and distance simply in the mapping $g$. The amount of noise in this case depends on the average degeneracy of the mapping $g$, represented by the average number of distance values $|g^{-1}(y)|$ corresponding to each fitness value $y=f(\omega)$. The extreme case is a constant fitness function, which has only one value so that all sequences are optimal. A non-trivial example of a highly degenerate landscape is a Boolean landscape, where fitness can have only two values, a situation close to many in biology where a single, non-lethal aspect of the environment is critical for determining fitness (e.g. 
a nutrient that either can or cannot be utilised, an absent vitamin that is or is not required, or resistance or not to a pathogen or stressor). We now combine the results obtained in Section \[sec:geometry\] with those in this section to derive transition probabilities between fitness values on such a Boolean landscape, where fitness is not isomorphic to distance as it was in the landscapes used in Section \[sec:geometry\], and to show how this leads to optimal mutation rate control even in this degenerate case.
\[ex:Boolean\] A Boolean fitness landscape is defined by $f(\omega)=1$ if $\omega=\top$; $f(\omega)=0$ otherwise. There can be multiple optima $\top\in\Omega$ with $f(\top)=1$, and the domain is partitioned into two disjoint subsets $f^{-1}(1)=\{\omega:f(\omega)=1\}$ and $f^{-1}(0)=\{\omega:f(\omega)=0\}$. Because there are only two fitness values, there are only four transition probabilities $P(y_{t+1}\mid y_t)$ between them, the most important of which for optimisation purposes is probability $P(y_{t+1}=1 \mid y_t=0)$. This probability is related to probability $P(\omega_{t+1}\mid\omega_t)$ of transitions between any two points $\omega_t$, $\omega_{t+1}\in\Omega$ in the following way: $$P(y_{t+1}=1\mid y_t=0)=\frac{1}{|f^{-1}(0)|}\sum_{\omega_{t+1}\in f^{-1}(1)}\sum_{\omega_t\in f^{-1}(0)}P(\omega_{t+1}\mid\omega_t)$$ One can see that the size of subsets $f^{-1}(1)$ and $f^{-1}(0)$ relative to each other plays an important role, and this characteristic can be used to study different types of Boolean landscapes. When $\omega$ are represented by sequences in a Hamming space ${\mathcal{H}}_\alpha^l$, the probability $P(\omega_{t+1}\mid \omega_t)$ with $d_H(\omega_{t+1},\omega_t)=n$ is given by equation (\[eq:top-sequence\]): $P_\mu(\omega_{t+1}\mid \omega_t)=(\alpha-1)^{-n}\mu^n(1-\mu)^{l-n}$. This expression can be used to maximise the transition probability above by optimising the mutation rate $\mu(0)$.
Weak monotonicity
-----------------
Generally, fitness landscapes may have different local monotonic properties, and the relationship between fitness and distance to an optimum may not be given by any function; rather, it is non-deterministic, characterised by the conditional probabilities $P(n\mid y_t)$ and $P(y_{t+1}\mid m)$. In this case, we can still define monotonicity in a [*weak*]{} sense (i.e. on average) using conditional expected fitness values within spheres of a given radius from point $\omega$: $${\mathbb{E}}\{f\mid n\}=\frac{1}{|S(\omega,n)|}\sum_{a:d(\omega,a)=n} f(a)$$
Let $(\Omega,d)$ be a metric space, and let $f:\Omega\rightarrow{\mathbb{R}}$ be a real function. Then we call $f$ *weakly locally monotonic* at $\omega$ relative to metric $d$ if there exists a ball $B(\omega,\delta)$ such that for all $a$, $b$ within this ball, the following condition holds: $$-d(\omega,a)=-n\leq -d(\omega,b)=-m\quad\Longrightarrow\quad {\mathbb{E}}\{f\mid n\}\leq {\mathbb{E}}\{f\mid m\}$$
It is not difficult to show that every fitness landscape is weakly locally monotonic at $\top$. To see this, suppose the opposite: then in every ball $B(\top,\delta)$ there are radii $m\leq n$ with ${\mathbb{E}}\{f\mid m\}<{\mathbb{E}}\{f\mid n\}$. For the smallest non-trivial ball, containing only the radii $0$ and $1$, this means $f(\top)={\mathbb{E}}\{f\mid0\}<{\mathbb{E}}\{f\mid1\}$, so that $\sup f(\omega)$ cannot be attained at $\top$ (i.e. $\top$ is not the optimum). Thus, there must be some neighbourhood $B(\top,\delta)$, containing elements other than $\top$, where weak monotonicity holds. Our analysis in Section \[sec:TF-landscapes\] suggests that biological landscapes may exhibit weak monotonicity in large neighbourhoods of the optimum.
As discussed previously, if $f$ is locally monotonic relative to $d$, then spheres $S(\top,\delta)$ cannot contain elements with different values $y=f(\omega)$. This is not true in the case of weak monotonicity. The variation of fitness within the spheres $S(\top,n)$ can be measured by the conditional variance of fitness: $$\sigma^2(f\mid n)=\frac{1}{|S(\top,n)|}\sum_{\omega:d(\top,\omega)=n}|f(\omega)-{\mathbb{E}}\{f\mid n\}|^2$$ Clearly, stronger monotonicity implies smaller variance $\sigma^2(f\mid n)$. It is not difficult to see that an increase of expected fitness ${\mathbb{E}}\{f\mid n\}\rightarrow f(\top)$ coincides with a decrease of the variance $\sigma^2(f\mid n)\rightarrow 0$. Because of these weak locally monotonic properties of general fitness landscapes, the probabilities of transitions $P(y_{t+1}\mid y_t)$ between fitness values that are close to the optimum $y_t$, $y_{t+1}> f(\top)-\varepsilon$ will be similar to transition probabilities $P(m\mid n)$ between spheres with $n$, $m=d(\top,\omega)<\delta$. Therefore, we formulate the following hypotheses:
Hypothesis 1
: Optimal mutation rate increases with a decrease in fitness in some neighbourhood of an optimum for realistic fitness landscapes (e.g. biological landscapes) where fitness is not isomorphic to distance, similar to the monotonic increase in optimal mutation rate derived for the isomorphic case.
Hypothesis 2
: Real and biological landscapes exhibit weak monotonicity in large neighbourhoods of an optimum.
Hypothesis 3
: The larger the neighbourhood of weak monotonicity, the more mutation rate control may contribute to evolution towards high fitness.
Evolving Fitness-Based Mutation Rate Control Functions {#sec:TF-landscapes}
======================================================
To test the relevance of our predictions about the optimal mutation rate control functions more widely in biologically realistic sequence-fitness landscapes, we used the Meta-GA technique described earlier (see Section \[sec:meta-ga\]) to evolve approximately optimal functions for 115 published complete landscapes of transcription factor binding [@Badis09]. Transcription factors have evolved over very long periods to bind to specific DNA sequences. The landscapes show experimentally measured strengths of interaction (DNA-TF binding score) between the double-stranded DNA sequences of length $l=8$ base pairs and a particular transcription factor. Because these landscapes represent results of direct interaction between the DNA sequences and the transcription factors, the DNA sequences can be thought of as both ‘phenotypes’ and their codes, which allows us to identify the space $\Omega$ of phenotypes with the representation space, which in this case is the Hamming space ${\mathcal{H}}_4^8$ ($\alpha=4$, $l=8$). The DNA-TF binding score, however, which plays the role of fitness, is clearly not identical to the negative Hamming distance of a sequence from the top sequence (a sequence with the maximum DNA-TF binding score). In this section, we show that the mutation rate control functions obtained for these landscapes using this evolutionary technique conform well to our theoretical predictions about the optimal mutation rate control.
Evolved control functions
-------------------------
We used the Meta-GA evolutionary optimisation technique, described in Section \[sec:meta-ga\], to obtain for each landscape the best possible mutation rate control function that maximises the average DNA-TF binding score in the population (expected fitness) after 500 replications. The Meta-GA converged within a small margin of statistical error to a specific mutation rate control function in each landscape. To obtain statistically reliable results as well as an estimate of convergence, 16 replicate runs were performed in each of the 115 transcription factor landscapes.
Figure \[fig:three-curves\] shows the average values and standard deviations of the evolved mutation rates for three transcription factors: Srf, Glis2 and Zfp740. Evolved functions for all landscapes are shown in Figure \[fig:all-curves\] in the supplementary material. One can see that the evolved function for each transcription factor landscape is monotonic in the direction predicted: close to zero mutation at the maximum fitness, rising to high levels further from the maximum fitness value. Once the mutation rate has peaked near the maximum value $\mu=1$, the mutation rates tend to decrease and become chaotic. As will be shown in the next section, this occurs at lower fitness values at which the landscape is no longer monotonic (i.e. further from the peak of fitness). Small standard deviations indicate good convergence to a particular control function. Observe that there is poor convergence in low-fitness areas of the landscape that are poorly explored by the genetic algorithm.
Landscapes for transcription factors
------------------------------------
The variation in the evolved mutation rate control function is clearly related to a variation in the properties of the landscapes. Our theoretical analysis suggests that the main property affecting the mutation rate control is monotonicity of the landscape relative to a metric measuring the mutation radius. In particular, the radius of point-mutation is measured by the Hamming metric, and we shall look into the local and weak monotonic properties of the transcription factors landscapes relative to the Hamming metric.
Figure \[fig:landscape-types-fitness\] shows average DNA-TF binding scores within spheres $S(\top,n)$ around the optimal sequence as a function of Hamming distance $n=d_H(\top,\omega)$ from the optimum. Data are shown for three transcription factors: Srf, Glis2 and Zfp740. Lines connect average values at discrete distances for visualisation purposes. Error bars show standard deviations of the DNA-TF binding scores within the spheres. Distributions of fitness with respect to Hamming distance $d_H(\top,\omega)$ for all 115 transcription factors are shown in Figure \[fig:all-fitness-local\] (supplementary material).
One can see from Figure \[fig:landscape-types-fitness\] that the landscape for the Srf factor has monotonic properties: The average values increase steadily for sequences that are closer to the optimum, and the deviations from the mean within the spheres are relatively small. This is in contrast to the other two landscapes. We note also that the average values for Glis2 decrease quite sharply around the optimum, while the landscape for Zfp740 has a relatively flat plateau area around the optimum, which means that there are many sequences with high DNA-TF binding score. This difference may explain different gradients of optimal mutation rates near the maximum fitness shown on Figure \[fig:three-curves\].
Monotonicity and controllability
--------------------------------
Our results have confirmed that the evolved optimal mutation rates rise from zero to very high levels as fitness decreases from the maximum value $f(\top)$ to some value $f(\top)-\varepsilon$ (see supplementary Fig. \[fig:all-curves\]). We refer to the corresponding value $\varepsilon>0$ as the [*monotonicity radius*]{}, as it defines the neighbourhood of $\top$ in terms of fitness values in which the evolved mutation rate control function has monotonic properties. We find substantial variation in monotonicity radius among transcription factors (see Fig. \[fig:three-curves\] and supplementary Fig. \[fig:all-curves\]).
We hypothesised that the variation in the optimal mutation rate control functions relates to variation in the monotonicity of the transcription factor landscapes. Various measures have been proposed for the roughness of biological landscapes [@Lobkovsky11]. Here we focus on the Kendall’s $\tau$ correlation, which is directly concerned with monotonicity; specifically, $\tau$ measures the proportion of mutations that, in moving closer to the optimum in sequence space, also increase in fitness. As shown in Figure \[fig:edge-tau\], we find that $\tau$ of the landscape does indeed have a relationship with the monotonicity radius $\varepsilon$ of the evolved mutation rate control functions (Spearman’s $\rho= 0.77$, $P \approx 10^{-16}$, $N=115$).
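Kendall's $\tau$ as used here can be computed directly from paired (distance, fitness) observations. A pure-Python sketch (the toy data are invented; the paper pairs mutations rather than arbitrary points, so this only illustrates the statistic itself):

```python
from itertools import combinations

def kendall_tau(xs, ys):
    # Kendall's tau-a: (concordant - discordant) / number of pairs
    conc = disc = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    n = len(xs)
    return (conc - disc) / (n * (n - 1) / 2)

# a perfectly monotonic toy landscape: fitness falls as distance grows,
# so negative distance and fitness are perfectly concordant (tau = 1)
dist = [0, 1, 2, 3, 4, 5]
fit = [9.0, 7.5, 6.0, 4.0, 2.0, 1.0]
tau = kendall_tau([-d for d in dist], fit)
assert tau == 1.0
```

A landscape with $\tau$ close to 1 is close to globally monotonic; discordant mutations (closer to the optimum but lower in fitness) pull $\tau$ towards 0.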
Finally, we hypothesise that these related features of the transcription factor landscape and mutation rate function themselves relate to the biological evolution of this transcription factor system. To test this we looked at the evolutionary age of transcription factor families [@Weirauch11]. We find the suggestion of a relationship between the monotonicity of a landscape ($\tau$) and the age of the transcription factor family, implying that the more recently a transcription factor family evolved, the more monotonic is its landscape (Spearman’s $\rho = 0.23$, $P = 0.061$, $N = 115$). We find a more substantial relationship between this evolutionary age and the monotonicity radius $\varepsilon$ (Spearman’s $\rho = 0.36$, $P = 0.0032$, $N = 115$).
Discussion {#sec:discussion}
==========
In this paper we have developed and tested theory relating to the control of mutation rate in biological sequence landscapes. To do so, we had to move the theory closer to the biology in three ways. Firstly (in Section \[sec:geometry\]), we generalised Fisher’s geometric model of adaptation from its Euclidean space (continuous and infinite) to discrete, finite Hamming spaces of sequences. Doing so demonstrated that, in contrast to the behaviour in Euclidean space, where the probability of beneficial mutation behaves similarly at different distances from the optimum [@Orr03], the probability of beneficial mutation, for a given mutation size, varies markedly depending on the distance from the optimum (Figure \[fig:fisher-r\]). Secondly, we analytically derived functions for optimal control of the mutation rate minimising the expected Hamming distance to a particular point (optimal sequence). We also demonstrated that these control functions vary with the specific formulation of the optimisation problem. Nonetheless, we observed consistency: all optimal functions increase monotonically (Figure \[fig:mutation-rate-functions\]). Thirdly, we developed theory concerning locally and weakly monotonic landscapes, demonstrating that all possible landscapes, including biologically rugged landscapes, can be included in these categories and thus that, at some level, our theoretical findings regarding mutation rate control may be applied to realistic biological landscapes. The most striking differences from existing theory in Euclidean spaces occur when sequences are short and far from a peak. We therefore used transcription factors binding to DNA sequences [@Badis09] as a test case, which involves both short sequences (eight base-pairs) and highly evolved binding specificity (i.e. we expect that many sequences will bind much more weakly than the best).
We tested hypotheses arising from the theory, relating to the nature of optimal mutation rate functions (Hypothesis 1 and Figure \[fig:three-curves\]), the monotonic properties of landscapes (Hypothesis 2 and Figure \[fig:landscape-types-fitness\]) and the relationship between the two (Hypothesis 3 and Figure \[fig:edge-tau\]). In each case we find evidence to support the hypothesis, implying that our theory is relevant to these biological landscapes.
We have considered the possibility of varying the general mutation rate for a single genotype, that is *mutation rate plasticity*, and identified forms that such plasticity may be expected to take as a function of fitness in biological fitness landscapes. This raises a number of important questions about how this theory might relate to living organisms. The primary question is whether such control of mutation rate plasticity actually occurs in nature. Variation in mutation rate is well known, and organisms with a genetically encoded raised mutation rate, termed mutators, are found at appreciable frequencies in various real populations, apparently via their association (that is ‘hitchhiking’) with beneficial mutations [@Taddei97; @Sniegowski97]. Mutation rate plasticity is a more subtle effect than simply being a mutator. However, as with the evolution of mutators, for mutation rate control to have evolved at all might be expected to require so-called ‘second-order selection’, that is selection not directly on a trait’s effect on an individual’s fitness, but indirectly via the genetic effects it produces [@Tenaillon01]. While rare, there are clear examples of second-order selection occurring in biology [@Woods11], and in our more abstracted system of genetic algorithms we do see mutation rate plasticity rapidly evolving to particular forms (Figure \[fig:three-curves\]). This implies that mutation rate control of the sort we have considered may reasonably be hypothesised in biology.
Most existing discussion of mutation rate plasticity in nature concerns the observed phenomenon of stress-induced mutagenesis [@Galhardo07]. It has long been postulated, and most recently argued from a population genetic model [@Ram12], that such plasticity might indeed be adaptive. Such adaptationist hypotheses for stress-induced mutation have been subject to protracted debate [@Tenaillon04], but two difficulties arise here. First, it is necessary to exclude alternative, non-adaptive hypotheses. For instance, it seems likely that a raised mutation rate could be a physiologically unavoidable direct effect of stress. This has long been speculated; for instance, Muller remarked that the kinetics of temperature’s effect on mutation rate resembles that of an ordinary chemical reaction [@Muller28]. Second, there needs to be a connection between the imposed or measured variable, stress, and the variable considered by the theory, (inverse) fitness. The first difficulty is ameliorated by the development of explicit theory around non-adaptive hypotheses of mutation rate variation [@Lynch10]. However, this population genetic theory is currently defined as an alternative to physiological hypotheses of mutation rate variation, whereas real organisms experience both physiological and population-genetic constraints. Integrating the two would help understand what might be expected in terms of stress-induced mutagenesis without recourse to adaptive hypotheses. Regarding the second issue, the connection between stress and fitness, ‘stress’, as typically defined, can be difficult to separate from ‘normal’ physiological processes [@Koolhaas11]. This means that stress is not a simple inverse of fitness. Indeed, stress may actually be associated with increased fitness (e.g. in the phenomenon of hormesis [@Constantini10]).
Therefore, while it is possible that stress-induced mutagenesis is an example of mutation rate control as discussed in this paper, further work is required to clarify how exactly the theory relates to that example and perhaps to look for new examples of mutation rate control.
Given the current uncertainties about the existence of mutation rate control in nature, it is important to ask whether, nonetheless, mechanisms exist whereby the processes discussed in this paper *could* be exercised. The very existence of mutator phenotypes demonstrates that, physiologically, increasing mutation rates from the low values typical of biology is possible. If it is possible via genetic change in mutators, it seems highly likely also to be possible in a controlled way via plastic changes. Indeed, several different mechanisms for modulating mutation rates have been proposed, notably by regulating particular DNA repair mechanisms [@Feng_etal1996; @Drake09; @Deem11] or up-regulating mutagenic repair [@Ponder05; @Slack06; @VanderVeen11].
While regulation of mutation rate is mechanistically feasible, a more challenging issue for the relevance of the theory presented here is whether feedback mechanisms exist for an individual organism to assess its own fitness against which to set its mutation rate. Stress is one indicator that may be assessed by an individual and is known to induce regulatory responses (e.g. the SOS response in bacteria [@Courcelle01]), but as discussed above, stress may not be a clear indicator of fitness. We propose three possible alternative feedback mechanisms, assessing either absolute or relative fitness. Absolute fitness is the scale used in the theory developed in this paper and concerns the number of offspring left in subsequent generations. For some organisms it may be possible to assess absolute fitness by assessing their own reproductive period relative to an internal or external clock. It is notable that one of the best characterised examples of stress-induced mutation [@Bjedov03] actually relates to mutagenesis in ageing bacterial colonies (MAC) and ageing may be an appropriate biological clock for this mechanism, one that is known to be associated with mutation rates in human males [@Kong12]. Secondly, for organisms with limited dispersal rates, the number of live organisms of the same genotype in the near vicinity may be a proxy for absolute fitness. Thirdly, while the fitness scale we have worked with is absolute, we have demonstrated elsewhere [@Belavkin_etal11:_ecal11; @Belavkin11:_itw11] that appropriate mutation rate functions may be approximated by the cumulative distribution function of the population fitnesses through evolution (also shown in Figure \[fig:mutation-rate-functions\]). That is, information about an organism’s fitness relative to others in its population could in principle act as feedback, allowing an individual to set its mutation rate in a good approximation to what would be optimal if absolute fitness were known. 
These latter two mechanisms raise the intriguing possibility that population-level or social effects could be important in determining individual mutation rates. Testing which, if any, of these processes actually occurs in biology will give important insights into evolutionary mechanisms.
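The third feedback mechanism above, in which an individual sets its mutation rate from its fitness relative to the rest of the population, can be sketched in a few lines. The mapping rate $=$ max\_rate $\times$ (1 $-$ ECDF(fitness)) below is our own illustrative choice of a decreasing function of relative fitness, not the exact approximating function derived in the cited work.

```python
# Sketch of a relative-fitness feedback rule: each individual's mutation
# rate is set from its fitness rank in the current population.  The
# mapping rate = max_rate * (1 - ECDF(fitness)) is an illustrative
# choice, not the exact function from the cited work.
import numpy as np

def ecdf_mutation_rates(fitness, max_rate=0.5):
    """Map each individual's fitness to a mutation rate via the
    empirical CDF of the population's fitnesses."""
    fitness = np.asarray(fitness, dtype=float)
    # Fraction of the population with strictly lower fitness.
    ranks = (fitness[:, None] > fitness[None, :]).mean(axis=1)
    return max_rate * (1.0 - ranks)

pop = [3.2, 1.1, 2.5, 0.4]
print(ecdf_mutation_rates(pop))  # the fittest individual gets the lowest rate
```

No knowledge of absolute fitness is needed here: the rule uses only comparisons within the current population, which is what makes it a candidate for the population-level feedback discussed above.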
We have focused on fitness-associated control of mutation rate. However, mutation is only one evolutionary process where fitness-associated control may be beneficial. Recombination and dispersal are also evolutionary processes that may be under the control of the individual and therefore open to similar effects. Fitness-associated recombination has been demonstrated to be advantageous theoretically [@Hadany03; @Agrawal05] and identified in biology [@Agrawal08; @Zhong11]. Similarly, the idea that dispersal associated with low fitness might be advantageous has a basis in simulation of spatially differentiated populations [@Aktipis2004; @Aktipis2011]. This association might perhaps be framed more generally in terms of ‘fitness associated dispersal’. Thus the framework for control of mutation rate in response to fitness that we have developed here may in future be applicable to both recombination and dispersal.
To conclude, our development of theory and testing its predictions *in silico* not only clarifies ideas around the monotonicity of fitness landscapes and mutation rate control, it leads directly to questions testable in living organisms. At the same time there is the potential for greater insight through further development of the theory. Three directions seem particularly likely to be fruitful.
First, while it is striking how effective mutation rate control is for adaptive evolution without invoking selection in our *in silico* experiments, it will be important to consider the role of selection strategies. Such strategies may implicitly modify fitness functions. For instance, one of the analytically derived functions shown in Figure \[fig:mutation-rate-functions\] is the mutation rate function for a DNA space (${\mathcal{H}}_4^{10}$) which maximises the probability of adaptation (as derived by Bäck for binary sequences [@Back93]). As outlined in the corresponding section, maximising the probability of adaptation is equivalent to maximising the expected fitness of the offspring relative to its parent. This effect may be implicit in a selection strategy that removes the offspring of reduced fitness that will inevitably be produced by maximising offspring expected fitness. Given the importance of selection in biology, we therefore anticipate that such functions may be closer to putative mutation rate control functions in living organisms. This requires further work.
A second area for development is in variable adaptive landscapes. The importance of time-varying adaptive landscapes in biological evolution is becoming increasingly appreciated [@Mustonon09; @Collins11] and variable mutation rates have a particular role here [@Stich10]. It is worth noting, however, that our derivation of optimal mutation rate functions is *not* dependent on a fixed landscape, as it depends only on the fitness values. Nonetheless, as we demonstrate for the transcription factor landscapes, variation in landscapes’ monotonic properties relates to the shape of mutation rate functions in predictable ways (Figure \[fig:edge-tau\]). This deserves further exploration both theoretically and empirically: measuring variation in the monotonic properties of real biological landscapes will be informative about optimal mutation rate functions and *vice versa*.
Finally, there is potential to develop theory around the role of encoding-decoding schemes. Landscape monotonicity, as explored here, is not absolute; it depends on the encoding-decoding scheme (see Figure \[fig:commutative\]). That is, if the encoding changes, it may be possible to convert a non-monotonic landscape into a monotonic one. Biology uses a variety of such encoding schemes which may themselves evolve. For the transcription factor landscapes used here, the encoding-decoding scheme is defined by the biochemical interactions between the transcription factor (a protein molecule) and DNA. Thus, evolution of transcription factors constitutes evolution of the encoding-decoding scheme, and indeed we do find a relationship between that evolution (age of families) and the monotonic properties of the associated landscapes. A more familiar example of a biological encoding-decoding scheme is the genetic code where there is much existing work on its evolution (e.g. [@Freeland00]). Determining how evolution of such codes affects the monotonic properties of biological landscapes as explored here may therefore provide novel insights into large-scale evolutionary patterns. Ultimately, theory such as this that identifies analytically or empirically optimal mutation rate control functions may help make predictions about evolutionary responses to future environmental change [@Chevin10] or inferences about the environment(s) within which particular organisms evolved. In the meantime mutation rate control as developed here will assist directed evolution within biological and other complex landscapes, for instance in the evolution of DNA-protein binding [@Knight09].
Memoryless Communication {#sec:memoryless}
========================
Let $(X,\mathcal{X})$ and $(Y,\mathcal{Y})$ be measurable sets. We shall now consider an $X\times Y$-valued stochastic process $\{(x_t,y_t)\}_{t\geq0}$ and the ‘similarity’ between the marginal processes $\{x_t\}_{t\geq0}$ and $\{y_t\}_{t\geq0}$ under special assumptions on the communication between $X$ and $Y$. Recall that a Markov [*transition kernel*]{} from $(X,\mathcal{X})$ to $(Y,\mathcal{Y})$ is a conditional probability measure $P(Y_i\mid x)$ on $(Y,\mathcal{Y})$, which is $\mathcal{X}$-measurable for each $Y_i\in\mathcal{Y}$. We shall often use measure-theoretic notation $dP(y\mid x)$ for transition kernel $P(Y_i\mid x)$, especially when it appears under the integral.
Let $(X,\mathcal{X})$ and $(Y,\mathcal{Y})$ be measurable sets, and let $\{(x_t,y_t)\}_{t\geq0}$ be an $X\times Y$-valued stochastic process such that elements of the marginal process $\{y_t\}_{t\geq0}$ are conditionally independent given the corresponding elements of $\{x_t\}_{t\geq0}$: $$dP(y_t,\ldots,y_0\mid x_t,\ldots,x_0)=dP(y_t\mid x_t)\times\cdots\times dP(y_0\mid x_0)$$ Then the transition kernel $dP(y_{t+1}\mid y_t)$ can be expressed as a composition of the transition kernels $dP(x_t\mid y_t)$, $dP(x_{t+1}\mid x_t)$ and $dP(y_{t+1}\mid x_{t+1})$ as follows: $$dP(y_{t+1}\mid y_t)=\int\limits_{x_{t+1}\in X}\int\limits_{x_t\in X} dP(y_{t+1}\mid x_{t+1})\,dP(x_{t+1}\mid x_t)\,dP(x_t\mid y_t)$$ This transition kernel has the following properties:
1. If $X$ and $Y$ are statistically independent, then $y_{t+1}\in Y$ is independent of $y_t\in Y$: $dP(y_{t+1}\mid y_t)=dP(y_{t+1})$
2. If $dP(x\mid y)$ corresponds to a function $x=h(y)$ and $y$ are uniformly distributed in the preimage $h^{-1}(x)$, then $$dP(y_{t+1}\mid y_t)=\frac{1}{|h^{-1}\circ h(y_{t+1})|}\,dP(x_{t+1}=h(y_{t+1})\mid x_t=h(y_t))$$
3. If $dP(y\mid x)$ corresponds to a function $y=g(x)$ and $x$ are uniformly distributed in the preimage $g^{-1}(y)$, then $$dP(y_{t+1}\mid y_t)=\frac{1}{|g^{-1}(y_t)|}\int\limits_{x_{t+1}\in g^{-1}(y_{t+1})}\int\limits_{x_t\in g^{-1}(y_t)}dP(x_{t+1}\mid x_t)$$
4. If $dP(y\mid x)$ corresponds to a bijection $y=h(x)$, then $$dP(y_{t+1}\mid y_t)=dP(x_{t+1}=h(y_{t+1})\mid x_t=h(y_t))$$
\[pr:induced-kernel\]
The transition kernel $dP(y_{t+1}\mid y_t)$ can generally be expressed as follows: $$\begin{aligned}
dP(y_{t+1}\mid y_t)&=&\int\limits_{x_{t+1}\in X}\int\limits_{x_t\in X} dP(y_{t+1},x_{t+1},x_t\mid y_t)\\
&=&\int\limits_{x_{t+1}\in X}\int\limits_{x_t\in X}dP(y_{t+1}\mid x_{t+1},x_t,y_t)\,dP(x_{t+1}\mid x_t,y_t)\,dP(x_t\mid y_t)\end{aligned}$$ Using the Bayes formula and conditional independence $dP(y_{t+1},y_t\mid x_{t+1},x_t)=dP(y_{t+1}\mid x_{t+1})\,dP(y_t\mid x_t)$ one can show that $dP(y_{t+1}\mid x_{t+1},x_t,y_t)=dP(y_{t+1}\mid x_{t+1})$ and $dP(x_{t+1}\mid x_t,y_t)=dP(x_{t+1}\mid x_t)$. Indeed $$\begin{aligned}
dP(y_{t+1}\mid x_{t+1},x_t,y_t)&=&\frac{dP(y_{t+1},y_t\mid x_{t+1},x_t)}{\int\limits_{y_{t+1}\in Y}dP(y_{t+1},y_t\mid x_{t+1},x_t)}\\
&=&\frac{dP(y_{t+1}\mid x_{t+1})\,dP(y_t\mid x_t)}{\int\limits_{y_{t+1}\in Y}dP(y_{t+1}\mid x_{t+1})\,dP(y_t\mid x_t)}=dP(y_{t+1}\mid x_{t+1})\end{aligned}$$ $$\begin{aligned}
dP(x_{t+1}\mid x_t,y_t)&=&\int\limits_{y_{t+1}\in Y}dP(y_{t+1},x_{t+1}\mid x_t,y_t)\\
&=&\int\limits_{y_{t+1}\in Y}\frac{dP(y_{t+1},y_t\mid x_{t+1},x_t)\,dP(x_{t+1}\mid x_t)}{dP(y_t\mid x_t)}\\
&=&\int\limits_{y_{t+1}\in Y}\frac{dP(y_{t+1}\mid x_{t+1})\,dP(y_t\mid x_t)\,dP(x_{t+1}\mid x_t)}{dP(y_t\mid x_t)}\\
&=&dP(x_{t+1}\mid x_t)\end{aligned}$$ Thus, $dP(y_{t+1}\mid y_t)$ can be expressed using the composition of transition kernels $dP(y_{t+1}\mid x_{t+1})\,dP(x_{t+1}\mid x_t)\,dP(x_t\mid y_t)$. We now consider four important cases.
1. If $X$ and $Y$ are independent, then $dP(y_{t+1}\mid x_{t+1})=dP(y_{t+1})$ and $dP(x_t\mid y_t)=dP(x_t)$, and therefore $$dP(y_{t+1}\mid y_t)=dP(y_{t+1})\int\limits_{x_{t+1}\in X}\int\limits_{x_t\in X}dP(x_{t+1}\mid x_t)\,dP(x_t)=dP(y_{t+1})$$
2. If $x=h(y)$ and $y$ are uniformly distributed in the preimage $h^{-1}(x)$, then $$dP(x_t\mid y_t)=\delta_{h(y_t)}(x_t)\,,\qquad
dP(y_{t+1}\mid x_{t+1})=\frac{1}{|h^{-1}\circ h(y_{t+1})|}$$ which gives the resulting expression.
3. If $y=g(x)$ and $x$ are uniformly distributed in the preimage $g^{-1}(y)$, then $$dP(x_t\mid y_t)=\frac{1}{|g^{-1}(y_t)|}\,,\qquad
dP(y_{t+1}\mid x_{t+1})=\delta_{g(x_{t+1})}(y_{t+1})$$ The resulting expression is obtained by integrating $dP(x_{t+1}\mid x_t)$ for each $x_{t+1}\in g^{-1}(y_{t+1})$ and $x_t\in g^{-1}(y_t)$.
4. Follows trivially from the fact that $|h^{-1}\circ h(y)|=1$ for a bijection.
It is not required in Proposition \[pr:induced-kernel\] for any of the stochastic processes $\{(x_t,y_t)\}_{t\geq0}$, $\{x_t\}_{t\geq0}$ or $\{y_t\}_{t\geq0}$ to be Markov. It is well-known, however, that if $\{x_t\}_{t\geq0}$ is Markov (i.e. $dP(x_{t+1}\mid x_t,\ldots,x_0)=dP(x_{t+1}\mid x_t)$) and $y_t$ are conditionally independent given the corresponding $x_t$, then the combined process $\{(x_t,y_t)\}_{t\geq0}$ is Markov as well, because in this case $dP(x_{t+1},y_{t+1}\mid x_t,y_t,\ldots,x_0,y_0)=dP(y_{t+1}\mid x_{t+1})\,dP(x_{t+1}\mid x_t)$. The unobserved process $\{x_t\}_{t\geq0}$ is often referred to as a hidden Markov model, and $x_t$ is estimated from observed values $y_0,\ldots,y_t$ of the related process $\{y_t\}_{t\geq0}$ (this is called the [*filtering*]{} problem [@Stratonovich59:_nonlinear]). Note that the observed process $\{y_t\}_{t\geq0}$ is usually non-Markov (i.e. $dP(y_{t+1}\mid y_t,\ldots,y_0)\neq dP(y_{t+1}\mid y_t)$). In the context of Section \[sec:monotonic\], the unobserved variable $x\in X$ is distance to optimum $d(\top,\omega)$, and observed variable $y\in Y$ is fitness.
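In the finite case, the composition in Proposition \[pr:induced-kernel\] can be verified numerically. The sketch below builds a two-state hidden Markov chain with arbitrary illustrative matrices and checks that the induced kernel $dP(y_{t+1}\mid y_t)$, computed directly from the joint distribution, coincides with the composition $dP(y_{t+1}\mid x_{t+1})\,dP(x_{t+1}\mid x_t)\,dP(x_t\mid y_t)$.

```python
# Numerical check of the kernel composition in the finite case.
# T[x, x'] = P(x_{t+1}=x' | x_t=x): Markov transition on X
# E[x, y]  = P(y_t=y | x_t=x): emission kernel (conditional independence)
# p[x]     = marginal distribution of x_t
# All matrices below are arbitrary illustrative choices.
import numpy as np

T = np.array([[0.9, 0.1],
              [0.3, 0.7]])
E = np.array([[0.8, 0.2],
              [0.4, 0.6]])
p = np.array([0.5, 0.5])

# Joint P(x_t, y_t, x_{t+1}, y_{t+1}) under conditional independence.
joint = (p[:, None, None, None] * E[:, :, None, None]
         * T[:, None, :, None] * E[None, None, :, :])

# Direct induced kernel P(y_{t+1} | y_t).
p_y = joint.sum(axis=(0, 2, 3))                 # P(y_t)
direct = joint.sum(axis=(0, 2)) / p_y[:, None]  # rows indexed by y_t

# Composition: P(x_t | y_t) by Bayes, then chain through T and E.
post = (p[:, None] * E) / (p @ E)[None, :]      # P(x_t = x | y_t = y)
composed = (post.T @ T) @ E                     # rows indexed by y_t

assert np.allclose(direct, composed)
print(composed)
```

The same script also illustrates the remark above: the chain $\{y_t\}$ obtained this way is in general non-Markov, so `direct` is only the one-step kernel, not a full description of the observed process.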
Monotonicity {#sec:same-fitness-distance}
============
Let $(\Omega,d)$ be a metric space, and let $f:\Omega\to{\mathbb{R}}$ be a function with $f(\top)=\sup f(\omega)$ for some $\top\in\Omega$. If the metric $d$ is monotonic at $\top$ relative to $f$, then all $\omega$ with the same values $f(\omega)$ have the same distance $d(\top,\omega)$ from the optimum. Conversely, if $f$ is monotonic at $\top$ relative to $d$, then all $\omega$ with the same distance $d(\top,\omega)$ from the optimum have the same values $f(\omega)$. \[pr:same-fitness-distance\]
Indeed, using the definition of monotonic $d$: $$\begin{aligned}
f(a)=f(b)&\iff& f(a)\leq f(b)\ \wedge\ f(a)\geq f(b)\\
&\Longrightarrow& -d(\top,a)\leq -d(\top,b)\ \wedge\ -d(\top,a)\geq -d(\top,b)\\
&\iff& d(\top,a)=d(\top,b)\end{aligned}$$ Using the definition of monotonic $f$: $$\begin{aligned}
d(\top,a)=d(\top,b)&\iff& -d(\top,a)\leq -d(\top,b)\ \wedge\ -d(\top,a)\geq -d(\top,b)\\
&\Longrightarrow& f(a)\leq f(b)\ \wedge\ f(a)\geq f(b)\\
&\iff&f(a)=f(b)\end{aligned}$$
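A minimal numerical illustration of Proposition \[pr:same-fitness-distance\]: on the Hamming space $\{0,1\}^4$ (our own toy example), we take $f$ to be a strictly decreasing function of the distance to $\top$, so the metric is monotonic at $\top$ relative to $f$, and equal fitness values indeed imply equal distances.

```python
# Toy illustration of the proposition on the Hamming space {0,1}^4 with
# optimum at the all-zero string.  Fitness f = g(d(top, w)) for a
# strictly decreasing g, so equal fitness must imply equal distance.
from itertools import product

l = 4
top = (0,) * l
d = lambda a, b: sum(x != y for x, y in zip(a, b))  # Hamming distance
f = lambda w: -d(top, w)                            # strictly decreasing in distance

space = list(product((0, 1), repeat=l))
for a in space:
    for b in space:
        if f(a) == f(b):
            assert d(top, a) == d(top, b)  # equal fitness -> equal distance
print("checked", len(space) ** 2, "pairs")
```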
Point Mutation as Optimal Solution of Variational Problem {#sec:variational}
=========================================================
Let $(\Omega,d)$ be a metric space, $dQ(a\in\Omega)$ be a probability measure of the ‘parent’ points, and let $dP(b\in\Omega)$ be a probability measure of their ‘offspring’ points obtained by a stochastic transformation defined by the transition kernel $dP(b\mid a)$. The product $dP(b\mid a)\,dQ(a)$ defines a joint probability measure of parents and their offspring. The expected distance between parents and offspring is $${\mathbb{E}}\{d(a,b)\}=\int\limits_{\Omega\times\Omega} d(a,b)\,dP(b\mid a)\,dQ(a)$$ The mutual information between parents and offspring is defined as $$I\{a,b\}=\int\limits_{\Omega\times\Omega}\left[\ln\frac{dP(b\mid a)}{dP(b)}\right]\,dP(b\mid a)\,dQ(a)$$ Recall that $I\{a,b\}\geq0$ with zero if and only if $a$ and $b$ are statistically independent. The supremum of $I\{a,b\}$ corresponds to the case when $b$ is obtained from $a$ deterministically using some injective function on $\Omega$ (i.e. a one-to-one mapping). For example, if $b$ is identical to $a$ (i.e. $dP(b\mid a)$ corresponds to the identity mapping on $\Omega$), then $I\{a,b\}=\sup I\{a,b\}=|\Omega|$ and $d(a,b)=0$. Consider the following variational problem $$\mbox{minimise}\quad{\mathbb{E}}\{d(a,b)\}\quad\mbox{subject to}\quad I\{a,b\}\leq\lambda
\label{eq:min-d}$$ where optimisation is over all joint probability measures $dP(b\mid a)\,dQ(a)$ or over all transition probabilities $dP(b\mid a)$, if $dQ(a)$ is fixed. Because of the constraint on mutual information, the transition probabilities $dP(b\mid a)$ cannot correspond to any injective function on $\Omega$, and therefore generally $b$ cannot be identical to $a$ so that ${\mathbb{E}}\{d(a,b)\}>0$. Note that problem (\[eq:min-d\]) has the following ‘inverse’ problem: $$\mbox{minimise}\quad I\{a,b\}\quad\mbox{subject to}\quad{\mathbb{E}}\{d(a,b)\}\leq\upsilon
\label{eq:min-i}$$ The constraint on the expected distance implies that $a$ and $b$ are not independent so that $I\{a,b\}>0$. It is well-known in information theory (e.g. see [@Shannon48; @Stratonovich65] or [@Belavkin11:_optim] for generalisations) that solutions to these variational problems are members of an exponential family $$dP_\beta(b\mid a)=e^{-\beta\,d(a,b)-\Psi(\beta,a)}\,dP(b)\,,\qquad
e^{\Psi(\beta,a)}=\int\limits_B e^{-\beta\,d(a,b)}\,dP(b)$$ where parameter $\beta$ (called the *inverse temperature*) is defined from one of the conditions: $$I\{a,b\}=\lambda\,,\qquad {\mathbb{E}}\{d(a,b)\}=\upsilon$$ Moreover, if the metric space $\Omega$ is also a group $(\Omega,+)$ with invariant measure $\nu$, and the metric is translation invariant $d(a,b)=d(a+c,b+c)$, then these exponential transition kernels have the following simplified form $$dP_\beta(b\mid a)=e^{-\beta\,d(a,b)-\Psi_0(\beta)}\,d\nu(b)\,,\qquad
e^{\Psi_0(\beta)}=\int\limits_B e^{-\beta\,d(a,b)}\,d\nu(b)$$ In particular, this is the case when $\Omega$ is a normed vector space, and the metric is defined using the difference of two vectors: $d(a,b)=\|a-b\|$. For example, the Hamming space ${\mathcal{H}}_\alpha^l:=\{1,\ldots,\alpha\}^l$ is a finite vector space over a finite field $GF(\alpha)$ with the Hamming metric defined as $d_H(a,b)=\|a-b\|_H$, where $\|\cdot\|_H$ is the Hamming weight. The invariant measure on a Hamming space is the counting measure $\nu(b)=1$. Thus, for a Hamming space the optimal transition kernel solving problems (\[eq:min-d\]) and (\[eq:min-i\]) is $$P_\beta(b\mid a)=e^{-\beta\,\|a-b\|_H-\Psi_0(\beta)}\,,\qquad
e^{\Psi_0(\beta)}=\sum_{b\in{\mathcal{H}}_\alpha^l} e^{-\beta\,\|a-b\|_H}$$ We now show that the above exponential transition kernel implements point mutation.
Indeed, because $e^{-\beta\,\|a-b\|_H}=e^{-\beta\,r}$ for all sequences in the sphere $S(a,r):=\{b:\|a-b\|_H=r\}$ around point $a$ and radius $r$, the summation of $e^{-\beta\,\|a-b\|_H}$ over all sequences $b\in{\mathcal{H}}_\alpha^l$ can be replaced by the summation of $|S(a,r)|e^{-\beta\,r}$ over the spheres of all radii $r\in\{0,\ldots,l\}$. The number of sequences in a sphere of the Hamming space ${\mathcal{H}}_\alpha^l$ is $|S(a,r)|=(\alpha-1)^r{l\choose r}$, and therefore $$e^{\Psi_0(\beta)}=\sum_{b\in{\mathcal{H}}_\alpha^l} e^{-\beta\,\|a-b\|_H}=\sum_{r=0}^l(\alpha-1)^r{l\choose r}e^{-\beta\,r}
=[1+(\alpha-1)e^{-\beta}]^l$$ Thus, $P_\beta(b\mid a)$ has the following simple expression: $$P_\beta(b\mid a)=\frac{e^{-\beta\,\|a-b\|_H}}{[1+(\alpha-1)e^{-\beta}]^l}$$ Given a sequence that is $n=\|\top-a\|_H$ letters away from $\top$, the probability of mutation by radius $r=\|a-b\|_H$ is: $$P_\beta(r\mid n)=|S(a,r)|P_\beta(b\mid a)=(\alpha-1)^r{l\choose r}\frac{e^{-\beta\,r}}{[1+(\alpha-1)e^{-\beta}]^l}$$ The inverse temperature parameter $\beta$ is determined either from condition $I\{a,b\}=\lambda$ or ${\mathbb{E}}\{\|a-b\|_H\}=\upsilon$. In particular, it is convenient to use the latter condition in conjunction with the following expression for the expected mutation radius $${\mathbb{E}}\{r\}=\frac{d}{d\beta}\Psi_0(\beta)=\frac{l}{1+e^\beta/(\alpha-1)}$$ Inverting the equation ${\mathbb{E}}\{r\}(\beta)=\upsilon$ gives the result $$\beta=\ln\left(\frac{l-\upsilon}{\upsilon}\right)+\ln(\alpha-1)$$ Changing parametrisation from $\beta$ to $\upsilon$, the probability $P_\beta(r\mid n)$ can be written as binomial distribution with probability of success $\mu=\upsilon/l$: $$P_\upsilon(r\mid n)={l\choose r}\left(\frac{\upsilon}{l-\upsilon}\right)^r\left(1+\frac{\upsilon}{l-\upsilon}\right)^{-l}
={l\choose r}\left(\frac{\upsilon}{l}\right)^r\left(1-\frac{\upsilon}{l}\right)^{l-r}$$ Therefore, the exponential transition kernel that solves optimisation problems (\[eq:min-d\]) and (\[eq:min-i\]) in the Hamming space corresponds to independent substitution of each letter in a sequence by any other of the $\alpha-1$ letters with probability $\mu/(\alpha-1)$, and this process is known as point mutation.
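The identities above are easy to verify numerically. The sketch below checks, for illustrative values of $\alpha$, $l$ and $\beta$, the closed form of $e^{\Psi_0(\beta)}$, the equality of $P_\beta(r\mid n)$ with the binomial distribution of parameter $\mu=\upsilon/l$, and the inversion of ${\mathbb{E}}\{r\}=\upsilon$ for $\beta$.

```python
# Numerical check of the partition function and the binomial form of
# the point-mutation kernel, for illustrative parameter values.
from math import comb, exp, log

alpha, l, beta = 4, 10, 1.5  # e.g. DNA alphabet, length-10 sequences

# Partition function: sum over spheres vs the closed form.
Z_sum = sum((alpha - 1) ** r * comb(l, r) * exp(-beta * r) for r in range(l + 1))
Z_closed = (1 + (alpha - 1) * exp(-beta)) ** l
assert abs(Z_sum - Z_closed) < 1e-9 * Z_closed

# Expected mutation radius and the induced per-letter rate mu = upsilon / l.
upsilon = l / (1 + exp(beta) / (alpha - 1))
mu = upsilon / l

# P_beta(r | n) equals the binomial pmf with success probability mu.
for r in range(l + 1):
    p_exp = (alpha - 1) ** r * comb(l, r) * exp(-beta * r) / Z_closed
    p_binom = comb(l, r) * mu ** r * (1 - mu) ** (l - r)
    assert abs(p_exp - p_binom) < 1e-12

# Inverting E{r} = upsilon recovers beta.
beta_rec = log((l - upsilon) / upsilon) + log(alpha - 1)
assert abs(beta_rec - beta) < 1e-12
print("all identities verified")
```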
Supplementary Figures {#sec:supp-figures}
=====================
---
abstract: 'We perform a systematic [*ab initio*]{} study of the electronic structure of Sr(V,Mo,Mn)O$_3$ perovskites, using the parameter-free $GW$+EDMFT method. This approach self-consistently calculates effective interaction parameters, taking into account screening effects due to nonlocal charge fluctuations. Comparing the results of a 3-band ($t_{2g}$) description to those of a 5-band ($t_{2g}$+$e_g$) model, it is shown that the $e_g$ states have little effect on the low-energy properties and the plasmonic features for the first two compounds but play a more active role in SrMnO$_3$. In the case of SrMnO$_3$ paramagnetic $GW$+EDMFT yields a metallic low-temperature solution on the verge of a Mott transition, while antiferromagnetic $GW$+EDMFT produces an insulating solution with the correct gap size. We discuss the possible implications of this result for the nature of the insulating state above the Néel temperature, and the reliability of the $GW$+EDMFT scheme.'
author:
- Francesco Petocchi
- Fredrik Nilsson
- Ferdi Aryasetiawan
- Philipp Werner
bibliography:
- 'paper.bib'
title: |
Screening from $e_g$ states and antiferromagnetic correlations in $d^{(1,2,3)}$ perovskites:\
A $GW$+EDMFT investigation
---
Introduction
============
Transition-metal oxides represent an important test ground for new theoretical and computational schemes aimed at a quantitative description of electron-electron correlations. In this class of compounds, methods based on a single-particle description of the solid do not provide a satisfactory description due to the strong many-body interactions within the partially filled and narrow $d$ or $f$ bands. Emergent properties, such as high-temperature superconductivity and other electronic ordering phenomena, are the result of subtle competitions between different interactions and require an accurate estimation of free energies. Even in the absence of symmetry breaking, basic properties of the solids, such as the metallic or insulating nature of the ground state, cannot be easily predicted.[@Imada1998] Theoretical models capturing the essential physics need to be developed, solved, and the results compared to experiments.
A widely used approach is the combination of Density Functional Theory (DFT) and Dynamical Mean-Field Theory (DMFT).[@Georges1996; @Anisimov1991] This scheme reduces the problem to a multi-orbital Hubbard model with hopping parameters derived using Wannier basis functions. In many of these calculations the value of the local Coulomb interaction is chosen to reproduce experimentally observed properties, such as mass enhancements or positions of Hubbard bands. With suitably chosen parameters, many properties of correlated materials can be described by DFT+DMFT.[@Kotliar2004] What is not captured by this approach are collective long-range charge fluctuations and the resulting dynamical screening effects. These dynamical long-range correlations are, on the other hand, well described by the $GW$ approximation,[@Hedin1965] one of the most successful methods for the study of excited-state properties of weakly correlated compounds, such as semiconductors.[@Schilfgaarde2006] After more than a decade of development, [@Biermann03FirstPrinciples; @Tomczak12Combined; @Ayral2013; @Tomczak14Asymmetry; @Huang2014; @Boehnke16When] the power of these two methods has been combined into a multitier $GW$+EDMFT formalism, which is applicable to moderately and strongly correlated materials.[@Boehnke16When; @Nilsson17Multitier] EDMFT is the extended version of DMFT, [@Sun2002] which allows one to treat the effect of long-range interactions. Multitier refers to the fact that different degrees of freedom are treated with different physically motivated approximations: the highest energy bands with single-shot $GW$, an intermediate energy window within self-consistent $GW$ and only the most strongly correlated bands near the Fermi energy within $GW$+EDMFT. This separation makes the scheme computationally feasible, and it can be implemented without any double counting.
Apart from the choice of these energy windows, multitier $GW$+EDMFT is free of adjustable parameters, and thus a true [*ab initio*]{} method.
To assess the accuracy and predictive power of this approach, it is important to test it on a broad range of compounds, and with different choices of energy windows. Here, we continue the effort started in Refs. and present a systematic study of three prototypical perovskite compounds, namely SrVO$_3$, SrMoO$_3$, and SrMnO$_3$. These materials exhibit different fillings of the $t_{2g}$ orbitals, and hence different correlation effects. SrVO$_3$ and SrMoO$_3$ are paramagnetic metals, while SrMnO$_3$ is an antiferromagnetic insulator. DFT+DMFT based modeling of these materials often only considered the $t_{2g}$ shell,[@Pavarini04Mott; @nekrasov2006; @Lechermann06Dynamical; @Backes16Hubbard] but here, we also explore the effect of including the almost empty $e_g$ states. This provides a consistency check for the multitier scheme, since the three-band and five-band treatments should produce consistent results for the low-energy electronic structure.
As far as the methodology is concerned, detailed descriptions can be found in Ref. . One extension in the present work is the implementation of a self-consistency loop with two sublattices, which allows us to stabilize solutions with antiferromagnetic order.
The manuscript is organized as follows: in Sec. \[sec\_method\] we briefly summarize the method, the steps of the self-consistency loop and the rationale behind the multitier subdivision of the orbital space. In Sec. \[sec\_results\] we compare the results of the five- and three-band models for SrVO$_3$, SrMoO$_3$ and SrMnO$_3$. In Sec. \[sec\_conclusions\] we present our conclusions.
Method {#sec_method}
======
In this Section we give a brief overview of the multitier $GW$+EDMFT method developed in Refs. and explain its extension to antiferromagnetically ordered systems.
$GW$+EDMFT
----------
By defining a set of localized wave functions $w_{i\mathbf{R}}(\mathbf{r})$, where $i$ is an orbital index and $\mathbf{R}$ is a site index, the self-energy $\Sigma$ and polarization $\Pi$ can be divided into the local (onsite) components $\Sigma^{\mathrm{loc}}$, $\Pi^{\mathrm{loc}}$ and the remaining nonlocal components, $$\begin{aligned}
\Sigma_{i,j}(\mathbf{k},i\nu)= \Sigma^{\mathrm{loc}}_{i,j}(i\nu) +
\Sigma^{\mathrm{nonloc}}_{i,j}(\mathbf{k},i\nu), \\
\Pi_{\alpha,\beta}(\mathbf{k},i\omega)=
\Pi^{\mathrm{loc}}_{\alpha,\beta}(i\omega) +
\Pi^{\mathrm{nonloc}}_{\alpha,\beta}(\mathbf{k},i\omega).\end{aligned}$$ The Greek indices $\alpha,\beta$ denote a product basis, $\alpha = \{i,j\}$, necessary to expand the two-particle functions, and we have assumed that two basis functions localized on different sites do not overlap. The self-energy and polarization are related to the Green’s function $G$ and screened interaction $W$ through the Dyson equations $$\begin{aligned}
G&=G^0 + G^0 \Sigma G, \\
W&=v + v \Pi W,\end{aligned}$$ where $G^0$ is the bare propagator and $v$ the bare Coulomb interaction. The key approximation in EDMFT is that the nonlocal components of $\Sigma$ and $\Pi$ are negligible, $\Sigma=\Sigma^{\mathrm{loc}}$ and $\Pi=\Pi^{\mathrm{loc}}$. With these approximations the full lattice problem can be mapped to an effective local impurity problem with a dynamical bare propagator $\mathcal{G}(i\nu)$ and a dynamical bare impurity interaction $\mathcal{U}(i\omega)$. These so-called Weiss fields are determined self-consistently such that the impurity Green’s function reproduces the local lattice Green’s function, $G^{\mathrm{imp}}=G^{\mathrm{loc}}$, and correspondingly for the screened interaction, $W^{\mathrm{imp}}=W^{\mathrm{loc}}$.
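To make the structure of the two Dyson equations concrete, the following minimal numpy sketch solves them in matrix form at a single $\mathbf{k}$-point and frequency; the $2\times 2$ matrices are illustrative numbers only, not material parameters.

```python
import numpy as np

def dyson_G(G0, Sigma):
    """Solve G = G0 + G0 Sigma G, i.e. G = (G0^{-1} - Sigma)^{-1},
    for orbital matrices at one k-point and Matsubara frequency."""
    return np.linalg.inv(np.linalg.inv(G0) - Sigma)

def dyson_W(v, Pi):
    """Solve W = v + v Pi W, i.e. W = (1 - v Pi)^{-1} v, in the
    product basis of the interaction."""
    return np.linalg.solve(np.eye(v.shape[0]) - v @ Pi, v)

# Toy 2x2 matrices (illustrative numbers, not material parameters)
G0 = np.diag([1.0 + 0.5j, 0.8 + 0.3j])
Sigma = np.diag([0.1 + 0.05j, 0.2 + 0.1j])
G = dyson_G(G0, Sigma)

v = np.array([[3.0, 0.5], [0.5, 2.0]])
Pi = -0.1 * np.eye(2)
W = dyson_W(v, Pi)
```

In a production code these solves are repeated for every $\mathbf{k}$- ($\mathbf{q}$-) point and frequency on the Matsubara grid.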
$GW$+EDMFT can be regarded as an extension of EDMFT, where the nonlocal components are accounted for within the $GW$ approximation, $$\begin{aligned}
\Sigma_{ik}^{\mathrm{nonloc}}(\mathbf{q},\tau) = &-\sum_{\mathbf{k}jl}
G_{jl}(\mathbf{k},\tau) W_{ijkl}(\mathbf{q}-\mathbf{k},\tau) \nonumber \\
&+ \sum_{jl} G_{jl}^{\mathrm{loc}}(\tau) W_{ijkl}^{\mathrm{loc}}(\tau),
\label{eq:GWSigma} \\
\Pi^{\mathrm{nonloc}}_{mm'nn'}(\mathbf{q},\tau) =
&\sum_{\mathbf{k}}G_{mn}(\mathbf{k},\tau)G_{n'm'}(\mathbf{k}-\mathbf{q},-\tau)
\nonumber \\
&-G^{\mathrm{loc}}_{mn}(\tau)G^{\mathrm{loc}}_{n'm'}(-\tau). \label{eq:GWPol} \end{aligned}$$ The matrix elements of the screened interaction are defined as $$\begin{aligned}
\label{eqn:downfolded}
W_{ijkl}(\mathbf{q},i\omega)=&\sum_{\mathbf{R},\mathbf{R'}}
e^{i\mathbf{q}(\mathbf{R}-\mathbf{R}')}\int
d\mathbf{r}d\mathbf{r}'w_{i\mathbf{R}}^{*}(\mathbf{r})w_{j\mathbf{R}}^{}(\mathbf
{r})\nonumber\\
&\times W(\mathbf{r},\mathbf{r}',i\omega)
w_{k\mathbf{R}'}(\mathbf{r}')w^{*}_{l\mathbf{R}'}(\mathbf{r}'),\end{aligned}$$ where we once again have assumed that two basis functions localized on different sites have zero overlap.
The $GW$+EDMFT self-consistency cycle contains the following steps:
1. Start with an initial guess for $\Sigma^\mathrm{imp}$, $\Pi^\mathrm{imp}$ and $G_\mathbf{k}$, and set $\Sigma^\mathrm{loc}=\Sigma^\mathrm{imp}$ and $\Pi^\mathrm{loc}=\Pi^\mathrm{imp}$ (EDMFT approximations). \[nbr:EDMFTcondition\]
2. Compute $\Sigma^{\mathrm{nonloc}}$ and $\Pi^{\mathrm{nonloc}}$ according to equations (\[eq:GWSigma\])-(\[eq:GWPol\]). \[nbr:nonloc\]
3. Define $\Sigma_\mathbf{k}=\Sigma^\mathrm{imp} +
\Sigma^\mathrm{nonloc}_\mathbf{k}$ and $\Pi_\mathbf{q}=\Pi^\mathrm{imp} +
\Pi^\mathrm{nonloc}_\mathbf{q}$ ($GW$+EDMFT approximations). \[nbr:EDMFTcondition\]
4. Calculate $G_\mathbf{k}=\big((G^{(0)}_\mathbf{k})^{-1}-\Sigma_\mathbf{k}\big)^{-1}$ and $W_\mathbf{q}=v_\mathbf{q}\left(\mathbbm{1}-\Pi_\mathbf{q}
v_\mathbf{q}\right)^{-1}$.
5. Using $G^\mathrm{loc}= \frac{1}{N}\sum_\mathbf{k} G_\mathbf{k}$ and $W^\mathrm{loc}=\frac{1}{N} \sum_\mathbf{q}W_\mathbf{q}$ calculate the fermionic Weiss field $$\begin{aligned}
\mathcal{G}=\left(\Sigma^\mathrm{imp}+(G^\mathrm{loc})^{-1}\right)^{-1}
\label{eq:weissfields1}\end{aligned}$$ and the effective impurity interaction $$\begin{aligned}
\mathcal{U}=W^\mathrm{loc}\left(\mathbbm{1}+\Pi^\mathrm{imp}
W^\mathrm{loc}\right)^{-1}.
\label{eq:weissfields2}\end{aligned}$$\[nbr:weissfields\]
6. Numerically solve the impurity problem to obtain $G^\mathrm{imp}$ and the impurity charge susceptibility $\chi^\mathrm{imp}=\left\langle\hat{n}(\tau)\hat{n}(0)\right\rangle$.
7. Use the current $\mathcal{G}$ and $\mathcal{U}$ to calculate $\Sigma^\mathrm{imp}=\mathcal{G}^{-1}-(G^\mathrm{imp})^{-1}$, $\Pi^\mathrm{imp}=\chi^\mathrm{imp}\left(\mathcal{U}\chi^\mathrm{imp}-\mathbbm{1
}\right)^{-1}$ and $W^\mathrm{imp}=\mathcal{U}-\mathcal{U}\chi^\mathrm{imp}\mathcal{U}$.
8. If the self-consistency conditions $G^\mathrm{imp} = G^\mathrm{loc}$ and $W^\mathrm{imp}= W^\mathrm{loc}$ are not fulfilled within a given tolerance, go back to step \[nbr:nonloc\].
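The flow of the cycle above can be sketched in a toy one-band, single-frequency Python script. Here the CT-QMC impurity solver is replaced by a crude second-order stand-in and the nonlocal $GW$/polarization diagrams by zero-returning stubs, so only the loop structure, not the physics, is represented; all numerical values are arbitrary.

```python
import numpy as np

beta, U = 1.0, 1.0
nu = 1j * np.pi / beta                           # first Matsubara frequency
eps_k = np.cos(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
v_q = U                                          # bare interaction (scalar toy)

def nonloc_sigma(eps_k):                         # stub for Eq. (GWSigma)
    return np.zeros_like(eps_k)

def nonloc_pi(eps_k):                            # stub for Eq. (GWPol)
    return np.zeros_like(eps_k)

def impurity_solver(G_weiss, U_weiss):
    """Toy stand-in for CT-QMC: a schematic second-order self-energy
    and a fixed toy charge susceptibility <n(tau)n(0)>."""
    return 0.25 * U_weiss * G_weiss, 0.1

Sigma_imp, Pi_imp = 0.0, 0.0                     # step 1: initial guess
for it in range(100):
    Sigma_k = Sigma_imp + nonloc_sigma(eps_k)    # steps 2-3: add nonlocal parts
    Pi_q = Pi_imp + nonloc_pi(eps_k)
    G_k = 1.0 / (nu - eps_k - Sigma_k)           # step 4: lattice Dyson equations
    W_q = v_q / (1.0 - Pi_q * v_q)
    G_loc, W_loc = G_k.mean(), W_q.mean()        # step 5: Weiss fields
    G_weiss = 1.0 / (Sigma_imp + 1.0 / G_loc)
    U_weiss = W_loc / (1.0 + Pi_imp * W_loc)
    Sigma_new, chi_imp = impurity_solver(G_weiss, U_weiss)   # step 6
    Pi_new = chi_imp / (U_weiss * chi_imp - 1.0)             # step 7
    G_imp = 1.0 / (1.0 / G_weiss - Sigma_new)
    if abs(G_imp - G_loc) < 1e-10 and abs(Sigma_new - Sigma_imp) < 1e-10:
        break                                    # step 8: converged
    Sigma_imp, Pi_imp = Sigma_new, Pi_new
```

At the fixed point $G^\mathrm{imp}=G^\mathrm{loc}$ holds by construction of the Weiss field, which is why convergence of $\Sigma^\mathrm{imp}$ suffices as a stopping criterion in this toy.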
Multitier-approach
------------------
If the self-consistency cycle is performed in the complete Hilbert space, the $GW$+EDMFT formalism is derivable from a free energy functional $\Psi$ and is hence conserving.[@Biermann03FirstPrinciples] However, in practice this is not feasible. To overcome this problem a multitier implementation was developed in Refs. . In this approach the complete Hilbert space is divided into three different subspaces, each treated at a different level of approximation. Correspondingly, the calculations are divided into three tiers which refer to the different subspaces. The multitier approach is a systematic downfolding procedure from the complete Hilbert space to a smaller subspace and includes well-defined double-counting corrections at each step:
1. TIER III: First, a DFT calculation is performed using the FLAPW code FLEUR.[@fleur] Based on the DFT bandstructure we compute the one-shot $GW$ self-energy $\Sigma^{G^{\, 0}W^{\, 0}}$ using the SPEX code.[@Friedrich10Efficient; @fleur] Then, an intermediate- or low-energy subspace, $I$, which includes up to 10 bands around the Fermi energy, is defined using maximally localized Wannier functions as implemented in the Wannier90 library.[@Marzari97Maximally; @Mostofi08wannier90; @Freimuth08Maximally; @Sakuma13Symmetryadapted] The effective Coulomb interaction, $U$, on the intermediate subspace is computed within the constrained random-phase approximation (cRPA) [@Aryasetiawan04Frequencydependent] using the SPEX code. The $G^{\, 0}W^{\, 0}$ self-energy contribution from within the intermediate subspace is removed from $\Sigma^{G^{\, 0}W^{\, 0}}$ to define an effective bare propagator $G_{\mathbf{k}}^{\, 0}$ for the intermediate subspace.
2. TIER II: In the intermediate subspace the self-energy is calculated using a custom self-consistent $GW$ implementation (see Ref. ). A correlated subspace $C$, which can be smaller than or equal to the intermediate subspace, is defined. The local part of the $GW$ self-energy and polarization from within the correlated subspace is subtracted to define the effective bare propagator and effective bare interaction for $C$.
3. TIER I: At each step of the self-consistency cycle, local corrections to the self-energy and polarization in the correlated subspace $C$ are computed using EDMFT. The effective impurity problem is solved using the CT-Hyb [@Werner06ContinuousTime; @Werner07Efficient; @Gull11Continuoustime] quantum Monte Carlo algorithm implemented in ALPS,[@Bauer11Alps; @Alps; @Hafermann13Efficient] while the self-consistency equations make use of the TRIQS framework.[@Parcollet15Triqs]
The complete expressions for the Green’s function and screened interactions are: $$\begin{aligned}
&G_{\mathbf{k}}^{-1}=\overbrace{\underbrace{\mathrm{i}\omega _{n}+\mu
-\varepsilon
_{\mathbf{k}}^{\mathrm{DFT}}+V_{\mathrm{XC},\mathbf{k}}}_{G_{\mathrm{Hartree},
\mathbf{k}}^{\, 0}{}^{-1}}\underbrace{-\left(\Sigma_{\mathbf{k}}^{G^{\, 0}W^{\,
0}}-\Sigma_{\mathbf{k}}^{G^{\, 0}W^{\, 0}}\big|_{I}\right)}_{-\Sigma
_{\mathrm{r},\mathbf{k}}}}^{\text{TIER III},\; G_{I, \mathbf{k}}^{\,
0}{}^{-1}}\notag\\
&\underbrace{-\left(\Sigma _{\mathbf{k}}^{GW}\big|_{I}-\Sigma
^{GW}\big|_{C,\mathrm{loc}} + \Delta V_H|_{I} \right)}_{\text{TIER
II}}\underbrace{-\Sigma ^{\mathrm{EDMFT}}\big|_{C,\mathrm{loc}}}_{\text{TIER
I}}\;,\label{eqn:fullG} \\
&W_\mathbf{q}^{-1}=\overbrace{v_\mathbf{q}^{-1}\underbrace{-\left(\Pi^{G^{\,
0}G^{\, 0}}_\mathbf{q}-\Pi^{G^{\, 0}G^{\,
0}}_\mathbf{q}\big|_I\right)}_{-\Pi_{\mathrm{r},\mathbf{q}}}}^{\text{TIER
III},\; U_{I,\mathbf{q}}^{-1}}\notag\\
&\underbrace{-\left(\Pi_\mathbf{q}^{GG}\big|_I-\Pi^{GG}\big|_{C,\mathrm{loc}}
\right)}_{\text{TIER
II}}\underbrace{-\Pi^\mathrm{EDMFT}\big|_{C,\mathrm{loc}}}_{\text{TIER
I}}.\label{eqn:fullW}\end{aligned}$$ The self-energies in Eq. (\[eqn:fullG\]) only contain the exchange and correlation parts, while $\Delta V_H|_{I}$ represents the change of the Hartree potential within the intermediate subspace (see Ref. for a detailed description). $V_{\mathrm{XC},\mathbf{k}}$ is the exchange-correlation potential from the DFT calculation. The notation $A\big|_{I}$ means that all internal sums when evaluating $A$ are restricted to the subspace $I$.
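The bookkeeping of Eq. (\[eqn:fullG\]), in which each tier subtracts the contribution already counted in the subspace below, can be sketched as a simple assembly function. The arguments are scalars or arrays per $(\mathbf{k}, i\nu_n)$; all names and numbers below are hypothetical placeholders, not the paper's data.

```python
def multitier_inverse_G(inu, mu, eps_dft, V_xc,
                        Sigma_G0W0, Sigma_G0W0_I,
                        Sigma_GW_I, Sigma_GW_C_loc, dV_H_I,
                        Sigma_EDMFT_C_loc):
    """Assemble the inverse Green's function of Eq. (fullG).  Each tier
    adds its self-energy with the part double counted by the tier
    below explicitly subtracted."""
    # TIER III: one-shot G0W0 with its contribution inside I removed
    Sigma_r = Sigma_G0W0 - Sigma_G0W0_I
    inv_G0_I = inu + mu - eps_dft + V_xc - Sigma_r
    # TIER II: self-consistent GW in I minus its local part in C,
    # plus the change of the Hartree potential within I
    tier2 = Sigma_GW_I - Sigma_GW_C_loc + dV_H_I
    # TIER I: local EDMFT self-energy in the correlated subspace C
    return inv_G0_I - tier2 - Sigma_EDMFT_C_loc

# With all corrections switched off only the bare DFT propagator remains
g_bare = multitier_inverse_G(1j, 0.5, 0.2, 0.1, 0, 0, 0, 0, 0, 0)
# If the G0W0 self-energy lies entirely inside I, TIER III reduces to the
# bare propagator and only the TIER II term survives
g_dc = multitier_inverse_G(1j, 0, 0, 0, 0.3, 0.3, 0.3, 0, 0, 0)
```

The two example calls illustrate the double-counting structure: contributions counted in an inner tier never enter twice.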
Antiferromagnetic extension {#sec_AFM}
---------------------------
Strongly correlated multiorbital systems at certain integer fillings tend to develop long-range magnetic ordering. In particular, on a bipartite lattice, there is a strong tendency to antiferromagnetic order at half-filling. At sufficiently low temperature we then expect the appearance of a solution with a local spin polarization. A staggered magnetization on a bipartite lattice can be easily treated in DMFT [@Georges1996] by considering two sublattices $A$ and $B$ and imposing the following relation between the self-energies: $$\begin{aligned}
\label{eq:spinflip}
\Sigma_{\uparrow}^{A} =\Sigma_{\downarrow}^{B}, \quad
\Sigma_{\downarrow}^{A} =\Sigma_{\uparrow}^{B}. \end{aligned}$$ This allows us to reduce the EDMFT calculation to the solution of a single impurity problem, while the unit cell used in the lattice self-consistency has to be doubled. We extended the multitier formalism to include this kind of long-range spin ordering by doubling the unit cell of the $GW$ calculations in TIER III and TIER II, which doubles the size of the lattice Green’s function $G_\mathbf{k}$ and screened interaction $W_\mathbf{q}$. The calculation in TIER III (which provides the input for TIER II) is kept paramagnetic, but we allow for a spin symmetry breaking at the EDMFT level in TIER I, which feeds back into TIER II. Hence, in TIER I, we introduce spin-dependent self-energies $$\begin{aligned}
\Sigma ^{\mathrm{EDMFT}}\big|_{C,\mathrm{loc}}\longrightarrow
\Sigma_{\uparrow,\downarrow} ^{\mathrm{EDMFT}}\big|_{C,\mathrm{loc}}.\end{aligned}$$ We do not need to apply any symmetry-breaking seed field, since the stochastic Monte Carlo noise is sufficient to trigger the symmetry breaking in the self-consistent calculation. The spin-dependent local self-energies are then associated with the two sites of the lattice Green’s function. Since the long-range interaction is decoupled in the charge channel, the local vertex $\Pi ^{\mathrm{EDMFT}}\big|_{C,\mathrm{loc}}$ is computed from the local charge susceptibility, and all two-particle fields remain symmetric with respect to the spin index.
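The sublattice relation of Eq. (\[eq:spinflip\]) amounts to a simple relabeling of the single-impurity result, sketched here for a single frequency with illustrative numbers only.

```python
import numpy as np

def sublattice_sigma(Sigma_up_A, Sigma_dn_A):
    """Spin-resolved self-energies on both sublattices obtained from a
    single impurity solution via the spin-flip relation
    Sigma_up^A = Sigma_dn^B and Sigma_dn^A = Sigma_up^B."""
    return {("A", "up"): Sigma_up_A, ("A", "dn"): Sigma_dn_A,
            ("B", "up"): Sigma_dn_A, ("B", "dn"): Sigma_up_A}

# Illustrative spin-polarized self-energy at a single frequency
S = sublattice_sigma(np.array([0.4 - 0.1j]), np.array([0.9 - 0.3j]))
```

Only one impurity problem is solved per iteration; the $B$-sublattice entries are obtained for free by the spin flip.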
We will apply this extension only to SrMnO$_3$, which meets the requirements for antiferromagnetic order in terms of filling and interaction strength, and which is experimentally found to be in an antiferromagnetic phase at low temperatures.
Results {#sec_results}
=======
In the following we will investigate the three perovskite compounds SrVO$_3$, SrMoO$_3$ and SrMnO$_3$ using two different low-energy models. In the $t_{2g}+e_g$ model the correlated subspace (TIER I) contains all five $d$ orbitals, while in the $t_{2g}$ model, it is restricted to the $t_{2g}$ sub-shell. The calculations for SrVO$_3$ and SrMoO$_3$ are performed at inverse temperature $\beta=10$ eV$^{-1}$, corresponding to $T=1160$ K, while in the case of SrMnO$_3$ we use $\beta=40$ eV$^{-1}$ corresponding to $T=290$ K. The latter value is close to $T_{\text{N\'eel}}$ $\sim$ 260 K for SrMnO$_3$,[@Takeda1974; @Saitoh1995; @Kang2008; @Kim2010] whose magnetic moments order antiferromagnetically in all directions (G-type ordering).
![Local and $\mathbf{k}$-resolved spectral function of SrVO$_3$ (left) and SrMoO$_3$ (right) in the three- (top) and five- (bottom) band description. Thin black lines represent the LDA bandstructure.\[SVO\_SMoO\_Akw\] ](AkwSVO_noGoWo.png "fig:"){width="44.00000%"} ![Local and $\mathbf{k}$-resolved spectral function of SrVO$_3$ (left) and SrMoO$_3$ (right) in the three- (top) and five- (bottom) band description. Thin black lines represent the LDA bandstructure.\[SVO\_SMoO\_Akw\] ](AkwSMoO_noGoWo.png "fig:"){width="44.00000%"}
SrVO$_3$ and SrMoO$_3$
----------------------
SrVO$_3$ is one of the simplest and most extensively studied correlated compounds due to its undistorted cubic lattice structure. [@Morikawa95Spectral; @Sekiyama04Mutual; @Yoshida10Mass; @Pavarini04Mott; @nekrasov2006; @Lechermann06Dynamical; @Backes16Hubbard; @Tomczak12Combined; @Sakuma13Electronic; @Tomczak14Asymmetry] The conduction band is formed by vanadium 3$d$ states of $t_{2g}$ character which are populated by one electron per unit cell. Within LDA the conduction band is isolated with a bandwidth of roughly 2.2 eV. The 3.8 eV wide conduction band in SrMoO$_3$ originates from the $t_{2g}$ states of the molybdenum cations which are in a $4d^2$ configuration. In both systems the DFT calculation predicts empty $e_g$ states which start at about 1 eV above the Fermi level. The main difference between the two perovskites is thus the filling and the width of the $t_{2g}$ band. The experimental photo-emission (PES) and inverse photo-emission (IPES) spectra of SrVO$_3$ display a renormalized quasi-particle peak, corresponding to an effective mass enhancement of approximately 2, a pronounced upper satellite feature at roughly 3 eV and a very weak lower satellite feature at around $-1.5$ eV.[@Morikawa95Spectral; @Sekiyama04Mutual; @Yoshida10Mass; @Backes16Hubbard] SrMoO$_3$, on the other hand, exhibits a very weakly renormalized quasi-particle peak and a pronounced shoulder structure in the PES.[@Wadati14Photo] The satellite features in the local spectral function of both systems clearly indicate correlation effects beyond the LDA. For SrMoO$_3$ it was shown in Ref. that the satellite features cannot be described as Hubbard bands. A later publication, using the same $GW$+EDMFT multitier formalism as employed in the current paper, showed that the satellites in this compound should instead be interpreted as plasmon satellites originating from long-range charge fluctuations.[@Nilsson17Multitier] This is consistent with the conclusions of Ref. 
, and hence there is broad consensus in the literature on the nature of the satellite features in the spectral function of SrMoO$_3$.
![Frequency-dependent cRPA interaction $U_\text{cRPA}^{t_{2g}}(i\omega_n)$, Hubbard $\mathcal{U}^{t_{2g}}(i\omega_n)$ and Hund’s $\mathcal{J}^{t_{2g}}(i\omega_n)$ component (dashed) of the local effective interaction and screened interactions $W_\text{loc}^{t_{2g}}(i\omega_n)$ for SrVO$_3$ (top) and SrMoO$_3$ (bottom) in the three- (a,c) and five- (b,d) band description.\[SVO\_SMoO\_UW\][]{data-label="fig:VMoint"}](Mats_int_SVO.pdf "fig:"){width="48.00000%"} ![Frequency-dependent cRPA interaction $U_\text{cRPA}^{t_{2g}}(i\omega_n)$, Hubbard $\mathcal{U}^{t_{2g}}(i\omega_n)$ and Hund’s $\mathcal{J}^{t_{2g}}(i\omega_n)$ component (dashed) of the local effective interaction and screened interactions $W_\text{loc}^{t_{2g}}(i\omega_n)$ for SrVO$_3$ (top) and SrMoO$_3$ (bottom) in the three- (a,c) and five- (b,d) band description.\[SVO\_SMoO\_UW\][]{data-label="fig:VMoint"}](Mats_int_SMoO.pdf "fig:"){width="48.00000%"}
This is not the case for SrVO$_3$, where the origin of the satellite features is still under debate. For this compound the 3$d$ valence states (and hence also the MLWFs constructed from the $t_{2g}$ bands) are relatively localized around the V ion. In cRPA this yields an effective Coulomb interaction with a static value of 3.4 eV. LDA+DMFT calculations in which the value of $U$ was chosen to reproduce the experimental effective mass enhancement,[@Pavarini04Mott; @nekrasov2006; @Lechermann06Dynamical; @Backes16Hubbard] as well as [*ab initio*]{} one-shot combinations of $GW$ and DMFT [@Tomczak12Combined; @Sakuma13Electronic; @Tomczak14Asymmetry] can roughly reproduce the band narrowing and the lower satellite, but place the upper satellite observed in IPES too close to the Fermi energy. In Ref. it was instead speculated that the upper satellite could originate from the $e_g$ states. Common to all the above-mentioned calculations is that the lower satellite in the SrVO$_3$ spectral function was interpreted as a Hubbard band, because of the strong local correlations between $t_{2g}$ electrons on the same V site, while the upper satellite was either interpreted as originating from the $e_g$ states or left unexplained.
The interpretation of the satellites as Hubbard bands may be related to the fact that DFT+DMFT calculations only include local correlations in the solution of the low-energy model. On the other hand, the satellite structures of SrVO$_3$ are well described by the cumulant expansion,[@Gatti13Dynamical] which is an expansion of the Green’s function based on the $GW$ approximation of the self-energy.[@aryasetiawan1996; @guzzo2011] Because the $GW$ method does not capture the strong local correlations that give rise to Hubbard bands, these satellite features should be interpreted as plasmons.
![Thick lines represent the imaginary part of fully screened interaction on the real frequency axis $-\Im W_\text{loc}^{t_{2g}}(\omega)$ for SrVO$_3$ and SrMoO$_3$ while thin lines in the same color code indicate the imaginary part of the initial cRPA interaction $-\Im U_\text{cRPA}^{t_{2g}}(\omega)$. \[RealInts\]](Wloc_r_SVO.pdf){width="34.00000%"}
![Thick lines represent the imaginary part of fully screened interaction on the real frequency axis $-\Im W_\text{loc}^{t_{2g}}(\omega)$ for SrVO$_3$ and SrMoO$_3$ while thin lines in the same color code indicate the imaginary part of the initial cRPA interaction $-\Im U_\text{cRPA}^{t_{2g}}(\omega)$. \[RealInts\]](Wloc_r_SMoO.pdf){width="34.00000%"}
Multitier $GW$+EDMFT considers both types of correlations and is therefore a well-suited scheme to investigate the origin of the satellites. In Refs. it was shown that the $GW$+EDMFT multitier technique yields high energy satellites which are most naturally explained in terms of plasmonic excitations when the intermediate and correlated subspaces include only the $t_{2g}$ orbitals. This conclusion is supported by the relatively small value of the self-consistently computed local interaction $\mathcal{U}(0)$, which cannot explain those structures as Hubbard bands, while reproducing the experimental mass enhancement relatively well (the band renormalization is slightly too small). In Fig. \[SVO\_SMoO\_Akw\] we show the local and $\mathbf{k}$-resolved spectral functions for SrVO$_3$ and SrMoO$_3$ obtained from the three-band and five-band calculations. Focusing on the spectral function associated with the $t_{2g}$ states, we see that the inclusion of the $e_g$ orbitals has no significant effect on the partial $t_{2g}$ spectral function. In particular, the position and strength of the satellite features are similar in the three- and five-band models. The fact that the satellites at 3 eV in the local spectral function follow the dispersion of the unoccupied part of the quasi-particle bands is consistent with the plasmon scenario. We find that the crystal field splitting between the two manifolds is significantly enhanced by correlation effects in the case of SrMoO$_3$, while for SrVO$_3$ the $e_g$ states remain at the same position as in the LDA bandstructure.
In Fig. \[SVO\_SMoO\_UW\] we show the frequency-dependent interaction along the Matsubara axis. The results of the three- and five-band calculations for SrMoO$_3$ do not show any significant difference, probably as a consequence of the correlation-enhanced crystal field splitting, which decouples the $t_{2g}$ and $e_g$ bands. In the case of SrVO$_3$, the screening effects on the local $t_{2g}$ interaction $\mathcal{U}^{t_{2g}}(0)$ coming from the inclusion of the $e_g$ orbitals are more pronounced. Here the five-band system is characterized by $\mathcal{U}^{t_{2g}}(0)=2.4$ eV versus $\mathcal{U}^{t_{2g}}(0)=2.1$ eV in the three-band case. This difference is larger than the difference in the screened cRPA interaction provided as input (see the red line). The relatively large difference in $\mathcal{U}^{t_{2g}}(0)$ may be related to the pole at low frequency discussed in Ref. . A small shift of this pole to lower frequencies can lead to substantial changes in the static value of the interaction (the results for $\mathcal{U}^{t_{2g}}$ above the pole are very similar in both models). In spite of this difference in the local interaction strength, long-range charge fluctuations lead to an almost complete screening of the interaction, i.e., an almost complete vanishing of $W_\text{loc}^{t_{2g}}(0)$.
The similarity between the three- and five-band results is also seen in the broad pole structure in $-\Im W_\text{loc}(\omega)$ (Fig. \[RealInts\]) which provides a consistent explanation of the satellites in the spectral function in terms of long-range charge fluctuations. Also, in agreement with our previous studies on the same compounds,[@Boehnke16When; @Nilsson17Multitier] the plasmon peak is higher in SrVO$_3$ indicating stronger screening effects compared to SrMoO$_3$.
We conclude from this analysis that the three- and five-band calculations yield consistent interpretations of the satellite features in these compounds as plasmons rather than Hubbard bands. The presented data also provide a convincing check for the validity of the downfolding procedure in the multitier approach.
SrMnO$_3$
---------
### Results for paramagnetic SrMnO$_3$
There is a substantial level of agreement on the importance of electronic correlations in SrMnO$_3$, while their role in determining the experimentally observed insulating ground state is still debated. Previous studies on this material [@Mravlje2012; @Sondena2006; @Bauernfeind2018] employed DFT for the structural properties or DFT+DMFT to incorporate the effect of Hubbard-like interactions. A common aspect of all these studies has been the [*ad hoc*]{} choice of the Hubbard interaction $U$ and Hund coupling $J_H$, which were tuned to reproduce experimental observations such as the band gap or the magnetic moment.
![Frequency dependent cRPA interaction $U_{cRPA}^{t_{2g}}(i\omega_n)$, Hubbard $\mathcal{U}^{t_{2g}}(i\omega_n)$ and Hund’s $\mathcal{J}^{t_{2g}}(i\omega_n)$ (dotted blue line) components of the local effective interaction and screened interaction $W_\text{loc}^{t_{2g}}(i\omega_n)$ for SrMnO$_3$ in the three- (a) and five- (b) band case. Panel (c) shows the local spectral function for the different correlated subspaces. In (d) the imaginary parts of the cRPA interaction (thin lines) and the fully screened (thick lines) interactions are shown on the real-frequency axis. \[SrMnO3Ints\]](All_int_SMnO.pdf){width="48.00000%"}
In the following we apply the fully self-consistent and parameter-free $GW$+EDMFT approach to the three- and five-band models of SrMnO$_3$ which, at first glance, appear quite similar to the models described above. In SrMnO$_3$, three electrons populate the $t_{2g}$ states of the manganese cation, which form a 2.5 eV wide band. In contrast to the previous systems, already at the DFT level, the $e_g$ manifold crosses the Fermi level, which calls into question the validity of a three-band description for this compound. This observation also suggests that the $e_g$ states provide non-negligible screening channels. Indeed, the static values of the cRPA interaction reported in Fig. \[SrMnO3Ints\] are quite different for the two models, namely $U_\text{cRPA}(0)=1.9$ eV in the three-band model and $U_\text{cRPA}(0)=2.7$ eV in the five-band model. We also notice that, even though the effective local interaction $\mathcal{U}$ is similar for the two models, the fully screened interaction $W_\text{loc}^{t_{2g}}$ is substantially smaller in the five-band model. This indicates a more metallic behavior of the five-band model due to enhanced screening within the low-energy subspace.
Similar conclusions can be reached from the lower right panel of the same figure showing $\Im W_\text{loc}$ along the real frequency axis. $\Im W_\text{loc}^{t_{2g}}$ closely follows the cRPA $U$ for low frequencies ($<1$ eV), which indicates that the metallic screening within the $t_{2g}$ subspace is weak in the three-band model. In this case the dominant low-energy screening is in the $t_{2g}$-$e_g$ channel which is incorporated into the cRPA interaction. On the other hand, in the five-band model the low-energy screening is stronger even though the $e_g$ states are pushed up in energy (see panel (c)) and therefore should not contribute significantly to screening channels below 1 eV. Hence, in the five-band model, the screening within the $t_{2g}$ subspace is enhanced. This can be understood from the larger weight of the quasiparticle peak in the five-band case (see Fig. \[SrMnO3Akw\] and the discussion below) which corresponds to an increased $t_{2g}$ spectral weight around the Fermi energy. Again this indicates that an effective model containing only the $t_{2g}$ states might be insufficient in describing the screening effects in paramagnetic SrMnO$_3$.
In Fig. \[SrMnO3Akw\] we show the local and $\mathbf{k}$-resolved spectral functions of SrMnO$_3$ in the paramagnetic phase obtained by the $GW$+EDMFT approach and compare it with the result from a single-shot $GW$ calculation. The latter yields a dispersion similar to SrVO$_3$ and SrMoO$_3$ (see Ref. ) with a smaller bandwidth and a plasmonic broadening occurring mainly in the proximity of the $\Gamma$ and $X$ points. The inclusion of local vertex corrections beyond $GW$, however, has striking effects on SrMnO$_3$: the near-Mottness of the compound, in both the three- and five-band models, manifests itself in the formation of broad structures centered at $\pm$ 1 eV and an extremely narrow peak at the Fermi level. We identify these features as Hubbard bands considering that their separation agrees with the magnitude of the static local effective interaction $\mathcal{U}^{t_{2g}}(0)=1.8$ eV. In addition, especially in the three-band model, within each of the three main structures (the two Hubbard bands at $\sim\pm$ 1 eV and the narrow quasi-particle band) it is possible to recognize renormalized and/or broadened replicas of the noninteracting dispersion. This behavior is typical of the Mott transition scenario. [@Georges1996] The asymmetry between the occupied and unoccupied parts of the spectra appears to be a consequence of the $GW$-derived $\mathbf{k}$-dependent self-energy, which is known to produce such effects.[@Casula2016] It is also worth noting that these strong correlation effects occur even though the ratio between $\mathcal{U}^{t_{2g}}(0)$ and the bandwidth is similar to the previous two compounds. This is a Hund coupling effect, which leads to a suppression of the kinetic energy at half-filling. As a result, the critical interaction for the Mott transition in a multi-orbital system with $J_H>0$ is lowest at half-filling.[@Werner2009]
![Local and $\mathbf{k}$-resolved spectral function of SrMnO$_3$ obtained using the $GW$+EDMFT method (left) and with single-shot $GW$ (right) in the three- (top) and five-band (bottom) description. Thin black lines represent the LDA bandstructure. \[SrMnO3Akw\]](AkwSMnO.png){width="78.50000%"}
The low-energy structures in $\Im W_\text{loc}(\omega)$ will give rise to weak satellites (or broad tails) on the high energy side of the Hubbard bands, a feature seen in the local $t_{2g}$ spectral function shown in Fig. \[SrMnO3Akw\]. Similar physics was investigated for model systems in Ref. , where it was shown that a Mott gap in the fermionic spectral function is associated with a peak in $\text{Im}\mathcal{U}(\omega)$ at $\omega$ corresponding to the characteristic energy for charge excitations across this gap. In the presence of a quasi-particle band, there are also screening modes associated with transitions between the quasi-particle band and the Hubbard bands. The situation for real materials is, as discussed above, more complicated since there are multiple screening channels giving rise to different peaks in $\Im W(\omega)$ and a careful analysis of the different screening channels is needed to clarify the origin of the satellite features.
![Imaginary part of the local effective interaction on the real frequency axis $-\Im \mathcal{U}^{t_{2g}}(\omega)$ for the three-band model of SrMnO$_3$ in the paramagnetic (red line) and antiferromagnetic (green line) phases. The peaks mainly originate from excitations from outside the $t_{2g}$ subspace, which are transferred from $U_\text{cRPA}$, but also include contributions from excitations within the quasi-particle band (paramagnetic case) and excitations across the gap (antiferromagnetic case), respectively. \[curlyUReal\]](curlyU_r_SMnO_AFM.pdf){width="34.00000%"}
In contrast to the results in previous LDA and LDA+DMFT studies with [*ad hoc*]{} parameters,[@Mravlje2012; @Sondena2006] both our models of SrMnO$_3$ remain conducting in the paramagnetic phase. The metallicity is due to a very narrow quasi-particle band pinned at the chemical potential. The quasi-particle peak in the five-band case is somewhat larger, see Fig. \[SrMnO3Ints\](c), and as a result, the screened interaction $W_\text{loc}^{t_{2g}}$ is smaller (panel (d)). Similarly to the case of SrMoO$_3$, the $e_g$ center of mass is shifted to higher energies compared to LDA. However, a small amount of $e_g$ spectral weight remains in the occupied part of the spectral function and is responsible for the self-doping effect on the $t_{2g}$ states, as one can infer from the peak at the Fermi level not being at the center of the gap. The self-doping from the $e_g$ states alters the partial $t_{2g}$ filling slightly away from half-filling, which tends to reduce the local correlations, as discussed above. Hence it is this self-doping that is responsible for the larger quasi-particle peak in the five-band model. From these observations we conclude that, if restricted to the paramagnetic case, the $GW$+EDMFT approach applied to the three- and five-band models of SrMnO$_3$ yields a metal on the verge of a Mott transition, in which the $e_g$ bands play an active role in determining the overall physics of the system.
### Antiferromagnetic phase of SrMnO$_3$
By construction, the paramagnetic $GW$+EDMFT calculation cannot account for the magnetic ground state which is experimentally observed in the cubic phase of SrMnO$_3$.[@Takeda1974; @Saitoh1995; @Kang2008; @Kim2010] Measurements of the magnetic moment report a value of 2.6$\pm$0.2 $\mu_B$ and previous DFT+DMFT calculations yield compatible results.[@Mravlje2012] The low temperature behavior has been reported to be well described by the ordering of $S=3/2$ local moments with a $T_{\text{N\'eel}}$ between 233 K and 260 K. The variations in the ordering temperature can be accounted for if oxygen defects are considered.
To describe antiferromagnetic ordering we extended the $GW$+EDMFT multitier approach to a bipartite lattice as described in Sec. \[sec\_AFM\]. At $\beta=40$ eV$^{-1}$, the solution with G-type antiferromagnetism self-consistently emerges in our parameter-free simulation. The local and $\mathbf{k}$-resolved spectral functions of the three-band model are reported in Fig. \[SrMnO3Akw\_AFM\] and exhibit a gap of about 0.5 eV, as well as pronounced features at $\pm 2$ eV. The position of the lower satellite is in good agreement with the PES measurements of Ref. , which were however taken above $T_{\text{N\'eel}}$. There are spectral weight tails up to very high energy, consistent with plasmonic sidebands. From the imbalance in the spin population on a given site we compute the magnetic moment as $$\begin{aligned}
m=\sum_{\alpha}\left(n_{\alpha\uparrow}-n_{\alpha\downarrow}\right)\mu_{B},\end{aligned}$$ where $\alpha$ is the orbital index. The result is 2.05 $\mu_{B}$ on both sublattices. This is in reasonably close agreement with the above-quoted 2.6 $\mu_{B}$ which was measured experimentally at a much lower temperature.[@Takeda1974] Due to numerical stability issues we could not go below 290 K in our calculations, which means that our simulation results are slightly above the experimental $T_{\text{N\'eel}}$. However, in mean-field based approaches such as DMFT, $T_{\text{N\'eel}}$ is expected to be substantially overestimated since long-range fluctuations are neglected. In the $GW$+EDMFT approach used in this work, long-range spin fluctuations are not included, as the long-range interaction is decoupled in the charge channel. We thus cannot expect a significant improvement in the description of magnetic ordering temperatures. The value of $m$ is also reduced compared to the low-temperature value, because our simulation temperature is close to $T_{\text{N\'eel}}$; the magnetic moment should increase toward the experimental value as the temperature is lowered. Since the approach contains no adjustable parameters, such a temperature-dependent analysis would be an interesting topic for future studies.
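The sublattice magnetization defined above follows directly from the orbital- and spin-resolved occupations. A minimal sketch in Python; the occupation numbers are purely illustrative (chosen to reproduce the quoted 2.05 $\mu_B$), not the actual self-consistent $GW$+EDMFT values:

```python
# Staggered moment m = sum_alpha (n_up - n_dn), in units of mu_B.

def magnetic_moment(n_up, n_dn):
    """Moment per site (in mu_B) from orbital-resolved spin occupations."""
    return sum(u - d for u, d in zip(n_up, n_dn))

n_up = [0.90, 0.89, 0.90]  # majority-spin t2g occupations (hypothetical)
n_dn = [0.21, 0.22, 0.21]  # minority-spin t2g occupations (hypothetical)
m = magnetic_moment(n_up, n_dn)  # -> 2.05 mu_B for these example values
```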
![$\mathbf{k}$-resolved spectral function of a three-band model of SrMnO$_3$ with long range antiferromagnetic order.\[SrMnO3Akw\_AFM\]](AkwSMnO_AFM.pdf){width="40.00000%"}
Because of the heavy numerical cost of the two-sublattice calculation, we analyze the symmetry-broken phase only for the three-band model of SrMnO$_3$. The insulating nature of the solution should result in small screening effects within the correlated subspace. Indeed, as shown in Fig. \[SrMnO\_afm\_Ints\](a), the effective local interaction is essentially equal to the cRPA result and the fully screened interaction $W_\text{loc}^{t_{2g}}$ is only slightly smaller. The bosonic spectrum $\Im
W_\text{loc}^{t_{2g}}(\omega)$, shown in panel (b), is also similar to $\Im
U_\text{cRPA}^{t_{2g}}(\omega)$, but shifted to slightly higher energy (due to the gap) and without a prominent feature near 4 eV. The gap in the spectrum suppresses the low-frequency screening and introduces a screening channel, corresponding to transitions across the gap, which modifies the low-energy peak in $\Im W_\text{loc}(\omega)$. On the other hand, the local effective interaction on the real axis, shown in Fig. \[curlyUReal\], features a broad pole centered at $\omega=3$ eV, which is inherited from the corresponding peak in $\Im U_\text{cRPA}^{t_{2g}}(\omega)$.
![(a) Frequency dependent cRPA interaction $U_\text{cRPA}^{t_{2g}}(i\omega_n)$, Hubbard $\mathcal{U}^{t_{2g}}(i\omega_n)$ and Hund’s $\mathcal{J}^{t_{2g}}(i\omega_n)$ (dotted blue line) components of the local effective interaction and screened interaction $W_\text{loc}^{t_{2g}}(i\omega_n)$ for the symmetry broken phase of SrMnO$_3$ in the three-band model. (b) Imaginary part of the fully screened interaction $-\Im
W_\text{loc}^{t_{2g}}(\omega)$ (thick lines) and the initial cRPA interaction $-\Im U_\text{cRPA}^{t_{2g}}(\omega)$ (thin lines) on the real frequency axis. \[SrMnO\_afm\_Ints\]](Mats_int_SMnO_AFM.pdf){width="48.00000%"}
Conclusions {#sec_conclusions}
===========
We used the recently developed multitier $GW$+EDMFT approach to perform a systematic analysis of the electronic properties of a family of transition metal perovskites. This self-consistent computational scheme captures both local Hubbard physics and long-range charge fluctuations, and does not rely on any choice of parameters, apart from the energy windows defining the model subspaces (tiers). Access to the frequency dependence of the self-consistently determined interactions makes it possible to discriminate between spectral features originating from local physics and from plasmonic excitations. The latter are collective charge fluctuations that screen the local effective interaction below a frequency that depends on the details of the correlated electronic structure. Both effects are self-consistently accounted for on an equal footing, making $GW$+EDMFT a fully [*ab initio*]{} approach.
The three perovskites considered, namely SrVO$_3$, SrMoO$_3$ and SrMnO$_3$, contain 1, 2, and 3 electrons in the $t_{2g}$ shell, respectively, while the $e_g$ shell is essentially empty. Due to the effects of filling and Hund coupling, the correlations in these three materials are qualitatively different. For example, the first two materials are correlated metals, while SrMnO$_3$ is an antiferromagnetic insulator. Reproducing these basic properties is a first important test for a parameter-free [*ab initio*]{} scheme. The comparison between the three-band and five-band description serves as an additional consistency check for the multitier $GW$+EDMFT approach, which should produce consistent results for the low-energy electronic structure, independent of the choice of subspace for the self-consistency cycle.
--------------------------- ---------- -------------- ---------- -------------- ---------- --------------
                                  SrVO$_3$                  SrMoO$_3$                  SrMnO$_3$
                            $t_{2g}$   $t_{2g}+e_g$   $t_{2g}$   $t_{2g}+e_g$   $t_{2g}$   $t_{2g}+e_g$
$\mathcal{U}^{t_{2g}}$(0)   2.13       2.38           2.78       2.76           1.88       2.01
$\mathcal{J}^{t_{2g}}$(0)   0.38       0.41           0.24       0.24           0.32       0.31
$Z^{t_{2g}}$                0.62       0.62           0.7        0.7            0.07       0.08
--------------------------- ---------- -------------- ---------- -------------- ---------- --------------
: \[tab:tableInteractions\]Screened effective interaction parameters and quasi-particle weight $Z$ in the $t_{2g}$ sub-shell for the three- and five-band models of SrVO$_3$, SrMoO$_3$, and SrMnO$_3$.[]{data-label="Ztable"}
In the case of SrVO$_3$ we obtained a quasi-particle weight in the $t_{2g}$ shell which is slightly larger than the $Z^{t_{2g}}\approx 0.5$ determined in photoemission studies,[@Yoshida10Mass] see Tab. \[Ztable\]. In both the three-band and five-band descriptions, the satellite features appear to have a plasmonic origin, since the self-consistently computed static interaction is too small to produce Hubbard bands. Hence, within $GW$+EDMFT, SrVO$_3$ is described as a correlated metal with strong nonlocal screening effects within the $t_{2g}$ subspace. The results for SrMoO$_3$ indicate an even more weakly correlated metal. In both these metallic systems the inclusion of the $e_g$ states has little effect on the $t_{2g}$ states, which makes the interpretation of the satellite structures in terms of plasmonic excitations robust against the choice of the low-energy window.
A qualitatively different situation is encountered in the case of SrMnO$_3$, which is experimentally found to be insulating both above [@Kang2008; @Kim2010] and below $T_\text{N\'eel}\approx 290$ K. If restricted to paramagnetic solutions, $GW$+EDMFT predicts a strongly correlated metal in proximity to a Mott transition, in both the three- and five-band models. The Hubbard bands of this strongly correlated metal are at too low an energy compared to experiment. On the other hand, the extension of the method to states with broken spin symmetry produces an antiferromagnetic solution with spectral features and a magnetic moment in good agreement with experiments. There are two possible conclusions one can draw from this observation:
\(i) The (short-range) antiferromagnetic spin correlations at $T\approx 300$ K may still be so strong that the material is more accurately described by the antiferromagnetic solution than by the paramagnetic solution, which ignores nonlocal correlations completely. In this case, SrMnO$_3$ would not be a pure Mott insulator above $T_{\text{N\'eel}}$, but a material strongly influenced by short-range magnetic correlations. These short-range antiferromagnetic fluctuations are, in the paramagnetic phase, described by the nonlocal vertex which is not included in the present calculations. On the other hand, below the Néel temperature antiferromagnetic long-range order can be incorporated without the need to include the nonlocal vertex, as described in Section IIC. Hence the lack of a nonlocal vertex is a likely reason for why the $GW$+EDMFT method only yields a correct description of SrMnO$_3$ in the antiferromagnetic phase.
\(ii) The second possibility is that the interaction parameters which are self-consistently computed in $GW$+EDMFT are too small. Recent model studies have shown that cRPA can strongly overestimate the screening from bands which are relatively close to the Fermi level,[@Honerkamp2018] which may lead to an underestimation of $U_\text{cRPA}$, and hence of the effective bare interactions in tiers II and I. However, we have to note that both the five- and three-band models yield metallic solutions, and that the five-band model actually is more metallic. This indicates that the problem with $U_\text{cRPA}$, if it exists at all, is not related to screening from $e_g$ states.
If the first scenario turns out to be correct, it implies that SrMnO$_3$, like SrVO$_3$,[@Boehnke16When] is a material whose physics has been incorrectly described by standard DFT+DMFT treatments. In these calculations, interaction parameters are chosen ad hoc to reproduce spectral features (e.g. Hubbard bands) based on preconceived notions about the nature of the material. The second scenario can be checked by systematically enlarging the low-energy space (tier I). As we mentioned, the inclusion of the $e_g$ states in SrMnO$_3$ significantly alters the fermionic and bosonic Weiss fields. It is then conceivable that states outside our correlated subspace, but located at a similar energy separation, require a treatment beyond $U_\text{cRPA}$. In particular, it would be interesting to include the oxygen $p$ orbitals in tier I, since these may produce significant screening. Indeed, in Ref.  it was shown that these states lie not far from the Fermi level in SrMnO$_3$, while the authors of Ref.  even argue that they must be included in the low-energy models of all the perovskites.
While the current study showed that the $GW$+EDMFT approach can handle different correlation strengths and magnetic phases in material-specific setups, and produces consistent results for different choices of low-energy models, it will be important to perform additional tests on experimentally well-characterized compounds. The great strength of $GW$+EDMFT is that it is free from ad-hoc parameters and capable of treating both weakly and strongly correlated systems. However, the treatment of nonlocal correlations is limited to charge fluctuations, whereas spin fluctuations are not included. Furthermore, the initial downfolding in tier III relies on cRPA. Additional studies are needed to establish for which classes of compounds the method has predictive power.
FP and PW acknowledge support from the Swiss National Science Foundation through NCCR MARVEL and the European Research Council through ERC Consolidator Grant 724103. FN and FA acknowledge financial support from the Swedish Research Council (VR). The calculations were performed on the Beo04 cluster at the University of Fribourg and on resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC. We thank A. Georges, M. Zingl and J. Mravlje for insightful discussions.
---
author:
- 'S. de la Torre'
- 'L. Guzzo'
- 'J. A. Peacock'
- 'E. Branchini'
- 'A. Iovino'
- 'B. R. Granett'
- 'U. Abbas'
- 'C. Adami'
- 'S. Arnouts'
- 'J. Bel'
- 'M. Bolzonella'
- 'D. Bottini'
- 'A. Cappi'
- 'J. Coupon'
- 'O. Cucciati'
- 'I. Davidzon'
- 'G. De Lucia'
- 'A. Fritz'
- 'P. Franzetti'
- 'M. Fumana'
- 'B. Garilli'
- 'O. Ilbert'
- 'J. Krywult'
- 'V. Le Brun'
- 'O. Le Fèvre'
- 'D. Maccagni'
- 'K. Ma[ł]{}ek'
- 'F. Marulli'
- 'H. J. McCracken'
- 'L. Moscardini'
- 'L. Paioro'
- 'W. J. Percival'
- 'M. Polletta'
- 'A. Pollo'
- 'H. Schlagenhaufer'
- 'M. Scodeggio'
- 'L. A. M. Tasca'
- 'R. Tojeiro'
- 'D. Vergani'
- 'A. Zanichelli'
- 'A. Burden'
- 'C. Di Porto'
- 'A. Marchetti'
- 'C. Marinoni'
- 'Y. Mellier'
- 'P. Monaco'
- 'R. C. Nichol'
- 'S. Phleps'
- 'M. Wolk'
- 'G. Zamorani'
bibliography:
- 'biblio.bib'
subtitle: 'Galaxy clustering and redshift-space distortions at $\mathbf{z\simeq0.8}$ in the first data release'
title: 'The VIMOS Public Extragalactic Redshift Survey (VIPERS)[^1] '
---
Introduction
============
Over the past decades galaxy redshift surveys have provided a wealth of information on the inhomogeneous universe, mapping the late-time development of the small metric fluctuations that existed at early times, and whose early properties can be viewed in the cosmic microwave background (CMB). The growth of structure during this intervening period is sensitive both to the type and amount of dark matter, and also to the theory of gravity, so there is a strong motivation to make precise measurements of the rate of growth of cosmological structure [e.g. @jain2010].
Of course, galaxy surveys do not image the mass fluctuations directly, unlike gravitational lensing. But the visible light distribution does have some advantages as a cosmological tool in comparison with lensing. The number density of galaxies is sufficiently high that the density field of luminous matter can be measured with a finer spatial resolution, probing interesting nonlinear features of the clustering pattern with good signal-to-noise. The price to be paid for this is that the complicated biasing relation between visible and dark matter has to be confronted; but this is a positive factor in some ways, since understanding galaxy formation is one of the main questions in cosmology. Redshift surveys provide the key information needed to meet this challenge: global properties of the galaxy population and their variation with environment and with epoch.
The final advantage of redshift surveys is that the radial information depends on cosmological expansion and is corrupted by peculiar velocities. Although the lack of a simple method to recover true distances can be frustrating at times, it has come to be appreciated that this complication is in fact a good thing. The peculiar velocities induce an anisotropy in the apparent clustering, from which the properties of the peculiar velocities can be inferred much more precisely than in any attempt to measure them directly using distance estimators. The reason peculiar velocities are important is that they are related to the underlying linear fractional density perturbation $\delta$ via the continuity equation: $\dot\delta =
-{\mathbf{\nabla\cdot u}}$, where $\mathbf{u}$ is the peculiar velocity field. This can be expressed more conveniently in terms of the dimensionless scale factor, $a(t)$, and the Hubble parameter, $H(t)$, as $${\mathbf{\nabla\cdot u}}= -H f \delta; \quad f\equiv {d\ln \delta\over d\ln a}.$$ The growth rate can be approximated in most models by $f(a)\simeq
\Omega_m(a)^\gamma$, where $\gamma\simeq 0.545$ in standard $\Lambda$-dominated models, but where models of non-standard gravity display a growth rate in which the effective value of $\gamma$ can differ by $30\%$ [@linder2007].
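As an illustration of this parametrisation, the following sketch evaluates $f(a)\simeq\Omega_m(a)^\gamma$ for a flat $\Lambda$CDM model; the function names are ours, and the default $\Omega_{m,0}=0.25$ matches the fiducial cosmology adopted in this paper:

```python
import math

def omega_m(a, om0=0.25):
    """Matter density parameter Omega_m(a) in a flat LCDM cosmology."""
    return om0 / (om0 + (1.0 - om0) * a**3)

def growth_rate(a, om0=0.25, gamma=0.545):
    """Approximate linear growth rate f(a) ~ Omega_m(a)^gamma."""
    return omega_m(a, om0) ** gamma

# At the survey's central redshift z = 0.8, i.e. a = 1/(1+z):
f_z08 = growth_rate(1.0 / 1.8)
```

At early times ($a\to 0$) the sketch correctly gives $\Omega_m\to 1$ and hence $f\to 1$, as expected for a matter-dominated universe.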
The possibility of using the redshift-space distortion signature as a probe of the growth rate of density fluctuations, together with that of using the Baryonic Acoustic Oscillations (BAO) as a standard ruler to measure the expansion history, is one of the main reasons behind the recent burst of activity in galaxy redshift surveys. The first paper to emphasise this application as a test of gravity theories was the analysis of the VVDS survey by @guzzo08, and subsequent work especially by the SDSS LRG [@samushia12], WiggleZ [@blake12; @contreras13], 6dFGS [@beutler12] and BOSS [@reid12] surveys has exploited this method to make measurements of the growth rate at $z<1$.
Surveys such as SDSS LRG, WiggleZ, or BOSS are characterized by a large volume ($0.5-2\, h^{-3}{\rm Gpc}^3$), and a relatively sparse galaxy population with number density of about $10^{-4}\,h^{3}{\rm
Mpc}^{-3}$. Statistical errors are in this case minimized thanks to the large volume probed, at the expense of selecting a very specific galaxy population (e.g. blue star-forming or very massive galaxies), often with a complex selection function. The goal of the VIMOS Public Extragalactic Redshift Survey (VIPERS, [http://vipers.inaf.it]{}) has been to construct a survey with broader science goals and properties comparable to local general-purpose surveys such as the 2dFGRS. The adopted strategy has been to optimise the features of the ESO VLT multi-object spectrograph VIMOS in order to measure $\sim 400$ spectra at $I_{AB}<22.5$ over an area of $\sim 200$ square arcmin in a single exposure of less than $1$ hour. The survey is being performed as a “Large Programme” within the ESO general user framework and aims at measuring redshifts for about $10^5$ galaxies at $0.5<z<1.2$.
The prime goal of VIPERS is an accurate measurement of the growth rate of large-scale structure at a redshift around unity. In particular, the survey's high galaxy sampling density of about $10^{-2}\,h^{3}{\rm Mpc}^{-3}$ should enable us to use techniques aimed at improving the precision on the growth rate [@mcdonald09]. In general, VIPERS is intended to provide robust and precise measurements of the properties of the galaxy population at an epoch when the Universe was about half its current age, representing one of the largest spectroscopic surveys of galaxies ever conducted at these redshifts. Examples can be found in the parallel papers that are part of the first science release [@marulli13; @malek13; @davidzon13].
This paper presents the initial analysis of the real-space galaxy clustering and redshift-space distortions in VIPERS, together with the resulting implications for the growth rate. The data are described in Section 2; Section 3 describes the survey selection effects; Section 4 describes our methods for estimating clustering, which are tested on simulations in Section 5; Section 6 presents the real-space clustering results; Section 7 gives the redshift-space distortions results, and Section 8 summarises our results and concludes.
Throughout this analysis, if not specified otherwise, we assume a fiducial $\Lambda {\rm CDM}$ cosmological model with $(\Omega_m,\Omega_k,w,\sigma_8,n_s)=(0.25,0,-1,0.8,0.95)$ and a Hubble constant of $H_0=100~h~\rm{km~s^{-1}~Mpc^{-1}}$.
Data
====
The VIPERS galaxy target sample is selected from the optical photometric catalogues of the Canada-France-Hawaii Telescope Legacy Survey Wide [CFHTLS-Wide, @goranova09]. VIPERS covers $24$ deg$^2$ on the sky, divided over two areas within the W1 and W4 CFHTLS fields. Galaxies are selected to a limit of $i'_{AB}<22.5$, applying a simple and robust $gri$ colour pre-selection to efficiently remove galaxies at $z<0.5$. Coupled with a highly optimized observing strategy [@scodeggio09], this allows us to double the galaxy sampling rate in the redshift range of interest, with respect to a pure magnitude-limited sample. At the same time, the area and depth of the survey result in a relatively large volume, $5 \times
10^{7}\mhmpcc$, analogous to that of the Two Degree Field Galaxy Redshift Survey (2dFGRS) at $z\simeq0.1$ [@colless01; @colless03]. Such a combination of sampling rate and depth is unique amongst current redshift surveys at $z>0.5$. VIPERS spectra are collected with the VIMOS multi-object spectrograph [@lefevre03] at moderate resolution ($R=210$) using the LR Red grism, providing a wavelength coverage of 5500-9500$\rm{\AA}$ and a typical radial velocity error of $\sigma_v=175(1+z)$[$\,{\rm km\, s^{-1}}$]{}. The full VIPERS area of $24$ deg$^2$ will be covered through a mosaic of 288 VIMOS pointings (192 in the W1 area, and 96 in the W4 area). A discussion of the survey data reduction and management infrastructure is presented in @garilli12. An early subset of the spectra used here is analysed and classified through a Principal Component Analysis (PCA) in @marchetti13. A complete description of the survey construction, from the definition of the target sample to the actual spectra and redshift measurements, is given in the parallel survey description paper [@guzzo13].
The data set used in this and the other papers of the early science release will constitute the VIPERS Public Data Release 1 (PDR-1) catalogue. It will be made publicly available in the fall of 2013. This catalogue includes $55358$ redshifts ($27935$ in W1 and $27423$ in W4) and corresponds to the reduced data frozen in the VIPERS database at the end of the 2011/2012 observing campaign; this represents $64\%$ of the final survey in terms of covered area. A quality flag, which quantifies the reliability of the measured redshift, has been assigned to each object in the process of determining its redshift from the spectrum. In this analysis, we use only galaxies with flags 2 to 9 inclusive, corresponding to a sample with a redshift confirmation rate of $98\%$. The redshift confirmation rate and redshift accuracy have been estimated using repeated spectroscopic observations in the VIPERS fields [see @guzzo13 for details]. The catalogue, which we will simply refer to as the VIPERS sample in the following, corresponds to a sub-sample of $45871$ galaxies with reliable redshift measurements.
The redshift distribution of the sample is presented in Fig. \[fig1\]. We can see in this figure that the survey colour selection efficiently removes galaxies below $z=0.5$. It is important to notice that the colour selection does not introduce a sharp cut in redshift but rather a redshift window function with a smooth transition from zero to one in the range $0.4<z<0.6$, with respect to the full population of $i'<22.5$ galaxies. This effect on the radial selection of the survey, which we refer to as the Colour Sampling Rate ([${CSR}$]{}) in the following, is only present below $z=0.6$. Above this redshift, the colour selection has no impact on the redshift selection and the sample becomes purely magnitude-limited at $i'<22.5$ [@guzzo13]. If we weight the raw redshift distribution by the global survey completeness function described in the next sections, we obtain the $N(z)$ represented by the empty histogram in Fig. \[fig1\]. For convenience, we scaled down the corrected $N(z)$ by $40\%$, the average effective survey sampling rate, to aid the comparison between the shapes of the two distributions. The difference in shape between these two $N(z)$ shows the effect of incompleteness in the survey, which is only significant above $z\approx0.9$ [see also @davidzon13].
The observed redshift distribution in the sample can be well described by a function of the form $$N(z)=A \left(\frac{z}{z_0}\right)^\alpha
\exp\left(-\left(\frac{z}{z_0}\right)^\beta\right){CSR}(z), \label{eq:nz}$$ in units of $\rm{deg}^{-2} \cdot (\Delta z=0.03)^{-1}$ and where $(A,
z_0, \alpha, \beta)=(3.103, 0.191, 8.603, 1.448)$. The [${CSR}$]{}is the incompleteness introduced by the VIPERS colour selection. It is primarily a function of redshift and can be estimated from the ratio between the number of galaxies with $i'<22.5$ satisfying the VIPERS colour selection and the total number of galaxies with $i'<22.5$ as a function of redshift. We calibrated this function using the VLT-VIMOS Deep Survey Wide spectroscopic sample [VVDS-Wide, @garilli08] which has a CFHTLS-based photometric coverage and depth that is similar to that of VIPERS, but which is free from any colour selection [see @guzzo13 for details]. The [${CSR}$]{}is well described by a function of the form $${CSR}(z)=\left[\frac{1}{2}-\frac{{\rm
erf}\left(b(z_t-z)\right)}{2}\right], \label{eq:csr}$$ with $(b,z_t)=(17.465,0.424)$.
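A minimal Python sketch of these two fitting functions, using the best-fit parameters quoted above (the function names are our own):

```python
import math

def csr(z, b=17.465, zt=0.424):
    """Colour Sampling Rate: smooth erf transition from 0 to 1 around z_t."""
    return 0.5 - 0.5 * math.erf(b * (zt - z))

def n_of_z(z, A=3.103, z0=0.191, alpha=8.603, beta=1.448):
    """Model redshift distribution N(z), in deg^-2 per (dz = 0.03) bin."""
    return A * (z / z0)**alpha * math.exp(-((z / z0)**beta)) * csr(z)
```

By construction, `csr` equals 0.5 at the transition redshift $z_t=0.424$ and is essentially 0 and 1 well below and above it, respectively.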
The fitting of $N(z)$ is important in measuring galaxy clustering: the form of the mean redshift distribution must be followed accurately, but features from large-scale structure must not be allowed to bias the result. We discuss this issue in detail in Section \[sec:test\].
Angular completeness
====================
Slit assignment and footprint
-----------------------------
To obtain a sample of several square degrees with VIMOS, one needs to perform a series of individual observations or pointings. The VIPERS strategy consists in covering the survey area with only one pass. This has been done in order to maximise the volume probed. The survey strategy, together with the fact that the VIMOS field-of-view is composed of four quadrants delimited by an empty cross, creates a particular footprint on the sky which is reproduced in Figs \[fig3\] and \[fig4\]. In each pointing, slits are assigned to a number of potential targets which meet the survey selection criteria. This is shown in Fig. \[fig2\], which illustrates how the slits are positioned in the pointing W1P082. Given the surface density of the targeted population, the multiplex capability of VIMOS, and the survey strategy, a fraction of about $45\%$ of the parent photometric sample can be assigned to slits. We define the fraction of targets which have a measured spectrum as the Target Sampling Rate ([${TSR}$]{}) and the fraction of observed spectra with a reliable redshift measurement as the Spectroscopic Sampling Rate ([${SSR}$]{}). The number of slits assigned per pointing is maximized by the SSPOC algorithm [@bottini05], but the elongated size of the spectra means that the resulting sampling rate is not uniform inside the quadrants. The dispersion direction of the spectra in VIPERS is aligned with the Dec direction and consequently the density of spectra along this direction is lower than along the RA direction. This particular sampling introduces an observed anisotropic distribution of pair separations, which has to be accounted for in order to measure galaxy clustering correctly.
{width="12cm"}
The two empty stripes between the four quadrants in each pointing introduce a particular pattern in the measured correlation functions if not accounted for. We correct for that by applying detailed binary masks of the spectroscopic observations to a random sample of unclustered objects, so that both data and random catalogues contain no objects in these stripes. These masks account for the detailed VIMOS field-of-view geometry as well as for the presence of vignetted areas at the boundaries of the pointings. On top of these spectroscopic masks, we apply a set of photometric masks which discard areas where the parent photometry is affected by defects such as large stellar haloes and where the survey selection is compromised [see @guzzo13].
Small-scale incompleteness
--------------------------
We can characterise the number of small-scale angular pairs missed due to the VIPERS spectroscopic strategy by measuring the angular pair completeness as a function of angular separation. This quantity, defined as the ratio between the number of pairs in the spectroscopic sample and that in the parent photometric sample, can be written in terms of angular two-point correlation functions as [@hawkins03] $$\frac{1}{w^A(\theta)}=\frac{1+w_s(\theta)}{1+w_p(\theta)}, \label{eq:angcomp}$$ where $w_s(\theta)$ and $w_p(\theta)$ are respectively the angular correlation functions of the spectroscopic and parent samples. This function is shown in Fig. \[fig5\]. No significant difference is seen between the W1 and W4 fields, as expected. The fraction of missing angular pairs is only significant below $\theta=0.03$ deg, which corresponds to a transverse comoving scale of about $1\mhmpc$ at $z=0.8$.
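Inverting the relation above gives the pair weight that compensates for the missing pairs; a minimal sketch (function name ours):

```python
def angular_pair_weight(w_s, w_p):
    """Pair weight w^A(theta) = (1 + w_p) / (1 + w_s), from the
    spectroscopic (w_s) and parent (w_p) angular correlation functions."""
    return (1.0 + w_p) / (1.0 + w_s)

# When no pairs are missing (w_s == w_p) the weight is unity; at small
# separations the spectroscopic sample misses pairs (w_s < w_p), so
# galaxy-galaxy pairs are up-weighted.
```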
This fraction varies with redshift, although in practice we cannot measure it at different redshifts since we do not have a measured redshift for all galaxies in the parent sample. For this reason we use the global $w^A(\theta)$ (averaged over all observed redshifts) to correct for the small-scale angular incompleteness effect. We will show in Section \[sec:test\] that the level of systematic error introduced by using $w^A(\theta)$ instead of $w^A(\theta|z)$ is very small, of the order of a few percent. When measuring the angular correlation functions, we include the completeness weights introduced in the following section, in a similar way as for the three-dimensional correlation function estimation.
It is important to mention that small-scale angular incompleteness is a general issue for large galaxy redshift surveys, in which one has to deal with the mechanical constraints of multi-object spectrographs and the survey strategy. The incompleteness due to slit assignment in VIPERS is to some extent similar to the fibre collision problem in surveys using fibre spectroscopy such as 2dFGRS or SDSS, although the magnitude of the effect is much more severe in our case. Recently, a new method has been developed to accurately correct for fibre collisions [@guo12]. Although this method is quite general, it is not applicable here. The exclusion between spectroscopically observed objects in VIPERS is essentially uni-directional, meaning that not all close pairs are excluded. Therefore calculations such as that shown in Fig. \[fig3\] are possible from the set of one-pass observations, whereas the correction scheme of [@guo12] can only be used for SDSS, where overlapping observations are included. Thus we need to revise the correction methods developed for such surveys to apply them to VIPERS.
Large-scale incompleteneness {#sec:esr}
----------------------------
{width="17cm"}
{width="17cm"}
In addition to the non-uniform sampling inside the pointings, the survey has variations of completeness from quadrant to quadrant. This incompleteness is the combined effect of the [${TSR}$]{}and [${SSR}$]{}. The latter, which characterises our ability to determine a redshift from a galaxy spectrum, is determined empirically as the ratio between the number of reliable redshifts and the total number of observed spectra. The [${TSR}$]{}and [${SSR}$]{}in each quadrant are shown in Figs \[fig3\] and \[fig4\]. From these figures one can clearly see that both the [${TSR}$]{}and [${SSR}$]{}vary with position on the sky, although the [${SSR}$]{}tends to show stronger variations. The variations of the [${TSR}$]{}reflect the changes in angular galaxy density in the parent catalogue. Indeed, because of the finite maximum number of slits that can be assigned and the fact that each quadrant has a different number of potential targets, the less dense quadrants tend to be better sampled than the denser ones. On the other hand, variations in observational conditions from pointing to pointing induce changes in the [${SSR}$]{}. These different observational conditions translate into variations of the signal-to-noise ratio of the measured spectra and thus of our ability to extract a redshift measurement from them. These effects are taken into account in the clustering estimation by weighting each galaxy by the reciprocal of the [${TSR}$]{}and [${SSR}$]{}.
Clustering estimation {#sec:estimation}
=====================
We characterise the galaxy clustering in the VIPERS sample by measuring the two-point statistics of the spatial distribution of galaxies in configuration space. We estimate the two-point correlation function $\xi(r)$ using the @landy93 estimator $$\xi(r)=\frac{GG(r)-2GR(r)+RR(r)}{RR(r)}, \label{eq:xir}$$ where $GG(r)$, $GR(r)$, and $RR(r)$ are respectively the normalized galaxy-galaxy, galaxy-random, and random-random numbers of pairs with separation inside $[r-\Delta r/2,r+\Delta r/2]$. Note that here $r$ is a general three-dimensional galaxy separation, not specifically the real-space separation. This estimator minimises the estimation variance and circumvents discreteness and finite-volume effects [@landy93; @hamilton93]. The estimator requires a random catalogue, whose aim is to accurately sample the number density of objects in the survey: it must be an unclustered population of objects with the same radial and angular selection functions as the data. In this analysis, we use random samples with 20 times more objects than in the data to minimise the shot-noise contribution to the estimated correlation functions.
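A minimal sketch of the estimator, assuming the pair counts have already been binned in separation and normalized by the corresponding total numbers of distinct pairs (function name ours):

```python
import numpy as np

def landy_szalay(gg, gr, rr):
    """Landy-Szalay estimator xi = (GG - 2 GR + RR) / RR, applied
    bin by bin to normalized pair counts."""
    gg, gr, rr = (np.asarray(x, dtype=float) for x in (gg, gr, rr))
    return (gg - 2.0 * gr + rr) / rr

# Toy example for a single separation bin: an excess of data pairs
# over random pairs yields xi > 0, while gg == gr == rr yields xi == 0.
xi = landy_szalay([3.0e-4], [1.5e-4], [1.0e-4])  # -> [1.0]
```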
VIPERS has a complex angular selection function which has to be taken into account carefully when estimating the correlation function. For this, we weight each galaxy by the survey completeness weight, as well as each pair by the angular pair weights described in the previous section (Eq. \[eq:angcomp\]). The survey completeness weights correspond to the inverse of the effective sampling rate [${ESR}$]{}in each quadrant $Q$, defined as $$w(Q)=ESR^{-1}(Q)=(SSR(Q)\times TSR(Q))^{-1}. \label{eq:weight}$$ By applying these weights we effectively up-weight galaxies in the pair counts. It is important to note that here we keep the spatial distribution of the random objects uniform across the survey volume. We recall that survey completeness weights account for the quadrant-to-quadrant variations of the survey completeness described in Section \[sec:esr\] but do not correct for the internal quadrant incompleteness. For that we use the angular pair weights $w^A(\theta)$ which are applied to the GG pair counts. In principle the [${ESR}$]{}is also a function of redshift and galaxy type [see @davidzon13]. However, given the statistics of the sample it is impossible to measure the additional dependence of this function on redshift and galaxy properties. Therefore, we decided to only account for its quadrant-to-quadrant variations. We discuss the accuracy of this approximation in Section \[sec:test\].
Additional biases can arise if the radial selection function exhibits strong variations with redshift. The effect is particularly significant for magnitude-limited catalogues covering a large range of redshifts, in which the radial selection function rapidly drops at high redshift. In that case, the pair counts are dominated by nearby, more numerous objects: distant objects, although probing larger volumes, will have less weight. To account for this we use the minimum variance estimator of @davis82, for which the galaxy counts are essentially weighted by the inverse of the volume probed by each galaxy. This weighting scheme, usually referred to as $J_3$ weighting, is defined as [@hamilton93] $$w^{J_3}(z,s)=\frac{1}{1+\bar{n}(z) 4\pi J_3(s)}\, ,$$ where $z$ is the redshift of the object, $s$ is the redshift-space pair separation, $\bar{n}(z)$ is the galaxy number density at $z$, and $J_3(s)$ is defined as $$J_3(s)=\int_0^s s^{\prime 2} \xi(s^\prime)ds^\prime.$$ Each pair is then weighted by $$w^{J_3}_{ij}=w^{J_3}_i(z_i,s_{ij})w^{J_3}_j(z_j,s_{ij}).$$ However, we find that applying $J_3$ weighting does not significantly change the amplitude and shape of the correlation function in our sample, and tends to produce noisy correlation functions, especially for high-redshift sub-samples. We thus decided not to apply this correction in this analysis.
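In practice the $J_3$ weight is evaluated by tabulating the integral on a grid of separations. A minimal sketch of the two definitions above (names are ours; the power-law $\xi$ used for testing is purely illustrative):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def J3(s_grid, xi_grid):
    """J3(s) = int_0^s s'^2 xi(s') ds', tabulated by the trapezoid rule."""
    integrand = s_grid ** 2 * xi_grid
    return cumulative_trapezoid(integrand, s_grid, initial=0.0)

def j3_weight(nbar_z, J3_s):
    """Minimum-variance pair weight w = 1 / (1 + nbar(z) * 4*pi * J3(s))."""
    return 1.0 / (1.0 + 4.0 * np.pi * nbar_z * J3_s)
```

The weight tends to unity when the density $\bar{n}(z)$ is low (distant, sparsely sampled objects) and down-weights pairs in dense, nearby regions, as intended.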
The final weight assigned to $GG$, $GR$, and $RR$ pairs combine the survey completeness and angular pair weights as $$\begin{aligned}
GG(r)&=\sum_{i=1}^{N_G}\sum_{j=i+1}^{N_G}w_i(Q_i)w_j(Q_j)w^A(\theta_{ij})\Theta_{ij}\left(r\right) \\
GR(r)&=\sum_{i=1}^{N_G}\sum_{j=1}^{N_R}w_i(Q_i)\Theta_{ij}\left(r\right) \\
RR(r)&=\sum_{i=1}^{N_R}\sum_{j=i+1}^{N_R}\Theta_{ij}\left(r\right) \, ,\end{aligned}$$ where $\Theta_{ij}(r)$ is equal to unity for $r_{ij}$ in $[r-\Delta
r/2,r+\Delta r/2]$ and null otherwise.
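The weighted $GG$ count above can be sketched with a brute-force double loop, adequate only for small samples. For brevity the angular pair weight $w^A(\theta_{ij})$ is set to unity here (it would multiply each pair weight), and the function name is ours:

```python
import numpy as np

def weighted_gg(pos, w_quad, r_edges):
    """Weighted galaxy-galaxy pair counts, brute force.

    pos:    (N,3) galaxy positions
    w_quad: per-galaxy completeness weights w(Q) of Eq. (weight)
    Each unique pair (i,j), i<j, contributes w_i * w_j to its r bin;
    the angular pair weight w^A(theta_ij) is taken as 1 here."""
    gg = np.zeros(len(r_edges) - 1)
    for i in range(len(pos) - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        wp = w_quad[i] * w_quad[i + 1:]
        gg += np.histogram(d, bins=r_edges, weights=wp)[0]
    return gg
```

A production code would replace the explicit loop with tree- or grid-based neighbour searches, but the weighting logic is identical.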
We measure correlation functions using both linear and logarithmic binning. We define the separation associated with each bin as the bin centre and as the mean pair separation inside the bin, respectively for the linear and logarithmic binning [@zehavi11]. The latter definition is more accurate than using the bin centre, in particular at large $r$ when the bin size is large.
The galaxy real-space correlation function $\xi(r)$ is not directly measurable from redshift survey catalogues because galaxy peculiar velocities affect redshift measurements. Galaxy peculiar velocities introduce distortions in the galaxy clustering pattern and, as a consequence, we can only measure redshift-space quantities. We measure the anisotropic redshift-space correlation function $\xi(r_p,\pi)$, in which the redshift-space galaxy separation vector is divided into two components, $r_p$ and $\pi$, respectively perpendicular and parallel to the line-of-sight [@fisher94]. This decomposition, which assumes the plane-parallel approximation, allows us to isolate the effect of peculiar velocities, as these modify only the component parallel to the line-of-sight. Redshift-space distortions can then be mitigated by integrating $\xi(r_p,\pi)$ over $\pi$, thus defining the projected correlation function $$w_p(r_p)=\int^{\pi_{\rm max}}_{-\pi_{\rm max}} \xi(r_p,\pi)d\pi.$$ We measure $w_p(r_p)$ using an optimal value of $\pi_{\rm
max}=40\mhmpc$, allowing us to reduce the underestimation of the amplitude of $w_p(r_p)$ on large scales and at the same time to avoid including noise from uncorrelated pairs with separations of $\pi>40\mhmpc$. The projected correlation function allows us to measure real-space clustering (but see the later parts of Section \[sec:test\]). To combine the correlation function measurements from the two fields, we measure the mean of one plus the correlation functions in W1 and W4 weighted by the square of the number density, so that the combined correlation function $\xi(r_p,\pi)$ is obtained from $$1+\xi(r_p,\pi)=\frac{n^2_{W1}(1+\xi_{W1}(r_p,\pi))+n^2_{W4}(1+\xi_{W4}(r_p,\pi))}{n^2_{W1}+n^2_{W4}},$$ where $n_{W1}$ and $n_{W4}$ are the observed galaxy number densities in the W1 and W4 fields respectively.
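The projection over $\pi$ and the density-weighted combination of the two fields can be sketched as follows, assuming $\xi(r_p,\pi)$ has been measured on a regular grid in $\pi\ge0$ with the symmetry $\xi(r_p,-\pi)=\xi(r_p,\pi)$ (names are ours):

```python
import numpy as np

def projected_cf(xi_rp_pi, dpi):
    """w_p(r_p) = 2 * int_0^pi_max xi(r_p, pi) d(pi).

    xi_rp_pi: grid of shape (n_rp, n_pi) on linear pi bins of width dpi,
    pi >= 0 only; the factor 2 uses the symmetry in pi."""
    return 2.0 * np.sum(xi_rp_pi, axis=1) * dpi

def combine_fields(xi_w1, xi_w4, n_w1, n_w4):
    """Density-squared-weighted combination of 1+xi from the W1 and W4 fields."""
    num = n_w1 ** 2 * (1.0 + xi_w1) + n_w4 ** 2 * (1.0 + xi_w4)
    return num / (n_w1 ** 2 + n_w4 ** 2) - 1.0
```

The sum approximates the integral with a rectangle rule over the linear $\pi$ bins, which is the standard practice when $\xi(r_p,\pi)$ is only available in binned form.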
Tests of the clustering estimation {#sec:test}
==================================
Simulation data {#sec:mocks}
---------------
To test the robustness of our clustering estimation we make use of a large number of mock galaxy samples, which are designed to be a realistic match to the VIPERS sample. We create two sets of mock samples based on the Halo Occupation Distribution (HOD) technique. These two sets only differ by the input halo catalogue that has been used. In the first set of mocks, we used the haloes extracted from the MultiDark dark matter N-body simulation [@prada12]. This simulation, which assumes a flat $\Lambda {\rm CDM}$ cosmology with $(\Omega_m,~\Omega_\Lambda,~\Omega_b,~h,~n,~\sigma_8) =
(0.27,~0.73,~0.0469, ~0.7,~0.95,~0.82)$, covers a volume of $1\mhgpcc$ using $N=2048^3$ particles. In the simulation, the haloes have been identified using a friends-of-friends algorithm with a relative linking length of $b=0.17$ times the inter-particle separation (i.e. $0.083\mhmpc$). The mass limit down to which the halo catalogues are complete is $10^{11.5}\mhmsun$. Because this limiting mass is too large to host the faintest galaxies observed with VIPERS, we use the method of @delatorre13 to reconstruct haloes below the resolution limit. This method is based on stochastically resampling the halo number density field using constraints from the conditional halo mass function. For this, one needs to assume the shapes of the halo bias factor and halo mass function at masses below the resolution limit, for which we use the analytical formulae obtained by @tinker08 [@tinker10]. With this method we are able to populate the simulation with low-mass haloes with sufficient accuracy to obtain unbiased galaxy two-point statistics in the simulated catalogues [see @delatorre13 for details]. The minimum reconstructed halo mass we consider for the purpose of creating VIPERS mocks is $10^{10}\mhmsun$.
We then apply to the complete halo catalogues the algorithm presented in @carlson10 to remap halo positions and velocities in the initial simulation cube onto a cuboid of the same volume but different geometry. This is done to accommodate a maximum number of disjoint VIPERS W1 and W4 fields within the $1\mhgpcc$ volume of the simulation. This process allows us to create 26 and 31 independent lightcones for W1 and W4 respectively over the redshift range $0.4<z<1.3$. The lightcones are built by considering haloes from the different snapshots, arranged according to their distance from the coordinate origin of the lightcone. The lightcones are then populated with galaxies using the HOD technique. In this process, we populate each halo with galaxies according to its mass, the mean number of galaxies in a halo of a given mass being given by the HOD. It is common practice to distinguish between central and satellite galaxies in haloes. While the former are placed at rest at halo centres, the latter are randomly distributed within each halo according to an NFW radial profile. The halo occupation function and its dependence on redshift and luminosity/stellar mass must be precisely chosen in order to obtain mock catalogues with realistic galaxy clustering properties. We calibrated the halo occupation function directly on the VIPERS data: we performed an analytic HOD modelling of the projected correlation function for different samples selected in luminosity and redshift, which we present in Section \[sec:realclus\]. We obtain from this a series of HOD parameters at different redshifts and for different cuts in $B$-band absolute magnitude, which we then interpolate to obtain a general redshift- and $B$-band absolute magnitude-dependent halo occupation function $\langle N_{\rm
gal}(m|z,M_B)\rangle$. We use the latter function to populate the haloes with galaxies. Finally, we add velocities to the galaxies and measure their redshift-space positions. While the central galaxies are assigned the velocity of their host halo, satellite galaxies have an additional random component for which each Cartesian velocity component is drawn from a Gaussian distribution with a standard deviation that depends on the mass of the host halo. Details about the galaxy mock catalogue construction are given in Appendix A.
The second set of mocks that we built is based on halo catalogues created with the Pinocchio code[^2] [@monaco02]. This code follows the evolution of a set of particles on a regular grid, using an ellipsoidal collapse model to compute collapse times and identify dark matter haloes, and the Zel’dovich approximation to displace the haloes from their initial positions. While the recovery of haloes works well on an object-by-object basis, their positions and velocities on scales below $10\mhmpc$ suffer from the lack of accuracy of the Zel’dovich approximation. The halo positions and velocities obtained with this method are less accurate than those from the N-body simulation, and the halo clustering is generally underestimated on scales below $3\mhmpc$ [e.g. @monaco02]. However, this approach has the advantage of being very fast and can be used to generate a large number of independent halo catalogue realizations. We created $200$ independent halo mock realizations assuming the same cosmology as the MultiDark N-body simulation. The remaining steps in generating galaxy mock samples are similar to those used for the mocks based on the MultiDark simulation. The only difference is that here we do not need to divide each simulation into sub-volumes to generate different lightcones: we can directly create volumes of the size of the lightcones.
The final step in obtaining fully realistic VIPERS mocks is to add the detailed survey selection function. The procedure that we follow is similar to that used in the VVDS and zCOSMOS surveys, which were also based on VIMOS observations [@meneux06; @iovino10; @delatorre11]. We start by applying the magnitude cut $i'<22.5$ and the effect of the colour selection on the radial distribution of the mocks. The latter is done by depleting the mocks at $z<0.6$ so as to reproduce the [${CSR}$]{}. The mock catalogues that we obtain are then similar to the parent photometric sample in the data. We next apply the slit-positioning algorithm with the same setting as for the data. This allows us to reproduce the VIPERS footprint on the sky, the small-scale angular incompleteness and the variation of [${TSR}$]{}across the fields. Finally, we deplete each quadrant to reproduce the effect of the [${SSR}$]{}. Thus we are able to produce realistic mock galaxy catalogues that contain the detailed survey completeness function and observational biases of VIPERS.
Effects of systematics on the correlation function
--------------------------------------------------
### Effects related to the radial selection function
We first study the impact on our correlation function measurements of using different methods to estimate the radial selection. A key aspect in three-dimensional clustering estimation is to have a smooth and unbiased redshift distribution from which the random sample can be drawn. In particular, when the data sample used to estimate the radial distribution is not very large, one generally has to deal with strong features associated with prominent structures; these must not be allowed to induce spurious clustering in the random sample.
There are several empirical methods for avoiding this problem. One can for instance interpolate the binned observed distribution using cubic splines, filter the observed distribution with a kernel sufficiently large to erase the strong features in the distribution, or fit the observed distribution with a smooth template $N(z)$ and then randomly sample it. In general most of the methods are parametric and have to be calibrated. An alternative non-parametric method is the $V_{\rm
max}$ method. This method consists of randomly sampling the maximum volumes $V_{\rm max}$ probed by each galaxy in the survey [e.g. @kovac10; @cole11]. The $V_{\rm max}$ value for each galaxy corresponds to the volume between the minimum and maximum redshifts $z_{\rm min}$ and $z_{\rm max}$ at which the galaxy is observable in the survey.
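Per galaxy, the $V_{\rm max}$ sampling reduces to drawing comoving distances uniformly in volume between $D_c(z_{\rm min})$ and $D_c(z_{\rm max})$, i.e. with $P(<d)\propto d^3-d_{\rm min}^3$. A minimal sketch (function name ours; angular coordinates would be drawn separately from the angular mask):

```python
import numpy as np

def sample_vmax(d_min, d_max, n_per_gal, rng=None):
    """Draw random comoving distances uniformly in volume within each
    galaxy's [d_min, d_max] shell (the V_max method).

    d_min, d_max: arrays of per-galaxy distance limits
    Returns an array of shape (n_gal, n_per_gal)."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random((len(d_min), n_per_gal))
    # invert the CDF: d^3 uniform between d_min^3 and d_max^3
    d3 = d_min[:, None] ** 3 + u * (d_max[:, None] ** 3 - d_min[:, None] ** 3)
    return d3 ** (1.0 / 3.0)
```

Concatenating the draws over all galaxies reproduces the survey's radial selection without any smoothing parameter, which is the non-parametric property exploited in the text.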
Fig. \[fig6\] applies three such approaches to estimate the galaxy radial distribution in the combined W1+W4 sample: the analytical $N(z)$ of Eq. \[eq:nz\]; the Gaussian filtering method; and the $V_{\rm max}$ method. This figure shows the recovered $N(D_c)$ in the random sample with each method, with $D_c$ being the radial comoving distance; in practice we work with $N(D_c)$ instead of $N(z)$. We find that the methods give different estimates of the radial distribution. In the case of the Gaussian filtering, a kernel size of $150\mhmpc$ is needed to smear out the peaks in the distribution, otherwise the recovered $N(D_c)$ is still affected by large structures in the field – particularly by that at $D_c\simeq1600\mhmpc$. As expected, the filtering method tends to artificially broaden the $N(D_c)$ distribution, whereas the analytical and $V_{\rm max}$ methods are much smoother by construction and do not broaden the $N(D_c)$. We find that the $V_{\rm max}$ estimate shows a slightly flatter distribution at the level of the peak of the distribution, which seems visually to be more consistent with the data. In Fig. \[fig7\] we show the effect of using these different estimates of the radial distribution on the shape of the measured correlation function. Gaussian filtering with a kernel size of $150\mhmpc$ and the analytical $N(z)$ estimate both yield slightly smaller amplitudes of the projected correlation function on scales above $10\mhmpc$ than the $V_{\rm max}$ method. Gaussian filtering with a kernel size of $100\mhmpc$ globally underestimates the clustering amplitude in $w_p(r_p)$, as expected, by about $5\%$. The analytical and $V_{\rm
max}$ methods give very similar answers, except on scales above $5\mhmpc$ where the former tends to produce a smaller clustering amplitude by $5-15\%$ with respect to the latter. This comparison shows that the $V_{\rm max}$ method is more robust as it uniquely allows us to restore some correlation signal at large separation. For this reason and the fact that it is non-parametric we finally decided to use the $V_{\rm max}$ estimate to measure two-point correlation functions.
### Effects related to the angular selection function
The most crucial aspect of the galaxy clustering estimation in VIPERS is to account for the angular selection function. We test our methodology and the different assumptions discussed in Section \[sec:estimation\] using the MultiDark mock samples. We measure the accuracy with which we can estimate the two-point correlation function, by confronting the two-point correlation functions measured in the parent catalogues with those measured in the observed mocks when different completeness corrections are included. We measure the average relative difference between the corrected observed mocks and the parent measurement for different statistics. For this test, we consider two galaxy samples encompassing respectively all galaxies in the redshift intervals $0.5<z<0.75$ and $0.75<z<1.0$, using the same redshift distribution in the parent and observed mock samples to construct the radial selection function of the random sample.
It is common practice in clustering analyses to account for the angular survey completeness by down-weighting the random pair counts. This is usually done by keeping the galaxy counts unweighted and depleting the random sample so as to reproduce the survey angular completeness. The same effect can be achieved by using a uniform angular distribution of random objects but weighting each of them by the inverse of the weight defined in Eq. \[eq:weight\]. If we do that and set all the angular pair weights to unity, we obtain the systematic error on $w_p(r_p)$ shown with the dotted curves in Fig. \[fig8\]. We concentrate first on the results in the interval $0.5<z<0.75$. We can see in this figure that the recovered clustering with this method is underestimated by about $10\%$ over $1<r_p<20\mhmpc$, and drops rapidly to a $35\%$ underestimation below. The strong underestimation on small scales is due to the small-scale angular incompleteness inside the quadrants. The approach of modulating the random density is dubious in the context of VIPERS, since it treats the sampling variations as a pattern imposed on the large-scale structure. But because of the VIMOS slit allocation, these variations are strongly coupled with the true clustering (i.e. the observed sky distribution of VIPERS galaxies is rather uniform). It is therefore safer to keep the random sample uniform but upweight the galaxies as described in Section \[sec:estimation\]. In this case, we obtain the dot-dashed lines in Fig. \[fig8\]: these represent an improved estimation of $w_p(r_p)$, reducing the underestimation by $5-6\%$. As expected, further including the angular pair weights partly remedies the underestimation on scales below $1\mhmpc$, where the systematic error reaches $15\%$ (solid lines).
So far, we have used the global survey completeness and angular weights, i.e. neglecting their redshift dependence. As an exercise, we use the redshift information from the parent mocks to compute the true redshift-dependent weights, obtaining the dashed lines in the figure. Including the redshift dependence in the weights improves the recovery of the projected correlation function by about $2\%$ over all probed scales. However, this improvement is rather modest, indicating that the use of redshift-independent weights is a good approximation. Our best estimate of $w_p(r_p)$ therefore allows us to recover the true correlation function of the mocks at $0.5<z<0.75$ with about $7\%$ and $16\%$ underestimation respectively above and below $1\mhmpc$. In the redshift interval $0.75<z<1$ (shown in the bottom panel of Fig. \[fig8\]), we find the same behaviour, except that the correlation function is globally better recovered, with an underestimation smaller than $2-3\%$ at $r_p>0.6\mhmpc$ with the best method.
This test demonstrates that our methodology gives an accurate estimate of the galaxy clustering in VIPERS, even if some residual systematic errors remain: up to $7\%$ on scales above 1[$\,h^{-1}$ Mpc]{}and $15\%$ on smaller scales. We find that the effect varies with redshift, being more important at the lowest redshifts probed by VIPERS. Overall these systematics remain within the Poisson plus sample variance errors, shown with shaded regions in Fig. \[fig8\] and obtained from the standard deviation of [$\,w_p(r_p)$]{}among the parent mock catalogues.
Impact of possible residual zero-point uncertainties in the photometry
----------------------------------------------------------------------
At the time of writing, photometry from the latest CFHTLS release (T0007) has become available [@hudelot12]. We have compared magnitudes and colours of objects in the VIPERS sample with the new CFHTLS-T0007 photometry. For VIPERS, the most important feature of T0007 compared to previous releases is that each tile in the CFHTLS has now been rescaled to an absolute calibration provided by a new photometric pre-survey taken at CFHT for this purpose. In addition, in order to ensure that seeing variations between tiles and filters are correctly accounted for, this scaling has been done using aperture fluxes that are rescaled based on the seeing on each individual tile; detailed tests at Terapix have shown that mag\_auto magnitudes, which are affected by seeing variations, are not sufficiently precise for the percent-level photometric accuracy that is the objective of T0007.
An important consequence of this work for VIPERS is that the effects of seeing variation and photometric calibration errors are now cleanly separated; the stellar-locus fitting technique used to define the VIPERS selection using colours based on mag\_auto magnitudes mixes both these effects. To estimate the size of the colour and magnitude offsets between T0007 and the actual VIPERS selection (based on T0005), the colours of stars on each VIPERS tile, measured from the Terapix IQ20 magnitudes (used to calibrate T0007) and from mag\_auto magnitudes in both releases, have been compared. We find that these offsets shift the colour-colour locus we devised to remove lower-redshift $z<0.5$ galaxies [@guzzo13].
We test the effect of these possible variations of the colour selection across the fields in the context of galaxy clustering estimation. For this we use photometric redshifts and quantify the variations in $N(z)$ due to tile-to-tile variations of the colour selection, assuming the T0007 photometry as the reference. When comparing the $N(z_{phot})$ in the different tiles, we find that the redshift distribution varies in shape and amplitude at $z<0.6$ but only in amplitude above. The typical amplitude variations are of the order of about $5\%$ [@guzzo13]. We then measure the ratio between the $N(z)$ per tile and that averaged over the fields and use it as a redshift-dependent correction factor. To test how these variations of the colour selection affect the measured correlation function, we vary the $N(z)$ in the random sample for each quadrant using the correction factor previously defined on the averaged $N(z)$. The projected correlations obtained with and without this correction are shown in Fig. \[fig10\].
We can see that the correction has the effect of decreasing the amplitude of the projected correlation function by about $2-4\%$ on scales below $10\mhmpc$. We find a similar effect on the redshift-space angle-averaged correlation function $\xi(s)$. The amplitude and direction of the systematic effect follow our expectations, since spurious tile-to-tile fluctuations, if not properly corrected, enhance the amplitude of clustering. This test suggests that such tile-to-tile variations of the colour selection are indeed present in the data. It is interesting to note that this systematic effect goes in the opposite direction to the effects of slit-positioning and the associated incompleteness. In the end, because this possible effect remains very small, we do not attempt to correct for it in the clustering analysis.
Real-space clustering {#sec:realclus}
=====================
Before studying redshift-space distortions in VIPERS, we begin by looking at the clustering in real space. The projected correlation function for all galaxies in the redshift range $0.5<z<1$ is shown in Fig. \[fig11\]. It is measured in logarithmic bins of $\Delta \log
r_p=0.2$ over the scales $0.1<r_p<30\mhmpc$. The error bars are estimated from the MultiDark mocks.
The measured [$\,w_p(r_p)$]{}functions in the W1 and W4 fields are very similar, in particular on scales below 5[$\,h^{-1}$ Mpc]{}. The combined projected correlation function in this redshift interval gives an accurate probe of the clustering up to scales of about 30[$\,h^{-1}$ Mpc]{}. We can compare the galaxy projected correlation function to predictions for the mass non-linear correlation function and thus estimate the global effective linear bias of these galaxies. We use the HALOFIT [@smith03] prescription for the non-linear mass power spectrum to compute the projected correlation function of mass at the mean redshift of the sample. By comparing the amplitudes of the measured galaxy and predicted mass correlations on scales of $r_p>1.7\mhmpc$ ($r_p>3.4\mhmpc$), and assuming a linear biasing relation of the form $\smash{w_p^{\rm gal}=b^2_L w_p^{\rm mass}}$, we obtain a linear bias of $b_L=1.35\pm0.02$ ($b_L=1.33\pm0.02$).
In order to make a detailed interpretation of the observed clustering of galaxies and produce realistic mock samples of the survey, we model our [$\,w_p(r_p)$]{}measurements within the context of the HOD [@seljak00; @peacock00; @berlind02; @cooray02]. This method defines the mean distribution of galaxies within haloes; given assumptions for the abundance, large-scale bias, and density profile of haloes, one can then completely specify the clustering of galaxies and predict [$\,w_p(r_p)$]{}. We define four $B$-band absolute magnitude-threshold samples in the redshift bin $0.7<z<0.9$ in which we measured [$\,w_p(r_p)$]{}. We model the projected correlation functions using the HOD formalism, within a flat $\Lambda \rm{CDM}$ cosmology with parameters identical to those used in the MultiDark simulation (see Section \[sec:mocks\]). We restrict the fit to scales above $\smash{r_p=0.2\mhmpc}$ and below $r_p=30\mhmpc$, and empirically correct the measured projected correlation function for the residual underestimation at different scales, using the ratio between the parent and recovered [$\,w_p(r_p)$]{}in the observed mocks for the same galaxy selection. We assume that there is negligible error in taking this small correction to be independent of cosmology. In the fitting procedure we use both the sample number density and [$\,w_p(r_p)$]{}constraints in order to estimate the HOD parameters and their errors, exploring the full parameter space of the model.
In our HOD model the occupation number is parameterized as $$\left<N_{\rm gal}|m\right>=\left<N_{\rm cen}|m\right>(1+\left<N_{\rm sat}|m\right>) \label{eq:HOD}$$ where $\left<N_{\rm cen}|m\right>$ and $\left<N_{\rm sat}|m\right>$ are the average number of central and satellite galaxies in a halo of mass $m$. This model explicitly assumes that the first galaxy in haloes, when haloes have reached a sufficient mass, has to be central. Central and satellite galaxy occupations are defined as in [@zheng05]: $$\begin{aligned}
\left<N_{\rm cen}|m\right> &= \frac{1}{2}\left[1+\rm{erf}\left(\frac{\log~m - \log
M_{\rm min}}{\sigma_{\log~m}}\right)\right], \label{ncen} \\
\left<N_{\rm sat}|m\right> &= \left(\frac{m-M_0}{M_1}\right)^{\alpha}.
\label{nsat}\end{aligned}$$ where $M_{\rm min}$, $\sigma_{\log~m}$, $M_{0}$, $M_{1}$, and $\alpha$ are the HOD parameters. The parameter $M_{0}$ is generally poorly constrained and we decided in this analysis to fix $M_{0}=M_{\rm min}$ [see also @white11; @delatorre12].
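Eqs. \[eq:HOD\]-\[nsat\] translate directly into code. A minimal sketch (names are ours; the satellite term is clipped to zero below $M_0$, which is what the power law implies for $m\le M_0$):

```python
import numpy as np
from scipy.special import erf

def n_cen(m, log_Mmin, sigma_logm):
    """Mean central occupation <N_cen|m>, a softened step in log mass."""
    return 0.5 * (1.0 + erf((np.log10(m) - log_Mmin) / sigma_logm))

def n_sat(m, M0, M1, alpha):
    """Mean satellite occupation <N_sat|m> = ((m - M0)/M1)^alpha, zero below M0."""
    return (np.clip(np.asarray(m, dtype=float) - M0, 0.0, None) / M1) ** alpha

def n_gal(m, log_Mmin, sigma_logm, M0, M1, alpha):
    """Total occupation <N_gal|m> = <N_cen|m> * (1 + <N_sat|m>), Eq. (HOD)."""
    return n_cen(m, log_Mmin, sigma_logm) * (1.0 + n_sat(m, M0, M1, alpha))
```

Multiplying the satellite term by the central term enforces the assumption stated in the text: a halo hosts satellites only once it is massive enough to host a central galaxy.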
In the halo model formalism, the galaxy power spectrum or two-point correlation function can be written as the sum of two components: the 1-halo term, which describes the correlations of galaxies inside the same halo, and the 2-halo term, which characterises the correlations of galaxies sitting in different haloes. We follow the formalism of @vandenbosch13 to define the projected correlation function in the context of this model. In particular, we use their improved prescriptions for the treatment of halo exclusion and of the residual redshift-space distortion effects on [$\,w_p(r_p)$]{}induced by the finite $\pi_{\rm max}$ values used in the data [@vandenbosch13]. We use the halo bias factor and mass function of @tinker08 and @tinker10 respectively, and assume that satellite galaxies trace the mass distribution within haloes. We assume an NFW [@navarro96] radial density profile and use the concentration-mass relation obtained by @prada12 from the MultiDark simulation. The details of the implementation of the HOD model are given in de la Torre et al. (in preparation).
We present in Fig. \[fig12\] the measurements and best-fitting HOD models for the four different volume-limited absolute magnitude-threshold samples. We find that the model reproduces the observations well. To obtain a global characterisation of the clustering properties of galaxies in VIPERS, we extend this modelling to two additional redshift bins at $0.5<z<0.7$ and $0.9<z<1.1$. The best-fitting $M_{\rm min}$ and $M_{1}$ parameters for the different sub-samples are shown in Fig. \[fig13\] and compared to previous measurements in the same range of redshift and number density. Because the subsamples in the different surveys are not selected in the same absolute-magnitude band, it is convenient to compare the HOD parameters in terms of the redshift and number density probed by each sample. Note that here we compare measurements only from analyses using the same HOD parameterization, although the exact implementation of the models can differ slightly. The VIPERS sample allows us to constrain these parameters with an unprecedented accuracy over the redshift range $0.5<z<1.1$. Our results are consistent with previous measurements, in particular with the DEEP2 [@zheng07] and CFHTLS [@coupon12] analyses. Our HOD analysis is aimed at modelling the global clustering properties in VIPERS; we refer the reader to @marulli13 and de la Torre et al. (2013, in preparation) for detailed analyses and interpretation of the luminosity and stellar mass dependence of galaxy clustering and of the luminosity-dependent halo occupation, respectively.
We use the derived HOD parameters to define a global luminosity- and redshift-dependent occupation number which is then used to create accurate HOD mocks of the survey. In order to interpolate between the different redshifts we assume a global luminosity evolution proportional to redshift, so that the magnitude threshold values scale linearly with redshift [@brown08; @coupon12]. We find that one can approximate $\langle N_{\rm gal}(m|z,M_B) \rangle$ using Eq. \[eq:HOD\] with $$\begin{aligned}
\log M_{\rm min}(x) &= 10.61\exp(1.49^{-24.66-x}) \\
\sigma_{\log m}(x) &= 0.06\exp(-0.08x+0.34) \\
M_0(x) &= M_{\rm min}(x) \\
M_1(x) &= 13.5 M_{\rm min}(x) \\
\alpha(x) &= 0.29\exp(-0.05x+0.38)\, ,\end{aligned}$$ where $x=M_B-5\log(h)+z$. $M_{\rm min}$ and $M_{1}$ are found to be strongly correlated in such a way that $M_{1}$ is approximately equal to $10-20$ times $M_{\rm min}$ depending on the redshift probed and the model implementation [e.g. @beutler13]. In our analysis we find that $M_1(x)$ can be approximated by $13.5$ times $M_{\rm
min}(x)$. The function $\langle N_{\rm gal}(m|z,M_B) \rangle$ is shown in Fig. \[fig14\] for the different values of $x$ probed with VIPERS. We checked the consistency of this parameterization and verified that the [$\,w_p(r_p)$]{}predicted by the mocks and that measured are in good agreement for all probed redshift and luminosity thresholds.
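The fitting formulae above can be transcribed as follows (a direct transcription, taking $\log$ as $\log_{10}$; the value of $h$ enters only through $x=M_B-5\log(h)+z$ and is an input here, with masses understood in $h^{-1}M_\odot$):

```python
import numpy as np

def hod_params(M_B, z, h=0.7):
    """VIPERS-calibrated HOD parameters as a function of x = M_B - 5 log10(h) + z.

    Returns (log_Mmin, sigma_logm, M0, M1, alpha); M0 is fixed to M_min
    and M1 = 13.5 * M_min, as in the fit described in the text."""
    x = M_B - 5.0 * np.log10(h) + z
    log_Mmin = 10.61 * np.exp(1.49 ** (-24.66 - x))
    sigma_logm = 0.06 * np.exp(-0.08 * x + 0.34)
    Mmin = 10.0 ** log_Mmin
    M0 = Mmin           # fixed to M_min in the fit
    M1 = 13.5 * Mmin    # within the 10-20 x M_min range quoted above
    alpha = 0.29 * np.exp(-0.05 * x + 0.38)
    return log_Mmin, sigma_logm, M0, M1, alpha
```

Since the exponential in $\log M_{\rm min}$ is always positive, the minimum halo mass never falls below $10^{10.61}$, consistent with the reconstruction limit of the mocks.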
Redshift-space distortions
==========================
The main goal of VIPERS is to provide, with the final sample, accurate measurements of the growth rate of structure in two redshift bins between $z=0.5$ and $z=1.2$. The growth rate of structure $f$ can be measured from the anisotropies observed in redshift space in the galaxy correlation function or power spectrum. Although this measurement is degenerate with galaxy bias, the combination $f\sigma_8$ is measurable and still allows a fundamental test of modifications of gravity, since it is a mixture of the differential and integral growth. In this Section, we present an initial measurement of $f\sigma_8$ from the VIPERS first data release.
Method
------
With the first epoch VIPERS data we can reliably probe scales below about 35[$\,h^{-1}$ Mpc]{}. The use of the smallest non-linear scales, i.e. typically below $10\mhmpc$, is however difficult because of the limitations of current redshift-space distortion models, which cannot describe the non-linear effects that relate the evolution of density and velocity perturbations. However, with the recent developments in perturbation theory and non-linear models for RSD [e.g. @taruya10; @reid11; @seljak11], we can push our analysis well into mildly non-linear scales and obtain unbiased measurements of $f\sigma_8$ while considering minimum scales of $5-10\mhmpc$ [@delatorre12].
With the VIPERS first data release, we perform an initial redshift-space distortion analysis, considering a single redshift interval of $0.7<z<1.2$ to probe the highest redshifts, where the growth rate is little-known. We select all galaxies above the magnitude limit of the survey in that interval. The effective pair-weighted mean redshift of the subsample is $z=0.80$. The measured anisotropic correlation function [$\,\xi(r_p,\pi)$]{}is shown in the top panel of Fig. \[fig15\]. We have used here a linear binning of $\smash{\Delta
r_p=\Delta \pi=1\mhmpc}$. One can see in this figure the two main redshift-space distortion effects: the elongation along the line-of-sight, or Finger-of-God effect, which is due to galaxy random motions within virialized objects and the squashing effect on large scales, or Kaiser effect, which represents the coherent large-scale motions of galaxies towards overdensities. The latter effect is the one we are interested in since its amplitude is directly related to the growth rate of perturbations. Compared to the first measurement at such high redshift done with the VVDS survey [@guzzo08], this signature is detected with much larger significance, with the flattening being apparent to $r_p>30\mhmpc$.
The anisotropic correlation function has been extensively used in the literature to measure the growth rate or the distortion parameter $\beta$ [e.g. @hawkins03; @guzzo08; @cabre09; @beutler12; @contreras13]. However, with the increasing size and statistical power of redshift surveys, an alternative approach has grown in importance: the use of the multipole moments of the anisotropic correlation function. This approach has the main advantage of reducing the number of observables, compressing the cosmological information contained in the correlation function. In turn, this eases the estimation of the covariance matrices associated with the data. We adopt this methodology in this analysis and fit the first two non-null moments $\xi_0(s)$ and $\xi_2(s)$, where most of the relevant information is contained, and ignore the contributions of the noisier higher orders. The multipole moments are measured from $\xi(s,\mu)$, which is obtained exactly as for [$\,\xi(r_p,\pi)$]{}, except that the redshift-space separation vector $\vec{s}$ is now decomposed into the polar coordinates $(s,\mu)$ such that $r_p=s(1-\mu^2)^{1/2}$ and $\pi=s\mu$. The multipole moments are related to $\xi(s,\mu)$ as, $$\xi_\ell(s)=\frac{2\ell+1}{2}\int_{-1}^{1}\xi(s,\mu)L_\ell(\mu)d\mu, \label{eq:xil}$$ where $L_\ell$ is the Legendre polynomial of order $\ell$. In practice the integration of Eq. \[eq:xil\] is approximated by a Riemann sum over the binned $\xi(s,\mu)$. We use a logarithmic binning in $s$ of $\Delta \log(s)=0.1$ and linear binning in $\mu$ with $\Delta
\mu=0.02$.
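The Riemann-sum estimate of Eq. \[eq:xil\] can be sketched as follows; the linear $\mu$ binning over $[-1,1]$ mirrors the $\Delta\mu=0.02$ choice quoted above, while the function and variable names are illustrative.

```python
import numpy as np
from scipy.special import eval_legendre

def multipole_moments(xi_smu, mu_edges, ells=(0, 2)):
    """Riemann-sum approximation of
    xi_l(s) = (2l+1)/2 * int_{-1}^{1} xi(s, mu) L_l(mu) dmu,
    with xi_smu of shape (n_s, n_mu), binned at the mu bin centres."""
    mu = 0.5 * (mu_edges[:-1] + mu_edges[1:])
    dmu = np.diff(mu_edges)
    return {ell: 0.5 * (2 * ell + 1)
            * np.sum(xi_smu * eval_legendre(ell, mu) * dmu, axis=1)
            for ell in ells}
```

For a perfectly isotropic $\xi(s,\mu)$ the monopole recovers $\xi(s)$ and the quadrupole vanishes up to the binning error, which is a useful sanity check.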
Covariance matrix, error estimation, and fitting procedure
----------------------------------------------------------
The different bins in the observed correlation function and associated multipole moments are correlated to some degree, and this must be allowed for in order to fit the measurements with theoretical models. We estimate the covariance matrix of the monopole and quadrupole signal using the MultiDark (MD) and Pinocchio (PN) HOD mocks. The generic elements of the matrix can be evaluated as $$C_{ij}=\frac{1}{N_{R}-1}\sum_{k=1}^{N_R}\left(y^k(s_i)-\bar{y}(s_i)\right)\left(y^k(s_j)-\bar{y}(s_j)\right)$$ where $N_R$ is the number of mock realizations, $y(s)$ is the quantity of interest, and the indices $i,j$ run over the data points.
The number of degrees of freedom in the multipole moments varies between $11$ and $15$ depending on the scales considered. Because we have only 26 MD mock realizations, the covariance matrix elements cannot be constrained accurately with the MD mocks only: the covariance matrix is unbiased, but it can have substantial noise. In order to mitigate the noise and obtain an accurate estimate of the covariance matrix, we apply the shrinkage method [@pope08], using the covariance matrix obtained with the 200 PN mocks as the target matrix. The PN mocks are more numerous and therefore each element of the associated covariance matrix is very well constrained, although the covariance may be biased to some extent. This bias is related to inaccuracies in the predicted moments, which are mainly driven by the limited accuracy of the Zel’dovich approximation used in the PN mocks to predict the peculiar velocity field. The shrinkage technique allows the optimal combination of an empirical estimate of the covariance with a target covariance, minimising the total mean squared error compared to the true underlying covariance. An optimal covariance matrix $C$ is then obtained with $$C=\lambda T + (1-\lambda) S,$$ where $\lambda$ is the shrinkage intensity and the target ${T}$ and empirical ${S}$ covariance matrices correspond respectively to those obtained from the PN and MD mocks. $\lambda$ is calculated from [@pope08] $$\label{shrink}
\lambda=\frac{\sum_{i,j} {\rm Cov}(S_{ij},S_{ij})-{\rm Cov}(T_{ij},S_{ij})}{\sum_{i,j} ({T}_{ij}-{S}_{ij})^2} ,$$ where ${\rm Cov}(A_{ij},B_{ij})$ stands for the covariance between the elements $(i,j)$ of the matrices $A$ and $B$. We note that, since the empirical and target matrices are independent, the term ${\rm
Cov}(T_{ij},S_{ij})$ vanishes in the numerator of Eq. \[shrink\]. The effect of shrinkage estimation on the MD covariance matrix is shown in Fig. \[fig16\].
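A sketch of the shrinkage combination: the empirical matrix $S$ is built from the mock realizations exactly as in the covariance formula above, and $\lambda$ follows Eq. \[shrink\] with the ${\rm Cov}(T_{ij},S_{ij})$ term dropped, since $T$ and $S$ are independent. The element-wise variance estimator used for ${\rm Cov}(S_{ij},S_{ij})$ is one common choice (in the spirit of Schäfer & Strimmer-type shrinkage) and is an assumption of this sketch.

```python
import numpy as np

def shrink_covariance(mock_samples, target):
    """Combine empirical (S, from mocks) and target (T) covariances as
    C = lam * T + (1 - lam) * S, with lam from Eq. (shrink)."""
    n_r, _ = mock_samples.shape
    dev = mock_samples - mock_samples.mean(axis=0)
    S = dev.T @ dev / (n_r - 1)
    # Var(S_ij) estimated from the scatter of the per-realization
    # outer products (assumed estimator, not quoted from the text)
    W = np.einsum('ki,kj->kij', dev, dev)
    var_S = n_r / (n_r - 1.0) ** 3 * ((W - W.mean(axis=0)) ** 2).sum(axis=0)
    lam = np.clip(var_S.sum() / ((target - S) ** 2).sum(), 0.0, 1.0)
    return lam * target + (1.0 - lam) * S, lam
```

Clipping $\lambda$ to $[0,1]$ guards against the pathological case where the target and empirical matrices nearly coincide.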
To measure the growth rate of structure we perform a maximum likelihood analysis of the data given models of redshift-space distortions by adopting the likelihood function $\mathcal{L}$: $$-2\ln{\mathcal{L}}=\sum_{i=1}^{N_p}\sum_{j=1}^{N_p}\Delta_i C^{-1}_{ij} \Delta_j,$$ where $N_p$ is the number of points in the fit, $\Delta$ is the data-model difference vector, and $C$ is the covariance matrix. The likelihood analysis is performed on the quantity $y(s)=s^2\xi_\ell(s)$, rather than simply $y(s)=\xi_\ell(s)$, to reduce the range of variation of multipole values at different $s$ in the fit. In the end, the quantity which is matched with model predictions is the concatenation of $s^2\xi_0$ and $s^2\xi_2$ for the set of separations considered.
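The likelihood evaluation reduces to a Gaussian chi-square on the concatenated vector; a minimal sketch (names illustrative):

```python
import numpy as np

def neg_two_ln_like(s, xi0_data, xi2_data, xi0_model, xi2_model, cov):
    """-2 ln L = Delta^T C^{-1} Delta, evaluated on y(s) = s^2 * xi_ell(s)
    with monopole and quadrupole concatenated into one vector."""
    y_data = np.concatenate([s**2 * xi0_data, s**2 * xi2_data])
    y_model = np.concatenate([s**2 * xi0_model, s**2 * xi2_model])
    delta = y_data - y_model
    # solve rather than invert explicitly, for numerical stability
    return float(delta @ np.linalg.solve(cov, delta))
```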
As a final remark, we note that we use the direct inverse of the covariance matrix without applying the correction discussed by @hartlap07, as it is not clear how the size of the correction is affected by the shrinkage estimation technique. The resulting errors derived from the likelihood are well matched to the distribution of best-fit values from the mocks, which gives us confidence that only a small correction, if any, would be necessary.
Models
------
The formalism that describes redshift-space anisotropies in the power spectrum can be derived from writing the mass density conservation in real and redshift space [@kaiser87]. In particular, in the plane-parallel approximation, which is assumed in this analysis, the anisotropic power spectrum of mass has the general compact form [@scoccimarro99] $$\begin{aligned}
P^s(k,\nu)&={}\int \frac{d^3\vec{r}}{(2\pi)^3} e^{-i\vec{k} \cdot \vec{r}}\left<e^{-ikf\nu \Delta u_\parallel} \times \right. \nonumber \\
& \left. [\delta(\vec{x})+\nu^2 f \theta(\vec{x})][\delta(\vec{x}^\prime)+\nu^2 f \theta(\vec{x}^\prime)]\right> \label{eq:rspk}\end{aligned}$$ where $\nu=k_\parallel/k$, $u_\parallel(\vec{r})=-v_\parallel(\vec{r})/(f aH(a))$, $v_\parallel(\vec{r})$ is the line-of-sight component of the peculiar velocity, $\delta$ is the density field, $\theta$ is the divergence of the velocity field, $\Delta
u_\parallel=u_\parallel(\vec{x})-u_\parallel(\vec{x}^\prime)$ and $\vec{r}=\vec{x}-\vec{x}^\prime$. Although exact, Eq. \[eq:rspk\] is impractical for direct use on redshift survey measurements and several models have been proposed to approximate it. In the assumption that galaxies linearly trace the underlying mass density field with a bias $b$, we can build three empirical models. These take the form, $$P_g^s(k,\nu)=D(k\nu\sigma_v)P_K(k,\nu;f,b) \label{eq:models}$$ where, $$\begin{aligned}
D(k\nu\sigma_v)=\left\{ \nonumber
\begin{array}{lcl}
\exp(-(k\nu\sigma_v)^2)
\\
\\
1/(1+(k\nu\sigma_v)^2)
\end{array}
\right.\end{aligned}$$ and, $$\begin{aligned}
P_K(k,\nu;f,b) = \hspace{7.0cm} & \nonumber \\
\left\{ \nonumber
\begin{array}{lcl}
\rlap{$b^2 \Pdd(k)+2\nu^2 fb \Pdd(k) +\nu^4 f^2 \Pdd(k)$} \hspace{5.5cm} {\rm (model~A)} \\
\\
\rlap{$b^2 \Pdd(k)+2\nu^2 fb \Pdt(k) +\nu^4 f^2 \Ptt(k)$} \hspace{5.5cm} {\rm (model~B)} \\
\\
b^2 \Pdd(k)+2\nu^2 fb \Pdt(k) +\nu^4 f^2 \Ptt(k) \\
\rlap{$+ C_A(k,\nu;f,b) + C_B(k,\nu;f,b).$} \hspace{5.5cm} {\rm (model~C)}
\end{array}
\right.\end{aligned}$$ In these equations $\Pdd$, $\Pdt$, $\Ptt$ are respectively the non-linear mass density-density, density-velocity divergence, and velocity divergence-velocity divergence power spectra, and $\sigma_v$ is an effective pairwise velocity dispersion that we can fit for and then treat as a nuisance parameter. The expressions for $C_A(k,\nu;f,b)$ and $C_B(k,\nu;f,b)$ are given in appendix A of @delatorre12. These empirical models can be seen, in configuration space, as the convolution of a damping function $D(k\nu\sigma_v)$, which we assume to be Gaussian or Lorentzian in Fourier space, and a term involving the density and velocity divergence correlation functions and their spherical Bessel transforms. While the first term essentially (but not only) describes the Finger-of-God effect, the second, $P_K(k,\nu;f,b)$, describes the Kaiser effect. We note that model A is the classical dispersion model [@peacock94] based on the linear @kaiser87 model; model B is the generalisation proposed by @scoccimarro04 that accounts for the non-linear coupling between the density and velocity fields, explicitly introducing the velocity divergence auto-power spectrum and the density–velocity divergence cross-power spectrum; model C is an extension of model B that contains the two additional correction terms proposed by @taruya10 to correctly account for the coupling between the Kaiser and damping terms. We refer the reader to @delatorre12 for a thorough description of these models.
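As an illustration, model B with the two damping options of Eq. \[eq:models\] can be written compactly; the input spectra $\Pdd$, $\Pdt$, $\Ptt$ are assumed here to be pre-tabulated on the wavenumber grid `k`, and the function name is illustrative.

```python
import numpy as np

def pgs_model_b(k, nu, f, b, pdd, pdt, ptt, sigma_v, damping="gaussian"):
    """Model B: P_g^s(k, nu) = D(k nu sigma_v)
    * [b^2 Pdd + 2 nu^2 f b Pdt + nu^4 f^2 Ptt]."""
    x = k * nu * sigma_v
    d = np.exp(-x**2) if damping == "gaussian" else 1.0 / (1.0 + x**2)
    return d * (b**2 * pdd + 2.0 * nu**2 * f * b * pdt + nu**4 * f**2 * ptt)
```

Along the transverse direction ($\nu=0$) both the damping and the velocity terms drop out, leaving the pure real-space clustering $b^2\Pdd$, which is a convenient consistency check.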
In the end, the model $\xi^s_\ell(s)$ are obtained from their Fourier counterparts, $$\label{expmomK}
P^s_\ell(k)=\frac{2\ell+1}{2} \int_{-1}^1 d\nu P_g^s(k,\nu) L_\ell(\nu),$$ as $$\label{expmom}
\xi^s_\ell(s)=i^\ell \int \frac{dk}{2\pi^2} k^2 P^s_\ell(k)j_\ell(ks),$$ where $j_\ell$ denotes the spherical Bessel functions.
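Eqs. \[expmomK\] and \[expmom\] can be sketched numerically with Gauss–Legendre quadrature in $\nu$ and a direct spherical-Bessel integration in $k$. In practice an FFTlog-type transform would normally be used for the second step; the direct quadrature below is only illustrative and assumes a $k$ grid wide and fine enough for convergence.

```python
import numpy as np
from scipy.special import eval_legendre, spherical_jn

def pk_multipole(pgs, k, ell, n_nu=64):
    """P_l(k) = (2l+1)/2 * int_{-1}^{1} P_g^s(k, nu) L_l(nu) dnu."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nu)
    return 0.5 * (2 * ell + 1) * sum(
        w * pgs(k, nu) * eval_legendre(ell, nu) for nu, w in zip(nodes, weights))

def xi_multipole(pl, k, s, ell):
    """xi_l(s) = i^l int dk/(2 pi^2) k^2 P_l(k) j_l(k s) (trapezoidal rule)."""
    integrand = k**2 * pl[None, :] * spherical_jn(ell, np.outer(s, k))
    tr = np.sum(0.5 * (integrand[:, 1:] + integrand[:, :-1]) * np.diff(k), axis=1)
    return np.real(1j**ell) * tr / (2.0 * np.pi**2)
```

For the linear Kaiser limit, $P_g^s = (1+\beta\nu^2)^2 P(k)$, the quadrature reproduces the classical multipole ratios $P_0/P = 1 + 2\beta/3 + \beta^2/5$ and $P_2/P = 4\beta/3 + 4\beta^2/7$.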
The redshift-space distortion models involve the knowledge of the underlying mass non-linear power spectra of density and velocity divergence at the effective redshift of the sample. Although the real-space non-linear correlation function of galaxies can be recovered from the deprojection of the observed projected correlation function [e.g. @bianchi12] and thus be used to some extent as an input of the model [e.g. @hamilton92], it is not feasible for the more advanced models which involve the velocity divergence power spectrum. The non-linear power spectra can however be predicted from perturbation theory or simulations for different cosmological models. In the case of a $\Lambda \rm{CDM}$ cosmology, the shape of the non-linear power spectra depends on the parameters $P=(\Omega_m,
\Omega_b, h, n_s, \sigma_8)$ and can be obtained to a great accuracy from semi-analytical prescriptions such as [HALOFIT]{}. In this analysis we use the latest calibration of [HALOFIT]{} by @bird12 to obtain $\Pdd$ and use the fitting functions of @jennings12 to predict $\Ptt$ and $\Pdt$ from $\Pdd$. The latter fitting functions are accurate at the few percent level up to $k\simeq0.3$ at $z=1$.
In the models, the bias and growth rate parameters $b$ and $f$ are degenerate with the normalization of the power spectrum parameter $\sigma_8$. Thus, in practice only the combination of $b\sigma_8$ and $f\sigma_8$ can be constrained if no assumption is made on the actual value of $\sigma_8$. This can be done by renormalizing the power spectra in the models so that $P_{xx}(k,z) \rightarrow
P_{xx}(k,z)/\sigma_8^2(z)$, thus reparameterizing the models such that the parameters $(b,f)$ are replaced by $(b\sigma_8,f\sigma_8)$ in Eq. \[eq:models\]. This parameterization can be used for models A and B, although not for model C, in which the correction term $C_A$ involves the additional combinations $b^2f\sigma_8^4$, $bf^2\sigma_8^4$, and $f^3\sigma_8^4$ [see @taruya10; @delatorre12]. The correction term $C_A$, which partially describes the effects of the non-linear coupling between the damping and Kaiser terms, mostly affects the monopole and quadrupole moments of the redshift-space power spectrum on scales of $k>0.1$ [@taruya10]. Therefore, in principle $C_A$ could help break the degeneracy between $f$ and $\sigma_8$, although this has to be verified in detail. In the end, in the case of model C we decided to treat $(f,b,\sigma_8,\sigma_v)$ as separate parameters in the fit.
Detailed tests against mock data {#sec:rsdfits}
--------------------------------
We perform the redshift-space distortion analysis of the VIPERS data in the context of a flat $\Lambda\rm{CDM}$ cosmological model. Before considering the redshift-space distortions in the data, we first test the methodology and expected errors on $f\sigma_8$ using the mock samples. We fix the shape of the mass non-linear power spectrum to that of the simulation (since the observed real-space correlations are of high accuracy) and perform a likelihood analysis of each individual MD mock. In the case of model C we also fix the normalisation of the power spectrum as discussed above. The distribution of best-fitting $f\sigma_8$ values gives us a direct estimate of the probability distribution function of the parameter for a given fitting method, and serves as a check on the errors from the full likelihood function. We estimate the median and $68\%$ confidence region of the distribution. These are shown in Fig. \[fig17\] for the different models presented in the previous section and for various minimum scales $s_{min}$ in the fit.
Model A is known to be the most biased model [e.g. @okumura11; @bianchi12; @delatorre12] and our results confirm these findings. We therefore do not describe its detailed behaviour further, and focus on models B and C. We find that in general model B tends to be less biased than model C, which is surprising at first sight, as model C is the most advanced and is supposed to be the most accurate [@kwan12; @delatorre12]. This could be due to the quite restricted range of scales that we consider and the limited validity of its implementation on scales below $s\simeq10\mhmpc$, as the maximum wavenumber to which we can predict $\Pdt$ and $\Ptt$ is about $k=0.3$. We defer the investigation of this issue to the redshift-space distortion analysis of the final sample and concentrate here on model B. The shape of the damping function in the models also affects the recovered $f\sigma_8$, as expected given the minimum scales we consider, although in the case of model B the change in $f\sigma_8$ is at most $5\%$. Including smaller scales in the fit reduces the statistical error, but at the price of a slightly larger systematic error. From this test we therefore decided to use model B with a compromise value for the minimum scale of $s_{min}=6\mhmpc$.
The VIPERS result for the growth rate
-------------------------------------
These comprehensive tests of our methodology give us confidence that we can now proceed to the analysis of the real VIPERS data and expect to achieve results for the growth rate that are robust, and which can be used as a trustworthy test of the nature of gravity at high redshifts.
As explained earlier, we assume a fixed shape of the mass power spectrum consistent with the cosmological parameters obtained from WMAP9 [@hinshaw12] and perform a maximum likelihood analysis on the data, considering variations in the parameters that are not well determined externally. The best-fitting models are shown in Fig. \[fig18\] when considering either a Gaussian or a Lorentzian damping function. Although the mock samples tend to slightly prefer models with Lorentzian damping as seen in Fig. \[fig17\], we find that the Gaussian damping provides a much better fit to the real data and we decided to quote the corresponding $f\sigma_8$ as our final measurement.
We measure a value of $$f(z=0.8)\sigma_8(z=0.8)=0.47\pm0.08,$$ which is consistent with the General Relativity prediction in a flat $\Lambda \rm{CDM}$ Universe with cosmological parameters given by WMAP9, for which the expected value is $f(0.8)\sigma_8(0.8)=0.45$. We find that our result is not significantly altered if we adopt a Planck cosmology [@plank13] for the shape of the mass power spectrum, changing our best-fitting $f\sigma_8$ by only $0.2\%$. This shows that given the volume probed by the survey, we are relatively insensitive to the additional Alcock-Paczynski distortions [@alcock79] on the correlation function. The marginalised likelihood distribution of $f\sigma_8$ is shown superimposed on the mock results in Fig. \[fig19\]. We see that the preferred values of the growth rate are consistent with the mocks, in terms of the width of the likelihood function being comparable to the scatter in mock fitted values. To illustrate the degree of flattening of the anisotropic correlation function induced by structure growth, we show in the middle and bottom panels of Fig. \[fig15\] [$\,\xi(r_p,\pi)$]{}for two MD mocks for which the measured $f\sigma_8$ roughly coincide with the $1\sigma$ limits around the best-fit $f\sigma_8$ value obtained in the data. We therefore conclude that the initial VIPERS data prefer a growth rate that is fully consistent with predictions based on standard gravity. Our measurement of $f\sigma_8$ is also in good agreement with previous measurements at lower redshifts from 2dFGRS [@hawkins03], 2SLAQ [@ross07], VVDS [@guzzo08], SDSS LRG [@cabre09; @samushia12], WiggleZ [@blake12], BOSS [@reid12], and 6dFGS [@beutler12] surveys as shown in Fig. \[fig20\]. In particular, it is compatible within $1\sigma$ with the results obtained in the VVDS [@guzzo08] and WiggleZ [@blake12] surveys at a similar redshift, although WiggleZ measurements tend to suggest lower $f\sigma_8$ values, smaller than expected in standard gravity [but see @contreras13].
Finally we compare our measurement to the predictions of three of the most plausible modified gravity models studied in @diporto12. We consider Dvali-Gabadaze-Porrati [DGP, @dvali00], $f(R)$, and coupled dark energy models and show their predictions in Fig. \[fig20\] [see @diporto12 for the detail of their analytic predictions]. We find that our $f\sigma_8$ measurement is currently unable to discriminate between these modified gravity models and standard gravity given the size of the uncertainty, although we expect to improve the constraints with the analysis of the VIPERS final dataset.
Conclusions
===========
We have analysed in this paper the global real- and redshift-space clustering properties of galaxies in the VIPERS survey first data release. We have presented the selection function of the survey and the corrections that are needed in order to derive estimates of galaxy clustering that are free of observational biases. This has been achieved by using a large set of simulated mock realizations of the survey to quantify in detail the systematics and uncertainties on our clustering measurements.
The first data release of about $54000$ galaxies at $0.5<z<1.2$ in the VIPERS survey allows a measurement of the real-space clustering of galaxies, through the projected two-point correlation function, to an unprecedented accuracy over this redshift range. This permits detailed modelling of the halo occupation distribution at these redshifts. From an initial HOD modelling of $B$-band luminosity-selected samples, we have been able to accurately determine the characteristic halo masses for halo occupation in the redshift interval $0.5<z<1.0$. These measurements are invaluable for creating realistic synthetic mock samples.
The main goal of VIPERS is to provide an accurate measurement of the growth rate of structure through the characterisation of the redshift-space distortions in the galaxy clustering pattern. With the first data release we have been able to provide an initial measurement of $f\sigma_8$ at $z=0.8$. We find a value of $f\sigma_8=0.47\pm0.08$ which is in agreement with previous measurements at lower redshifts. This allows us to put a new constraint on gravity at the epoch when the Universe was almost half its present age. Our measurement of $f\sigma_8$ is statistically consistent with a Universe where the gravitational interactions between structures on $10 \mhmpc$ scales can be described by Einstein’s theory of gravity.
The present dataset represents the half-way stage of the VIPERS project, and the final survey will be large enough to subdivide our measurements and follow the evolution of $f\sigma_8$ out to redshift one. This will allow us to address some open issues, such as the suggestion from the WiggleZ measurements that $f\sigma_8$ is lower than expected at $z>0.5$. Our measurement at $z=0.8$ already argues against such a trend to some extent, but the larger redshift baseline and tighter errors from the final VIPERS dataset can be expected to deliver a definitive verdict on the high-redshift evolution of the strength of gravity.
We acknowledge the crucial contribution of the ESO staff for the management of service observations. In particular, we are deeply grateful to M. Hilker for his constant help and support of this program. Italian participation to VIPERS has been funded by INAF through PRIN 2008 and 2010 programs. LG acknowledges support of the European Research Council through the Darklight ERC Advanced Research Grant (\# 291521). OLF acknowledges support of the European Research Council through the EARLY ERC Advanced Research Grant (\# 268107). Polish participants have been supported by the Polish Ministry of Science (grant N N203 51 29 38), the Polish-Swiss Astro Project (co-financed by a grant from Switzerland, through the Swiss Contribution to the enlarged European Union), the European Associated Laboratory Astrophysics Poland-France HECOLS and a Japan Society for the Promotion of Science (JSPS) Postdoctoral Fellowship for Foreign Researchers (P11802). GDL acknowledges financial support from the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement n. 202781. WJP and RT acknowledge financial support from the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement n. 202686. WJP is also grateful for support from the UK Science and Technology Facilities Council through the grant ST/I001204/1. EB, FM and LM acknowledge the support from grants ASI-INAF I/023/12/0 and PRIN MIUR 2010-2011. PM acknowledges the support from the grant PRIN MIUR 2010-2011.\
This work made use of the facilities of HECToR, the UK’s national high-performance computing service, which is provided by UoE HPCx Ltd at the University of Edinburgh, Cray Inc and NAG Ltd, and funded by the Office of Science and Technology through EPSRC’s High End Computing Programme. We are grateful to Ken Rice for assisting us with accessing HECToR facilities.\
The MultiDark Database used in this paper and the web application providing online access to it were constructed as part of the activities of the German Astrophysical Virtual Observatory as result of a collaboration between the Leibniz-Institute for Astrophysics Potsdam (AIP) and the Spanish MultiDark Consolider Project CSD2009-00064. The Bolshoi and MultiDark simulations were run on the NASA’s Pleiades supercomputer at the NASA Ames Research Center.
Galaxy mock catalogue construction
==================================
We provide in this appendix some details about the method that we used to create realistic galaxy catalogues based on the Halo Occupation Distribution (HOD) and Stellar-to-Halo Mass Relation (SHMR) formalisms. From the MultiDark simulation and Pinocchio halo lightcones described in Section \[sec:mocks\], we created two types of galaxy mock catalogues: one containing $B$-band absolute magnitudes and associated quantities, and a second one containing stellar masses. We note that the stellar mass mock catalogues have not been explicitly used in this analysis, but in the accompanying VIPERS analyses of @marulli13 and @davidzon13.
For the first set of catalogues we use the HOD formalism and populated dark matter haloes according to their mass by specifying the absolute $B$-band magnitude-dependent halo occupation. We parametrised the latter using Eq. \[eq:HOD\] and used the HOD parameters obtained from the data and given in Section \[sec:realclus\]. We positioned central galaxies at halo centres with probability given by a Bernoulli distribution function with mean taken from Eq. \[ncen\] and assigned host halo mean velocities to these galaxies. The number of satellite galaxies per halo is set to follow a Poisson distribution with mean given by Eq. \[nsat\]. We assumed that satellite galaxies follow the spatial and velocity distribution of mass and randomly distributed their halo-centric radial position so as to reproduce a @navarro96 (NFW) radial profile, $$\rho_{NFW}(r|m)\propto\left(\frac{c_{dm}(m)r}{r_v(m)}\right)^{-1}\left(1+\frac{c_{dm}(m)r}{r_v(m)}\right)^{-2},$$ where $c_{dm}$ is the concentration parameter and $r_v(m)$ is the virial radius defined as $$r_{v}(m)=\left(\frac{3m}{4\pi\bar{\rho}(z)\Delta_{NL}}\right)^{1/3}.$$ In this equation, $\bar{\rho}(z)$ is the mean matter density at redshift $z$ and $\Delta_{NL}=200$ is the critical overdensity for virialisation in our definition. We assumed the mass-concentration relation of @bullock01: $$c_{dm}(m,z)=\frac{c_0}{1+z}\left(\frac{m}{m_*}\right)^{-0.13},$$ where $c_0=11$ and $m_*$ is the non-linear mass scale at $z=0$ defined such as $\sigma(m_*,0)=\delta_c$. Here $\delta_c$ and $\sigma(m,0)$ are respectively the critical overdensity (we fixed $\delta_c=1.686$) and the standard deviation of mass fluctuations at $z=0$. The latter is defined as $$\sigma^2(m,z)=\int_0^\infty \frac{dk}{k}
\frac{k^3P(k,z)}{2\pi^2}|W(kR)|^2 \,\,\,\, ,$$ where $R=\left[3m/\left(4\pi\bar{\rho}(z)\right)\right]^{1/3}$, $P(k,z)$ is the linear mass power spectrum at redshift $z$ in the adopted cosmology, and $W(x)$ is the Fourier transform of a top-hat filter. In order to assign satellite galaxy velocities, we assumed halo isotropy and sphericity, and drew velocities from Gaussian distribution functions along each Cartesian dimension with velocity dispersion given by [@vandenbosch04]: $$\begin{aligned}
\sigma^2_{sat}(r|m)&={}\frac{1}{\rho_{NFW}(r|m)}\int_r^\infty \rho_{NFW}(r|m)\frac{d\psi}{dr}dr \\
&={} \frac{Gm}{r_{v}}\frac{c_{dm}}{f(c_{dm})}\left(\frac{c_{dm} r}{r_v}\right)\left(1+\frac{c_{dm} r}{r_v}\right)^2 I(r/r_s),\end{aligned}$$ where $\psi(r)$ is the gravitational potential, $G$ is the gravitational constant, $f(x)=\ln(1+x)-x/(1+x)$, $r_s=r_v/c_{dm}$ is the scale radius, and $$I(x)=\int_x^\infty \frac{f(t)dt}{t^3(1+t)^2}.$$
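The NFW radial sampling described above can be done by inverting the enclosed-mass fraction $M(<r)/M(<r_v) = f(c_{dm}\,r/r_v)/f(c_{dm})$, with $f(x)=\ln(1+x)-x/(1+x)$ as defined in the text. The tabulated inversion below is a sketch with illustrative names.

```python
import numpy as np

def sample_nfw_radii(n, c_dm, r_v, rng=None):
    """Draw n halo-centric radii in (0, r_v] following an NFW profile,
    by inverse-CDF sampling of the enclosed-mass fraction f(c r/r_v)/f(c)."""
    rng = np.random.default_rng() if rng is None else rng
    f = lambda x: np.log1p(x) - x / (1.0 + x)
    x = np.linspace(1e-4, c_dm, 4096)       # x = c r / r_v on a fine grid
    cdf = f(x) / f(c_dm)                    # monotonic, so interp inverts it
    return np.interp(rng.uniform(size=n), cdf, x) * r_v / c_dm
```

For $c_{dm}=11$ (roughly the $c_0$ quoted above at $z=0$ and $m=m_*$), the half-mass radius sits near $0.35\,r_v$, which the sampled median reproduces.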
In these mocks, the absolute $B$-band magnitude for each galaxy was obtained following @skibba06. From the mean rest-frame $B-i'$ colour and K-corrections observed in the data we then derived absolute and apparent $i'$-band magnitudes for each simulated galaxy.
For the mock catalogues with stellar masses, we followed the SHMR approach which is based on the assumption of a monotonic relation between halo/subhalo masses and the stellar masses of the galaxies associated with them. We first populated the haloes in the lightcones with subhaloes. For this we randomly distributed subhaloes around each distinct halo following a NFW profile so that their number density satisfies the subhalo mass function of @giocoli10: $$\frac{d N(m_{\rm{sub}}|m)}{d \ln m_{\rm{sub}}}= N_0 \xi^\alpha \exp(-\beta\xi^3),$$ where $m_{\rm{sub}}$ is the subhalo mass, $\xi=m_{\rm{sub}}/m$, $\alpha=-0.8$, $\beta=12.2715$, and $N_0=0.18$. We then assigned a galaxy to each halo and subhalo, with a stellar mass given by the SHMR of @moster13. The galaxy velocities were assigned in a similar way as for the HOD catalogues, with galaxies associated with distinct haloes and subhaloes being considered as central and satellite galaxies respectively.
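Sampling the subhalo population from the mass function above can be sketched as follows. The lower cut `xi_min` is a resolution assumption of this sketch (the integral diverges towards small mass fractions for $\alpha=-0.8$), and the total count is Poisson-sampled before inverting the cumulative distribution.

```python
import numpy as np

def sample_subhalo_fractions(xi_min=1e-3, alpha=-0.8, beta=12.2715, n0=0.18,
                             rng=None):
    """Draw mass fractions xi = m_sub/m from
    dN/dln(xi) = N0 xi^alpha exp(-beta xi^3), above xi_min (sketch assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    ln_xi = np.linspace(np.log(xi_min), 0.0, 2048)
    xi = np.exp(ln_xi)
    dn = n0 * xi**alpha * np.exp(-beta * xi**3)
    # cumulative number N(> xi_min .. xi) by trapezoidal integration in ln(xi)
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (dn[1:] + dn[:-1]) * np.diff(ln_xi))])
    n = rng.poisson(cum[-1])                # Poisson-distributed total count
    return np.exp(np.interp(rng.uniform(0.0, cum[-1], size=n), cum, ln_xi))
```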
[^1]: based on observations collected at the European Southern Observatory, Cerro Paranal, Chile, using the Very Large Telescope under programs 182.A-0886 and partly 070.A-9007. Also based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l’Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. The VIPERS web site is http://www.vipers.inaf.it/.
[^2]: We have used in this analysis a new version of this code, optimised to work on massively parallel computers, which is described in @monaco13.
---
abstract: '[We study the cosmic peculiar velocity field as traced by a sample of 1184 spiral, elliptical and S0 galaxies, grouped in 704 objects. We carry out a statistical analysis, calculating bulk flows and velocity correlation functions for this sample and for mock catalogs which we extract from N–body simulations. For the simulations we consider tilted (i.e. with spectral index $n\leq 1$) CDM models with different values of the linear bias parameter $b$. By means of a maximum likelihood analysis we estimate the ability of the models to fit the observations, as measured by the above statistics, and to reproduce the Local Group properties.]{}'
author:
- 'F. LUCCHIN$^{1}$, S. MATARRESE$^{2}$, L. MOSCARDINI$^{1}$, G. TORMEN$^{1}$'
---
[$^{1}$ Dipartimento di Astronomia, Università di Padova, vicolo dell’Osservatorio 5, 35122 Padova, Italy.]{} [$^{2}$ Dipartimento di Fisica G. Galilei, Università di Padova, via Marzolo 8, 35131 Padova, Italy.]{}
Introduction
============
The Standard Cold Dark Matter (SCDM) scenario for structure formation possesses a high predictive power and can explain many observed properties of the large–scale structure of the universe. SCDM is characterized by a primordial scale–invariant spectrum, $P(k) \propto k^n$, with spectral index $n=1$, of Gaussian adiabatic perturbations in an Einstein–de Sitter universe and vanishing cosmological constant. It has become usual to parametrize the amplitude of the primordial perturbations by means of the linear [*bias*]{} parameter $b$, defined as the inverse of the [*rms*]{} mass fluctuation on a scale of $R_8\equiv 8~h^{-1}$ Mpc: $$b^2 \equiv \frac{1}{\sigma^2(R_8)} = 2\pi^2 \left[ \int_0^\infty dk ~k^2 ~P(k) ~W^2_{TH}(kR_8) \right]^{-1},$$ where $W_{TH}(kR)=(3/kR) j_1(kR)$ is the top–hat window function and $j_\ell$ denotes the $\ell$–th order spherical Bessel function. We adopt the value $h=0.5$ for the Hubble constant $H_0$ in units of $100$ km ${\rm s^{-1}\,Mpc^{-1}}$. The COBE detection [@SMO2] of large angular scale Cosmic Microwave Background (CMB) anisotropies can be used to normalize the CDM power–spectrum, resulting in $b \approx 0.8$. However, the SCDM model has met increasing problems, mostly due to the high ratio of small to large–scale power: in particular, the model with the COBE normalization predicts excessive velocity dispersion on scales of order $1~h^{-1}$ Mpc and is unable to reproduce the slope of the galaxy angular correlation function obtained in the APM survey ([@MAD]; see, however, [@FON]).
A well known natural way to reduce the high ratio of small to large–scale power of the SCDM model is to “tilt" the spectral index of primordial perturbations: “tilted", i.e. $n<1$, CDM (hereafter TCDM) models shift power from small to large scales (e.g. [@VIT; @TLM; @ADA; @CEN; @TOR]). The COBE DMR experiment has led to renewed interest in these models: the observed anisotropy is in fact consistent with $n=1.15^{+.45}_{-.65}$ on scales $\geq 10^{3}~h^{-1}$ Mpc. It is evident that the COBE normalization of TCDM models implies reduced power on all scales below $10^3~h^{-1}$ Mpc. Moreover, a large number of post–COBE papers (see [@CRI] and references therein) pointed out that properly accounting for the gravitational–wave contribution to the Sachs–Wolfe effect leads to a relevant enhancement of the linear biasing factor. This effect is relevant in power–law inflation: [@LMM] obtained $$b(n) \approx 0.80 {\sqrt {14-12n \over 3-n}} \times 10^{1.20(1-n)} (1 \pm 0.17).$$
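As a quick numerical check, the central value of this relation can be evaluated directly; a minimal sketch (the function name is ours, not from the text):

```python
import math

def bias_tcdm(n):
    """Central value of the COBE-normalized bias b(n) quoted above,
    including the gravitational-wave contribution to the Sachs-Wolfe
    effect in power-law inflation."""
    return 0.80 * math.sqrt((14.0 - 12.0 * n) / (3.0 - n)) * 10.0 ** (1.20 * (1.0 - n))

print(bias_tcdm(1.0))  # scale-invariant case: the SCDM value b = 0.80 is recovered
print(bias_tcdm(0.8))  # tilting the spectrum raises the required bias
```

For $n=1$ the gravitational–wave correction disappears and the formula reduces to the SCDM normalization $b \approx 0.8$ quoted above.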
In a recent work [@TOR] we analyzed the peculiar velocity field traced by optically selected galaxies, which probes the primordial spectrum up to scales $\sim 100~h^{-1}$ Mpc. The results were then compared with similar analyses carried out on mock catalogs from Monte Carlo simulations obtained from linear theory in $n\leq 1$, $\Omega_0 \leq 1$ CDM models, for different values of the bias factor, by assuming that the galaxy velocity field gives an unbiased signal of the underlying mass distribution. We report here the preliminary results of a forthcoming paper [@MOS] where the same analysis is performed on mock catalogs extracted from N–body simulations in $\Omega = 1$ models. A similar method has also been applied to simulations with skewed (i.e. non–Gaussian) CDM initial conditions: first results are presented in [@NG].
The data
========
The sample we used was compiled from the “Mark II" data files kindly supplied by David Burstein, a collection of data, including distances and peculiar velocities, for more than one thousand spiral, elliptical and S0 galaxies. The sample includes:
i\) the Aaronson ‘good’ and ‘fair’ field spirals [@FB] and the de Vaucouleurs and Peters spirals [@DEV];
ii\) the Aaronson cluster spirals [@AAB; @AAR];
iii\) the ellipticals and S0, which combine the survey by [@LYN] with the data by [@LUC] and [@DRE].
The galaxies were grouped following the rules in [@LYN; @FAB]; we also considered every Aaronson cluster of spirals as a single object. This procedure reduces distance uncertainties by a factor ${\sqrt N}$, where $N$ is the number of grouped galaxies. Our final sample consists of 1184 galaxies grouped in 704 objects. In Table 1 we give a summary of the different subsamples, indicating for each the number of galaxies and of grouped objects.
[**Table 1.**]{} Samples.
Sample                          Number of Galaxies   Number of Objects
------------------------------- -------------------- -------------------
Aaronson ‘good’ spirals         224                  224
Aaronson ‘fair’ spirals         139                  139
Aaronson cluster spirals        204                  17
de Vaucouleurs–Peters spirals   73                   73
Elliptical galaxies             544                  251
Total sample                    1184                 704
All distances in the sample have a uniform Malmquist bias correction. Residual Malmquist bias due to clustered structures still affects the data, but removing it would require knowledge of the selection function, which is unavailable here because the spiral subsamples do not have a well-defined selection criterion (see, e.g. [@DEK]). Nevertheless, the grouping procedure helps to reduce the residual bias, besides reflecting the fact that different galaxies belonging to the same group or cluster actually map only one point of the peculiar velocity field.
Statistical analysis of the velocity field
==========================================
The [*velocity dipole*]{} (or [*bulk flow*]{}) for a galaxy catalog with $N$ objects, endowed with peculiar velocities ${\bf v}_i$, is defined through a least–squares fit (e.g. [@REG]) $$v_{bulk}^\alpha = (M^{-1})^{\alpha\beta} \sum_{i=1}^N u_i^\beta$$ (summation over repeated indices is understood); $u_i^\alpha \equiv ({\bf v}_i \cdot {\hat {\bf r}}_i) {\hat {\bf r}}_i^\alpha$ is the $\alpha$ component ($\alpha=1,2,3$) of the radial peculiar velocity of the $i$–th galaxy. The [*projection matrix*]{} $$M^{\alpha\beta} = \sum_{i=1}^N \hat r_i^\alpha \hat r_i^\beta$$ takes into account the geometry of the sample.\
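The least–squares dipole above amounts to a small linear solve; a minimal sketch (array names are ours, not from the text):

```python
import numpy as np

def bulk_flow(r_hat, v_rad):
    """Velocity dipole from a least-squares fit to radial peculiar velocities.
    r_hat : (N, 3) unit vectors toward the objects
    v_rad : (N,)   signed radial velocities v_i . r_hat_i
    """
    # projection matrix M^{ab} = sum_i rhat_i^a rhat_i^b (sample geometry)
    M = np.einsum('ia,ib->ab', r_hat, r_hat)
    # right-hand side sum_i u_i^b, with u_i = (v_i . rhat_i) rhat_i
    rhs = np.einsum('i,ib->b', v_rad, r_hat)
    return np.linalg.solve(M, rhs)

# consistency check: a pure dipole flow v_i = V for all objects is recovered
rng = np.random.default_rng(0)
r_hat = rng.normal(size=(704, 3))
r_hat /= np.linalg.norm(r_hat, axis=1, keepdims=True)
V = np.array([306.0, 0.0, 0.0])
recovered = bulk_flow(r_hat, r_hat @ V)
```

Since only radial components are observed, the projection matrix $M$ is what corrects for the anisotropic sky coverage of the catalog.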
The second statistic we consider is the [*velocity correlation function*]{}. Various alternatives have been proposed in the literature; we chose the one proposed by [@GOR], which reads $$\psi_1(r) = \sum_{pairs(r)} {\bf u}_1 \cdot {\bf u}_2 \left/ \sum_{pairs(r)} \hat {\bf r}_1 \cdot \hat {\bf r}_2,\right.$$ where the sum extends over galaxy pairs separated by a distance $r$.
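In practice $\psi_1$ is estimated in separation bins. A sketch of such a binned estimator (the binning code is ours; it uses ${\bf u}_1 \cdot {\bf u}_2 = v_1 v_2\, \hat{\bf r}_1 \cdot \hat{\bf r}_2$ for radial vectors):

```python
import numpy as np

def psi1(pos, v_rad, bins):
    """Binned estimate of the velocity correlation function psi_1(r):
    sum over pairs of u_1.u_2 divided by the sum of rhat_1.rhat_2,
    where u_i = v_rad[i] * rhat_i is the radial peculiar-velocity vector."""
    r_hat = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    i, j = np.triu_indices(len(pos), k=1)
    sep = np.linalg.norm(pos[i] - pos[j], axis=1)
    cos12 = np.einsum('pa,pa->p', r_hat[i], r_hat[j])
    num = v_rad[i] * v_rad[j] * cos12          # u_1 . u_2 for radial vectors
    out = np.full(len(bins) - 1, np.nan)       # NaN marks empty bins
    which = np.digitize(sep, bins) - 1
    for b in range(len(bins) - 1):
        m = which == b
        if m.any():
            out[b] = num[m].sum() / cos12[m].sum()
    return out

# sanity check: a uniform radial outflow v_rad = c gives psi_1(r) = c^2
rng = np.random.default_rng(1)
pos = rng.uniform(1.0, 2.0, size=(60, 3))      # positions in a single octant
res = psi1(pos, np.full(60, 3.0), np.linspace(0.0, 2.0, 5))
```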
N–body simulations and mock catalogs
====================================
In order to mimic the large–scale peculiar velocity field we performed two N–body simulations using a particle–mesh code with $N_p = 128^3$ particles on $N_g = 128^3$ grid points; the box–size was $260~h^{-1}$ Mpc. The velocity field was assumed to be Gaussian with power–spectrum $$P_v(k) \propto k^{n-2} T^2(k),$$ where $T(k)$ is the CDM transfer function [@DEFW], $$T(k) = [1 + 6.8 k + 72.0 k^{3/2} + 16.0 k^2 ]^{-1}.$$ We ran two simulations of 12 models, combinations of the values $n=0.6$, 0.8, 1 and $b= 1$, 1.5, 2, 2.5. We will show below the results only for three basic models: $(n,~b) = (0.6,~2.5)$ (high tilt and bias); $(n,~b) = (0.8,~1.5)$ (moderate tilt and bias); $(n,~b) = (1,~1)$ (no tilt and no bias).
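The ingredients of these initial conditions are easy to code up; a sketch of the (unnormalized) spectrum, with function names of our own choosing:

```python
import numpy as np

def transfer_cdm(k):
    """CDM transfer function in the form quoted above (DEFW fit)."""
    return 1.0 / (1.0 + 6.8 * k + 72.0 * k ** 1.5 + 16.0 * k ** 2)

def power_v(k, n):
    """Unnormalized velocity power spectrum P_v(k) ~ k^(n-2) T(k)^2.
    Tilting n below 1 boosts large-scale (small-k) power relative to
    small scales, which is the defining feature of the TCDM models."""
    return k ** (n - 2.0) * transfer_cdm(k) ** 2
```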
We defined the velocity field of each simulation by interpolating the particles’ velocities onto a cubic grid with $128^3$ grid points, using a TSC algorithm and a further Gaussian smoothing with filter width $275$ km ${\rm s^{-1}}$.
We built up our simulated catalogs (e.g., [@GOR; @DSY]) by locating, for each simulation, 25 “observers" at grid points with features similar to those of the Local Group (LG). The requirements are the following:
i\) the peculiar velocity $v$ is in the range of the measured LG motion, $v_{LG,obs} = 622 \pm 20$ km ${\rm s^{-1}}$;
ii\) the local flow is quiet, i.e. the local ‘shear’ is small, ${\cal S}
\equiv |{\bf v} - \langle {\bf v} \rangle|/| {\bf v}| < 0.5$, where $\langle
{\bf v} \rangle$ is the average velocity of a sphere of radius $R= 750$ km ${\rm s^{-1}}$ centered on the LG;
iii\) the density contrast in the same sphere is in the range $-0.2 < \delta <
1.0$.
We then measured radial peculiar velocities by sampling the velocity field at the same positions as the observed galaxies. The reference frame was fixed so that the velocity vector at the central point lies along the CMB dipole direction, while the directions of the remaining axes were selected at random.\
We take into account the random galaxy distance errors by perturbing each distance and radial peculiar velocity with Gaussian noise (e.g. [@DEK]), $r_{i,p} = r_i + \xi_i \Delta r_i$ and $u_{i,p} = u_i - \xi_i \Delta r_i + \eta_i \sigma_f$, where $\xi_i$ and $\eta_i$ are independent standard Gaussian variables, $\Delta r_i$ is the estimated galaxy distance error and $\sigma_f=200$ km ${\rm s^{-1}}$ is the Hubble flow noise. Sampling the simulated velocity field at the same positions as the observed galaxies introduces the same sampling errors as in the real data.
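The perturbation scheme can be written down directly; a sketch (the random-number plumbing and names are ours):

```python
import numpy as np

def perturb(r, u, dr, sigma_f=200.0, xi=None, eta=None, rng=None):
    """Mock observational errors:  r_p = r + xi*dr,
    u_p = u - xi*dr + eta*sigma_f, with xi, eta independent standard
    Gaussian variables, dr the estimated distance errors and sigma_f
    the Hubble-flow noise (km/s). xi and eta may be passed explicitly
    for reproducibility."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal(len(r)) if xi is None else xi
    eta = rng.standard_normal(len(r)) if eta is None else eta
    return r + xi * dr, u - xi * dr + eta * sigma_f

# a distance overestimate (xi > 0) biases the inferred radial velocity low
r_p, u_p = perturb(np.array([10.0, 20.0]), np.array([0.0, 0.0]),
                   np.array([1.0, 2.0]), xi=np.ones(2), eta=np.zeros(2))
```

Note the anticorrelation built into the scheme: the same $\xi_i$ that perturbs the distance enters the velocity with opposite sign, as in the equations above.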
Results
=======
As a first test, we characterized the observers in our simulated catalogs by their velocity, local shear and local density contrast, as discussed above. The selection effect of these constraints changes the distribution of “observers", so that the models yield statistical results rather different from simple unconstrained estimates. As expected, the constraints are more effective for models in which the observed features of the LG correspond to values far from the mean. The imposed constraints on the Local Group velocity and shear exclude contributions from grid points with large velocity. This translates into a reduced amplitude of the bulk motions and of $\Psi_1(r)$ at small separation when compared with equivalent unconstrained simulations. In Figure 1 we show for our basic models $(n,~b) = (0.6,~2.5), ~(0.8,~1.5), ~(1,~1)$ the probability distribution of the three quantities $v$, $\delta$ and ${\cal S}$. The vertical lines show the range allowed by the assumed Local Group constraints.
[**Figure 1.**]{} Probability distribution of the peculiar velocity $v$ (top row), density contrast $\delta$ (central row) and local ‘shear’ $\cal S$ (bottom row), calculated on the grid points from simulations of the models $(n,~b) = (0.6,~2.5)$ (first column), $(n,~b) = (0.8,~1.5)$ (central column) and $(n,~b) = (1,~1)$ (last column). The vertical lines show the range allowed by the different Local Group constraints.
In Table 2 we report the percentage of grid points fulfilling each constraint separately and all of them together \[${\cal P}(LG)$\].
[**Table 2.**]{} Local Group constraints.
$n$   $b$   ${\cal P}(v)$   ${\cal P}(\delta)$   ${\cal P}({\cal S})$   ${\cal P}(LG)$
----- ----- --------------- -------------------- ---------------------- ----------------
0.6   2.5   0.7             67.2                 97.3                   0.6
0.8   1.5   4.3             50.0                 97.2                   2.4
1.0   1.0   5.9             37.3                 97.4                   2.2
While the constraint on the local shear is barely effective (no differences among the considered models), those on the density and velocity of the Local Group turn out to depend mostly on the bias parameter and hardly at all on the spectral index: higher values of $b$ give higher ${\cal P}(\delta)$ and lower ${\cal P}(v)$, for all considered $n$. The total probability ${\cal P}(LG)$ indicates $(n,~b) = (0.8,~1.5)$ as the best model.
In Figure 2 we plot for the three typical models the bulk flow distribution calculated from our mock catalogs. The continuous vertical line refers to the observed value: we found for our composite galaxy sample $v_{bulk}=306 \pm 72$ km ${\rm s^{-1}}$, with a misalignment angle $\alpha = 54^\circ \pm 13^\circ$ with respect to the direction of the CMB dipole. The plotted observational errors take into account the uncertainties due both to the sparse geometry (by bootstrap resamplings of our catalog) and to the distance errors (by estimating the dispersion after perturbing the true catalog with Gaussian errors).
[**Figure 2.**]{} The probability distribution of the absolute value of the bulk flow, $v_{bulk}$, for the models $(n,~b) = (0.6,~2.5), ~(0.8,~1.5), ~(1,~1)$ from left to right. The vertical line refers to the value obtained from our real catalog.

Due to the small number of mock catalogs, the probability distributions are not well sampled and show an irregular behavior. The preliminary results of this test seem to indicate that models with high tilt and bias are less likely than the other models: in fact only 14% of the mock catalogs obtained from the simulation $(n,~b) = (0.6,~2.5)$ present values of $v_{bulk}$ in the observational range, while for $(n,~b) = (0.8,~1.5)$ and $(n,~b) = (1,~1)$ the percentages are 38% and 42% respectively.
Figure 3 compares the velocity correlation resulting from our mock catalogs of the three typical models to the observed one. We evaluated $\Psi_1(r)$ for the real data by counting galaxy pairs in separation bins of $500$ km ${\rm s^{-1}}$ up to a maximum separation of $5,000$ km ${\rm s^{-1}}$. The error bars were estimated as for bulk flow and take into account both the sparse sampling of the data and the distance errors.
[**Figure 3.**]{} The observed velocity correlation function vs. the separation $r$ (thick solid line with squares; error bars are one standard deviation for each bin) compared to the probability distribution of $\Psi_1$ from the simulated catalogs. Left panel: high tilt and bias. Central panel: moderate tilt and bias. Right panel: no tilt and no bias. The different lines refer to the $5\%$, $25\%$, $50\%$, $75\%$ and $95\%$ percentiles.

The simulated distributions are very different: models with high tilt and bias present a narrow distribution, while the widest distribution is for $(n,~b) = (1,~1)$.\
In order to compare observations with models, we chose to use, among the different possible statistics discussed in [@TOR], the integral of $\Psi_1(r)$ from zero to the maximum considered pair separation, $R_{max}=5,000$ km ${\rm s^{-1}}$ (see also [@GOR]): $$J_v = \int_0^{R_{max}} \Psi_1(r) \ dr.$$ This is a simple one–dimensional statistic, bearing as much information as possible on the original correlation function without requiring a very large number of simulated catalogs to sample the whole distribution of $\Psi_1$. In fact the velocity correlation function is a random function of the separation $r$, which can be sampled at infinitely many values; its probability distribution is then a functional, or at least an $N$–variate distribution if we sample this function with $N$ bins (10 in our case).
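With $\Psi_1$ already binned, $J_v$ reduces to a one-line sum; a sketch matching the binning used here (500 km s$^{-1}$ bins up to $R_{max}=5{,}000$ km s$^{-1}$):

```python
import numpy as np

def J_v(psi1_binned, bin_width=500.0):
    """J_v = int_0^{Rmax} psi_1(r) dr, approximated as a sum over the
    separation bins (empty bins, stored as NaN, are skipped)."""
    return np.nansum(np.asarray(psi1_binned, dtype=float)) * bin_width

# a flat psi_1 = c over 10 bins of 500 km/s integrates to c * 5000
print(J_v([2.0] * 10))  # 10000.0
```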
We calculated for our typical models the percentage of simulated catalogs whose value of $J_v$ is similar (i.e. less than one standard deviation different) to the observed one. We found the percentages 28%, 48% and 64% for $(n,~b) = (0.6,~2.5), ~(0.8,~1.5), ~(1,~1)$ respectively. As a general trend, low values of the bias parameter are preferred, in particular in connection with low tilt.
Maximum likelihood
==================
We performed a maximum likelihood analysis to compare the statistics from different simulations. Calling $\vec A$ the random vector of the statistics we used to constrain the simulated LG, $\vec A=(v_{LG}, {\cal S}, \delta)$, and $\vec B$ the vector of all other statistics, $\vec B=(v_{bulk}, \alpha, J_v)$, the joint probability distribution of $\vec A$ and $\vec B$, under the condition $\vec A=\vec A_{obs}$, is ${\cal P}(\vec A_{obs},\vec B)=
{\cal P}(\vec A_{obs}) {\cal P}(\vec B |\vec A_{obs})$. For a given model $H$, the likelihood function is ${\cal L}(H)={\cal P}(\vec A_{obs}|H) {\cal P}(\vec B_{obs}|\vec A_{obs},H)$; since the models we consider differ in their values of $n$ and $b$, ${\cal L}={\cal L}(n,~b)$. The conditional likelihood ${\cal P}(\vec B_{obs}|\vec A_{obs},H)$ of $v_{bulk}$, misalignment angle $\alpha$ and $J_v$ has been computed by counting the number of simulated catalogs that have, at the same time, bulk flow, $\alpha$ and $J_v$ equal to the observed ones within a fixed tolerance (one observational $\sigma$).
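This counting estimate is straightforward to implement; a sketch (names are ours, and the $J_v$ value below is purely illustrative):

```python
import numpy as np

def likelihood_B_given_A(stats_sim, stats_obs, sigma_obs):
    """Fraction of mock catalogs whose (v_bulk, alpha, J_v) all fall
    within one observational sigma of the measured values: a counting
    estimate of P(B_obs | A_obs, H). stats_sim has shape (n_catalogs, 3)."""
    within = np.abs(np.asarray(stats_sim) - stats_obs) <= sigma_obs
    return within.all(axis=1).mean()

obs = np.array([306.0, 54.0, 1.0e7])     # observed v_bulk, alpha; J_v illustrative
sig = np.array([72.0, 13.0, 1.0e6])
sims = np.array([[300.0, 50.0, 0.95e7],  # within one sigma in all three
                 [500.0, 54.0, 1.00e7]]) # bulk flow off by more than one sigma
frac = likelihood_B_given_A(sims, obs, sig)
```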
In the case of our three typical models we found 0.01%, 0.08% and 0.17% for the models $(n,~b) =
(0.6,~2.5), ~(0.8,~1.5), ~(1,~1)$ respectively. Extending the analysis to all 12 original models, we find that the best model is $(n,~b) = (0.8,~1)$, but the likelihood is very flat in the region $(n,~b) = (0.8-1,~1-1.5)$ and differences between models inside this region are not significant. Using a [*Chi–square*]{} approximation to assign confidence levels to our predictions, we can in any case reject the model with high bias and tilt, $(n,~b) = (0.6,~2.5)$, at the $90\%$ confidence level.
Conclusions
===========
In this paper we report the first results from a statistical analysis of the large–scale velocity field in the context of tilted CDM models. We extend our previous work based on Monte Carlo simulations in linear theory [@TOR] by using N–body simulations, i.e. by working in the strongly non–linear regime. We consider 12 models, combinations of the values 0.6, 0.8, 1 for the spectral index $n$ and 1, 1.5, 2, 2.5 for the bias parameter $b$. We calculate the probability of having grid points with features similar to the Local Group, as well as bulk flows and velocity correlation functions for mock galaxy catalogs, and compare the resulting distributions with the results for a composite sample of 1184 galaxies, grouped in 704 objects. Using a maximum likelihood method we calculate the probability that the models reproduce the observations as measured by the above statistics.
Our results, even if obtained from a small number of mock catalogs and consequently affected by a larger statistical uncertainty, essentially confirm those derived from Monte Carlo simulations [@TOR].
In particular, models with high tilt ($n \le 0.6$) are rejected by the combination of the COBE results [@SMO2] and the present analysis. The best model is $(n,~b)= (0.8,~1)$, but the likelihood function is nearly flat in the ranges $0.8 \le n \le 1$ and $1 \le b \le 1.5$. Note that, for $n=0.8$, COBE implies $b=1.35\pm0.3$ if the contribution from gravitational waves is negligible; in any case the constraints on small–scale velocity dispersion prefer $n < 1$ and/or $b > 1$.
Moreover, our results show that lower values of the bias parameter are preferred; this implies that the gravitational–wave contribution to $\Delta T/T$ at large scales should be negligible.
As a general result, our more accurate treatment of the errors in the observational data shows that tilted CDM models are not excluded by the combination of COBE and the present analysis. Of course, the availability of larger data samples (e.g. “Mark III", see [@TB]) can help to increase the discriminatory power of these statistical tests on the large–scale velocity field.
[99]{}
Aaronson, M. 1986. [**302**]{}, 536
Aaronson, M. 1989. [**338**]{}, 654
Adams, F.C., Bond, J.R., Freese, K., Frieman, J.A., & Olinto, A.V. 1992. [**D47**]{}, 426
Cen, R., Gnedin, N.Y., Kofman, L.A., & Ostriker, J.P. 1992. [**399**]{}, L11
Crittenden, R., Bond, J.R., Davis, R., Efstathiou, G., & Steinhardt, P. 1993. [**71**]{}, 324
Davis, M., Efstathiou, G., Frenk, C.S., & White, S.D.M. 1985. [**292**]{}, 371
Davis, M., Strauss, M.A., & Yahil, A. 1991. [**372**]{}, 394
Dekel, A., Bertschinger, E., & Faber, S.M. 1990. [**364**]{}, 349
de Vaucouleurs, G., & Peters, W.L. 1984. [**287**]{}, 1
Dressler, A., & Faber, S.M. 1990. [**354**]{}, 17
Faber, S.M., & Burstein, D. 1989. in [*Large Scale Motions in the Universe*]{} p. 129, eds Rubin V.C. & Coyne G.V., Princeton University Press
Faber, S.M., Wegner, G., Burstein, D., Davies, R.L., Dressler, A., Lynden–Bell, D., & Terlevich, R.J. 1989. [**69**]{}, 763
Fong, R., Hale–Sutton, D., & Shanks, T. 1992. [**257**]{}, 630
Gorski, K., Davis, M., Strauss, M.A., White, S.D.M., & Yahil, A. 1989. [**344**]{}, 1
Lucchin, F., Matarrese, S., & Mollerach, S. 1992. [**401**]{}, L49
Lucey, J.R., & Carter, D. 1988. [**235**]{}, 1177
Lynden–Bell, D., Faber, S.M., Burstein, D., Davies, R.L., Dressler, A., Terlevich, R.J., & Wegner, G. 1988. [**326**]{}, 19
Maddox, S.J., Efstathiou, G., Sutherland, W.J., & Loveday, J. 1990. [**243**]{}, 692
Moscardini, L., Lucchin, F., Matarrese, S., & Tormen, G. these proceedings
Moscardini, L., Tormen, G., Matarrese, S., & Lucchin, F. 1993. in preparation.
Regős, E., & Szalay, A.S. 1989. [**345**]{}, 627
Smoot, G. 1992. [**396**]{}, L1
Tormen, G., & Burstein, D. these proceedings
Tormen, G., Lucchin, F., & Matarrese, S. 1992. [**386**]{}, 1
Tormen, G., Moscardini, L., Lucchin, F., & Matarrese, S. 1993. [**411**]{}, 16
Vittorio, N., Matarrese, S., & Lucchin, F. 1988. [**328**]{}, 69
---
abstract: 'We analyze a model for a walker moving on a ratchet potential. This model is motivated by the transport properties of motor proteins, like kinesin and myosin. The walker consists of two feet represented as two particles coupled nonlinearly through a bistable potential. In contrast to linear coupling, the bistable potential admits a richer dynamics, where the ordering of the particles can alternate during the walking. The transitions between the two stable states of the bistable potential correspond to a walking with alternating particles. We distinguish between two main walking styles, alternating and non-alternating, resembling the hand-over-hand and the inchworm walking in motor proteins, respectively. When the equilibrium distance between the two particles divided by the periodicity of the ratchet is an integer, we obtain a maximum for the current, indicating optimal transport.'
address: |
Instituto de Física, Universidad Nacional Autónoma de México,\
Apartado Postal 20-364, 01000 México, D.F., México
author:
- 'José L. Mateos'
title: A random walker on a ratchet
---
Noise; Transport; Brownian motors; Ratchets
PACS: 05.40.-a; 02.50.Ey; 05.60.Cd; 05.10.Gg
Introduction
============
In recent years, advances in non-equilibrium statistical physics have revealed various instances of the surprising phenomenon of noise-enhanced order, such as stochastic resonance [@sr1; @sr2; @sr3] and Brownian motors, ratchets or noise-induced transport [@rev1; @rev2; @rev3]. These remarkable phenomena occur due to the constructive role of noise in nonlinear dynamical systems [@shura]. Noise-induced, directed transport in a spatially periodic system in thermal equilibrium is ruled out by the second law of thermodynamics. Therefore, in order to generate transport, the system has to be driven away from thermal equilibrium by an additional deterministic or stochastic force. In the most interesting situation, these forces are unbiased, that is, their temporal, spatial or ensemble averages vanish. Besides the breaking of thermal equilibrium, another important requirement for directed transport in a spatially periodic system is the breaking of spatial inversion symmetry. We speak then of Brownian motors, ratchet potentials or, in the biological realm, of molecular motors. This recent burst of work is motivated in part by the challenge of explaining the unidirectional transport of molecular motors [@vale; @kel; @how].
One particular motor protein, kinesin, has attracted considerable attention, motivated by experimental results in which the dynamical details of its motion can be measured [@vale; @how]. Kinesin is a protein with two heads that walks on microtubules inside cells. Motivated by these experimental results, several authors [@der; @stra; @klum; @els; @bier; @kan; @dan; @wang] have introduced diverse models in order to understand the particular walking of kinesin. Usually, these models consider two coupled particles on a ratchet potential that represents the periodic asymmetric structure of microtubules. In these papers the authors consider a linear elastic coupling between the particles. This coupling implies that the order of the particles cannot change. However, in recent experiments it was found that kinesin moves processively, alternating its two heads in a way called hand over hand [@blo3; @alvaro; @selvin2]. Other processive motor proteins that move on actin filaments in a hand-over-hand way are myosin V [@selvin1; @selvin3] and myosin VI [@selvin4; @spudich].
We introduce in this paper a model inspired by the walking of motor proteins, like kinesin on microtubules, though it is not restricted to motor proteins: it can also describe the walking of macroscopic objects in the presence of fluctuations. The model consists of two particles coupled through a nonlinear bistable potential and subjected to independent white noises. This system of two coupled particles is acted upon by a spatially periodic force, due to the presence of a ratchet potential, and, additionally, by a common time-dependent periodic force. We are interested in analyzing the trajectories of the walker, following in detail the motion of each of the particles, and in the current, or noise-induced transport, for this system.
The model of a walker with two Brownian motors
==============================================
The model considers a walker moving on an asymmetric ratchet potential mediated by noise. This walker has two feet that are represented as two particles coupled nonlinearly through a bistable potential [@upon; @spie; @matfnl]. The walker moves along a track formed by an asymmetric ratchet potential, and is subjected to the influence of two independent white noises acting on the two particles and a common external harmonic force. The stochastic differential equations for the two particles, represented by $x$ and $y$, in the overdamped regime, are:
$$m\gamma \dot{x} = -\partial_{x}V(x) - \partial_{x}V_{b}(x-y) + m\gamma\sqrt{2D}\xi_{1}(t) + F_{D}\sin(\Omega t+\varphi)$$
$$m\gamma \dot{y} = -\partial_{y}V(y) - \partial_{y}V_{b}(x-y) + m\gamma\sqrt{2D}\xi_{2}(t) + F_{D}\sin(\Omega t+\varphi).$$
where $m$ is the mass of each particle, $\gamma$ is the friction coefficient, $-\partial_{x}V(x)$ is the force due to the ratchet potential and $-\partial_{x}V_{b}(x-y)$ is the coupling force due to the bistable potential. The common external harmonic force has three parameters: the amplitude $F_{D}$, the frequency $\Omega$ and the initial phase $\varphi$.
These equations represent two coupled particles on a periodic asymmetric ratchet potential given by [@matprl]
$$V(x) = V_1 - V_{R} \left [\sin {\frac{2\pi (x-x_0)}{L}} + {\frac{1}{4}} \sin {\frac{4\pi (x-x_0)}{L}} \right ].$$
where $L$ is the period of the potential $V(x + L) = V(x)$; the other constants will be discussed later. Additionally, these particles are coupled by the [*nonlinear*]{} cubic force coming from a bistable potential $V_{b}(x - y)$ given by
$$V_{b}(x - y) = V_{b} + V_{b} \left[ {\frac{(x - y)^4}{l^4}} - 2{\frac{(x - y)^2}{l^2}} \right].$$
Here, $V_{b}$ is the amplitude of the bistable potential and represents the coupling strength between the particles, and $2l$ is the distance between its two minima.
Finally, the parameter $D$ is the intensity of the zero-mean, statistically independent Gaussian white noises $\xi_{1}(t)$ and $\xi_{2}(t)$ acting on particles $x$ and $y$, respectively. Being statistically independent, they satisfy
$$\langle \xi_{i}(t) \xi_{j}(s) \rangle = \delta_{ij} \delta (t - s).$$
Let us now derive dimensionless equations of motion for the model. We use as the characteristic length scale the period of the ratchet potential $L$; the characteristic time scale is given by the inverse of the friction coefficient, $\tau = 1/ \gamma$; and the characteristic force is $mL\gamma^{2}$. Let us define the following dimensionless units: $x^{\prime } = x/L$, $x_{0}^{\prime } = x_{0}/L$, $y^{\prime } = y/L$, $y_{0}^{\prime } = y_{0}/L$, $t^{\prime } = \gamma t$, ${l^{\prime } = l/L}$, $\Omega^{\prime} = \Omega/\gamma$ and $D^{\prime} = D/ \gamma L^{2}$. The dimensionless equations of motion, after renaming the variables again without the primes, become
$$\dot{x} = -\partial_{x}V(x) -\partial_{x} V_{b}(x - y) + \sqrt{2D}\xi_{1}(t) + A \sin(\Omega t + \varphi),$$
$$\dot{y} = -\partial_{y}V(y) - \partial_{y}V_{b}(x - y) + \sqrt{2D}\xi_{2}(t) + A \sin(\Omega t + \varphi).$$
The dimensionless ratchet potential is
$$V(x) = C - U_{R} \left [\sin 2\pi (x-x_{0}) + {\frac{1}{4}} \sin 4\pi (x-x_{0}) \right ].$$
The constant $C = -U_{R} (\sin 2\pi x_{0} + 0.25 \sin 4\pi x_{0})$ is such that $V(0)=0$. The constant $x_{0}$ is introduced in order to center the minima of the periodic potential on the integers [@matprl].
The dimensionless bistable potential is given by
$$V_{b}(x - y) = U_{b} \left[1 + {\frac{(x - y)^4}{l^4}} - 2{\frac{(x - y)^2}{l^2}} \right].$$
Here the dimensionless amplitude of the ratchet potential is given by $U_{R} = V_{R}/(mL^{2}\gamma^{2})$, and the dimensionless amplitude of the bistable potential $U_{b} = V_{b}/(mL^{2}\gamma^{2})$. The amplitude of the external force is $A = F_{D}/(mL\gamma^{2})$.
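The dimensionless potentials are straightforward to tabulate; a sketch with illustrative parameter values ($U_R$, $U_b$, $l$ and $x_0$ below are placeholders of our own, not values from the text):

```python
import numpy as np

U_R, U_b, l, x0 = 0.5, 1.0, 1.0, -0.19   # illustrative values only

def V_ratchet(x):
    """Dimensionless ratchet potential; C is fixed so that V(0) = 0."""
    C = -U_R * (np.sin(2 * np.pi * x0) + 0.25 * np.sin(4 * np.pi * x0))
    return C - U_R * (np.sin(2 * np.pi * (x - x0))
                      + 0.25 * np.sin(4 * np.pi * (x - x0)))

def V_bistable(s):
    """Dimensionless bistable coupling potential in s = x - y:
    minima at s = +/- l, barrier of height U_b at s = 0."""
    return U_b * (1.0 + (s / l) ** 4 - 2.0 * (s / l) ** 2)
```

A quick check of the stated properties: $V(0)=0$ by the choice of $C$, $V$ has period 1, and $V_b$ vanishes at its two minima $s=\pm l$ with a barrier $U_b$ between them.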
In Fig. 1 we depict the ratchet (solid line) and the bistable (dashed line) potentials for this model. The dotted line is the sum of both potentials. As is clear in the figure, the total potential has three minima instead of two, due to the interplay between the stable points of both potentials. This means that we have three stable equilibrium configurations for the walker: $-l$, $0$ and $l$. In our model the situation is even more complicated, since we have two particles in a potential like that of Fig. 1. We can think of this problem as a single particle in a two-dimensional potential given by $\Psi(x,y) = V(x) + V(y) + V_{b}(x - y)$. A discussion of this 2D problem and the analysis of its bifurcations can be found in [@upon; @spie; @matfnl].
This model is different from previous ones described in the literature, since it incorporates a [*nonlinear*]{} coupling between the two particles through the bistable potential, as has been discussed before [@upon; @spie; @matfnl]. It is important to stress the following: the coupling through the bistable potential involves the variable $x - y$. This variable can be positive, negative or zero. When $x - y > 0$ the $x$ particle is ahead of the $y$ particle. On the other hand, when $x - y < 0$ the $y$ particle is ahead of the $x$ particle. Therefore, the transitions between the two stable states of the bistable potential correspond to an exchange of the order of the particles. The minima, corresponding to the two stable points of the bistable potential, are located at $x - y = l$ and $x - y = -l$, that is, where the distance between the two particles is $l$. Thus, we have two equilibrium configurations for the walker: $x - y = l$ with $x > y$, or $x - y = -l$ with $x < y$. The local maximum at the origin of the bistable potential is unstable without the ratchet, but it can become a third stable configuration in the presence of the ratchet potential, as can be seen in Fig. 1. This stable state corresponds to the case $x - y = 0$, that is, when the two particles coincide in space. So, we can think of a state oscillating in the bistable potential back and forth between the two minima as the walking of a motor protein alternating its two heads, or a walker alternating its two feet. This nonlinear coupling then allows us to consider a very important aspect of real walking that was lacking in previous models: the possibility of alternating the two feet. In the models given in [@der; @stra; @klum; @els; @dan; @wang], the coupling between the particles is linear (a harmonic spring) and thus the particles cannot alternate positions. They can simply approach each other but, once an ordering is established, say $x>y$, it remains for the rest of the dynamics.
In our case, on the other hand, we can have several types of walking: alternating random walking, where the two particles alternate their order randomly as the walker moves along the ratchet (hand over hand); or rigid random walking, where the two particles move on the ratchet without exchanging their order (inchworm).
Coming back to the stochastic equations that define our model, we notice that an important parameter is $r = U_{b}/U_{R}$, the ratio of the two barrier heights: the bistable and the ratchet. Both are energetic barriers that have to be overcome in order to perform a particular walking: alternating or rigid. If only the ratchet barrier is overcome we have a rigid walking, whereas if the walker can overcome both barriers the walking can alternate the feet. From this particular coupling we see a clear connection between the phenomena of stochastic resonance and Brownian motors. One expects that, in an optimal situation aided by the stochastic resonance mechanism, the walker can transit very efficiently between the two states of the bistable potential and at the same time walk optimally on the ratchet potential by alternating the two particles. This might happen when the equilibrium distance $l$ between particles coincides with the periodicity of the ratchet potential. In our units this case corresponds to $l = 1$.
Numerical results
=================
In this section we solve numerically the Langevin equations of motion to obtain the trajectories of the walker. We use a stochastic fourth-order Runge-Kutta algorithm to solve the system of stochastic differential equations with additive Gaussian white noise. That is, we use a fourth-order Runge-Kutta algorithm for the deterministic part and a random number generator to obtain Gaussian-distributed random numbers from uniformly distributed ones in the unit interval, using the Box-Muller algorithm, as described in [@press]. We calculate the current, which is simply the ensemble-averaged velocity of the center of mass of the walker, where the center of mass is given by $z(t) = (x(t) + y(t))/2$. In the numerical results that we present now, we fix the value of $\Omega = 0.5$. The other parameters are indicated in the text or in the figure captions. The quantities that we compute involve averaging over an ensemble with different initial random phases $\varphi$ uniformly distributed between $0$ and $2\pi$.
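As an illustration of this procedure, the sketch below integrates a two-particle Langevin system of this type and estimates the current. The explicit ratchet and bistable forces used here are hypothetical stand-ins (the actual potentials are defined earlier in the paper), and a simple Euler-Maruyama step replaces the stochastic fourth-order Runge-Kutta for brevity; only the Box-Muller step follows [@press] directly:

```python
import numpy as np

def box_muller(rng):
    # Box-Muller transform: two uniform deviates -> one standard Gaussian
    # deviate (the second Gaussian of the pair is discarded for simplicity).
    u1 = 1.0 - rng.random()   # avoid log(0)
    u2 = rng.random()
    return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

# Hypothetical stand-ins for -dU_R/dx (asymmetric ratchet of period 1)
# and -dU_b/du (double well in u = x - y with minima at u = +/- l).
def ratchet_force(x):
    return -(np.sin(2 * np.pi * x) + 0.25 * np.sin(4 * np.pi * x))

def bistable_force(u, l=1.0):
    return -4.0 * u * (u * u - l * l)

def walker_current(D, A=1.0, Omega=0.5, dt=0.01, n_steps=20000,
                   n_ensemble=50, seed=0):
    """Ensemble-averaged velocity of the center of mass z = (x + y)/2,
    averaged over random initial phases phi of the drive A cos(Omega t + phi)."""
    rng = np.random.default_rng(seed)
    v_sum = 0.0
    for _ in range(n_ensemble):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        x, y = 0.0, -1.0                  # start near one bistable minimum
        z0 = 0.5 * (x + y)
        for k in range(n_steps):
            drive = A * np.cos(Omega * k * dt + phi)
            fb = bistable_force(x - y)    # equal and opposite on x and y
            x += (ratchet_force(x) + fb + drive) * dt \
                 + np.sqrt(2.0 * D * dt) * box_muller(rng)
            y += (ratchet_force(y) - fb + drive) * dt \
                 + np.sqrt(2.0 * D * dt) * box_muller(rng)
        v_sum += (0.5 * (x + y) - z0) / (n_steps * dt)
    return v_sum / n_ensemble
```

Because $U_b$ depends only on $x-y$, the bistable force enters the two equations with opposite signs, which is what allows the particles to exchange order once the barrier $U_b$ is overcome.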
In Fig. 2 we show a typical trajectory of the walker. The solid line corresponds to the $x$ particle and the dashed line to the $y$ particle. Notice that, due to the coupling, the two particles tend to move together and the center of mass advances with a positive current. As can be seen in the figure, the two feet can exchange positions and different walking patterns can be observed. For instance, the walker can overcome the ratchet barriers without exchanging the order of the particles for a while, but later on the feet come together, the walker jumps in this way to the next minimum of the ratchet, and finally the order of the feet can change. In Fig. 3 we show the current as a function of the noise intensity. We notice that the current increases until it reaches a maximum and then tends to decrease. This reminds us of the phenomenon of stochastic resonance. For zero noise the current is finite, since we have a large amplitude $A = 1$ for the external forcing; we thus obtain a nonzero current even in the deterministic limit. In Fig. 4 we depict the current as a function of $r$, the ratio between the bistable and ratchet barriers, for two values of the equilibrium distance between feet $l$. For $l = 1$ we see that the current is almost constant, but for $l = 1.5$ the current depends strongly on $r$ and can even reverse sign: we obtain a current reversal as a function of the ratio $r$. Finally, in Fig. 5 we show the current as a function of the equilibrium distance between feet $l$, for different values of the ratio $r$. We do not show the current for small values of $l$, since in this case the bistable potential can be very large in comparison with the other terms in the Langevin equations. We notice that the current attains a maximum when $l$ is near an integer; in some cases the maximum is not exactly at an integer, but close to it. This means that the walker can move through the ratchet very efficiently when each particle is located close to a minimum of the ratchet. On the other hand, if the distance $l$ lies between integers, the current attains a minimum, indicating that the walker is unable to move efficiently. In fact, for large values of the ratio $r$ (for instance $r = 7.85$ in Fig. 5), the current develops a local minimum at $l = 1$. We plan to analyze this effect further for other parameters in order to determine its possible generality.
Concluding remarks
==================
In summary, we have introduced a model for a random walker that consists of two particles coupled nonlinearly through a bistable potential, stepping on an asymmetric periodic ratchet potential. In contrast to a linear coupling, the bistable potential admits a richer dynamics in which the ordering of the particles can alternate. The dynamics then includes two typical stepping patterns: alternating (hand over hand) and non-alternating (inchworm) walking. In our model we can obtain both types of walking, depending on the ratio between the ratchet and bistable barriers, and on the ratio between the equilibrium distance of the particles and the periodicity of the ratchet. It is worth mentioning that, according to recent experiments on motor proteins [@blo3; @alvaro; @selvin2; @selvin1; @selvin3; @selvin4; @spudich], the hand-over-hand type of walking is more likely. We have calculated the current, defined as the average velocity of the center of mass, as a function of parameters related to the coupling and the distance between the particles. When the equilibrium distance between the particles is a multiple of the periodicity of the ratchet potential, we obtain a maximum value of the current. Thus, we have a new model that may shed light on a number of currently interesting problems, ranging from noisy locomotion to the transport of motor proteins, and that establishes a connection between Brownian motors and stochastic resonance.
The author gratefully acknowledges helpful discussions with Alexander Neiman, Frank Moss, Lutz Schimansky-Geier, Jan Freund, Igor Sokolov and Peter Hänggi. Financial support from UNAM through project DGAPA-IN-111000 is acknowledged. The author also thanks the Alexander von Humboldt Foundation for support.
[00]{}
L. Gammaitoni, P. Hänggi, P. Jung, F. Marchesoni, Rev. Mod. Phys. 70 (1998) 223.
R. D. Astumian, F. Moss, Chaos 8 (1998) 533.
P. Hänggi, ChemPhysChem 3 (2002) 285.
R. D. Astumian, P. Hänggi, Physics Today 55, No. 11 (2002) 33.
P. Reimann, Phys. Rep. 361 (2002) 57.
P. Reimann, P. Hänggi, Applied Physics A 75 (2002) 169.
V. S. Anishchenko, V. V. Astakhov, A. B. Neiman, T. E. Vadivasova, L. Schimansky-Geier, Nonlinear Dynamics of Chaotic and Stochastic Systems, Springer-Verlag, Berlin, 2002.
R. D. Vale, R. A. Milligan, Science 288 (2000) 88.
D. Keller, C. Bustamante, Biophysical Journal 78 (2000) 541.
J. Howard, Mechanics of Motor Proteins and the Cytoskeleton, Sinauer Associated, Inc., Sunderland, Massachusetts, 2001.
I. Derényi, T. Vicsek, Proc. Natl. Acad. Sci. USA 93 (1996) 6775.
G. N. Stratopoulos, T. E. Dialynas, G. P. Tsironis, Phys. Lett. A 252 (1999) 151.
S. Klumpp, A. Mielke, C. Wald, Phys. Rev. E 63 (2001) 031914.
T. C. Elston, D. You, C. S. Peskin, SIAM J. Appl. Math. 61 (2000) 776.
M. Bier, Phys. Rev. Lett. 91 (2003) 148104.
R. Kanada, K. Sasaki, Phys. Rev. E 67 (2003) 061917.
D. Dan, A. M. Jayannavar, G. I. Menon, Physica A 318 (2003) 40.
H.-Y. Wang, J.-D. Bao, Physica A 337 (2004) 13.
C. L. Asbury, A. N. Fehr, S. M. Block, Science 302 (2003) 2130.
W. R. Shief, R. H. Clark, A. H. Crevenna, J. Howard, Proc. Natl. Acad. Sci. USA 101 (2004) 1183.
A. Yildiz, M. Tomishige, R. D. Vale, P. R. Selvin, Science 303 (2004) 676.
A. Yildiz, J. N. Forkey, S. A. McKinney, T. Ha, Y. E. Goldman, P. R. Selvin, Science 300 (2003) 2061.
G. E. Snyder, T. Sakamoto, J. A. Hammer III, J. R. Sellers, P. R. Selvin, Biophys. J. 87 (2004) 1776.
A. Yildiz, H. Park, D. Safer, Z. Yang, L.-Q. Chen, P. R. Selvin, H. L. Sweeney, J. Biol. Chem. 279 (2004) 37223.
Z. Ökten, L. S. Churchman, R. S. Rock, J. A. Spudich, Nature Struct. & Mol. Biol. 11 (2004) 884.
J. L. Mateos, A. Neiman, F. Moss, in: S. M. Bezrukov (Ed.), Unsolved Problems of Noise and Fluctuations UPON 2002, AIP Conference Proceedings 665 (2003) 569.
J. L. Mateos, A. Neiman, F. Moss, J. A. Freund, L. Schimansky-Geier, I. M. Sokolov, in: L. Schimansky-Geier, D. Abbot, A. Neiman, C. Van den Broeck (Eds.), Noise in Complex Systems and Stochastic Dynamics, Proc. of SPIE 5114 (2003) 20.
J. L. Mateos, Fluct. Noise Lett. 4 (2004) L161.
J. L. Mateos, Phys. Rev. Lett. 84 (2000) 258.
W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Numerical Recipes in Fortran. The Art of Scientific Computing, Second edition, Cambridge University Press, New York, NY, 1994.
---
abstract: 'This talk consists of four parts. In part one, I give an elementary discussion on constructing a Lorentz-invariant spin sum rule for the nucleon. In part two, I discuss a gauge-dependent spin sum rule, explore its relation with the polarized gluon distribution, and introduce the complete evolution equation for the spin structure. In part three, I consider a gauge-invariant spin sum rule and the related evolution equation. The solution of the equation motivates the possibility that half of the nucleon spin may be carried by gluons at low energy scales. In the final part, I discuss deeply-virtual Compton scattering as a possible way to measure the canonical orbital angular momentum of quarks in the nucleon.'
address: |
Department of Physics\
University of Maryland\
College Park, Maryland 20742\
[ ]{}
author:
- Xiangdong Ji
date: 'U. of MD PP\#97-042 DOE/ER/40762-102 October 1996'
title: '[HUNTING FOR THE REMAINING SPIN IN THE NUCLEON]{} [^1]'
---
Yesterday and today, we have heard essentially two kinds of explanations of the so-called “spin crisis” [@talks]. The first kind says that the experimental data do not rule out the simple quark model prediction that the quark spin carries a large fraction of the nucleon spin. One way to see this is that the deep-inelastic sum rule has an unknown uncertainty from the small-$x$ contribution. Another way to see this is that one has to subtract the anomaly contribution from the measured $\Delta \Sigma$ before comparing it with the quark model prediction, and the subtraction is potentially large. The second kind of explanation is that the quark spin carries little of the nucleon spin, due, for instance, to a large negative sea polarization. On the other hand, in the Skyrme model discussed by J. Ellis, it seems that the majority of the nucleon spin is carried by orbital angular momentum. Whatever position one may take, it is safe to conclude that the nucleon spin carried by other sources is significant. Thus, in my talk, I will concentrate on the remaining spin in the nucleon, i.e., the part not measured in polarized deep-inelastic scattering experiments.
Constructing a Lorentz-invariant spin sum rule
===========================================
To understand what the remaining components of the nucleon spin are, it is important to construct a [*Lorentz-invariant*]{} spin sum rule. At first, it appears difficult to talk about different contributions to the nucleon spin, because in field theory angular momentum operators do not commute with boost operators.
States of a spin-1/2 particle are labelled by 4-momentum $p^\mu$ and polarization vector $s^\mu$. Hence we write the nucleon states as $|p,s\rangle$. To talk about spin in a relativistic way, one has to introduce the relativistic spin operator $\hat W_\mu$, which is also called the Pauli-Lubanski spin, $$\hat W_\mu \sim \epsilon_{\mu\alpha\beta\gamma}
\hat J^{\alpha\beta} \hat P^\gamma \ ,$$ where $\hat J^{\alpha\beta}$ are the generators of Lorentz transformations and $\hat P^\mu$ is the energy-momentum operator. The fact that the nucleon has spin 1/2 in all frames is represented by the following equation, $$\hat W^2 |ps\rangle = {1\over
2}\left({1\over2}+1\right)|ps\rangle \ .$$ Since $\hat W^2$ is quadratic in angular momentum and boost operators, the equation doesn’t seem to offer any interesting spin sum rule.
Notice, however, $s_\mu\hat W^\mu$ is also a Lorentz scalar and it has $|ps\rangle$ as its eigenstate, $$s_\mu\hat W^\mu |ps \rangle = {1\over 2}|ps\rangle\ .$$ Or, we can write, $${1\over 2} = \langle ps|s_\mu \hat W^\mu |ps\rangle\ ,$$ where I have been casual about the normalization. The equation can be used to construct spin sum rules: If the Pauli-Lubanski spin is a sum of several contributions, $\hat W^\mu = \sum_i \hat W^\mu_i$, we can write, $${1\over 2} = \sum_i ~\langle ps|s_\mu\hat W^\mu_i|ps\rangle\ .
\label{sum}$$ The above equation contains the boost operators in general. However, if one chooses $\vec{s}$ to be in the direction of $\vec{p}$, which without loss of generality can be chosen to be the $z$ axis, then, $$s_\mu \hat W^\mu \sim \hat J^{xy} \equiv \hat J^z\ ,$$ where $\hat J^z$ is the $z$ component of the angular momentum operator. The nucleon is now in the helicity eigenstate $\lambda=1/2$, and a helicity sum rule emerges from Eq. (\[sum\]), $${1\over 2} = \sum_i \left\langle p {1\over 2}
\left|\hat J^z_i\right|p{1\over 2}\right\rangle \ ,$$ where $\sum_i \hat J_i^z=\hat J^z$. This sum rule is most suitable for studying the spin structure of the nucleon [@jaffemanohar].
To actually construct a spin sum rule, one needs to know the angular momentum operators in QCD, which are identified as the generators of spatial rotations. By Noether’s theorem, we can derive these from the transformation property of the QCD lagrangian density under rotations. Depending upon the final form of the angular momentum operators one prefers to take, both gauge-dependent and gauge-invariant sum rules can result.
A gauge-dependent sum rule
==========================
In a 1989 paper, Jaffe and Manohar wrote down the following form of the QCD angular momentum operator [@jaffemanohar], $$\begin{aligned}
\vec{J} &=& \int d^3\vec{x} ~\Big[~
{1\over 2}\bar \psi \vec{\gamma}\gamma_5\psi
+ \psi^\dagger \vec{x}\times (-i\vec{\bigtriangledown})\psi
\nonumber \\
&+& \vec{E}\times \vec{A}
+ E_i(\vec{x}\times \vec{\bigtriangledown})A_i ~\Big]\ .
\label{ang}\end{aligned}$$ An advantage of this form is that the physical meaning of the individual terms is quite obvious: The first term is the quark spin, the second term is the quark orbital angular momentum, the third term is the gluon spin, and the final term is the gluon orbital angular momentum. According to the above equation, one can write down a sum rule for the nucleon spin, $${1\over 2} = {1\over 2}\Delta \Sigma(\mu^2)
+ L_q'(\mu^2) + \Delta g(\mu^2) + L_g'(\mu^2) \ ,$$ where, for instance, $$\Delta g(\mu^2) = \langle ps|\int d^3\vec{x} (\vec{E}\times\vec{A})^z|ps\rangle\ ,$$ etc. Clearly, $L_q'$, $\Delta g$ and $L_g'$ are gauge, and hence frame, dependent.
Interestingly, $\Delta g$ in the infinite momentum frame and light-like gauge ($A^+=0$) is related to a quantity present in polarized high-energy scattering, $$\Delta g(\mu^2) = \int^1_0 \Delta G(x,\mu^2) dx$$ where $\Delta G(x,\mu^2)$ is the polarized gluon distribution. Recently, there has been a lot of discussion in the literature about measuring $\Delta G(x)$ at polarized RHIC and HERA. I am happy to see that there will be a round table discussion about this topic on Friday.
The individual contributions to the nucleon spin are scale-dependent. Recently, Hoodbhoy, Tang and myself [@ji1] have worked out the scale dependence of the orbital angular momentum contributions. This subject was first recognized by Phil Ratcliffe [@rat]. Together with the well-known Altarelli-Parisi equation [@ap], we now have a complete set of equations to evolve the spin structure of the nucleon at the leading-log level, $${\partial \over \partial \ln \mu^2}
\left(\begin{array}{c}
\Delta \Sigma(\mu^2) \\
\Delta g(\mu^2) \\
L_q'(\mu^2) \\
L_g'(\mu^2)
\end{array} \right)
= {\alpha_s(\mu^2)\over 2\pi}
\left( \begin{array}{rrrr}
0 & 0 &0 &0 \\
{3\over 2}C_F &{\beta_0\over 2} &0 &0 \\
-{2\over 3}C_F &{n_F\over 3} &-{4\over 3}C_F &{n_F \over 3} \\
-{5\over 6}C_F& -{11\over 2}& {4\over 3}C_F &-{n_F \over 3} \\
\end{array} \right)
\left( \begin{array}{c}
\Delta \Sigma(\mu^2) \\
\Delta g(\mu^2) \\
L_q'(\mu^2) \\
L_g'(\mu^2)
\end{array} \right) \ .$$ If one knows the decomposition of the spin of the nucleon at one perturbative scale, one can solve from the above equation the decomposition at any other perturbative scale. As $\mu^2\rightarrow \infty$, one has the following asymptotic solution, $$\begin{aligned}
&&\Delta \Sigma \rightarrow {\rm const.} \nonumber \\
&&\Delta g \rightarrow {\lambda}\ln \mu^2 + {\rm const.} \nonumber
\\
&& L_q' \rightarrow {\rm const.} \nonumber \\
&& L_g' \rightarrow -\lambda \ln \mu^2 + {\rm const.}\end{aligned}$$ Thus, the gluon helicity increases logarithmically with the probing scale. That increase is entirely cancelled by the gluon orbital contribution in the asymptotic limit.
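The evolution matrix above can be checked numerically: conservation of the total helicity ${1\over 2}\Delta\Sigma + \Delta g + L_q' + L_g'$ requires the weighted column sums of the matrix to vanish. A minimal sketch, with the standard QCD values $C_F = 4/3$ and $\beta_0 = 11 - 2n_f/3$:

```python
import numpy as np

def evolution_matrix(n_f):
    # Leading-log mixing matrix for (DeltaSigma, Delta g, L_q', L_g'),
    # transcribed from the evolution equation above.
    C_F = 4.0 / 3.0
    beta_0 = 11.0 - 2.0 * n_f / 3.0
    return np.array([
        [0.0,             0.0,         0.0,             0.0],
        [1.5 * C_F,       beta_0 / 2,  0.0,             0.0],
        [-2.0 * C_F / 3,  n_f / 3.0,   -4.0 * C_F / 3,  n_f / 3.0],
        [-5.0 * C_F / 6,  -11.0 / 2,   4.0 * C_F / 3,   -n_f / 3.0],
    ])

# Total spin = (1/2) DeltaSigma + Delta g + L_q' + L_g'.  Its derivative
# with respect to ln mu^2 is proportional to w @ M, which must vanish
# identically for any number of flavors.
w = np.array([0.5, 1.0, 1.0, 1.0])
for n_f in (3, 4, 5):
    assert np.allclose(w @ evolution_matrix(n_f), 0.0)
```

The same check quickly exposes any transcription error in an individual matrix entry.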
A gauge-invariant sum rule
===========================
Recently, I have proposed to reorganize the angular momentum operator in Eq. (\[ang\]) so that it is explicitly gauge-invariant [@ji2], $$\begin{aligned}
\vec{J} &=& \int d^3 \vec{x}~
\Big[~ {1\over 2}\bar \psi \vec{\gamma}\gamma_5 \psi \nonumber \\
& + & \psi^\dagger (\vec{x}\times (-i\vec{D}))\psi \nonumber \\
& + &\vec{x}\times(\vec{E}\times\vec{B}) ~\Big] \ .\end{aligned}$$ As before, the first term is the quark spin. The second term, in which the covariant derivative is ${\vec D} = \vec{\partial} + ig\vec{A}$, is the canonical orbital angular momentum of quarks. The last term is the angular momentum of the gluons, as is clear from the appearance of the Poynting vector. According to the above, we can write down a gauge-invariant spin sum rule, $${1\over 2} = {1\over 2}
\Delta \Sigma(\mu^2) + L_q(\mu^2) + J_g(\mu^2) \ ,$$ where the second and third terms are quark orbital and gluon contributions, respectively. I introduce the sum of the first and second terms as $J_q(\mu^2)$, representing the total quark contribution. It is interesting to notice that although $\Delta \Sigma(\mu^2)$ is affected by the axial anomaly, $J_q(\mu^2)$ is anomaly-free [@ji1].
The evolution equation for the quark and gluon contributions is, $${\partial \over \partial \ln \mu^2}
\left(\begin{array}{c}
J_q(\mu^2) \\
J_g(\mu^2)
\end{array} \right)
= {\alpha_s(\mu^2)\over 2\pi}
{1\over 9}\left( \begin{array}{rr}
-16 & 3n_F \\
16 & -3n_F \\
\end{array} \right)
\left( \begin{array}{c}
J_q(\mu^2) \\
J_g(\mu^2)
\end{array} \right) \ .$$ As $\mu^2\rightarrow \infty$, there is a fixed point solution, $$\begin{aligned}
J_q(\infty) &=& {1\over 2} {3n_f\over 16 + 3n_f} \ , \nonumber \\
J_g(\infty) &=& {1\over 2} {16\over 16 + 3n_f} \ . \end{aligned}$$ Thus we see that about half of the nucleon spin is carried by gluons asymptotically. A similar result was obtained by Gross and Wilczek in 1974 for the quark and gluon contributions to the momentum of the nucleon [@gw]. Experimentally, one finds that about half of the nucleon momentum is carried by gluons already at quite low energy scales. An interesting question is whether the gluons also carry half of the nucleon spin at low energy scales.
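For concreteness, the fixed point can be evaluated directly; a minimal sketch, with $n_f = 3$ light flavors taken as the illustrative case:

```python
def asymptotic_spin_fractions(n_f):
    # Fixed point of the two-channel evolution equation above:
    # J_q(inf) = (1/2) 3 n_f / (16 + 3 n_f),  J_g(inf) = (1/2) 16 / (16 + 3 n_f).
    denom = 16.0 + 3.0 * n_f
    return 0.5 * 3.0 * n_f / denom, 0.5 * 16.0 / denom

j_q, j_g = asymptotic_spin_fractions(3)
print(j_q, j_g)                        # 0.18 0.32 for three flavors
assert abs(j_q + j_g - 0.5) < 1e-12    # the two pieces saturate the total spin
```

The two fractions sum to $1/2$ by construction, since the quark helicity and orbital pieces are absorbed into $J_q$.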
It is difficult to answer this question theoretically, because QCD is difficult to solve. Recently, Balitsky and I made an estimate using the QCD sum rule approach [@jibalitsky]. We find $$J_g(\mu^2\sim 1~{\rm GeV}^2) \simeq {4\over 9} {e\langle\bar u\sigma Gu\rangle
\langle\bar uu\rangle \over M_{1^{-+}}^2\lambda_N^2}$$ which gives approximately 0.25. If this calculation indicates anything about the truth, the spin structure of the nucleon roughly looks like this: $${1\over 2} = 0.10({\rm from~} {1\over2}\Delta \Sigma)
+ 0.15({\rm from~} L_q) + 0.25({\rm from~} J_g) \ .$$ It would be interesting to test this scenario.
How to measure $J_{q,g}$?
=========================
By examining carefully the definition of the matrix elements, $$J_{q,g}(\mu^2) = \langle p{1\over 2} \left|
\int d^3x (\vec{x}\times \vec{T}_{q,g})^z
\right|p{1\over 2}\rangle \ ,
\label{matrix}$$ one realizes that they can be extracted from the form factors of the quark and gluon parts of the QCD energy-momentum tensor $T^{\mu\nu}_{q,g}$. Using Lorentz symmetry, we can write down the forward matrix elements of $T^{\mu\nu}_{q,g}$, $$\begin{aligned}
\langle p'| T_{q,g}^{\mu\nu} |p\rangle
&=& \bar u(p') \Big[A_{q,g}(\Delta^2)
\gamma^{(\mu} \bar P^{\nu)} +
B_{q,g}(\Delta^2) \bar P^{(\mu} i\sigma^{\nu)\alpha}\Delta_\alpha/2M \nonumber \\
&& + C_{q,g}(\Delta^2)(\Delta^\mu \Delta^\nu - g^{\mu\nu}\Delta^2)/M
+ \bar C_{q,g}(\Delta^2) g^{\mu\nu}M\Big] u(p)\ , \end{aligned}$$ where $\bar P^\mu=(p^\mu+{p^\mu}')/2$, $\Delta^\mu
= {p^\mu}'-p^\mu$, and $u(p)$ is the nucleon spinor. Taking the forward limit in the $\mu=0$ component and integrating over 3-space, one finds that $A_{q,g}(0)$ give the momentum fractions of the nucleon carried by quarks and gluons ($A_q(0)+A_g(0)= 1$). On the other hand, substituting the above into the nucleon matrix element of Eq. (\[matrix\]), one finds [@ji2], $$\begin{aligned}
J_{q, g} = {1\over 2} \left[A_{q,g}(0) + B_{q,g}(0)\right] \ . \end{aligned}$$ There is an analogy for this. If one knows the Dirac and Pauli form factors of the electromagnetic current, $F_1(Q^2)$ and $F_2(Q^2)$, then the magnetic moment of the nucleon, which is defined as the matrix element of $(1/2)\int d^3x\,(\vec{x} \times \vec{j})^z$, is just $F_1(0)+F_2(0)$.
How to measure the form factors of the energy momentum tensor? If one has two vector currents which are separated along the light-cone, it is known from the operator product expansion that, $$TJ_\alpha(z)J_\beta(0) \rightarrow ... +
C_{\alpha\beta\mu\nu}(z^2) T^{\mu\nu} + ....$$ Thus to get the matrix element $\langle p'|T^{\mu\nu}|p\rangle$, we need $\langle p'|TJ_\alpha(z)J_\beta(0)|p\rangle$, i.e. a Compton scattering amplitude. To ensure the separation of the two currents is along the light-cone, we let one of the photon momenta approach the Bjorken limit. Then it is easy to show that the Compton scattering is dominated by the single quark process. I shall call such a scattering process deeply-virtual Compton scattering (DVCS).
What does one learn from DVCS? An analysis shows that one learns about the off-forward parton distributions (OFPDs), which are defined through the following light-cone correlations, $$\begin{aligned}
\int {d\lambda \over 2\pi} e^{i\lambda x}
\langle p'|\bar\psi(-{\lambda n/ 2})\gamma^\mu
\psi(\lambda n/2)|p \rangle
&=& H(x,\Delta^2, \xi) \bar u(p')\gamma^\mu u(p) \nonumber \\
&& + E(x,\Delta^2, \xi) \bar u(p'){i\sigma^{\mu\nu}
\Delta_{\nu}
\over 2M}u(p) + ... \nonumber \\
\int {d\lambda \over 2\pi} e^{i\lambda x}
\langle p'|\bar\psi(-{\lambda n/ 2})\gamma^\mu\gamma_5
\psi(\lambda n/2)|p \rangle
& =& \tilde H(x,\Delta^2, \xi)
\bar u(p')\gamma^\mu \gamma_5 u(p) \nonumber \\
&& + \tilde E(x, \Delta^2, \xi) \bar u(p')
{\gamma_5\Delta^\mu
\over 2M}u(p)
+ ...\end{aligned}$$ where I have neglected the gauge link and the dots denote higher-twist distributions. From the definition, $H$ and $\tilde H$ are nucleon helicity-conserving amplitudes and $E$ and $\tilde E$ are helicity-flipping. Such distributions have been considered in the literature before [@other].
The off-forward parton distributions have the character of both ordinary parton distributions and nucleon form factors. In fact, in the limit $\Delta^\mu \rightarrow 0$, we have $$H(x,0,0) = q(x)\ ,~~~ \tilde H(x,0,0) = \Delta q (x) \ ,$$ where $q(x)$ and $\Delta q(x)$ are the quark and quark helicity distributions. On the other hand, forming the first moment of the new distributions, one gets the following sum rules [@ji2; @other], $$\begin{aligned}
\int^1_{-1} dx H(x,\Delta^2, \xi) &=& F_1(\Delta^2) \ , \nonumber \\
\int^1_{-1} dx E(x,\Delta^2, \xi) &=& F_2(\Delta^2) \ . \end{aligned}$$ where $F_1$ and $F_2$ are the Dirac and Pauli form factors. The most interesting sum rule relevant to the nucleon spin is, $$\begin{aligned}
\int^1_{-1} dx x [H(x, \Delta^2, \xi) +
E(x, \Delta^2, \xi) ]
= A_q(\Delta^2) + B_q(\Delta^2) \ , \end{aligned}$$ where luckily the $\xi$ dependence, or $C_q(\Delta^2)$ contamination, drops out. Extrapolating the sum rule to $\Delta^2=0$, the total quark (and hence quark orbital) contribution to the nucleon spin is obtained. By forming still higher moments, one gets form factors of various high-spin operators.
There are many theoretical and experimental questions about DVCS. Theoretical questions include: is there a factorization theorem for DVCS? is there an Altarelli-Parisi equation for the evolution of the OFPDs? what is the small-$x$ and small-$\xi$ behavior? how does one extrapolate the form factors to $\Delta^2=0$? Experiment-related questions include: how big is the cross section? will the Bethe-Heitler process overshadow DVCS? what kinematic region corresponds to DVCS? does one need polarization of the beam? of the target? how practical is it to form the sum rules? Some of these questions have been answered in recent papers [@ji3; @ra]. Others remain open.
I thank the organizers of this meeting for the opportunity to discuss this interesting subject and for the remarkable effort to arrange a visa in two days, which even surprised the Dutch Embassy in Washington DC! I thank Wally Melnitchouk for a careful reading of this write-up.
See the talks by J. Ellis, S. Forte, and P. Ratcliffe in these proceedings.
R. L. Jaffe and A. Manohar, Nucl. Phys. B337 (1990) 509.
X. Ji, J. Tang, and P. Hoodbhoy, Phys. Rev. Lett. 76 (1996) 740.
P. G. Ratcliffe, Phys. Lett. B 192 (1987) 180.
G. Altarelli and G. Parisi, Nucl. Phys. B126 (1977) 278.
X. Ji, hep-ph/9603249, MIT-CTP-2517, March 1996.
D. Gross and F. Wilczek, Phys. Rev. D9 (1974) 980.
I. Balitsky and X. Ji, to be published.
F. M. Dittes, D. Müller, D. Robaschik, B. Geyer, and J. Horejsi, Phys. Lett. B 209 (1988) 325; Fortschr. Phys. 42 (1994) 101; P. Jain and J. P. Ralston, in the proceedings of the workshop on Future Directions in Particle and Nuclear Physics at Multi-GeV Hadron Beam Facilities, BNL, March 1993.
X. Ji, hep-ph/9609381, U. of MD PP\#97-026, MIT-CTP-2568, 1996.
A. V. Radyushkin, Phys. Lett. B380 (1996) 417; also hep-ph/9605431, CEBAF-TH-96-06, May 1996.
[^1]: Plenary talk given at the 12th International Symposium on High-Energy Spin Physics, Amsterdam, Sept. 1996. This work is supported in part by funds provided by the U.S. Department of Energy (D.O.E.) under cooperative agreement DOE-FG02-93ER-40762.
---
abstract: 'Solid state materials hosting pseudospin-1 quasiparticles have attracted a great deal of recent attention. In these materials, the energy band consists of a pair of Dirac cones and a flat band through the connecting point of the cones. Because it “cages” carriers with zero group velocity, the flat band itself has zero conductivity. However, in a non-equilibrium situation where a constant electric field is suddenly switched on, the flat band can enhance the resulting current in both the linear and nonlinear response regimes through distinct physical mechanisms. Using the ($2+1$) dimensional pseudospin-$1$ Dirac-Weyl system as a concrete setting, we demonstrate that, in the weak field regime, the interband current is about twice as large as that for the pseudospin-1/2 system, due to the interplay between the flat band and the negative band, with the scaling behavior determined by the Kubo formula. In the strong field regime, the intraband current is $\sqrt{2}$ times larger than that in the pseudospin-1/2 system, due to the additional contribution from particles residing in the flat band. In this case, the current and field follow the scaling law associated with Landau-Zener tunneling. These results provide a better understanding of the role of the flat band in non-equilibrium transport and are experimentally testable using electronic or photonic systems.'
author:
- 'Cheng-Zhen Wang'
- 'Hong-Ya Xu'
- Liang Huang
- 'Ying-Cheng Lai'
title: 'Non-equilibrium transport in the pseudospin-1 Dirac-Weyl system'
---
Introduction {#sec:intro}
============
Solid state materials, due to the rich variety of their lattice structures and intrinsic symmetries [@bradlyn2016beyond; @beenakker2016bringing], can accommodate quasiparticles that lead to quite unconventional and interesting physical phenomena. The materials and the resulting exotic quasiparticles constitute the so-called “material universe.” Such materials range from graphene that hosts Dirac fermions [@neto2009electronic] to 3D topological insulators [@hasan2010colloquium; @qi2011topological] and 3D Dirac and Weyl semimetals [@xu2015discovery; @lv2015experimental], in which the quasiparticles are relativistic pseudospin-$1/2$ fermions. Recently, Dirac-like pseudospin-1 particles have attracted much attention [@Bercioux2009; @shen2010single; @urban2011barrier; @dora2011lattice; @goldman2011topological; @guzman2014experimental; @Li2014; @Giovannetti2015; @vicencio2015observation; @mukherjee2015observation; @taie2015coherent; @diebel2016conical; @paavilainen2016coexisting; @zhu2016blue; @fang2016klein; @Malcolm2016; @xu2016; @Tsuch2016; @XL2017; @Fang2017], which are associated with a unique type of energy band structure: a pair of Dirac cones with a flat band through the conical connecting point. Materials that can host pseudospin-1 particles include particularly engineered photonic crystals [@fang2016klein; @guzman2014experimental; @vicencio2015observation; @mukherjee2015observation; @diebel2016conical], optical dice or Lieb lattices with loaded ultracold atoms [@Bercioux2009; @shen2010single; @urban2011barrier; @goldman2011topological; @Raoux2014], and certain electronic materials [@Li2014; @Giovannetti2015; @paavilainen2016coexisting; @zhu2016blue]. 
In contrast to the Dirac cone system with massless pseudospin-$1/2$ particles that exhibit conventional relativistic quantum phenomena, in pseudospin-$1$ systems an array of quite unusual physical phenomena can arise, such as super-Klein tunneling associated with one-dimensional barrier transmission [@shen2010single; @dora2011lattice; @fang2016klein], diffraction-free wave propagation and novel conical diffraction [@guzman2014experimental; @mukherjee2015observation; @vicencio2015observation; @diebel2016conical], unconventional Anderson localization [@chalker2010anderson; @bodyfelt2014flatbands; @Fang2017], flat-band ferromagnetism [@taie2015coherent], unconventional Landau-Zener Bloch oscillations [@KF2016], and peculiar topological phases under external gauge fields or spin-orbit coupling [@goldman2011topological; @wang2011nearly; @aoki1996hofstadter; @weeks2010topological]. The aim of this paper is to present the phenomenon of enhanced non-equilibrium quantum transport of pseudospin-1 particles.
Quantum transport beyond the linear response and equilibrium regime is of great practical importance, especially in device research and development. There have been studies of nonlinear and non-equilibrium transport of relativistic pseudospin-$1/2$ particles in Dirac and Weyl materials. For example, when graphene is subject to a constant electric field, the dynamical evolution of the current after the field is turned on exhibits a remarkable minimal conductivity behavior [@lewkowicz2009dynamics]. The scaling behavior of nonlinear electric transport in graphene due to dynamical Landau-Zener tunneling or the Schwinger pair creation mechanism has also been investigated [@rosenstein2010ballistic; @dora2010nonlinear]. Under a strong electric field, due to the Landau-Zener transition, a topological insulator or graphene can exhibit a quantization breakdown of the spin Hall conductivity [@dora2011dynamics]. More recently, non-equilibrium electric transport beyond the linear response regime in 3D Weyl semimetals has been studied [@vajna2015nonequilibrium]. In these works, the quasiparticles are relativistic pseudospin-1/2 fermions arising from a Dirac or Weyl system with a conical dispersion in the energy-momentum spectrum. In this paper, we study the transport dynamics of pseudospin-1 quasiparticles that arise in material systems with a pair of Dirac cones and a flat band through their connecting point. Under the equilibrium condition and in the absence of disorder, the flat band acts as a perfect “cage” for carriers, with zero group velocity, and hence contributes little to the conductivity [@vigh2013diverging; @hausler2015flat; @louvet2015origin]. However, as we will show, the flat band can have a significant effect on the non-equilibrium transport dynamics.
Through numerical and analytic calculation of the current evolution for both weak and strong electric fields, we find a general phenomenon of current enhancement compared with non-equilibrium transport of pseudospin-1/2 particles. In particular, for a weak field, the interband current is twice as large as that for the pseudospin-1/2 system due to the interference between particles from the flat band and from the negative band; the scaling behavior agrees with that determined by the Kubo formula. For a strong field, the intraband current is $\sqrt{2}$ times larger than that in the pseudospin-1/2 system, as a result of the additional contribution from the particles residing in the flat band. In this case, the physical origin of the scaling behavior of the current-field relation can be attributed to Landau-Zener tunneling. Our findings suggest that, in general, the conductivity of pseudospin-1 materials can be higher than that of pseudospin-$1/2$ materials in the non-equilibrium transport regime.
Pseudospin-1 Hamiltonian and current {#sec:Hamiltonian}
====================================
We consider a system of 2D noninteracting, Dirac-like pseudospin-1 particles subject to a uniform, constant electric field applied in the $x$ direction. The system is described by the generalized Dirac-Weyl Hamiltonian [@xu2016; @urban2011barrier]. The electric field, switched on at $t=0$, can be incorporated into the Hamiltonian through a time-dependent vector potential [@lewkowicz2009dynamics; @rosenstein2010ballistic; @dora2010nonlinear; @dora2011dynamics; @vajna2015nonequilibrium; @cohen2008schwinger; @ishikawa2010nonlinear; @lee2014nonlinear]: $\boldsymbol{A}(t)=[A(t), 0, 0]$, where $A(t)=-Et\Theta (t)$. The resulting Hamiltonian is $$\label{eq:Hamiltonian}
H=v_F \{S_x [p_x - qA(t)] + S_y p_y\},$$ where $v_F$ is the Fermi velocity of the pseudospin-1 particle from the Dirac cones, $q=-e$ $(e>0)$ is the electronic charge, $\boldsymbol{S}=(S_x, S_y, S_z)$ is a vector of matrices with components $$S_x=\frac{1}{\sqrt{2}}
\begin{bmatrix}
0 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 0
\end{bmatrix},
S_y=\frac{1}{\sqrt{2}}
\begin{bmatrix}
0 & -i & 0 \\
i & 0 & -i \\
0 & i & 0
\end{bmatrix},$$ $$S_z=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & -1
\end{bmatrix}.$$ The three matrices form a complete spin-1 representation of the angular momentum algebra, satisfying the commutation relations $[S_l, S_m]=i\epsilon_{lmn}S_n$ with the three eigenvalues $s=\pm1,0$, where $\epsilon_{lmn}$ is the Levi-Civita symbol. However, unlike the Pauli matrices for spin-1/2 particles, they do not satisfy a Clifford algebra. The corresponding time-dependent wave equation is $$\label{eq:wave_equation}
i\hbar \partial_t \Psi_{p}(t) = H\Psi_{p}(t).$$ Under the unitary transformation $$U=
\begin{bmatrix}
\frac{1}{2}e^{-i\theta} & -\frac{1}{\sqrt{2}}e^{-i\theta} & \frac{1}{2}e^{{-i\theta}} \\
\frac{\sqrt{2}}{2} & 0 & -\frac{\sqrt{2}}{2} \\
\frac{1}{2}e^{i\theta} & \frac{1}{\sqrt{2}}e^{i\theta} & \frac{1}{2}e^{{i\theta}}
\end{bmatrix}$$ with $\tan\theta = p_y/[p_x-qA(t)]$, we can rewrite Eq. (\[eq:wave\_equation\]) in the basis of adiabatic energy as $$\begin{aligned}
\label{eq:Diracoriginal}
i\hbar \partial_t \Phi_p(t) = &\big[S_z \epsilon_p(t)
+ S_x \sqrt{2}C_0(t)\big]\Phi_p(t),\end{aligned}$$ where $\Phi_p(t)=U^{\dagger}\Psi_p(t)=[\alpha_p(t),\gamma_p(t),\beta_p(t)]^T$, $C_0(t)=\hbar v_F^{2}p_y eE/[\sqrt{2}\epsilon_p^2(t)]$, and $\epsilon_p = v_F \sqrt{(p_x - eEt)^2 + p_y^2}$. Initially at $t = 0$, the negative band is assumed to be fully filled: $\Phi_p(t=0) = [0, 0, 1]^T$. From the equation of motion, we obtain the current operator in the original basis as $J_x=-e\,\partial H/\partial p_x=-ev_F S_x$. In the transformed adiabatic energy basis, the current operator is $$\label{eq:J_x}
J_x=-ev_F(S_z\cos\theta - S_y \sin\theta).$$ We thus have the current density for a certain state as $$\begin{aligned}
\label{eq:current1}
\langle J_x \rangle _p(t)&=-ev_F\big\{\cos\theta[|\alpha_p(t)|^2 - |\beta_p(t)|^2] \nonumber \\
&- \sqrt{2}\sin\theta \mbox{Re}[i\alpha_p(t)\gamma_p^{*}(t)+i\gamma_p(t)\beta_p^{*}(t)]
\big\}.\end{aligned}$$ In Eq. (\[eq:current1\]), the first term is related to the particle number distribution associated with the positive and negative bands, which is the intraband or conduction current. The second term in Eq. (\[eq:current1\]) characterizes the interference between particles from distinct bands, which is related to the phenomenon of relativistic Zitterbewegung and can be appropriately called the interband or polarization current.
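The transformation of the current operator into the adiabatic basis can be verified numerically. The following sketch (a consistency check, not part of the derivation) builds the pseudospin-1 matrices and the transformation $U$ as defined above and confirms that $U^{\dagger}S_xU = S_z\cos\theta - S_y\sin\theta$:

```python
import numpy as np

# Pseudospin-1 matrices S_x, S_y, S_z as defined in the text
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

def U_matrix(theta):
    """Unitary transformation to the adiabatic energy basis (text Eq. for U)."""
    em, ep = np.exp(-1j * theta), np.exp(1j * theta)
    s2 = np.sqrt(2)
    return np.array([[em / 2, -em / s2, em / 2],
                     [s2 / 2, 0, -s2 / 2],
                     [ep / 2, ep / s2, ep / 2]])

theta = 0.73  # arbitrary test angle
U = U_matrix(theta)

# the S matrices obey the angular-momentum algebra [S_x, S_y] = i S_z
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz)
# U is unitary
assert np.allclose(U.conj().T @ U, np.eye(3))
# current operator in the adiabatic basis: U^dag S_x U = S_z cos(theta) - S_y sin(theta)
assert np.allclose(U.conj().T @ Sx @ U, Sz * np.cos(theta) - Sy * np.sin(theta))
```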
To assess the contribution of a band (i.e., positive, flat, or negative) to the interband current, we seek to simplify the current expression. Through some algebraic substitutions, we get $$\begin{aligned}
\partial_t |\alpha_p(t)|^2 = 2\mbox{Re} [\alpha_p(t) \partial_t \alpha_p^*(t)], \nonumber\\
\partial_t |\gamma_p(t)|^2 = 2\mbox{Re} [\gamma_p(t) \partial_t \gamma^*_p(t)]. \nonumber\end{aligned}$$ From the Dirac equation (\[eq:Diracoriginal\]), we have $$\begin{aligned}
\hbar \alpha_p(t) \partial_t \alpha^*_p(t) = i\epsilon_p \alpha_p(t) \alpha^*_p(t) + iC_0 \alpha_p(t) \gamma^*_p(t), \nonumber\\
\hbar \gamma_p(t) \partial_t \gamma^*_p(t) = iC_0 \gamma_p(t) \alpha^*_p(t) + iC_0 \gamma_p(t) \beta^*_p(t), \nonumber\end{aligned}$$ which gives $$\begin{aligned}
\nonumber
\mbox{Re} [i\alpha_p(t) \gamma^*_p(t)] & = & \frac{\hbar}{2C_0}\partial_t |\alpha_p(t)|^2, \nonumber \\
\mbox{Re} [i\gamma_p(t) \beta^*_p(t)] & = & \frac{\hbar}{2C_0}\big[ \partial_t |\alpha_p(t)|^2 + \partial_t |\gamma_p(t)|^2 \big].\end{aligned}$$ Using the total probability conservation $|\alpha_p|^2 + |\gamma_p|^2 + |\beta_p|^2 = 1$, we finally arrive at the following current expression $$\begin{aligned}
\label{eq:current2}
\langle J_x \rangle _p (t)
&= -ev_F \Big\{ \frac{v_F (p_x - eEt)}{\epsilon_p(t)}\big[2|\alpha_p(t)|^2 + |\gamma_p(t)|^2 - 1\big] \nonumber \\
& - \frac{\epsilon_p(t)}{v_F e E}\big(2\partial_t |\alpha_p|^2 + \partial_t |\gamma_p|^2\big) \Big\},\end{aligned}$$ where the distribution-independent term ($-1$) inside the square brackets of the first part vanishes upon integration over the momentum space.
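The per-state dynamics governed by Eq. (\[eq:Diracoriginal\]) can be integrated directly. Below is a minimal numerical sketch in dimensionless units (effectively setting $\hbar=v_F=e=1$); the fourth-order Runge-Kutta scheme, step count, and momentum values are illustrative choices, not those used for the figures:

```python
import numpy as np

Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

def rhs(t, phi, px, py, E):
    """i dPhi/dt = [S_z eps(t) + sqrt(2) C0(t) S_x] Phi, dimensionless units."""
    eps = np.hypot(px - E * t, py)
    C0 = E * py / (np.sqrt(2) * eps ** 2)
    H = Sz * eps + np.sqrt(2) * C0 * Sx
    return -1j * (H @ phi)

def evolve(px, py, E, T, n=4000):
    """RK4 integration starting from the filled negative band, Phi(0) = (0, 0, 1)."""
    phi = np.array([0, 0, 1], dtype=complex)
    h, t = T / n, 0.0
    for _ in range(n):
        k1 = rhs(t, phi, px, py, E)
        k2 = rhs(t + h / 2, phi + h / 2 * k1, px, py, E)
        k3 = rhs(t + h / 2, phi + h / 2 * k2, px, py, E)
        k4 = rhs(t + h, phi + h * k3, px, py, E)
        phi = phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return phi

phi = evolve(px=0.5, py=0.3, E=0.0004, T=2.0)
# total probability |alpha|^2 + |gamma|^2 + |beta|^2 is conserved
assert abs(np.vdot(phi, phi).real - 1.0) < 1e-8
```

The full current is then obtained by evaluating Eq. (\[eq:current2\]) for each state and summing over a momentum grid.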
For convenience, in our numerical calculations we use dimensionless quantities, which we obtain by introducing the scale $\Delta$, the characteristic energy of the system. The dimensionless time, electric field, momentum, energy, and coefficient are $$\begin{aligned}
\nonumber
\tilde{t} & = & \Delta t/\hbar, \\ \nonumber
\tilde{E} & = & ev_F\hbar E/\Delta^2, \\ \nonumber
\tilde{p} & = & v_Fp/\Delta, \\ \nonumber
\tilde{\epsilon} & = & \sqrt{(\tilde{p}_x - \tilde{E}\tilde{t})^2
+ \tilde{p}_y^2}, \\ \nonumber
\tilde{C}_0 & = & \tilde{E}\tilde{p}_y/\big\{\sqrt{2}\big[(\tilde{p}_x -
\tilde{E}\tilde{t})^2 + \tilde{p}_y^2\big]\big\},\end{aligned}$$ respectively. The dimensionless current $\tilde{J}$ is expressed in units of $e\Delta^2/(v_F \hbar^2 \pi^2)$.
Weak field regime: enhancement of interband current {#sec:weak_field}
===================================================
![ [**Interband current in pseudospin-1 and pseudospin-1/2 systems**]{}. (a) Evolution of the ratio of the total current to the electric field, $\tilde{J}/\tilde{E}$, with time $\tilde{t}$ for the pseudospin-1 and pseudospin-1/2 systems for a fixed electric field $\tilde{E}=0.0004$, where the dashed lines denote the theoretical values $\pi^2/2$ and $\pi^2/4$ for the pseudospin-1 and pseudospin-1/2 systems, respectively. The yellow and green lines represent the respective numerical results. (b) The total current $\tilde{J}$ versus the electric field $\tilde{E}$ at time $\tilde{t}=2$ for the two systems. Compared with the pseudospin-1/2 system, the interband current in the pseudospin-1 system is greatly enhanced.[]{data-label="fig:Interband_J_tE"}](figure1.pdf){width="\linewidth"}
In the weak field regime, the intraband current is negligible as compared to the interband current owing to the small number of conducting particles [@rosenstein2010ballistic; @dora2010nonlinear] (see Appendix B for an explanation and representative results). In particular, the interband current for a given state can be expressed as $$J_p^{inter} = \frac{\epsilon_p(t)}{E}[2\partial_t |\alpha_p|^2 +
\partial_t |\gamma_p|^2].$$ For pseudospin-1/2 particles, the interband current has only the first term [@dora2010nonlinear]. The additional term $[\epsilon_p(t)/E]\partial_t |\gamma_p|^2$ is unique for pseudospin-1 particles. To reveal the scaling behavior of the interband current and to assess the role of the positive and the flat bands in the current, we impose the weak field approximation: $|p|=\sqrt{p_x^2 + p_y^2} \gg eEt $ everywhere except in the close vicinity of the Dirac point, which allows us to obtain an analytic expression for the interband current. Under the approximation, the coefficients $\epsilon_p$ and $C_0$ become $\epsilon_p\approx v_F p$ and $C_0\approx \hbar p_y e E/(\sqrt{2}p^2)$, which are time independent. Substituting these approximations into Eq. (\[eq:Diracoriginal\]), we obtain the three components of the time dependent state $\Phi_p(t)$ as $$\begin{aligned}
&\alpha_p(t) = \frac{1}{2} [\cos \omega t + m_0^2(\cos \omega t - 1) - 1], \\
&\beta_p(t) = \frac{1}{2} [\cos \omega t - 2m_0 \sin\omega t - m_0^2 (\cos\omega t - 1) + 1], \\
&\gamma_p(t) = \frac{1 + m_0^2}{2C_0} [-i\hbar \omega \sin \omega t - \epsilon_p(\cos \omega t - 1)].\end{aligned}$$ The interband current contains two parts: $$\label{eq:interpositive}
J_p^{\alpha} = 2\frac{\epsilon_p C_0^4 \omega}{E(\epsilon_p^2 + 2C_0^2)^2}(2\sin \omega t - \sin 2\omega t),$$ and $$\label{eq:interflat}
J_p^{\gamma} = 2\frac{\epsilon_p C_0^2 \omega}{E(\epsilon_p^2 + 2C_0^2)^2}(\epsilon_p^2 \sin \omega t + C_0^2 \sin 2\omega t),$$ which correspond to contributions from the positive and the flat bands, respectively, where $\omega = \sqrt{\epsilon_p^2 + 2C_0^2}/\hbar$. For sufficiently weak field such that the off diagonal term is small compared with the diagonal term in Eq. (\[eq:Diracoriginal\]), we have $\epsilon_p^2 \gg 2C_0^2$, i.e., $$v_F^2 p^2 \gg \frac{p_y^2}{p^2}\frac{\hbar^2 e^2 E^2}{p^2}.$$ In this case, the contribution from the positive band is nearly zero and the flat band contribution is $$\begin{aligned}
\label{eq:interflat_approx}
J_p^{\gamma} \approx 2\frac{\epsilon_p^3 C_0^2 \omega}{E(\epsilon_p^2 + 2C_0^2)^2}\sin{(\omega t)} \approx e^2 \hbar E \frac{\sin^2 \theta}{p^2}\sin{(\frac{v_Fpt}{\hbar})}.\end{aligned}$$ The total positive band contribution over the momentum space is negligibly small, so the flat band contributes dominantly to the total interband current: $$\begin{aligned}
\label{eq:intercurrent}
J_{inter} & = \frac{1}{\pi^2 \hbar^2}\iint e^2 \hbar E \frac{\sin^2 \theta}{p}\sin{(\frac{v_F pt}{\hbar})} d\theta dp \nonumber \\
& = \frac{e^2}{2\hbar}E = \frac{e\Delta^2}{v_F \hbar^2 \pi^2} \cdot \frac{\pi^2}{2}\tilde{E}.\end{aligned}$$ The dimensionless current is given by $$\begin{aligned}
\tilde{J} = \frac{\pi^2}{2}\tilde{E}.\end{aligned}$$
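The closed-form result above can be checked by evaluating the momentum integral in Eq. (\[eq:intercurrent\]) numerically. A sketch (assuming SciPy is available) in dimensionless units, using the fact that the angular and radial integrals factorize into $\int_0^{2\pi}\sin^2\theta\, d\theta = \pi$ and the sine integral $\mathrm{Si}(\infty)=\pi/2$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

E_t, t_t = 0.0004, 2.0  # dimensionless field and time (values used in Fig. 1)

# angular factor: int_0^{2 pi} sin^2(theta) dtheta = pi
ang, _ = quad(lambda th: np.sin(th) ** 2, 0.0, 2.0 * np.pi)

# radial factor: int_0^inf sin(p t)/p dp = Si(inf) = pi/2, independent of t
rad = sici(1000.0 * t_t)[0]  # sine integral Si evaluated at a large cutoff

J_t = E_t * ang * rad  # dimensionless interband current
# recovers J = (pi^2 / 2) * E in dimensionless units
assert abs(J_t / E_t - np.pi ** 2 / 2) < 0.01
```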
![ [**Origin of interband current in the pseudospin-1 system.**]{} (a) Ratio between interband currents from the pseudospin-1 and pseudospin-1/2 systems as a function of time for electric field strength $\tilde{E}=0.0004$, (b) current ratio versus $\tilde{E}$ for fixed time $\tilde{t}=2$. The black dashed lines are theoretical results, and the red and blue lines are for flat and positive bands, respectively. These results indicate that, for the pseudospin-1 system, the flat band is the sole contributor to the interband current.[]{data-label="fig:Interband_flat"}](figure2.pdf){width="\linewidth"}
To verify the analytic prediction Eq. (\[eq:intercurrent\]), we calculate the interband current by numerically solving the time dependent Dirac equation (\[eq:Diracoriginal\]). For comparison, we also calculate the current for the pseudospin-1/2 system both numerically and analytically. The results are shown in Fig. \[fig:Interband\_J\_tE\]. For the numerical results in Fig. \[fig:Interband\_J\_tE\](a), the momentum space is defined as $\tilde{p}_x \in [-8, 8]$ and $\tilde{p}_y \in [-8, 8]$ and the integration grid has the spacing $0.0002$. In Fig. \[fig:Interband\_J\_tE\](b), we use the same momentum space grid for $\tilde{E}=0.0001,0.0002,0.0004$, but for $\tilde{E}= 0.0008, 0.0016, 0.0032$ the ranges of the momentum space are doubled. From Fig. \[fig:Interband\_J\_tE\](a), we see that the interband current for both the pseudospin-1 and pseudospin-1/2 cases is independent of time. That is, after a short transient, the interband current approaches a constant. From Fig. \[fig:Interband\_J\_tE\](b), we see that the current is proportional to the electric field $E$ for both pseudospin-1 and pseudospin-1/2 particles (with unity slope on a double logarithmic scale), but the proportionality constant is larger in the pseudospin-1 case. While in the weak field regime the scaling relation between the interband current and the electric field is the same for pseudospin-1 and pseudospin-1/2 particles, there is a striking difference in the current magnitude. In particular, the interband current for the pseudospin-1 system is about twice that for the pseudospin-1/2 counterpart, as revealed by both the theoretical approximation Eq. (\[eq:intercurrent\]) and the numerical result \[corresponding to the dashed and solid lines in Fig. \[fig:Interband\_J\_tE\](a), respectively\]. The interband current in the pseudospin-1 system is thus greatly enhanced as compared with that in the pseudospin-1/2 system.
![ [**Interband current distribution in the momentum space**]{}: (a) pseudospin-1 and (b) pseudospin-1/2 systems. The time and electric field strength are $\tilde{t}=2$ and $\tilde{E}=0.0128$ respectively.[]{data-label="fig:Interband_dist1"}](figure3.pdf){width="\linewidth"}
Intuitively, the phenomenon of current enhancement can be attributed to the extra flat band in the pseudospin-1 system: while the band itself does not carry any current, it can contribute to the interband current. Indeed, the theoretical results in Eqs. (\[eq:interpositive\]) and (\[eq:interflat\]) indicate that the flat band contributes to the total interband current, while the positive band contributes little to the current. To gain physical insights, we numerically calculate three currents: the positive and flat band currents from the pseudospin-1 system, and the current from the pseudospin-1/2 system. Figure \[fig:Interband\_flat\] shows that the ratio of the flat band current to the pseudospin-1/2 current is two, while the ratio between the positive band and pseudospin-1/2 currents is nearly zero, indicating that in the pseudospin-1 system, almost all the interband current originates from the flat band.
To better understand the phenomenon of interband current enhancement in the pseudospin-1 system, we calculate the current distribution for both pseudospin-1 and pseudospin-1/2 systems in the momentum space, as shown in Fig. \[fig:Interband\_dist1\]. We see that the area in the momentum space with significant current is larger for the pseudospin-1 case, although the current magnitude is almost the same near the Dirac point for both systems. This indicates that the flat band can contribute substantially more current because the Landau-Zener transition “gap” $p_y$ for the pseudospin-1 system is small compared to that for the pseudospin-1/2 system. Mathematically, corresponding to the single-state current expression (\[eq:interflat\_approx\]) for the pseudospin-1 system, the single-state contribution to the current for the pseudospin-1/2 system is $$J_p^{half} \approx \frac{e^2 \hbar E}{2} \frac{\sin^2 \theta}{p^2}\sin{(\frac{2v_F pt}{\hbar})}.$$ Integrating the current over the entire momentum space gives the factor of 2 enhancement for the pseudospin-1 system as compared with the pseudospin-1/2 system. This implies that quantum interference occurs mainly between particles from the negative and flat bands due to the small gap between them.
![ [**Enhancement of intraband current in the strong electric field regime**]{}. Intraband current and contributions from distinct bands (a) versus time for $\tilde{E}=0.8192$, where the black dashed lines represent the analytical values $2(\sqrt{2} -1)$, $2$, and $2\sqrt{2}$ (from bottom to top), and (b) versus electric field at time $\tilde{t}=10$ (for the electric field values $\tilde{E} = 0.2048, 0.4096, 0.8192, 1.6384,
3.2768$).[]{data-label="fig:Intraband_J_tE"}](figure4.pdf){width="\linewidth"}
Strong field regime: enhancement of intraband current {#sec:strong_field}
=====================================================
In the strong field regime, the intraband current \[the first term in Eq. (\[eq:current2\])\] dominates (see Appendix B). The transition probabilities for the positive, flat and negative bands are given, respectively, by [@carroll1986generalisation] $$\begin{aligned}
n_p^{+} &= \Theta (p_x) \Theta (eEt - p_x) \exp (-\frac{\pi v_F p_y^2}{\hbar e E}), \label{eq:positive}\\
n_p^{0} &= \Theta (p_x) \Theta (eEt - p_x) \nonumber \\
&\cdot 2 \Big[ 1 - \exp (-\frac{\pi v_F p_y^2}{2\hbar e E}) \Big] \Big[ \exp (-\frac{\pi v_F p_y^2}{2\hbar e E}) \Big], \label{eq:flaten}\\
n_p^{-} &= \Theta (p_x) \Theta (eEt - p_x)\Big[ 1 - \exp (-\frac{\pi v_F p_y^2}{2\hbar e E}) \Big]^2, \label{eq:negative}\end{aligned}$$ subject to the momentum constraint: $(p_x, eEt - p_x) \gg |p_y|$. The transition probabilities are essentially the pair production or transition probabilities in the generalized three-level Landau-Zener model. Substituting Eqs. (\[eq:positive\]) and (\[eq:negative\]) into Eq. (\[eq:current1\]) \[or equivalently Eq. (\[eq:current2\])\] and integrating its first term over the momentum space, we obtain the positive-band contribution to the intraband current with conducting electrons (or partially filled electrons) populated from the filled bands $$\begin{aligned}
J^{+}&= \frac{ev_F}{\hbar^2 \pi^2}\iint\frac{v_F(eEt -p_x)}{\epsilon_p(t)} \cdot |\alpha_p(t)|^2 dp_x dp_y \nonumber \\
&\approx \frac{ev_F}{\hbar^2 \pi^2} \int_{0}^{eEt}dp_x \int_{-p_x}^{p_x} |\alpha_p(t)|^2 dp_y \nonumber \\
&\approx \frac{ev_F}{\hbar^2 \pi^2} \int_{0}^{eEt}dp_x \int_{-\infty}^{+\infty} |\alpha_p(t)|^2dp_y \nonumber \\
&= \frac{e^2}{\hbar \pi^2}\sqrt{\frac{ev_F}{\hbar}} E^{3/2} t \tag{20}\\
& = \frac{e\Delta^2}{v_F \hbar^2 \pi^2} \tilde{E}^{3/2} \tilde{t}. \tag{21} \label{eq:J_intra_positive}\end{aligned}$$ The contribution from the holes left in the initially filled negative band by the electrons driven into the positive and flat bands, i.e., the conducting-hole based intraband current $J^{-}$, is given by $$\begin{aligned}
J^{-} &= (2\sqrt{2} - 1)\frac{e^2}{\hbar \pi^2}\sqrt{\frac{ev_F}{\hbar}} E^{3/2} t \tag{22}\\
& = \frac{e\Delta^2}{v_F \hbar^2 \pi^2} (2\sqrt{2}-1)\tilde{E}^{3/2} \tilde{t}, \tag{23}\end{aligned}$$ which can be written as $$\begin{aligned}
J^{-} = J^{-}_{positive} + J^{-}_{flat}, \tag{24}\end{aligned}$$ where the first term accounts for the contribution by the holes left by electrons finally driven into the positive band only while the second term represents the current contribution associated with the hole concentration induced by the flat band. We have $J^{-}_{positive}=J^{+}$. The flat band induced current results from the hole concentration in the dispersive band, which can be written as $$\begin{aligned}
J^{-}_{flat} &=J^{-} - J^{+} \nonumber\\
& = \frac{e\Delta^2}{v_F \hbar^2 \pi^2}2(\sqrt{2}-1)\tilde{E}^{3/2} \tilde{t}. \tag{25}\end{aligned}$$ Taking into account both the conducting electrons and the corresponding holes, we obtain the following expression for the dispersive positive band based current: $$\begin{aligned}
J_{positive} &= J^{+}+J^{-}_{positive}=2\cdot \frac{e^2}{\hbar \pi^2}\sqrt{\frac{ev_F}{\hbar}} E^{3/2} t \tag{26}\\
& = 2\cdot \frac{e\Delta^2}{v_F \hbar^2 \pi^2} \tilde{E}^{3/2} \tilde{t}. \tag{27}\end{aligned}$$ Note that, for the pseudospin-$1/2$ system, this is the total current in the strong field regime. The total intraband current in the presence of the flat band in the pseudospin-$1$ system is $$\begin{aligned}
J^{intra} &= J^{+} + J^{-} = J_{positive} + J^{-}_{flat} \nonumber \\
&= 2\sqrt{2}\frac{e^2}{\hbar \pi^2}\sqrt{\frac{ev_F}{\hbar}} E^{3/2} t \tag{28}\\
& = \frac{e\Delta^2}{v_F \hbar^2 \pi^2} 2\sqrt{2}\tilde{E}^{3/2} \tilde{t}. \tag{29}
\label{eq:J_intra_total}\end{aligned}$$ Comparing with the pseudospin-$1/2$ case, we see that the current enhancement is due to the enhanced hole concentration as a result of the additional flat band.
The intraband current scales with the electrical field as $E^{3/2}$ and scales linearly with time, which are the same as those for the pseudospin-1/2 system [@dora2010nonlinear]. However, for the pseudospin-1 system, the magnitude of the intraband current is larger: there is an enhancement factor of $\sqrt{2}$ as compared with the pseudospin-1/2 system. Since the positive band contribution is the same as for the pseudospin-1/2 system, the enhancement is due entirely to the flat band contribution.
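The probabilities of Eqs. (\[eq:positive\])-(\[eq:negative\]) and the resulting $\sqrt{2}$ enhancement factor can be checked directly. The sketch below (dimensionless units with $v_F=\hbar=e=1$; the transverse-momentum integrals correspond to the Gaussian integrals behind Eqs. (20)-(29), with the step-function window factored out) verifies probability conservation and the ratio $J^{intra}/J_{positive}=\sqrt{2}$:

```python
import numpy as np
from scipy.integrate import quad

def lz_probabilities(py, E, vF=1.0, hbar=1.0, e=1.0):
    """Three-level Landau-Zener occupations of the positive, flat, and negative
    bands as functions of the transverse momentum p_y (Theta-function window
    0 < p_x < eEt omitted)."""
    g = np.exp(-np.pi * vF * py ** 2 / (2.0 * hbar * e * E))
    return g ** 2, 2.0 * (1.0 - g) * g, (1.0 - g) ** 2

# probability conservation for any transverse momentum
for py in (0.0, 0.3, 1.0, 3.0):
    assert abs(sum(lz_probabilities(py, E=0.8192)) - 1.0) < 1e-12

# electron part (positive band) and hole part (positive + flat) of the current
E = 0.8192
elec = quad(lambda y: lz_probabilities(y, E)[0], -np.inf, np.inf)[0]
hole = quad(lambda y: lz_probabilities(y, E)[0] + lz_probabilities(y, E)[1],
            -np.inf, np.inf)[0]
# total pseudospin-1 intraband current over the pseudospin-1/2 total (2 * elec)
assert abs((elec + hole) / (2.0 * elec) - np.sqrt(2.0)) < 1e-6
```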
![ [**Further evidence of enhancement of intraband current in the pseudospin-1 system**]{}. (a) The ratio of the intraband currents in the pseudospin-1 and pseudospin-1/2 systems versus time $\tilde{t}$ for $\tilde{E} = 0.8192$. (b) The current ratio versus $\tilde{E}$ for $\tilde{t} = 10$.[]{data-label="fig:Intraband_J_ratio_tE"}](figure5.pdf){width="\linewidth"}
We now provide numerical evidence for the predicted phenomenon of intraband current enhancement in the pseudospin-1 system. Figures \[fig:Intraband\_J\_tE\](a) and \[fig:Intraband\_J\_tE\](b) show the intraband current versus time $\tilde{t}$ and the electric field strength $\tilde{E}$, respectively, where the momentum space grid is $\tilde{p}_x \in [-16, 16]$ and $\tilde{p}_y \in [-16, 16]$ with spacing $0.002$ in (a), and the momentum space range is increased according to the increase in the electric field strength in (b). We see that the intraband current scales with $E$ and $t$ as $E^{3/2}t$, the same as for the pseudospin-1/2 system [@dora2010nonlinear; @rosenstein2010ballistic]. There is a good agreement between the numerical results and the theoretical predictions Eqs. ([\[eq:J\_intra\_positive\]]{}-[\[eq:J\_intra\_total\]]{}).
![ [**Numerical evidence of pair creation mechanism for the intraband current**]{}. The ratio of particle number distribution for pseudospin-1 and pseudospin-1/2 systems (a) versus time $\tilde{t}$ for $\tilde{E} = 0.8192$ and (b) versus $\tilde{E}$ for $\tilde{t} = 10$.[]{data-label="fig:Intraband_n_ratio_tE"}](figure6.pdf){width="\linewidth"}
To provide further confirmation of the enhancement of the intraband current, we calculate the ratio between the currents from the pseudospin-1 and pseudospin-1/2 systems versus time for a fixed electric field, as shown in Fig. [\[fig:Intraband\_J\_ratio\_tE\]]{}(a). The ratio versus the electric field for a given time is shown in Fig. [\[fig:Intraband\_J\_ratio\_tE\]]{}(b). We see that, in the long time regime, under a strong electric field the total intraband current for the pseudospin-1 system is about $\sqrt{2}$ times the current of the pseudospin-1/2 system. However, the positive band currents are approximately the same for both systems. The extra current in the pseudospin-1 system, which is about 0.4 times the contribution from the positive band, originates from the flat band. These numerical results agree well with the theoretical predictions. The physical mechanism underlying the intraband current enhancement is the Schwinger mechanism or Landau-Zener tunneling. Note that, in Fig. [\[fig:Intraband\_J\_ratio\_tE\]]{}, an electron driven from the negative band into the flat band does not itself contribute to the intraband current (its group velocity is zero); rather, it is the hole left behind in the negative band that contributes to the net current.
![ [**Current density distribution in the momentum space**]{}. (a,b) For pseudospin-1 and pseudospin-1/2 systems, respectively, the distributions of the current density in the momentum space for $\tilde{t}=20$ and $\tilde{E}=0.0512$. When the momentum gap value $p_y$ is large, the flat band can enhance the current.[]{data-label="fig:Intraband_dist"}](figure7.pdf){width="\linewidth"}
If the intraband current is generated by pair creation through Landau-Zener tunneling, the number of created particles should be consistent with the current behavior. To test this, we numerically calculate the particle number distribution in different bands and plot the ratio between the numbers of particles for the pseudospin-1 and pseudospin-1/2 systems versus time and the electric field, as shown in Fig. \[fig:Intraband\_n\_ratio\_tE\]. For the pseudospin-1 system, the number of particles created in the positive band is approximately the same as that created in the upper band of the pseudospin-1/2 system, and the number of particles in the flat band is about half of that in the positive band. Note that, for the positive band, it is necessary to count the particle number twice, as both electrons and holes contribute to the transport current. However, for the flat band, only holes contribute to the current. We see that, for each band, the particle number distribution is consistent with the current distribution, providing strong evidence that the intraband current results from pair creation in the negative band. In fact, under the strong field approximation, the intraband current is given by the particle distributions in the positive and flat bands multiplied by the constant $ev_F$, as the current is due to electron and hole transport.
We also calculate the current density distribution in the momentum space for a fixed time and electric field strength, as shown in Fig. \[fig:Intraband\_dist\]. We see that the current distribution range in the $p_y$ direction is wider for the pseudospin-1 system than for the pseudospin-1/2 system. However, the current distribution near $p_y=0$ is approximately the same for the two systems, and the current decays in the $p_y$ direction. In addition, there is a current cut-off at about $\tilde{p}_x = \tilde{E}\tilde{t}$ along the $p_x$ axis. All these features of the current density distribution can be fully explained by the theoretical formulas (\[eq:positive\]-\[eq:negative\]). The general result is that the flat band can enhance the current when the “gap” $p_y$ is large.
Conclusion and Discussion {#sec:conclusion}
=========================
We investigate non-equilibrium transport of quasiparticles subject to an external electric field in the pseudospin-1 system arising from solid state materials whose energy band structure consists of a pair of Dirac cones and a flat band through the conical connecting point. Since the group velocity for carriers associated with the flat band is zero, one may naively think that the flat band would make no contribution to the current. However, we find that the current in the pseudospin-1 system is generally enhanced as compared with that in the counterpart (pseudospin-1/2) system. In particular, in the weak field regime, for both systems the interband current dominates, is proportional to the electric field strength, and is independent of time. However, the interference between quasiparticles associated with the flat and the negative bands in the pseudospin-1 system leads to an interband current whose magnitude is twice that in the pseudospin-1/2 system. In the strong field regime, for both systems the intraband current dominates and scales with the electric field strength as $E^{3/2}$ and linearly with time. We find that the current associated with carrier transition from the negative to the positive bands is identical for both systems, but the flat band in the pseudospin-1 system contributes an additional term to the current, leading to an enhancement of the total intraband current. The general conclusion is that, from the standpoint of generating large current, the presence of the flat band in the pseudospin-1 system can be quite beneficial. Indeed, the interplay between the flat band and the Dirac cones can lead to interesting physics that has just begun to be understood and exploited.
We discuss a few pertinent issues.
#### **Time scale of validity of effective Dirac Hamiltonian.** {#time-scale-of-validity-of-effective-dirac-hamiltonian. .unnumbered}
For a real material, the effective Dirac Hamiltonian description is valid only near the degeneracy (Dirac) point, imposing an intrinsic upper bound on time in its applicability. Similar to the situation of using the two-band Dirac Hamiltonian to describe graphene [@rosenstein2010ballistic], such a time bound can be approximately estimated as the Bloch oscillation period, i.e., the time required for the electric field to shift the momentum across the Brillouin zone: $\Delta p_x = eEt \approx \hbar/a$ with $a$ being the lattice constant. We obtain $t_B \sim \hbar/(eEa)$. Since the aim of our work is to investigate the physics near the Dirac point, the effective Hamiltonian description is sufficient. For clarity and convenience, all the calculations are done in terms of dimensionless quantities through the introduction of an auxiliary energy scale $\Delta$, whose value can be set so that the calculations, subject to this restriction, remain relevant to real materials hosting pseudospin-$1$ quasiparticles. More specifically, the estimated time restriction $t<t_B$ gives rise to the following condition in terms of the dimensionless quantities $$\tilde{E}\tilde{t}<\frac{\hbar v_F}{\Delta a}.$$ For the given values of $\tilde{t}$ and the range of $\tilde{E}$ in all figures, the condition is fulfilled by setting $\Delta = \hbar v_F/(50a)$, based on which the actual physical units can be assigned to the dimensionless quantities. It is possible to test the results of this paper experimentally by tuning the characteristic energy $\Delta$ of the underlying system. While our work uses a model Hamiltonian to probe the essential physics of pseudospin-1 systems in a relatively rigorous manner, the issue of dissipation (in momentum or energy) is beyond the intended scope of this paper.
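For concreteness, with $\Delta = \hbar v_F/(50a)$ the bound reads $\tilde{E}\tilde{t} < 50$. A trivial sketch checking representative parameter combinations from the figures against this bound:

```python
# validity bound: E_tilde * t_tilde < hbar v_F / (Delta a) = 50 for Delta = hbar v_F / (50 a)
bound = 50.0

# (E_tilde, t_tilde) pairs used in the weak- and strong-field figures
for E_t, t_t in [(0.0004, 2.0), (0.0032, 2.0), (0.8192, 10.0), (3.2768, 10.0)]:
    assert E_t * t_t < bound  # effective Dirac description remains applicable
```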
#### **Bloch oscillations.** {#bloch-oscillations. .unnumbered}
If the whole band structure is taken into account, Bloch oscillations will occur under an external electric field for $t\gtrsim t_B$, i.e., the electron distribution will oscillate over a certain range of the lattice sites. In this case, the Dirac Hamiltonian description will no longer be valid. Instead, a full tight-binding Hamiltonian $H_{TB}(\boldsymbol{p})$ characterizing the multiband structure associated with a particular lattice configuration should be used. For the dice or $T_3$ lattice with intersite distance $a$ and hopping integral $t$, the tight-binding Hamiltonian is $$\begin{aligned}
\nonumber
& & H_{TB}^{(dice)}(\boldsymbol{p}) = \begin{bmatrix}
0 & h_{\boldsymbol{p}} & 0 \\
h_{\boldsymbol{p}}^* & 0 & h_{\boldsymbol{p}} \\
0 & h_{\boldsymbol{p}}^* & 0
\end{bmatrix}, \\ \nonumber
& & h_{\boldsymbol{p}} = -t\left(1 + 2\exp{(3ip_ya/2)}\cos(\sqrt{3}p_xa/2)\right).\end{aligned}$$ A previous work [@rosenstein2010ballistic] showed that, for the honeycomb lattice, the corresponding two-band tight-binding model can indeed give rise to Bloch oscillations for $t > t_B$. To investigate Bloch oscillations in the large time regime for pseudospin-1 systems with an extra flat band is certainly an interesting issue that warrants further efforts.
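The band structure of $H_{TB}^{(dice)}$ can be obtained by direct diagonalization. As a sketch (with the lattice constant and hopping integral set to unity for illustration), the spectrum at any $\boldsymbol{p}$ consists of dispersive bands $\pm\sqrt{2}\,|h_{\boldsymbol{p}}|$ together with an exactly flat band at zero energy:

```python
import numpy as np

def h_p(px, py, a=1.0, t=1.0):
    """Structure factor of the dice (T3) lattice, as given in the text."""
    return -t * (1 + 2 * np.exp(3j * py * a / 2) * np.cos(np.sqrt(3) * px * a / 2))

def H_dice(px, py):
    """Three-band tight-binding Hamiltonian H_TB^(dice)(p)."""
    h = h_p(px, py)
    return np.array([[0, h, 0],
                     [np.conj(h), 0, h],
                     [0, np.conj(h), 0]])

# spectrum: flat band at zero between dispersive bands +/- sqrt(2) |h_p|
px, py = 0.4, -1.1  # arbitrary momentum
w = np.sort(np.linalg.eigvalsh(H_dice(px, py)))
h = abs(h_p(px, py))
assert np.allclose(w, [-np.sqrt(2) * h, 0.0, np.sqrt(2) * h])
```

Expanding $h_{\boldsymbol{p}}$ to first order about a Dirac point recovers the effective Hamiltonian of Eq. (\[eq:Hamiltonian\]).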
We note that, in a recent paper [@KF2016], the striking phenomenon of tunable Bloch oscillations was reported for a quasi one-dimensional diamond lattice system with a flat band under perturbation. It would be interesting to extend this work to two-dimensional lattices. The main purpose of our work is to uncover new phenomena in physical situations where the Dirac Hamiltonian description is valid (first order expansion of the tight binding Hamiltonian about the Dirac points).
#### **Effect of band anisotropy.** {#effect-of-band-anisotropy. .unnumbered}
For a particular lattice configuration associated with a real material, band anisotropy, e.g., trigonal warping, will generally arise when entering the energy range relatively far from the Dirac points at a later time. In this case, direction dependent transport behavior can arise. Insights into the phenomena of driving direction resolved Bloch oscillations and Zener tunneling can be gained from existing studies of two-band systems with the so-called “semi-Dirac” spectrum (a hybrid of linear and quadratic dispersion) [@lim2012bloch; @lim2014mass]. At present, the interplay between an additional flat band and dispersion anisotropy remains largely unknown and is beyond the applicable scope of the idealized Dirac Hamiltonian framework.
Acknowledgement {#acknowledgement .unnumbered}
===============
We thank Dr. Guang-Lei Wang for helpful discussions, and would like to acknowledge support from the Vannevar Bush Faculty Fellowship program sponsored by the Basic Research Office of the Assistant Secretary of Defense for Research and Engineering and funded by the Office of Naval Research through Grant No. N00014-16-1-2828. L.H. was supported by NSF of China under Grant No. 11422541.
Analytic calculation of the interband current
=============================================
In the weak field regime, we can expand Eq. (\[eq:Diracoriginal\]) as $$\begin{aligned}
i\hbar \partial_t\alpha_p(t) &= \epsilon_p \alpha_p(t) + C_0 \gamma_p(t), \label{eq:alpha} \\
i\hbar \partial_t\gamma_p(t) &= C_0[\alpha_p(t) + \beta_p(t)], \\
i\hbar \partial_t\beta_p(t) &= -\epsilon_p \beta_p(t) + C_0 \gamma_p(t) \label{eq:beta}.\end{aligned}$$ Applying the time differential operator $i\hbar \partial_t$ to Eqs. (\[eq:alpha\]) and (\[eq:beta\]), we get $$\begin{aligned}
i\hbar \partial_t (i\hbar \partial_t \alpha_p(t)) = \epsilon_p i\hbar \partial_t\alpha_p(t) + C_0 i\hbar \partial_t\gamma_p(t), \label{eq:alpha2}\\
i\hbar \partial_t (i\hbar \partial_t \beta_p(t)) = -\epsilon_p i\hbar \partial_t\beta_p(t) + C_0 i\hbar \partial_t\gamma_p(t), \label{eq:beta2}\end{aligned}$$ and, hence, $$\label{eq:couple1}
-\hbar^2 \partial_t^{2}\alpha_p(t)-\hbar^2 \partial_t^{2}\beta_p(t) = [\alpha_p(t) + \beta_p(t)][\epsilon_p^{2} + 2C_0^{2}].$$ From Eqs. (\[eq:alpha\]) and (\[eq:beta\]), we have $$\label{eq:couple2}
i\hbar\partial_t\alpha_p(t) - i\hbar\partial_t\beta_p(t)=\epsilon_p [\alpha_p(t) + \beta_p(t)].$$ Defining $x_p(t)=\alpha_p(t)+\beta_p(t)$, and $y_p(t)=\alpha_p(t)-\beta_p(t)$, we get, from Eqs. (\[eq:couple1\]) and (\[eq:couple2\]), respectively, the following relations: $$\begin{aligned}
&\frac{d^2x_p}{dt^2}+ \frac{\epsilon_p^2 + 2C_0^2}{\hbar^2}x_p = 0, \label{eq:xequation}\\
&\frac{dy_p}{dt} = \frac{\epsilon_p}{i\hbar}x_p. \label{eq:yequation}\end{aligned}$$ Solving Eq. (\[eq:xequation\]), we get $$x_p(t) = A\cos\omega t + B\sin\omega t, \nonumber$$ where $A$ and $B$ are constant, and $\omega = \sqrt{(\epsilon_p^2 + 2C_0^2)/\hbar^2}$. Using the initial condition that the negative band is fully filled: ($\Phi_p(t=0) = [0, 0, 1]^T$), we have $x_p(t=0) = A = 1$. From Eq. (\[eq:yequation\]), we have $$y_p(t) = \frac{\epsilon_p}{i\hbar \omega} [\sin\omega t - B\cos\omega t] +d.\nonumber$$ Using the initial condition, we get $y_p(t=0) = -m_0B + d = -1$, where $m_0 = \epsilon_p/(i\hbar \omega)$, $d = m_0 B -1$, which leads to $$\begin{aligned}
&\alpha_p(t) = \frac{1}{2}(x + y) \nonumber \\
&=\frac{1}{2}[\cos\omega t + B \sin\omega t + m_0(\sin\omega t - B\cos\omega t + B) - 1], \nonumber\\
&\beta_p(t) = \frac{1}{2}(x - y) \nonumber \\
&=\frac{1}{2}[\cos\omega t + B \sin\omega t - m_0(\sin\omega t - B\cos\omega t + B) + 1].\nonumber\end{aligned}$$ Substituting the expressions of $\alpha_p(t)$ and $\beta_p(t)$ into Eqs. (\[eq:alpha\]) and (\[eq:beta\]), we obtain an expression for $\gamma_p(t)$. Using $\gamma_p(t=0) = 0$, we have $B = -m_0$ and, hence, $$\begin{aligned}
&\alpha_p(t) = \frac{1}{2} [\cos \omega t + m_0^2(\cos \omega t - 1) - 1], \\
&\beta_p(t) = \frac{1}{2} [\cos \omega t - 2m_0 \sin\omega t - m_0^2 (\cos\omega t - 1) + 1], \\
&\gamma_p(t) = \frac{1 + m_0^2}{2C_0} [-i\hbar \omega \sin \omega t - \epsilon_p(\cos \omega t - 1)].\end{aligned}$$
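As a consistency check, the closed-form amplitudes above can be verified symbolically against the three coupled equations; a minimal sketch using sympy, taking $\epsilon_p, C_0, \hbar > 0$ for concreteness (the symbol names `eps`, `C0`, `hbar` are ours):

```python
# Verify that alpha_p, beta_p, gamma_p solve
#   i*hbar*d/dt alpha = eps*alpha + C0*gamma,
#   i*hbar*d/dt gamma = C0*(alpha + beta),
#   i*hbar*d/dt beta  = -eps*beta + C0*gamma,
# with initial condition Phi(0) = [0, 0, 1]^T.
import sympy as sp

t, eps, C0, hbar = sp.symbols('t eps C0 hbar', positive=True)
w = sp.sqrt(eps**2 + 2*C0**2) / hbar        # omega
m0 = eps / (sp.I * hbar * w)                # m_0 = eps/(i*hbar*omega)

alpha = sp.Rational(1, 2) * (sp.cos(w*t) + m0**2*(sp.cos(w*t) - 1) - 1)
beta  = sp.Rational(1, 2) * (sp.cos(w*t) - 2*m0*sp.sin(w*t)
                             - m0**2*(sp.cos(w*t) - 1) + 1)
gamma = (1 + m0**2)/(2*C0) * (-sp.I*hbar*w*sp.sin(w*t)
                              - eps*(sp.cos(w*t) - 1))

# Residuals of the three equations; all should vanish identically.
r1 = sp.I*hbar*sp.diff(alpha, t) - (eps*alpha + C0*gamma)
r2 = sp.I*hbar*sp.diff(gamma, t) - C0*(alpha + beta)
r3 = sp.I*hbar*sp.diff(beta, t) - (-eps*beta + C0*gamma)
assert all(r.equals(0) for r in (r1, r2, r3))

# Initial condition: the negative band is fully filled.
assert alpha.subs(t, 0) == 0 and beta.subs(t, 0) == 1 and gamma.subs(t, 0).equals(0)
```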
Dominant current source in the weak and strong field regimes
============================================================
For the three-band dispersion profile investigated in this work, there are two distinct current sources: the intraband and interband currents, where the former is proportional to the number of electrons (holes) within an unfilled (occupied) band, while the latter depends on the rate of change in the particle number, a characteristic of interband interference. From Eq. (\[eq:current2\]), we see that the intraband current is determined by the transition amplitudes while the interband current depends on the rate of change of the amplitudes. For a weak driving field, the transition amplitudes between the occupied and the empty bands are negligibly small, as is the rate of electron-hole generation, resulting in a weak intraband current. However, the rate of change of the transition amplitudes need not be small, and neither is the interband current. Our calculations reveal that, indeed, in the weak (strong) driving regime, the interband (intraband) current dominates. As the field is increased from the weak to the strong regime, the algebraic scaling exponent of the current-field relation changes from 1 to 1.5, as shown in Fig. \[fig:Inter\_intra\_dist\].
![ [**Current versus electric field of pseudospin-1 system for $\boldsymbol{\tilde{t} = 5}$**]{}. As the magnitude of the external electrical field is increased, the dominant contribution to the total current changes from interband to intraband, and the algebraic scaling exponent of the current-field relation changes from 1 to 1.5.[]{data-label="fig:Inter_intra_dist"}](figure8.pdf){width="\linewidth"}
---
abstract: |
Reaction networks are mathematical models of interacting chemical species that are primarily used in biochemistry. There are two modeling regimes that are typically used, one of which is deterministic and one that is stochastic. In particular, the deterministic model consists of an autonomous system of differential equations, whereas the stochastic system is a continuous time Markov chain. Connections between the two modeling regimes have been studied since the seminal paper by Kurtz (1972), where the deterministic model is shown to be a limit of a properly rescaled stochastic model over compact time intervals. Further, more recent studies have connected the long-term behaviors of the two models when the reaction network satisfies certain graphical properties, such as weak reversibility and a deficiency of zero.
These connections have led some to conjecture that a link between the long-term behaviors of the two models exists, in some sense. In particular, one is tempted to believe that positive recurrence of all states for the stochastic model implies the existence of positive equilibria in the deterministic setting, and that boundary equilibria of the deterministic model imply the occurrence of an extinction event in the stochastic setting. We prove in this paper that these implications do not hold in general, even if restricting the analysis to networks that are bimolecular and that conserve the total mass. In particular, we disprove the implications in the special case of models that have absolute concentration robustness, thus answering in the negative a conjecture stated in the literature in 2014.
author:
- 'David F. Anderson'
- Daniele Cappelletti
bibliography:
- 'bib.bib'
title: Discrepancies between extinction events and boundary equilibria in reaction networks
---
Introduction
============
Reaction systems are mathematical models that are used to describe the dynamical behavior of interacting chemical species. Such models are often utilized in the biochemical setting, where they describe biological processes. Traditionally, we distinguish between a deterministic and a stochastic modeling regime, with the deterministic regime appropriate when the counts are so high that the concentrations of the species can be well modeled via a set of autonomous differential equations and with the stochastic model appropriate when the counts are low. For the stochastic model, one usually assumes the counts of the different chemical species involved evolve according to a continuous time Markov chain in ${\mathbb{Z}}_{\ge 0}^d$ (where $d$ is the number of distinct chemical species).
It is natural to wonder about the relationship between the stochastic and deterministic models for reaction systems. The first paper in this direction was [@kurtz:classical], where it was shown that on compact time intervals, the deterministic model is the weak limit of the stochastic model, conveniently rescaled, when the initial counts of molecules go to infinity in an appropriate manner (see Theorem \[thm:classical\_scaling\] below). Followup works consider piecewise deterministic limits in multiscale settings [@KK:multiscale; @PP:multiscale; @KR:multiscale; @KR:enzyme], and connections have been found between equilibria of the deterministic model and stationary distributions of the stochastic model, under certain assumptions [@ACK:poisson; @AC2016; @CW:poisson; @CJ:graphical]. Further connections have been studied in terms of Lyapunov functions of the deterministic model and stationary distributions of the stochastic model [@ACGW:lyapunov].
The focus of this paper is on the relationship (or lack thereof) between the occurrence of extinction events in the stochastic model and the equilibria of the corresponding deterministic model. The relevance of this question resides in the fact that equilibria of the deterministic model are typically easier to analyze than the state space of the stochastic model, so finding a link between the two is desirable. Moreover, understanding when extinction events can occur is relevant in the biological setting as such events may imply that the production of a certain protein has halted, or that a certain important reactant is eventually consumed completely. However, we demonstrate through the analysis of a number of examples that a series of expected connections do not hold in general, and therefore discourage interested researchers from assuming them, or trying to prove them. In particular, we prove Conjecture 3.7 in [@AEJ:ACR] to be false. To better understand the work carried out here, we briefly describe some of the work in [@AEJ:ACR].
In [@AEJ:ACR], an interesting connection between extinction events and systems with *absolute concentration robustness* (ACR) is unveiled. ACR systems are deterministic reaction systems in which at least one chemical species has the same value at every positive equilibrium of the system. Such species are called absolute concentration robust (ACR) species. As an example, consider the network $$\label{eq:toy_model}
\begin{split}
\schemestart
A+B \arrow{->[$\kappa_1$]} 2B
\arrow(@c1.south east--.north east){0}[-90,.25]
B \arrow{->[$\kappa_2$]} A
\schemestop
\end{split}$$ Under the assumption of mass-action kinetics, the considered system is ACR: the species $A$ has the value $\kappa_2/\kappa_1$ in all the positive equilibria of the system. When modeled stochastically, the reaction $B\to A$ can take place until no molecules of $B$ are left. When this happens, no reaction can take place anymore, as each of the reactions in the network requires a molecule of $B$ as a reactant. We call this an *extinction event* and note that it eventually occurs with a probability of one, regardless of the rate constants $\kappa_1, \kappa_2\in {\mathbb{R}}_{>0}$. This differing qualitative behavior between the two models (robustness for the ODE and eventual extinction for the stochastic) was studied in [@AEJ:ACR] and was proven to be general, in the following sense. In [@SF:ACR], Shinar and Feinberg provided sufficient conditions for a deterministic reaction system to be ACR. However, in [@AEJ:ACR] it was shown that the stochastic model will, with a probability of one, undergo an extinction event if the reaction network satisfies those same conditions and also has a positive conservation relation. Moreover, the eventual extinction holds regardless of the choice of rate constants or initial condition. The assumptions of [@AEJ:ACR] (and by extension [@SF:ACR]) seem technical and do not unveil a clear reason why the family of stochastic models considered have almost sure extinction events. Since under the same assumptions the corresponding deterministic system is ACR, it seems natural to conjecture that (i) absolute concentration robustness of the associated deterministic model, and (ii) the existence of a positive conservation relation, are sufficient to imply almost sure extinction for the stochastic model. This is the content of Conjecture 3.7 in [@AEJ:ACR].
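The almost-sure extinction in this toy model is easy to observe numerically. Below is a minimal Gillespie (stochastic simulation algorithm) sketch for $A+B\to 2B$, $B\to A$ under stochastic mass-action kinetics; the rate constants, initial counts, and seed are arbitrary illustrative choices:

```python
import random

def gillespie_toy(a, b, k1=1.0, k2=10.0, seed=1):
    """Gillespie simulation of A+B -> 2B (rate k1*a*b) and B -> A
    (rate k2*b), run until no reaction can fire, i.e. until B is extinct."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        r1, r2 = k1 * a * b, k2 * b      # stochastic mass-action propensities
        total = r1 + r2
        if total == 0.0:                 # b == 0: the absorbing boundary
            return t, a, b
        t += rng.expovariate(total)      # exponential waiting time
        if rng.random() * total < r1:    # reaction A+B -> 2B fires
            a, b = a - 1, b + 1
        else:                            # reaction B -> A fires
            a, b = a + 1, b - 1

t_ext, a_end, b_end = gillespie_toy(a=3, b=2)
assert b_end == 0     # B goes extinct
assert a_end == 5     # the total mass a + b = 5 is conserved
```

Each reaction preserves the total count $a+b$, so the chain lives on a finite set of states in which the states with $b=0$ are absorbing; every simulated trajectory therefore ends in an extinction event.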
Belief in Conjecture 3.7 in [@AEJ:ACR] becomes even stronger when we realize that conservative ACR deterministic reaction systems often have boundary equilibria. For example, in the model above, the deterministic system has boundary equilibria of the form $(a,0)$, which are attracting for initial conditions whose total mass is lower than $\kappa_2/\kappa_1$. In the stochastic model the species $B$ is eventually completely consumed, showing that the behaviors of the two models do have some connection. Returning to the general setting, boundary equilibria occur often for conservative ACR models since on certain invariant regions the total “mass” of the species (as determined by the positive conservation law) is *strictly less than* the ACR value, implying there can be no positive equilibria in that invariant region. Since the invariant regions are compact due to the existence of a mass conservation law, the ODE solution is often attracted to the boundary. Intuitively, such attraction might indicate a propensity of the stochastic model to reach the boundary and get absorbed.
However, in section \[sec:ACR\], we will show with non-trivial examples that the intuition the conjecture is based upon does not hold in general. In particular, we show that no assumption in the main result of [@AEJ:ACR] can be eliminated. Moreover, we do so with bimolecular examples (which are the most commonly used examples in the biological setting). In section \[sec:stable\_boundary\], we will further explore the connection (or rather, the lack thereof) between extinction events of the stochastic model and equilibria of the associated deterministic model, and put to rest the common misconceptions that (i) a complete lack of positive equilibria in a conservative deterministic model implies the occurrence of an extinction event in the stochastic model, and (ii) that positive recurrence of all the states of the stochastic model implies the existence of a positive equilibrium of the associated deterministic model.
The outline of the remainder of the paper is as follows. In section \[sec:notation\], we provide the necessary material on notation, and formally introduce the relevant mathematical models. In section \[sec:correspondence\], we state precisely the classical result from [@kurtz:classical; @kurtz:strong]. This result provides a connection between the behavior of the stochastic and deterministic models on compact time intervals. We then provide examples which demonstrate a discrepancy in the long-term behavior of the models (in terms of explosions and positive recurrence of the stochastic model with respect to blow-ups and compact trajectories of the deterministic model). In section \[sec:extinction\], we provide the necessary definitions relating to extinctions in the present context. In particular, we point out that extinctions should refer to both species and reactions, as opposed to just the counts of species. Finally, in sections \[sec:ACR\] and \[sec:stable\_boundary\] we provide our main results and analyze a number of examples as described in the previous paragraph.
Necessary Background and Notation {#sec:notation}
=================================
Notation
--------
We denote the non-negative integer numbers by ${\mathbb{Z}}_{\geq0}$. We also denote the non-negative and the positive real numbers by ${\mathbb{R}}_{\geq0}$ and ${\mathbb{R}}_{>0}$, respectively. Given a real number $a$, we denote by $|a|$ its absolute value. For any real vector $v\in{\mathbb{R}}^d$, we denote its $i$th entry by $v_i$, and we use the notation $$\|v\|_1=\sum_{i=1}^d |v_i|.$$ We further write $v>0$ and say that $v$ is positive if every entry of $v$ is positive. Moreover, given two real vectors $v,w\in{\mathbb{R}}^d$, we write $v\geq w$ if $v_i\geq w_i$ for all $1\leq i\leq d$. Finally, given a set $\mathcal{A}$, we denote its cardinality by $|\mathcal{A}|$.
Basic definitions of reaction network theory {#sec:background}
--------------------------------------------
A *reaction network* is a triple ${\mathcal{G}}=({\mathcal{X}},{\mathcal{C}},{\mathcal{R}})$. ${\mathcal{X}}$ is a finite non-empty set of symbols, referred to as *species*, and ${\mathcal{C}}$ is a finite non-empty set of linear combinations of species with non-negative integer coefficients, referred to as *complexes*. After ordering the set of species, the $i$th species of the set can be identified with the vector $e_i\in{\mathbb{R}}^{|{\mathcal{X}}|}$, whose $i$th entry is 1 and whose other entries are zero. It follows that any complex $y\in{\mathcal{C}}$ can be identified with a vector in ${\mathbb{Z}}^{|{\mathcal{X}}|}_{\geq0}$, which is the corresponding linear combination of the vectors $e_i$. Finally, ${\mathcal{R}}$ is a non-empty subset of ${\mathcal{C}}\times{\mathcal{C}}$, whose elements are called *reactions*, such that for any $y\in{\mathcal{C}}$, $(y,y)\notin{\mathcal{R}}$. Following the common notation, we will denote any element $(y,y')\in{\mathcal{R}}$ by $y\to y'$. We require that every species in ${\mathcal{X}}$ appears in at least one complex, and that every complex in ${\mathcal{C}}$ appears as an element in at least one reaction. Note that under this condition, a reaction network is uniquely determined by the set of reactions ${\mathcal{R}}$.
In this paper, species will always be letters and will be alphabetically ordered. A directed graph can be associated in a very natural way to a reaction network by considering the set of complexes as nodes and the set of reactions as directed edges. Such a graph is called a *reaction graph*. Usually, a reaction network is presented by means of its reaction graph, which defines it uniquely. By using the reaction graph, we can further define the *terminal* complexes as those nodes that are contained in a closed strongly connected component of the graph, that is, those nodes $y$ such that whenever there is a directed path from $y$ to another complex $y'$, there is also a directed path from $y'$ back to $y$. We say that a complex is *non-terminal* if it is not terminal.
We define the *stoichiometric subspace* of a reaction network as $$S=\operatorname{span}_{{\mathbb{R}}}\{y'-y\,:\,y\to y'\in{\mathcal{R}}\}.$$ We also let $\ell$ denote the number of connected components of the reaction graph (or *linkage classes* of the network) and define the *deficiency* of the network as $$\delta=|{\mathcal{C}}|-\ell-\dim S.$$ The geometric interpretation of the deficiency is not clear from the above definition; see [@Gun2003] for more details on this quantity.
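As a concrete check, the deficiency of the network from the Introduction ($A+B\to 2B$, $B\to A$) can be computed directly from the definition; a sketch using numpy for the rank computation (the encoding of complexes and the component computation are ours):

```python
import numpy as np

# Complexes as vectors over the species (A, B); reactions as pairs of complexes.
complexes = {'A+B': (1, 1), '2B': (0, 2), 'B': (0, 1), 'A': (1, 0)}
reactions = [('A+B', '2B'), ('B', 'A')]

# dim S: rank of the matrix whose rows are the reaction vectors y' - y.
vecs = np.array([np.subtract(complexes[y2], complexes[y1]) for y1, y2 in reactions])
dim_S = int(np.linalg.matrix_rank(vecs))

# Linkage classes: connected components of the reaction graph (union-find).
parent = {c: c for c in complexes}
def find(c):
    while parent[c] != c:
        c = parent[c]
    return c
for y1, y2 in reactions:
    parent[find(y1)] = find(y2)
ell = len({find(c) for c in complexes})

delta = len(complexes) - ell - dim_S
# |C| = 4 complexes, 2 linkage classes, dim S = 1, so delta = 4 - 2 - 1 = 1.
assert (dim_S, ell, delta) == (1, 2, 1)
```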
We say that a vector $v\in{\mathbb{Z}}^{|{\mathcal{X}}|}$ is a *conservation law* if it is orthogonal to the stoichiometric subspace $S$. We say that a network is *conservative* if there is a positive conservation law: this means that it is possible to assign a mass to the molecules of each chemical species such that the total mass (i.e. the sum of the masses of all the molecules present) is conserved by the occurrence of every reaction.
We say that a reaction network is *bimolecular* if $\max_{y\in{\mathcal{C}}}\|y\|_1\leq 2$. Many biological models fall into this category, since it is often the case that at most two molecules react at a time.
Finally, we associate with each reaction $y\to y'\in{\mathcal{R}}$ a positive real number $\kappa_{y\to y'}$, called a *rate constant*. A reaction network with a choice of rate constants is called a *mass-action system*, and a stochastic or a deterministic dynamics can be associated with it, as described later. A mass-action system is usually presented by means of its reaction graph, where the reactions are labelled with the corresponding rate constants. An example of this can be found in the Introduction.
### Stochastic Model
In a *stochastic mass action system*, the evolution in time of the copy-numbers of molecules of the different chemical species is considered. Specifically, the copy-numbers of the molecules of the different chemical species at time $t\geq0$ form a vector $X(t)\in{\mathbb{Z}}_{\geq0}^{|{\mathcal{X}}|}$. The process $X$ is assumed to be a continuous time Markov chain. Specifically, for any two states $x, x'\in{\mathbb{Z}}_{\geq0}^{|{\mathcal{X}}|}$ the transition rate from $x$ to $x'$ is given by $$q(x,x')=\sum_{\substack{y\to y'\in{\mathcal{R}}\\y'-y=x'-x}}\lambda_{y\to y'}(x),$$ where $$\lambda_{y\to y'}(x)=\kappa_{y\to y'}\prod_{i=1}^{|{\mathcal{X}}|}\prod_{j=0}^{y_i-1}(x_i-j)\quad\text{for }x\in{\mathbb{Z}}_{\geq0}^{|{\mathcal{X}}|}$$ is the *stochastic mass action rate function* of $y\to y'$. Note that $\lambda_{y\to y'}(x)>0$ if and only if $x\geq y$, which prevents the entries of the process $X$ from becoming negative. Also, note that the process is confined within a *stoichiometric compatibility class*, that is, for every $t\geq0$ $$X(t)\in \{X(0)+v\,:\, v\in S\}\cap {\mathbb{Z}}_{\geq0}^{|{\mathcal{X}}|}.$$
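The stochastic mass action rate is a product of falling factorials and can be implemented directly; a minimal sketch (the function name is ours):

```python
def propensity(kappa, y, x):
    """Stochastic mass-action rate lambda_{y->y'}(x): kappa times the
    falling factorial x_i*(x_i-1)*...*(x_i-y_i+1) for each species i."""
    rate = kappa
    for xi, yi in zip(x, y):
        for j in range(yi):
            rate *= (xi - j)
    return rate

# lambda > 0 iff x >= y componentwise: if some x_i < y_i, the
# factor (x_i - x_i) = 0 appears in the product and the rate vanishes.
assert propensity(2.0, (1, 1), (3, 2)) == 12.0   # A+B -> ... at x = (3, 2)
assert propensity(2.0, (2, 0), (1, 5)) == 0.0    # needs two copies of species 1
assert propensity(1.0, (0, 1), (4, 0)) == 0.0    # no copies of species 2 left
```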
It is worth noting that the stochastic mass-action kinetics follows from the assumption that the molecules of the different species are well-mixed, so the propensity of each reaction to take place is proportional to the number of possible sets of molecules that can give rise to an occurrence of the reaction. Other kinetics may arise in different scenarios, but in the present paper we are only concerned with mass action systems.
### Deterministic Model
In a *deterministic mass action system* the evolution in time of the concentrations of the different chemical species is modeled. We consider the concentrations of the different chemical species at time $t\geq0$ as a vector $z(t)\in{\mathbb{R}}_{\geq0}^{|{\mathcal{X}}|}$. It is then assumed that the function $z$ is a solution to the Ordinary Differential Equation (ODE) $$\label{eq:dma1}
\frac{d}{dt}z(t) = g(z(t)),$$ where $$g(x)=\sum_{y\to y'\in{\mathcal{R}}}(y'-y)\kappa_{y\to y'}\prod_{i=1}^{|{\mathcal{X}}|}x_i^{y_i}\quad\text{for }x\in{\mathbb{R}}_{\geq0}^{|{\mathcal{X}}|}$$ is the *deterministic mass action species formation rate*. As in the stochastic case, the solution $z$ is confined within a stoichiometric compatibility class, meaning that for any $t\geq0$ $$z(t)\in \{z(0)+v\,:\, v\in S\}\cap {\mathbb{R}}_{\geq0}^{|{\mathcal{X}}|}.$$ Finally, as for stochastic models, the choice of mass action kinetics corresponds to the assumption that the molecules of the different species are well-mixed. Other kinetics (such as Michaelis-Menten kinetics, Hill kinetics, power law kinetics) are considered in the literature, but are not dealt with in this paper.
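As an illustration, the species-formation rate $g$ of the toy network from the Introduction can be integrated numerically; with $\kappa_1 = \kappa_2 = 1$ and total mass $3 > \kappa_2/\kappa_1$, the solution settles at the ACR value $z_A = 1$ (the RK4 step size and horizon are arbitrary choices of ours):

```python
def g(z, k1=1.0, k2=1.0):
    """Deterministic mass-action rate for A+B -> 2B, B -> A:
    g(z) = sum over reactions of (y' - y) * kappa * prod z_i^{y_i}."""
    a, b = z
    r1, r2 = k1 * a * b, k2 * b
    return (-r1 + r2, r1 - r2)

def rk4_step(z, dt):
    """One classical Runge-Kutta step for dz/dt = g(z)."""
    k1_ = g(z)
    k2_ = g([z[i] + dt/2*k1_[i] for i in range(2)])
    k3_ = g([z[i] + dt/2*k2_[i] for i in range(2)])
    k4_ = g([z[i] + dt*k3_[i] for i in range(2)])
    return tuple(z[i] + dt/6*(k1_[i] + 2*k2_[i] + 2*k3_[i] + k4_[i])
                 for i in range(2))

z = (2.0, 1.0)                    # total mass 3, above the ACR value 1
for _ in range(2000):             # integrate to t = 20 with dt = 0.01
    z = rk4_step(z, 0.01)
assert abs(z[0] - 1.0) < 1e-3     # z_A converges to kappa2/kappa1 = 1
assert abs(sum(z) - 3.0) < 1e-9   # the conservation law z_A + z_B is preserved
```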
Correspondences between the two modeling regimes and known discrepancies {#sec:correspondence}
========================================================================
The aim of this section is to describe why it was believed that certain properties of the deterministic mass action system would imply the occurrence of extinction events for the stochastically modeled mass action system. Here, we briefly describe or give reference to some known connections between the two modeling regimes, and provide some warnings in the form of examples of discrepancies between the two models.
Connections
-----------
The first connection found between stochastic and deterministic models dates back to [@kurtz:classical]. The situation considered is the following: a reaction network ${\mathcal{G}}$ is given. It is assumed that the volume $V$ of a container where the chemical transformations occur is increased. The propensity of a reaction to occur changes with the volume, depending on how many molecules are needed (i.e. need to collide) for the reaction to take place. Specifically, the rate constants scale with the volume as $$\kappa^V_{y\to y'}=V^{1-\|y\|_1}\kappa_{y\to y'},$$ for some fixed positive constants $\kappa_{y\to y'}$. A family of continuous-time Markov chains $\{X^V\}_{V}$ is then defined, with $X^V$ being the process associated with the stochastic mass action system with rate constants $\kappa^V_{y\to y'}$. Then, the following holds [@kurtz:classical; @kurtz:strong].
\[thm:classical\_scaling\] Assume that for a fixed positive state $z_0\in{\mathbb{R}}_{>0}^{|{\mathcal{X}}|}$ and for all $\varepsilon>0$ we have $$\lim_{V\to\infty}P\Big(\Big|V^{-1}X^V(0)-z_0\Big|>\varepsilon\Big)=0.$$ Moreover, assume that the solution $z$ of the ODE with $z(0)=z_0$ is unique and is defined up to a finite fixed time $T>0$. Then, for any $\varepsilon>0$ $$\lim_{V\to\infty}P\Big(\sup_{t\in[0,T]}\Big|V^{-1}X^V(t)-z(t)\Big|>\varepsilon\Big)=0.$$
Roughly speaking, the theorem states that the rescaled stochastic processes $X^V$ converge path-wise to the ODE solution of the deterministic mass action system, over compact intervals of time. The theorem also holds for more general kinetics than mass action kinetics, as long as the rate functions $\lambda_{y\to y'}$ are locally Lipschitz [@kurtz:strong]. Theorem \[thm:classical\_scaling\] has also been extended to the multiscale setting [@KK:multiscale].
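The volume scaling of the rate constants can be sketched directly from the formula above: zero-order reactions speed up with $V$, unary rates are unchanged, and bimolecular rates scale as $1/V$ (the function name is ours):

```python
def kappa_V(kappa, y, V):
    """Classical-scaling rate constant kappa^V = V**(1 - ||y||_1) * kappa,
    where y is the reactant complex (as a vector of counts)."""
    return V ** (1 - sum(y)) * kappa

V = 100
assert kappa_V(2.0, (0, 0), V) == 2.0 * V   # 0 -> ... : zero-order, rate grows with V
assert kappa_V(2.0, (1, 0), V) == 2.0       # A -> ... : unary, unchanged
assert kappa_V(2.0, (1, 1), V) == 2.0 / V   # A+B -> ...: bimolecular, rate ~ 1/V
```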
It is interesting to note that model reduction techniques over compact intervals of time also work for both the deterministic and stochastic models in the exact same manner. In particular, we refer to the assumptions under which intermediate species can be eliminated from a multiscale model, and to the description of the resulting simplified model [@CW:intermediate_stoc; @CW:intermediate_det].
Known discrepancies
-------------------
Here we cite some known examples showing that compactness of the trajectories of a deterministic mass action system does not imply positive recurrence, or even regularity (that is, lack of explosions), for the associated stochastic model. Moreover, we demonstrate that a blow-up of the deterministic mass action system does not in general imply explosions for the associated stochastic mass action system.
In [@ACKN:endotactic] it is shown that the mass action system $$\label{ex:endotactic_trans}
\begin{split}
\schemestart
0\arrow{->[$\kappa_1$]}2A+B\arrow{->[$\kappa_2$]}4A+4B\arrow{->[$\kappa_3$]}A
\schemestop
\end{split}$$ is transient (i.e. all states are transient) when stochastically modeled, for all choices of rate constants. For the deterministic mass action system, however, for any choice of rate constants there exists a compact set $K$ such that for all initial conditions $z(0)$, a $t^*>0$ exists with $z(t)\in K$ for all $t>t^*$ (the system is said to be *permanent*, a property that in this case follows from the network being *strongly endotactic* [@GMS:geometric]). Moreover, in [@ACKN:endotactic] it is also shown that for any choice of rate constants the mass action system $$\label{ex:endotactic_expl}
\begin{split}
\schemestart
0\arrow{->[$\kappa_1$]}2A\arrow{->[$\kappa_2$]}4A+B\arrow{->[$\kappa_2$]}6A+4B\arrow{->[$\kappa_3$]}3A
\schemestop
\end{split}$$ is explosive (in the sense of [@norris:markov]) when stochastically modeled, while the deterministic mass action system is permanent as for (which again follows because the network is strongly endotactic).
Since Theorem \[thm:classical\_scaling\] holds, we expect the time the processes $X^V$ associated with the first system above take to drift towards infinity to increase with $V$. Similarly, the time until explosion of the processes $X^V$ associated with the second system necessarily tends to infinity as $V\to\infty$.
We have shown examples of mass action systems that are, in some sense, well behaved if deterministically modeled, and transient or explosive if stochastically modeled. For completeness, we also present here an example of a mass action system that is positive recurrent (i.e. all states are positive recurrent) if stochastically modeled, while the associated deterministic ODE solution has blow-ups for any positive initial condition. The system is discussed in [@ACKK:explosion] and is the following. $$\label{ex:blow_pos_rec}
\begin{split}
\schemestart
A\arrow{<=>[1][2]}2A\arrow{<=>[3][1]}3A\arrow{->[1]}4A.
\schemestop
\end{split}$$ It is worth citing here a very similar example, also discussed in [@ACKK:explosion], in which the two modeling regimes behave similarly. Consider the mass action system $$\label{ex:no_blow_expl}
\begin{split}
\schemestart
A\arrow{<=>[1][2]}2A\arrow{<=>[7][4]}3A\arrow{<=>[6][1]}4A\arrow{->[1]}5A
\schemestop
\end{split}$$ It is shown in [@ACKK:explosion] that the corresponding stochastic mass action system is explosive for any positive initial condition $X(0)$, and the associated deterministic mass action system has a blow up for any positive initial condition $z(0)$.
Other examples of agreement between the long-term behaviors of the stochastic and deterministic models of mass action systems are given by the family of *complex balanced systems*, studied in [@ACK:poisson; @CW:poisson; @CJ:graphical]. The existence of such connections and Theorem \[thm:classical\_scaling\] contributed to the formulation of the conjecture that “mass action systems with ACR species when deterministically modeled undergo an extinction event when stochastically modeled”, for the reasons concerning boundary equilibria of the deterministic model already discussed in the Introduction. However, we have shown in this section that the long-term dynamics of stochastically and deterministically modeled mass action systems can differ greatly.
Extinction {#sec:extinction}
==========
In this section, we will formally describe what is meant by the term “extinction” in the present context. We begin with the following standard definitions.
Consider a stochastic mass action system. We say that
- a state $x'$ is *reachable* from $x$ if for some $t>0$ $$P(X(t)=x'\, |\, X(0)=x)>0;$$
- a set $\Gamma\subseteq {\mathbb{Z}}_{\geq0}^{|{\mathcal{X}}|}$ is *reachable* from $x$ if for some $t>0$ $$P(X(t)\in\Gamma \, |\, X(0)=x)>0;$$
- a set $\Gamma\subseteq {\mathbb{Z}}_{\geq0}^{|{\mathcal{X}}|}$ is *closed* if for all $t>0$ $$P(X(t)\in\Gamma \, |\, X(0)\in\Gamma)=1;$$
- a reaction $y\to y'\in{\mathcal{R}}$ is *active* at a state $x$ if $\lambda_{y\to y'}(x)>0$.
- a set $\Gamma\subseteq {\mathbb{Z}}_{\geq0}^{|{\mathcal{X}}|}$ is an *extinction set* for the reaction $y\to y'\in{\mathcal{R}}$ if $\Gamma$ is closed and $y\to y'$ is not active at any state of $\Gamma$.
\[def:extinction\] Consider a stochastic mass action system. We say that the process $X$ undergoes an *extinction event* at time $t^*>0$ if there is a reaction $y\to y'\in{\mathcal{R}}$ and an extinction set $\Gamma$ for $y\to y'$ such that $X(t^*)\in\Gamma$ and $X(t^*-) \notin \Gamma$.
The meaning of the above definition is the following: at a certain time $t^*$ the copy-number of some chemical species (or of a set of chemical species) gets so low that a certain reaction cannot occur anymore, and the loss is irreversible.
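For the toy model from the Introduction, an extinction set for $B\to A$ can be verified exhaustively on a stoichiometric compatibility class; a small sketch (the state enumeration and helper names are ours):

```python
# Network A+B -> 2B, B -> A with conserved total mass M.
# States: (a, b) with a + b = M. Claim: Gamma = {(a, 0)} is an
# extinction set for B -> A: it is closed and B -> A is not active on it.
M = 5
states = [(a, M - a) for a in range(M + 1)]
reactions = [((1, 1), (0, 2)),   # A+B -> 2B (reactant complex, product complex)
             ((0, 1), (1, 0))]   # B   -> A

def active(y, x):
    """A reaction with reactant complex y is active at x iff x >= y."""
    return all(xi >= yi for xi, yi in zip(x, y))

gamma = {(a, b) for a, b in states if b == 0}
# No reaction is active at any state of Gamma, so Gamma is trivially
# closed and in particular is an extinction set for B -> A.
assert all(not active(y, x) for x in gamma for y, _ in reactions)
```

Since $\Gamma$ is also reachable from every state of the class (fire $B\to A$ repeatedly, $b$ times), the extinction event occurs with probability one.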
In connection with Definition \[def:extinction\], we give some useful results.
\[prop:consecutio\] Consider a stochastic mass action system, and two reactions $y\to y', \tilde y\to \tilde y'\in{\mathcal{R}}$, with $y'\geq \tilde y$. Assume that no extinction set for $y\to y'$ is reachable from the state $x$. Then, no extinction set for $\tilde y\to \tilde y'$ is reachable from $x$.
Since no extinction set for $y\to y'$ is reachable from $x$, any closed set $\Gamma$ that is reachable from $x$ contains a state $x'$ such that $\lambda_{y\to y'}(x')>0$, or equivalently $x'\geq y$. Hence, the state $x'+y'-y$ is reachable from $x'$, and since $\Gamma$ is closed it follows that $x'+y'-y\in\Gamma$. Moreover, since $x'\geq y$ and $y'\geq \tilde y$ we have $x'+y'-y\geq y' \geq \tilde y$, which implies that $\lambda_{\tilde y\to \tilde y'}(x'+y'-y)>0$ and that $\Gamma$ is not an extinction set for $\tilde y\to\tilde y'$. This concludes the proof.
\[prop:dominatio\] Consider a stochastic mass action system, and two reactions $y\to y', \tilde y\to \tilde y'\in{\mathcal{R}}$, with $y\geq \tilde y$. Assume that no extinction set for $y\to y'$ is reachable from the state $x$. Then, no extinction set for $\tilde y\to \tilde y'$ is reachable from $x$.
As in the proof of Proposition \[prop:consecutio\], since no extinction set for $y\to y'$ is reachable from $x$, any closed set $\Gamma$ that is reachable from $x$ contains a state $x'$ with $\lambda_{y\to y'}(x')>0$, which is equivalent to $x'\geq y$. Since $ y\geq \tilde y$, we have $x'\geq \tilde{y}$, which is equivalent to $\lambda_{\tilde y\to \tilde y'}(x')>0$. This concludes the proof.
Extinction and ACR systems {#sec:ACR}
==========================
We give here the formal definition of a system that has Absolute Concentration Robustness (ACR). The definition is purely in terms of deterministic systems; the precise connection with extinction events will be described later.
\[def:ACR\] Consider a deterministic mass action system. The system is said to be *Absolute Concentration Robust* (ACR) if there exists an index $1\leq i\leq |{\mathcal{X}}|$ and a real number $u\in{\mathbb{R}}_{>0}$ such that all $c>0$ with $g(c)=0$ satisfy $c_i=u$. In this case, the $i$th species is called an *ACR species* with *ACR value* $u$.
Note that, by definition, all deterministic mass action systems with no positive equilibria, or with exactly one positive equilibrium, are ACR. Specifically, in these cases all species are ACR. However, such degenerate cases fall outside the spirit of the definition of ACR models, which captures an important biological property: whenever the system is at a positive equilibrium (and there could be many positive equilibria), some special chemical species are always expressed at the same level, and are therefore “robust” to environmental changes.
Sufficient structural conditions for a model to be ACR can be found in the following result, due to Shinar and Feinberg [@SF:ACR].
\[thm:ACR\_det\] Consider a deterministic mass action system and assume
1. there exists at least one $c>0$ with $g(c)=0$;
2. the reaction network has deficiency 1;
3. there are two non-terminal complexes $y\neq y'$ such that only the $i$th entry of $y'-y$ is different from 0.
Then, the $i$th species is ACR.
Note that the assumption on the existence of a positive equilibrium is not needed for the theorem to hold, since if there were none then all the species would automatically be ACR by Definition \[def:ACR\]. However, the assumption was included because this degenerate case was not considered in [@SF:ACR].
The connection with extinction events is given by the following result, due to Anderson, Enciso, and Johnston [@AEJ:ACR].
\[thm:ACR\_stoc\] Consider a stochastic mass action system and assume
1. there exists at least one $c>0$ with $g(c)=0$;
2. \[part:def\] the reaction network has deficiency 1;
3. \[part:non\_term\] there are two non-terminal complexes $y\neq y'$ such that only the $i$th entry of $y'-y$ is different from 0;
4. \[part:cons\] the reaction network is conservative.
Then, an extinction event occurs almost surely. In particular, with probability one the process $X$ enters a closed set $\Gamma\subset {\mathbb{Z}}^d_{\geq0}$ such that $\lambda_{y\to y'}(x)=0$ for all $x\in\Gamma$ and for all $y\to y'\in{\mathcal{R}}$ with $y$ being a non-terminal complex.
As an example of how to apply Theorems \[thm:ACR\_det\] and \[thm:ACR\_stoc\], consider the mass action system . The deficiency is $\delta=4-2-1=1$, the two non-terminal complexes $A+B$ and $B$ are such that their difference is $A$, and there is at least one $c>0$ with $g(c)=0$. Hence, by Theorem \[thm:ACR\_det\], when deterministically modeled, the mass action system is ACR, and in particular the species $A$ takes the same value at every positive equilibrium of the system. In fact, it can be easily checked that all positive equilibria are of the form $\left(\frac{\kappa_2}{\kappa_1}, \beta\right)$ for some $\beta>0$, where the species are ordered as $(A,B)$. Moreover, the reaction network of is conservative, since $(1,1)$ is a positive conservation law. Hence, by Theorem \[thm:ACR\_stoc\], the stochastically modeled mass action system will, with probability one, undergo an extinction event. In particular, neither of the reactions $B\to A$ and $A+B\to 2B$ can occur after the extinction. Since the total mass is conserved, this can only mean that the molecules of $B$ are eventually completely consumed.
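The almost-sure extinction in this example can be observed directly by simulating the embedded jump chain of the two reactions $A+B\to 2B$ and $B\to A$. The sketch below is a minimal Gillespie-type simulation; the rate constants, initial condition, seed, and step cap are illustrative choices, not taken from the text.

```python
import random

def gillespie_extinction(a, b, k1=1.0, k2=1.0, max_steps=10**6, seed=0):
    """Simulate the jump chain of A + B -> 2B (propensity k1*a*b) and
    B -> A (propensity k2*b) until no reaction is active, i.e. extinction,
    or until max_steps is reached.  Waiting times are irrelevant for the
    extinction question, so only the embedded chain is simulated."""
    rng = random.Random(seed)
    for _ in range(max_steps):
        r1, r2 = k1 * a * b, k2 * b
        total = r1 + r2
        if total == 0.0:          # extinction: both reactions are inactive
            break
        if rng.random() * total < r1:
            a, b = a - 1, b + 1   # A + B -> 2B
        else:
            a, b = a + 1, b - 1   # B -> A
    return a, b

a, b = gillespie_extinction(2, 2)
assert b == 0 and a == 4  # total mass is conserved; B dies out
```

With these small copy numbers the absorbing boundary is reached quickly, and the simulation ends with all molecules converted to $A$, matching the discussion above.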
Note that under the assumptions of Theorem \[thm:ACR\_stoc\], the deterministically modeled mass action system is ACR. Since the assumptions of Theorem \[thm:ACR\_stoc\] are quite technical in nature, as discussed in the Introduction it was long thought that the real reason for the extinction event in the stochastic mass action system was the absolute concentration robustness of the associated deterministic mass action system. This seemed plausible since ACR systems often exhibit attracting boundary equilibria. As an example, if the total mass of the deterministic mass action system is lower than the ACR value of $A$, that is if $\|z(0)\|<\kappa_2/\kappa_1$, then the ODE solution is confined within a compact stoichiometric compatibility class with no positive equilibria, and is eventually attracted to the boundary equilibrium $(z(0),0)$. This resembles what happens with the stochastically modeled system.
In this section we show that this intuition is not correct. We do so by proving that if any of the technical assumptions , , and is removed from Theorem \[thm:ACR\_stoc\], while the absolute concentration robustness of the associated deterministic mass action system is maintained, then the result no longer holds. Moreover, this remains true even under the additional constraint that the reaction network be bimolecular.
Example 1: assumption cannot be removed {#example-1-assumption-cannot-be-removed .unnumbered}
----------------------------------------
In [@AEJ:ACR], the authors realized that assumption could not be removed from the statement of Theorem \[thm:ACR\_stoc\], and proved it by means of the following example. Consider the reaction network $$\label{eq:example0}
\begin{split}
\schemestart
A+B \arrow{->[$\kappa_1$]} 0
\arrow(@c1.south east--.north east){0}[-90,.25]
B \arrow{->[$\kappa_2$]} A+2B
\schemestop
\end{split}$$ The mass action system satisfies the assumptions of Theorem \[thm:ACR\_det\] and the species $A$ is ACR, with ACR value $\kappa_2/\kappa_1$. The only assumption of Theorem \[thm:ACR\_stoc\] that is not met is , as the network is not conservative. Indeed, the stoichiometric subspace is $$S=\left\{\begin{pmatrix}s\\s\end{pmatrix}\,:\,s\in{\mathbb{R}}\right\}$$ and no positive vector is contained in $$S^\perp=\left\{\begin{pmatrix}s\\-s\end{pmatrix}\,:\,s\in{\mathbb{R}}\right\}.$$ In [@AEJ:ACR], it is shown that the stochastically modeled mass action system does not undergo any extinction event if $X_1(0)<X_2(0)$ (recall that we assume the species to be alphabetically ordered). In this case, the stationary distribution is computed by means of birth and death process techniques, and every state is shown to be positive recurrent.
We can modify to obtain the bimolecular mass action system $$\label{eq:example0_bi}
\begin{split}
\schemestart
A+B \arrow{->[$\kappa_1$]} 0
\arrow(@c1.south east--.north east){0}[-90,.25]
B \arrow{->[$\kappa_2$]} A+C
\arrow(@c3.south east--.north east){0}[-90,.25]
C \arrow{<=>[$\kappa_3$][$\kappa_4$]} 2B
\schemestop
\end{split}$$ The mass action system still satisfies the assumptions of Theorem \[thm:ACR\_det\], and it can therefore be concluded that the species $A$ is ACR. In fact, the ACR value for $A$ is still $\kappa_2/\kappa_1$. Again, the only assumption of Theorem \[thm:ACR\_stoc\] that is not satisfied is , since it can be checked that the orthogonal complement of the stoichiometric subspace is $$S^\perp=\left\{\begin{pmatrix}s\\-s\\-2s\end{pmatrix}\,:\,s\in{\mathbb{R}}\right\},$$ which does not contain any positive vector. Define the conserved quantity $m=X_1(0)-X_2(0)-2X_3(0)$ and assume that $m<0$. We will show that no extinction set for any reaction is reachable, which in turn proves that no extinction event can occur.
Note that since $X$ is confined within a stoichiometric compatibility class, for any $t\geq0$ $$X_1(t)-X_2(t)-2X_3(t)=m<0.$$
Assume that from $X(0)$ a state $x'$ can be reached with $\lambda_{A+B\to 0}(x')=0$. Then, either
- $x'_2=0$, or
- $x'_2>0$ and $x'_1=0$.
Suppose we are in case (i). Then, since $$m=x'_1-2x'_3<0,$$ we must have $x'_3>0$. Hence the reaction $C\to2B$ can take place, in which case at least one molecule of $B$ is present. At this point, either $A+ B \to 0$ is active, or we are in case (ii). Hence, assume that (ii) holds. Then $B\to A+C$ can occur, which can then be followed by $C \to 2B$. After these two reactions take place, both a molecule of $A$ and a molecule of $B$ are necessarily present, so a state is reached where the reaction $A+B\to 0$ is active.
Combining all of the above, we have proven that no extinction set for $A+B\to 0$ is reachable from $X(0)$. By applying Proposition \[prop:dominatio\], it follows that the same holds for the reaction $B\to A+C$. Then, by Proposition \[prop:consecutio\] we have that no extinction set for $C\to 2B$ is reachable from $X(0)$, and finally by applying Proposition \[prop:consecutio\] we conclude that the same holds for $2B\to C$. In conclusion, no extinction event can occur with the chosen initial conditions.
Example 2: assumption cannot be removed {#example-2-assumption-cannot-be-removed .unnumbered}
----------------------------------------
Consider the bimolecular mass action system $$\label{ex:no_def_1}
\begin{split}
\schemestart
A+B\arrow{->[$\kappa_1$]}B+C\arrow{<=>[$\kappa_2$][$\kappa_3$]}2B\arrow{->[$\kappa_4$]}2D
\arrow(@c2.south east--.north east){0}[-90,.25]
C\arrow{->[$\kappa_5$]}A
\arrow(@c5.south east--.north east){0}[-90,.25]
D\arrow{->[$\kappa_6$]}B
\schemestop
\end{split}$$ The following holds.
The species $A$ is ACR. Moreover, there exists at least one $c>0$ with $g(c)=0$.
: Indeed, it can be checked that $$g(c)=\begin{pmatrix}
-\kappa_1c_1c_2+\kappa_5c_3\\
\kappa_2c_2c_3-\kappa_3c_2^2-2\kappa_4c_2^2+\kappa_6c_4\\
\kappa_1c_1c_2-\kappa_2c_2c_3+\kappa_3c_2^2-\kappa_5c_3\\
2\kappa_4c_2^2-\kappa_6c_4
\end{pmatrix}$$ is zero if and only if $c_2=c_3=c_4=0$ or $$c=\left(
\frac{\kappa_3\kappa_5}{\kappa_1\kappa_2},
s,
\frac{\kappa_3}{\kappa_2}s,
\frac{2\kappa_4}{\kappa_6}s^2
\right)\quad\text{for some }s\in{\mathbb{R}}_{>0}.$$
The reaction network has deficiency 2.
: This can be easily checked, since $$S=\operatorname{span}_{\mathbb{R}}\left\{
\begin{pmatrix}
-1\\ 0\\ 1\\ 0
\end{pmatrix},
\begin{pmatrix}
0\\ 1\\ -1\\ 0
\end{pmatrix},
\begin{pmatrix}
0\\ -1\\ 0\\ 1
\end{pmatrix}
\right\}$$ and $$\delta=|{\mathcal{C}}|-\ell-\dim S=8-3-3=2.$$
There are two non-terminal complexes $y\neq y'$ such that only the second entry of $y'-y$ is different from 0.
: Indeed, the two complexes $B+C$ and $C$ are non-terminal and their difference is $B$. It is interesting to note that in this example the species $B$ is not the ACR species, so the conclusions of Theorem \[thm:ACR\_det\] do not hold.
The reaction network is conservative.
: Indeed, the vector $(1,1,1,1)$ is a positive conservation law.
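The structural quantities used above are straightforward to verify numerically. In the sketch below the complexes and linkage classes are counted by hand from the displayed network, and the species are ordered as $(A,B,C,D)$; only the reaction vectors and the conservation law are computed.

```python
import numpy as np

# Reaction vectors y' - y for A+B -> B+C, B+C <-> 2B, 2B -> 2D, C -> A, D -> B
# (species order A, B, C, D).
vecs = np.array([
    [-1,  0,  1,  0],   # A+B -> B+C
    [ 0,  1, -1,  0],   # B+C -> 2B
    [ 0, -1,  1,  0],   # 2B  -> B+C
    [ 0, -2,  0,  2],   # 2B  -> 2D
    [ 1,  0, -1,  0],   # C   -> A
    [ 0,  1,  0, -1],   # D   -> B
])

n_complexes = 8        # A+B, B+C, 2B, 2D, C, A, D, B (counted by hand)
n_linkage_classes = 3  # {A+B, B+C, 2B, 2D}, {C, A}, {D, B}
dim_S = np.linalg.matrix_rank(vecs)
deficiency = n_complexes - n_linkage_classes - dim_S
assert dim_S == 3 and deficiency == 2

# (1, 1, 1, 1) is a positive conservation law: orthogonal to every reaction vector.
w = np.array([1, 1, 1, 1])
assert np.all(vecs @ w == 0)
```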
Hence, all assumptions of Theorem \[thm:ACR\_stoc\] are fulfilled except for , and the deterministically modeled mass action system is ACR. We will show that no extinction event can take place for the stochastic mass action system, if we choose an initial condition $X(0)$ with the minimal requirements that the conserved mass $m=X_1(0)+X_2(0)+X_3(0)+X_4(0)\geq 2$ and that $X_2(0)+X_4(0)\geq 1$. Specifically, we will show that no extinction set for any $y\to y'\in{\mathcal{R}}$ is reachable from $X(0)$.
Assume that a state $x'$ with $\lambda_{A+B\to B+C}(x')=0$ is reachable from $X(0)$. Then, there are two cases:
- We have $x'_2=0$. Note that the only reaction reducing the total number of $B$ and $D$ molecules is $2B\to B+C$, which decreases the total number by 1 but can only take place if at least 2 molecules of $B$ are present. Hence, at least one molecule of $B$ or $D$ is always present (because $X_2(0)+X_4(0)\geq 1$), which implies that $x'_2+x'_4\geq 1$. More specifically, from $x'_2=0$ it follows that $x'_4\geq1$, implying that the reaction $D\to B$ can take place. Hence, a state with at least one molecule of $B$ can be reached. Either $A+B\to B+C$ is active at this state, or the following case can be considered.
- We have $x'_2>0$ and $x'_1=0$. We consider three further subcases:
- If $x'_3>0$, then a molecule of $A$ can be created by the occurrence of $C\to A$, which does not modify the number of molecules of $B$. Hence, a state can be reached where $A+B\to B+C$ is active.
- If $x'_3=0$ and $x'_2\geq 2$, then a molecule of $B$ can be transformed into a molecule of $C$ through $2B\to B+C$, which only consumes one molecule of $B$. This subcase is therefore reduced to the previous one.
- If $x'_3=0$ and $x'_2=1$, then $$2\leq m=x'_1+x'_2+x'_3+x'_4=0+1+0+x'_4,$$ which implies that $x'_4\geq 1$ and an additional molecule of $B$ can be created by the occurrence of $D\to B$. This subcase is therefore reduced to the previous one.
It follows from the above analysis that no extinction set for $A+B\to B+C$ is reachable from $X(0)$. By successive applications of Proposition \[prop:consecutio\], it follows that the same holds for all the other reactions as well.
As noted above, in this case the two non-terminal complexes $B+C$ and $C$ differ in the second entry, but the second species (that is, $B$) is not ACR. This, however, is not what prevents the occurrence of an extinction event. A similar analysis can be conducted on the following bimolecular mass action system: $$\label{ex:no_def_1_remark}
\begin{split}
\schemestart
A+B\arrow{->[$\kappa_1$]}B+C\arrow{<=>[$\kappa_2$][$\kappa_3$]}2B
\arrow(@c1.south east--.north east){0}[-90,.25]
B\arrow{<=>[$\kappa_4$][$\kappa_5$]}2E\arrow{->[$\kappa_6$]}2D
\arrow(@c4.south east--.north east){0}[-90,.25]
C\arrow{->[$\kappa_7$]}A
\arrow(@c7.south east--.north east){0}[-90,.25]
D\arrow{->[$\kappa_8$]}E
\schemestop
\end{split}$$ For completeness, the analysis of is carefully carried out in the Appendix. There, it is shown that the species $A$ is the only ACR species, with ACR value $(\kappa_3\kappa_7)/(\kappa_1\kappa_2)$. Moreover, the only two non-terminal complexes differing in one entry are $A+B$ and $B$, and they differ in the first entry. It is further shown that satisfies all the assumptions of Theorem \[thm:ACR\_stoc\], except for , and that no extinction event can occur provided that $$\begin{aligned}
2X_1(0)+2X_2(0)+2X_3(0)+X_4(0)+X_5(0)&\geq4,\\
2X_2(0)+X_4(0)+X_5(0)&\geq2.\\\end{aligned}$$
Example 3: assumption cannot be removed {#example-3-assumption-cannot-be-removed .unnumbered}
---------------------------------------
Consider the following bimolecular mass action system. $$\label{ex:non_non_terminal}
\begin{split}
\schemestart
A+B \arrow{->[$\kappa_1$]} B+C \arrow{<=>[$\kappa_2$][$\kappa_3$]} 2B
\arrow(@c1.south east--.north east){0}[-90,.25]
C \arrow{->[$\kappa_4$]} A
\schemestop
\end{split}$$ The following holds.
The species $A$ is ACR. Moreover, there exists at least one $c>0$ with $g(c)=0$.
: Indeed, it can be checked that $$g(c)=\begin{pmatrix}
-\kappa_1c_1c_2+\kappa_4c_3\\
\kappa_2c_2c_3-\kappa_3c_2^2\\
\kappa_1c_1c_2-\kappa_2c_2c_3+\kappa_3c_2^2-\kappa_4c_3
\end{pmatrix}$$ is zero if and only if $c_2=c_3=0$ or $$c=\left(
\frac{\kappa_3\kappa_4}{\kappa_1\kappa_2},
s,
\frac{\kappa_3}{\kappa_2}s
\right)\quad\text{for some }s\in{\mathbb{R}}_{>0}.$$
The reaction network has deficiency 1.
: Indeed, $$S=\operatorname{span}_{\mathbb{R}}\left\{
\begin{pmatrix}
-1\\ 0\\ 1
\end{pmatrix},
\begin{pmatrix}
0\\ 1\\ -1
\end{pmatrix}
\right\}$$ and $$\delta=|{\mathcal{C}}|-\ell-\dim S=5-2-2=1.$$
There are no non-terminal complexes $y\neq y'$ such that $y'-y$ has only one entry different from 0.
: This can be easily checked, since the only non-terminal complexes are $A+B$ and $C$.
The reaction network is conservative.
: Indeed, the vector $(1,1,1)$ is a positive conservation law.
We will now show that no extinction event can occur for the stochastically modeled mass action system, provided that $X_2(0)\geq 1$ and the conserved mass $m=X_1(0)+X_2(0)+X_3(0)\geq 2$. Note that for any time $t\geq0$ we have $X_2(t)\geq1$, since the only reaction decreasing the number of molecules of $B$ is $2B\to B+C$, which removes one molecule of $B$ and can only take place if at least two molecules of $B$ are present. Assume that a state $x'$ is reached from $X(0)$ such that $\lambda_{A+B\to B+C}(x')=0$. Then, it must be that $x'_1=0$. There are two cases:
- We have $x'_3\geq 1$. Hence, a molecule of $A$ can be created through the occurrence of $C\to A$, and a state is reached where the reaction $A+B\to B+C$ is active.
- We have $x'_3=0$. Hence, $2\leq m=x'_2$, which means that the reaction $2B\to B+C$ can take place, and this case is reduced to the previous one.
In conclusion, no extinction set for $A+B\to B+C$ is reachable from $X(0)$. It can be shown that the same holds for all other reactions by applying Proposition \[prop:consecutio\] successively.
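Because the network is conservative, the argument above can also be verified exhaustively for any fixed initial condition. The sketch below uses the illustrative choice $X(0)=(1,2,0)$, so that $m=3$ and $X_2(0)\geq1$, and checks by breadth-first search that every reachable state can itself reach a state where $A+B\to B+C$ is active.

```python
from collections import deque

# (source complex, net change) for A+B -> B+C, B+C <-> 2B, C -> A
# (species order A, B, C); a reaction fires when x >= its source complex.
REACTIONS = [
    ((1, 1, 0), (-1, 0, 1)),   # A+B -> B+C
    ((0, 1, 1), (0, 1, -1)),   # B+C -> 2B
    ((0, 2, 0), (0, -1, 1)),   # 2B  -> B+C
    ((0, 0, 1), (1, 0, -1)),   # C   -> A
]

def successors(x):
    for y, v in REACTIONS:
        if all(xi >= yi for xi, yi in zip(x, y)):
            yield tuple(xi + vi for xi, vi in zip(x, v))

def reachable(x0):
    seen, queue = {x0}, deque([x0])
    while queue:
        for nxt in successors(queue.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Every state reachable from X(0) = (1, 2, 0) can reach a state with at least
# one molecule of A and one of B, so no extinction set is reachable.
for x in reachable((1, 2, 0)):
    assert any(s[0] >= 1 and s[1] >= 1 for s in reachable(x))
```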
By imposing more restrictive assumptions, we can formulate new conjectures. For example, one may be tempted to try to prove that the existence of ACR species implies the occurrence of an extinction event (in the stochastic model) for bimolecular mass action systems in which the coefficients of the species in all complexes are either 0 or 1 (in more biological terms, this means that there is no autocatalytic production). However, we give here an example showing that this is not true. Consider the mass action system $$\label{ex:stoich_1_def_1_cycle}
\begin{split}
\schemestart
A+B\arrow{->[$\kappa_1$]}C+E\arrow{<=>[$\kappa_2$][$\kappa_3$]}B+D
\arrow(@c1.south east--.north east){0}[-90,.25]
C\arrow{->[$\kappa_4$]}A
\arrow(@c4.south east--.north east){0}[-90,.25]
B\arrow(B--D){->[$\kappa_5$]}D
\arrow(@D--E){->[$\kappa_6$]}[-120]E
\arrow(@E--@B){->[$\kappa_7$]}
\schemestop
\end{split}$$ We prove in the Appendix that $A$ is an ACR species, and no extinction event can occur for the stochastic model, provided that $$\begin{aligned}
X_1(0)+X_2(0)+X_3(0)+X_4(0)+X_5(0)&\geq 2, \\
X_2(0)+X_4(0)+X_5(0)&\geq 1. \end{aligned}$$ Moreover, the only assumption of Theorem \[thm:ACR\_stoc\] that is not satisfied by is .
Stationary distributions and equilibria of the associated deterministic model {#sec:stable_boundary}
=============================================================================
Connections between the equilibria of a deterministic mass action system and the stationary distributions of the corresponding stochastic mass action system have been studied for a special class of models, called complex balanced mass action systems [@AC2016; @ACK:poisson; @CW:poisson; @CJ:graphical]. However, in general the existence of positive equilibria for the deterministic mass action system does not imply the positive recurrence of the associated stochastic mass action system, as shown in , , and .
Conversely, it was intuitively thought that the existence of a stationary distribution of the stochastic mass action system would imply the existence of a positive equilibrium of the associated deterministic mass action system. The idea was that a positive equilibrium could be related to the mean of the stationary distribution, or to some sort of weighted average thereof. If that were true, the lack of positive equilibria for the deterministic mass action system would have implied the transience of the states of the associated stochastic model. In particular, for models with a conservative reaction network, the transience of positive states would have implied the absorption at the boundary due to the finiteness of the state space, hence an extinction event.
That the lack of positive equilibria in the deterministic mass action system does not imply the transience of the associated stochastic mass action system is shown in . However, the question was still open for models with a conservative network, and we close it here with the following bimolecular example.
Consider the bimolecular mass action system $$\begin{split}
\schemestart
A+B \arrow{->[$\kappa_1$]} B+C \arrow{<=>[$\kappa_2$][$\kappa_3$]} 2B
\arrow(@c1.south east--.north east){0}[-90,.25]
C\arrow{->[$\kappa_4$]}A \arrow{<-[$\kappa_5$]} E
\arrow(@c4.south east--.north east){0}[-90,.25]
A+D \arrow{->[$\kappa_6$]} D+E \arrow{<=>[$\kappa_7$][$\kappa_8$]} 2D
\schemestop
\end{split}$$ The following holds.
The reaction network is conservative.
: Indeed, it can be checked that $(1,1,1,1,1)$ is a positive conservation law.
There is no positive $c$ with $g(c)=0$ for a general choice of rate constants.
: We have $$g(c)=\begin{pmatrix}
-\kappa_1c_1c_2+\kappa_4c_3+\kappa_5c_5-\kappa_6c_1c_4\\
\kappa_2c_2c_3-\kappa_3c_2^2\\
\kappa_1c_1c_2-\kappa_2c_2c_3+\kappa_3c_2^2-\kappa_4c_3\\
\kappa_7c_4c_5-\kappa_8c_4^2\\
-\kappa_5c_5+\kappa_6c_1c_4-\kappa_7c_4c_5+\kappa_8c_4^2
\end{pmatrix}$$ By imposing $c>0$, it follows that $g(c)=0$ is equivalent to the system $$\begin{cases}
c_3=\frac{\kappa_3}{\kappa_2}c_2\\
c_3=\frac{\kappa_1}{\kappa_4}c_1c_2\\
c_5=\frac{\kappa_8}{\kappa_7}c_4\\
c_5=\frac{\kappa_6}{\kappa_5}c_1c_4\\
\end{cases}$$ The system has a positive solution if and only if $$c_1=\frac{\kappa_3\kappa_4}{\kappa_1\kappa_2}=\frac{\kappa_5\kappa_8}{\kappa_6\kappa_7}.$$ Hence, no positive equilibria exist if the rate constants do not satisfy the above equality. It is interesting to note that, when a positive equilibrium exists, the above equation implies that the species $A$ is ACR.
All positive states are positive recurrent.
: We will prove something more: each set of the form $$\Upsilon_m=\{x\in{\mathbb{Z}}_{\geq0}^5\,:\, x_2,x_4\geq 1, \|x\|_1=m\}$$ for some $m\geq 2$ is closed and irreducible. Since the sets $\Upsilon_m$ are finite, it follows that they only contain positive recurrent states. We obtain the desired result by noting that every positive state is contained in some set $\Upsilon_m$, for some $m\geq2$. So, it suffices to show that for a given $m\geq2$ the set $\Upsilon_m$ is closed and irreducible.
We begin by showing that $\Upsilon_m$ is closed. This is equivalent to showing that from a state with at least one molecule of $B$ and one molecule of $D$, it is impossible to reach a state with no molecules of $B$ or no molecules of $D$. This follows from noting that the only reaction decreasing the number of molecules of $B$ is $2B\to B+C$, which decreases the number of molecules of $B$ by one but can only occur if at least two molecules of $B$ are present. The same holds for the species $D$, whose molecules can only be decreased through $2D\to D+E$.
To show that $\Upsilon_m$ is irreducible, we can show that for all $x\in\Upsilon_m$, the state $(m-2,1,0,1,0)$ can be reached from $x$, and vice versa. Indeed, if $X(0)=x$ then the reaction $2B\to B+C$ can take place $x_2-1$ times, and the reaction $2D\to D+E$ can take place $x_4-1$ times. Hence, the state $(x_1, 1, x_3+x_2-1, 1, x_5+x_4-1)$ is reached. Now, if all the molecules of $C$ and $E$ are consumed by the reactions $C\to A$ and $E\to A$, then the state $(\|x\|_1-2,1,0,1,0)=(m-2,1,0,1,0)$ is reached.
Conversely, if $X(0)=(m-2,1,0,1,0)$, then the reaction $A+B\to B+C$ can take place $x_2+x_3-1$ times, and the reaction $A+D\to D+E$ can take place $x_4+x_5-1$ times. The state $(x_1, 1, x_2+x_3-1, 1, x_4+x_5-1)$ is reached. From here, the state $x$ can be reached if the reaction $B+C\to 2B$ takes place $x_2-1$ times and $D+E\to 2D$ takes place $x_4-1$ times.
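For a fixed small total mass, the closedness and irreducibility of $\Upsilon_m$ can be verified exhaustively. The sketch below does so for the illustrative choice $m=4$, with the species ordered as $(A,B,C,D,E)$.

```python
from collections import deque
from itertools import product

REACTIONS = [  # (source complex, net change), species order A, B, C, D, E
    ((1, 1, 0, 0, 0), (-1, 0, 1, 0, 0)),   # A+B -> B+C
    ((0, 1, 1, 0, 0), (0, 1, -1, 0, 0)),   # B+C -> 2B
    ((0, 2, 0, 0, 0), (0, -1, 1, 0, 0)),   # 2B  -> B+C
    ((0, 0, 1, 0, 0), (1, 0, -1, 0, 0)),   # C   -> A
    ((0, 0, 0, 0, 1), (1, 0, 0, 0, -1)),   # E   -> A
    ((1, 0, 0, 1, 0), (-1, 0, 0, 0, 1)),   # A+D -> D+E
    ((0, 0, 0, 1, 1), (0, 0, 0, 1, -1)),   # D+E -> 2D
    ((0, 0, 0, 2, 0), (0, 0, 0, -1, 1)),   # 2D  -> D+E
]

def successors(x):
    for y, v in REACTIONS:
        if all(xi >= yi for xi, yi in zip(x, y)):
            yield tuple(xi + vi for xi, vi in zip(x, v))

def reachable(x0):
    seen, queue = {x0}, deque([x0])
    while queue:
        for nxt in successors(queue.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

m = 4
upsilon = {x for x in product(range(m + 1), repeat=5)
           if sum(x) == m and x[1] >= 1 and x[3] >= 1}

# Closed: no transition leaves Upsilon_m.
assert all(set(successors(x)) <= upsilon for x in upsilon)

# Irreducible: every state of Upsilon_m reaches every other state.
assert all(reachable(x) == upsilon for x in upsilon)
```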
Appendix {#appendix .unnumbered}
========
Analysis of the mass action system {#analysis-of-the-mass-action-system .unnumbered}
-----------------------------------
Consider the bimolecular mass action system , which we repeat here for convenience. $$\begin{split}
\schemestart
A+B\arrow{->[$\kappa_1$]}B+C\arrow{<=>[$\kappa_2$][$\kappa_3$]}2B
\arrow(@c1.south east--.north east){0}[-90,.25]
B\arrow{<=>[$\kappa_4$][$\kappa_5$]}2E\arrow{->[$\kappa_6$]}2D
\arrow(@c4.south east--.north east){0}[-90,.25]
C\arrow{->[$\kappa_7$]}A
\arrow(@c7.south east--.north east){0}[-90,.25]
D\arrow{->[$\kappa_8$]}E
\schemestop
\end{split}$$ The following holds.
The species $A$ is ACR. Moreover, there exists at least one $c>0$ with $g(c)=0$.
: Indeed, we have that $$g(c)=\begin{pmatrix}
-\kappa_1c_1c_2+\kappa_7c_3\\
\kappa_2c_2c_3-\kappa_3c_2^2-\kappa_4c_2+\kappa_5c_5^2\\
\kappa_1c_1c_2-\kappa_2c_2c_3+\kappa_3c_2^2-\kappa_7c_3\\
2\kappa_6c_5^2-\kappa_8c_4\\
2\kappa_4c_2-2(\kappa_5+\kappa_6)c_5^2+\kappa_8c_4
\end{pmatrix}$$ is zero if and only if $c_2=c_3=c_4=c_5=0$ or $$c=\left(
\frac{\kappa_3\kappa_7}{\kappa_1\kappa_2},
s,
\frac{\kappa_3}{\kappa_2}s,
\frac{2\kappa_4\kappa_6}{\kappa_5\kappa_8}s,
\sqrt{\frac{\kappa_4}{\kappa_5}s}
\right)\quad\text{for some }s\in{\mathbb{R}}_{>0}.$$
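The parametrization of the positive equilibria can be verified numerically; in the sketch below the rate constants and the parameter $s$ are illustrative choices.

```python
from math import sqrt

def g(c, k):
    """Vector field of the appendix network (species order A, B, C, D, E)."""
    c1, c2, c3, c4, c5 = c
    return (
        -k[1]*c1*c2 + k[7]*c3,
        k[2]*c2*c3 - k[3]*c2**2 - k[4]*c2 + k[5]*c5**2,
        k[1]*c1*c2 - k[2]*c2*c3 + k[3]*c2**2 - k[7]*c3,
        2*k[6]*c5**2 - k[8]*c4,
        2*k[4]*c2 - 2*(k[5] + k[6])*c5**2 + k[8]*c4,
    )

# Illustrative rate constants and free parameter s:
k = {1: 1.0, 2: 2.0, 3: 3.0, 4: 0.5, 5: 1.5, 6: 2.5, 7: 0.8, 8: 1.2}
s = 1.7
c = (k[3]*k[7]/(k[1]*k[2]), s, k[3]/k[2]*s,
     2*k[4]*k[6]/(k[5]*k[8])*s, sqrt(k[4]/k[5]*s))
assert all(abs(v) < 1e-9 for v in g(c, k))
```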
The reaction network has deficiency 2.
: Indeed, $$S=\operatorname{span}_{\mathbb{R}}\left\{
\begin{pmatrix}
-1\\ 0\\ 1\\ 0 \\ 0
\end{pmatrix},
\begin{pmatrix}
0\\ 1\\ -1\\ 0 \\ 0
\end{pmatrix},
\begin{pmatrix}
0 \\ -1 \\ 0 \\ 0 \\ 2
\end{pmatrix},
\begin{pmatrix}
0\\ 0\\ 0\\ 1 \\ -1
\end{pmatrix}
\right\}$$ and $$\delta=|{\mathcal{C}}|-\ell-\dim S=10-4-4=2.$$
There are two non-terminal complexes $y\neq y'$ such that only the first entry of $y'-y$ is different from 0.
: The two complexes $A+B$ and $B$ are non-terminal and their difference is $A$. Note that $A$ is the only ACR species, and $A+B$ and $B$ are the only two non-terminal complexes whose difference has only one entry that is not zero.
The reaction network is conservative.
: Indeed, the vector $(2,2,2,1,1)$ is a positive conservation law.
We will show that no extinction event can take place for the stochastically modeled mass action system, provided that $$\begin{aligned}
2X_1(0)+2X_2(0)+2X_3(0)+X_4(0)+X_5(0)&\geq4,\\
2X_2(0)+X_4(0)+X_5(0)&\geq2.\\\end{aligned}$$
Assume that a state $x'$ with $\lambda_{A+B\to B+C}(x')=0$ is reachable from $X(0)$. Then, there are two cases.
- We have $x'_2=0$. For any $t\geq0$, consider the quantity $$h(t)=2X_2(t)+X_4(t)+X_5(t).$$ The only reaction capable of reducing $h(t)$ is $2B\to B+C$. This reaction decreases $h(t)$ by 2, but it can only take place if at least 2 molecules of $B$ are present, in which case $h(t)\geq4$. Hence, under the assumption that $h(0)\geq 2$, for all $t\geq0$ we necessarily have $h(t)\geq2$. Since $x'$ is reachable from $X(0)$ and $x'_2=0$, we have $x'_4+x'_5\geq2$. By potentially letting the reaction $D\to E$ take place, we may assume that $x'_5\geq2$. Hence, the reaction $2E\to B$ can occur and the number of molecules of $B$ can become positive. At this point, either a state where $A+B\to B+C$ is active is reached, or we consider the following case.
- We have $x'_2>0$ and $x'_1=0$. We have three subcases:
- If $x'_3>0$, then a molecule of $A$ can be created by the occurrence of $C\to A$, which does not modify the number of molecules of $B$. Hence, a state can be reached where $A+B\to B+C$ is active.
- If $x'_3=0$ and $x'_2\geq 2$, then a molecule of $B$ can be transformed into a molecule of $C$ through $2B\to B+C$, which only consumes one molecule of $B$. This subcase is therefore reduced to the previous one.
- If $x'_3=0$ and $x'_2=1$, then $$4\leq 2x'_1+2x'_2+2x'_3+x'_4+x'_5=0+2+0+x'_4+x'_5,$$ which implies that $x'_4+x'_5\geq 2$. By potentially using the reaction $D\to E$, we can assume that $x'_5\geq2$, so that a molecule of $B$ can be created by the reaction $2E\to B$. This subcase is therefore reduced to the previous one.
It follows that no extinction set for $A+B\to B+C$ is reachable from $X(0)$. By successive applications of Proposition \[prop:consecutio\], it follows that the same holds for all the other reactions as well, so no extinction event can occur.
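As in the examples above, the conclusion can be checked exhaustively for a fixed initial condition; the sketch below uses the illustrative choice $X(0)=(1,1,0,0,0)$, which satisfies both inequalities, and verifies by breadth-first search that every reachable state can reach a state where $A+B\to B+C$ is active.

```python
from collections import deque

REACTIONS = [  # (source complex, net change), species order A, B, C, D, E
    ((1, 1, 0, 0, 0), (-1, 0, 1, 0, 0)),   # A+B -> B+C
    ((0, 1, 1, 0, 0), (0, 1, -1, 0, 0)),   # B+C -> 2B
    ((0, 2, 0, 0, 0), (0, -1, 1, 0, 0)),   # 2B  -> B+C
    ((0, 1, 0, 0, 0), (0, -1, 0, 0, 2)),   # B   -> 2E
    ((0, 0, 0, 0, 2), (0, 1, 0, 0, -2)),   # 2E  -> B
    ((0, 0, 0, 0, 2), (0, 0, 0, 2, -2)),   # 2E  -> 2D
    ((0, 0, 1, 0, 0), (1, 0, -1, 0, 0)),   # C   -> A
    ((0, 0, 0, 1, 0), (0, 0, 0, -1, 1)),   # D   -> E
]

def reachable(x0):
    seen, queue = {x0}, deque([x0])
    while queue:
        x = queue.popleft()
        for y, v in REACTIONS:
            if all(xi >= yi for xi, yi in zip(x, y)):
                nxt = tuple(xi + vi for xi, vi in zip(x, v))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

# X(0) = (1, 1, 0, 0, 0): every reachable state can itself reach a state with
# at least one molecule of A and one of B, so no extinction set is reachable.
for x in reachable((1, 1, 0, 0, 0)):
    assert any(s[0] >= 1 and s[1] >= 1 for s in reachable(x))
```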
Analysis of the mass action system {#analysis-of-the-mass-action-system-1 .unnumbered}
-----------------------------------
Consider the bimolecular mass action system , which we repeat here for convenience. $$\begin{split}
\schemestart
A+B\arrow{->[$\kappa_1$]}C+E\arrow{<=>[$\kappa_2$][$\kappa_3$]}B+D
\arrow(@c1.south east--.north east){0}[-90,.25]
C\arrow{->[$\kappa_4$]}A
\arrow(@c4.south east--.north east){0}[-90,.25]
B\arrow(B--D){->[$\kappa_5$]}D
\arrow(@D--E){->[$\kappa_6$]}[-120]E
\arrow(@E--@B){->[$\kappa_7$]}
\schemestop
\end{split}$$ We have the following.
The species $A$ is ACR. Moreover, there exists at least one $c>0$ with $g(c)=0$.
: Indeed, it can be checked that $$g(c)=\begin{pmatrix}
-\kappa_1c_1c_2+\kappa_4c_3\\
-\kappa_1c_1c_2+\kappa_2c_3c_5-\kappa_3c_2c_4-\kappa_5c_2+\kappa_7c_5\\
\kappa_1c_1c_2-\kappa_2c_3c_5+\kappa_3c_2c_4-\kappa_4c_3\\
\kappa_2c_3c_5-\kappa_3c_2c_4+\kappa_5c_2-\kappa_6c_4\\
\kappa_1c_1c_2-\kappa_2c_3c_5+\kappa_3c_2c_4+\kappa_6c_4-\kappa_7c_5\\
\end{pmatrix}$$ is zero if and only if $c_2=c_3=c_4=c_5=0$ or $$c=\left(
u,
s,
\frac{\kappa_1 u}{\kappa_4}s,
\frac{\kappa_5}{\kappa_6}s,
\frac{\kappa_5+\kappa_1u}{\kappa_7}s
\right)\quad\text{for some }s\in{\mathbb{R}}_{>0},$$ where $u$ is the unique positive real number satisfying $$\kappa_1^2\kappa_2\kappa_6 u^2+\kappa_1\kappa_2\kappa_5\kappa_6 u - \kappa_3\kappa_4\kappa_5\kappa_7=0,$$ namely $$u=\frac{-\kappa_2\kappa_5\kappa_6+\sqrt{\kappa_2^2\kappa_5^2\kappa_6^2+4\kappa_2\kappa_3\kappa_4\kappa_5\kappa_6\kappa_7}}{2\kappa_1\kappa_2\kappa_6}.$$
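Both the quadratic equation for $u$ and the equilibrium parametrization can be checked numerically; in the sketch below the rate constants and the parameter $s$ are illustrative choices.

```python
from math import sqrt

def g(c, k):
    """Vector field of the cycle network (species order A, B, C, D, E)."""
    c1, c2, c3, c4, c5 = c
    return (
        -k[1]*c1*c2 + k[4]*c3,
        -k[1]*c1*c2 + k[2]*c3*c5 - k[3]*c2*c4 - k[5]*c2 + k[7]*c5,
        k[1]*c1*c2 - k[2]*c3*c5 + k[3]*c2*c4 - k[4]*c3,
        k[2]*c3*c5 - k[3]*c2*c4 + k[5]*c2 - k[6]*c4,
        k[1]*c1*c2 - k[2]*c3*c5 + k[3]*c2*c4 + k[6]*c4 - k[7]*c5,
    )

# Illustrative rate constants:
k = {1: 1.0, 2: 2.0, 3: 0.7, 4: 1.3, 5: 0.9, 6: 1.1, 7: 0.6}
u = (-k[2]*k[5]*k[6]
     + sqrt((k[2]*k[5]*k[6])**2 + 4*k[2]*k[3]*k[4]*k[5]*k[6]*k[7])
     ) / (2*k[1]*k[2]*k[6])
s = 2.3
c = (u, s, k[1]*u/k[4]*s, k[5]/k[6]*s, (k[5] + k[1]*u)/k[7]*s)
assert u > 0
assert all(abs(v) < 1e-9 for v in g(c, k))
```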
The reaction network has deficiency 1.
: Indeed, $$S=\operatorname{span}_{\mathbb{R}}\left\{
\begin{pmatrix}
-1\\ -1\\ 1\\ 0\\1
\end{pmatrix},
\begin{pmatrix}
0\\ 1\\ -1\\1\\-1
\end{pmatrix},
\begin{pmatrix}
1\\ 0\\ -1\\0\\0
\end{pmatrix},
\begin{pmatrix}
0\\ -1\\ 0\\1\\0
\end{pmatrix}
\right\}$$ and $$\delta=|{\mathcal{C}}|-\ell-\dim S=8-3-4=1.$$
There are no non-terminal complexes $y\neq y'$ such that $y'-y$ has only one entry different from 0.
: This can be easily checked, since the only non-terminal complexes are $A+B$ and $C$.
The reaction network is conservative.
: Indeed, the vector $(1,1,1,1,1)$ is a positive conservation law.
We will show that under the assumption $$\begin{aligned}
m=X_1(0)+X_2(0)+X_3(0)+X_4(0)+X_5(0)&\geq 2,\\
X_2(0)+X_4(0)+X_5(0)&\geq 1,\end{aligned}$$ no extinction event can occur for the stochastic model. To this aim, first note that the quantity $$h(t)=X_2(t)+X_4(t)+X_5(t)$$ is always greater than or equal to 1. Indeed, the only reaction that can decrease this quantity is $$B+D\to C+E.$$ This reaction decreases $h(t)$ by 1, but it is active only if at least one molecule of $B$ and one molecule of $D$ are present, implying $h(t)\geq2$ just before it occurs. Assume that a state $x'$ is reachable from $X(0)$, with $\lambda_{A+B\to C+E}(x')=0$. This implies that one of the two following cases occurs.
- We have $x'_2=0$. Since $h(t)\geq1$ for all $t\geq0$, it follows that at least one molecule of $D$ or one molecule of $E$ is present. Hence, a molecule of $B$ can be created by the reactions $D\to E$ and $E\to B$. Either a state where $A+B\to C+E$ is active is reached, or we are in the following case.
- We have $x'_1=0$ and $x'_2\geq1$. One of the two following subcases holds.
- We have $x'_3\geq1$. Then, a molecule of $A$ can be created through $C\to A$ and a state is reached where $A+B\to C+E$ is active.
- We have $x'_3=0$. Hence, $$2\leq m=x'_1+x'_2+x'_3+x'_4+x'_5=x'_2+x'_4+x'_5.$$ Thanks to the reactions $B\to D$, $D\to E$, and $E\to B$, we can transform all the molecules of $B$, $D$, and $E$ into at least one molecule of $B$ and one molecule of $D$, so that a state where $B+D\to C+E$ is active is reached. Upon the action of $B+D\to C+E$, a molecule of $C$ is then produced, and this subcase reduces to the previous one.
In conclusion, no extinction set for $A+B\to C+E$ is reachable from $X(0)$. By applying Proposition \[prop:consecutio\], it follows that the same holds for the other reactions. Hence, no extinction event can occur.
---
abstract: 'Electron-hydrogen scattering is studied in the Faddeev-Merkuriev integral equation approach. The equations are solved by using the Coulomb-Sturmian separable expansion technique. We present $S$- and $P$-wave scattering and reactions cross sections up to the $H(n=4)$ threshold.'
author:
- 'Z. Papp${}^{1}$ and C-.Y. Hu${}^{2}$'
title: 'Electron-hydrogen scattering in Faddeev-Merkuriev integral equation approach'
---
Introduction {#introduction .unnumbered}
============
The scattering of electrons on a hydrogen atom is a fundamental three-body problem in atomic physics. The long-range Coulomb interaction presents the major difficulty. On the other hand, it is a special kind of Coulomb three-body problem, as it contains two identical particles. While many studies have been carried out aiming at solving the Schrödinger equation using perturbative, close-coupling, variational or direct numerical methods, approaches along the lines of the Faddeev equations are relatively scarce. Here, by solving Faddeev-type integral equations, we present a general numerical method suitable for the treatment of elastic and inelastic processes in three-body Coulombic systems with two identical particles and apply the formalism to the electron-hydrogen system.
For quantum mechanical three-body systems the Faddeev integral equations are the fundamental equations. They possess connected kernels and therefore they are Fredholm-type integral equations of the second kind. The Faddeev equations were derived for short-range interactions, and if we simply plug in a Coulomb-like potential they become singular. The necessary modifications were proposed by Merkuriev [@fm-book]. In Merkuriev’s approach the Coulomb interactions are split into short-range and long-range parts. The long-range parts are included in the “free” Green’s operators, and the Faddeev procedure is performed only with the short-range potentials. The corresponding modified Faddeev, or Faddeev-Merkuriev, equations are mathematically well-behaved. They possess compact kernels even in the case of attractive Coulombic interactions. This means that the Faddeev-Merkuriev equations possess all the nice properties of the original Faddeev equations.
However, the associated three-body Coulomb Green’s operator is not known explicitly. To circumvent this problem, the integral equations were cast into differential form and the appropriate boundary conditions were derived from the asymptotic analysis of the three-body Coulomb Green’s operator. These modified Faddeev differential equations were successfully solved for various atomic three-body problems, including electron-hydrogen scattering up to the $H(n=3)$ threshold [@kwh].
A characteristic property of atomic three-body systems is that, due to the attractive Coulomb interactions, they have infinitely many two-body channels. As the total energy of the system increases, more and more channels open up. The differential equation approach needs boundary conditions for each channel, and becomes intractable if the energy increases beyond a limit. Integral equations do not need boundary conditions; this information is incorporated in the Green’s operators. They need only initial conditions, which are much simpler. Therefore an integral equation approach to the three-body Coulomb problem would be very useful: it could provide a unified description of scattering and reaction processes for all energies.
In the past few years we have developed a new approach to the three-body Coulomb problem. Faddeev-type integral equations were solved by using the Coulomb-Sturmian separable expansion method. The approach was developed first for solving the nuclear three-body scattering problem with repulsive Coulomb interactions [@pzsc], and has been adapted recently for atomic systems with attractive Coulomb interactions [@phhky]. The basic concept in this method is a “three-potential” picture, where the $S$ matrix is given in three terms. In this approach we solve the Faddeev-Merkuriev integral equations such that the associated three-body Coulomb Green’s operator is calculated by an independent Lippmann-Schwinger-type integral equation. This Lippmann-Schwinger integral equation contains the channel-distorted Coulomb Green’s operator, which can be calculated as a contour integral of two-body Coulomb Green’s operators. The method was tested in positron-hydrogen scattering for energies up to the $H(n=2)-Ps(n=2)$ gap [@phhky], and good agreement with the configuration-space solution of the Faddeev-Merkuriev equations was found.
In this paper we apply this formalism to the electron-hydrogen scattering problem. In Sec. I we briefly describe the Faddeev-Merkuriev integral equations; the details are given in Ref. [@phhky]. However, the fact that in the electron-hydrogen system we have to deal with identical particles requires some additional considerations: the symmetry simplifies the numerical procedure. In Sec. II the integral equations are solved by the Coulomb-Sturmian separable expansion method. In Sec. III we show some test calculations up to the $H(n=4)$ threshold with total angular momenta $L=0$ and $L=1$. Finally, we draw some conclusions.
Faddeev-Merkuriev integral equations for the $e^{-}+H$ system
=============================================================
In the $e^{-}+H$ system the two electrons are identical. Let us denote them by $1$ and $2$, and the non-identical proton by $3$. The Hamiltonian is given by $$H=H^0 + v_1^C + v_2^C + v_3^C,
\label{H}$$ where $H^0$ is the three-body kinetic energy operator and $v_\alpha^C$ denotes the Coulomb interaction in the subsystem $\alpha$. We use the usual configuration-space Jacobi coordinates $x_\alpha$ and $y_\alpha$, where $x_\alpha$ is the coordinate between the pair $(\beta,\gamma)$ and $y_\alpha$ is the coordinate between the particle $\alpha$ and the center of mass of the pair $(\beta,\gamma)$. Thus the potential $v_\alpha^C$, the interaction of the pair $(\beta,\gamma)$, appears as $v_\alpha^C (x_\alpha)$. The Hamiltonian (\[H\]) is defined in the three-body Hilbert space. So, the two-body potential operators are formally embedded in the three-body Hilbert space, $$v^C = v^C (x) {\bf 1}_{y},
\label{pot0}$$ where ${\bf 1}_{y}$ is a unit operator in the two-body Hilbert space associated with the $y$ coordinate.
The role of a Coulomb potential in a three-body system is twofold. On the one hand, it acts like a long-range potential, since it modifies the asymptotic motion. On the other hand, it acts like a short-range potential, since it strongly correlates the particles and may support bound states. Merkuriev introduced a separation of the three-body configuration space into different asymptotic regions [@fm-book]. The two-body asymptotic region $\Omega$ is defined as the part of the three-body configuration space where the conditions $$|x| < x_0 ( 1 + |y|/ y_0)^{1/\nu},
\label{oma}$$ with $x_0, y_0 >0$ and $\nu > 2$, are satisfied. Merkuriev proposed to split the Coulomb interaction in the three-body configuration space into short-range and long-range terms $$v^C =v^{(s)} +v^{(l)} ,
\label{pot}$$ where the superscripts $s$ and $l$ indicate the short- and long-range attributes, respectively. The splitting is carried out with the help of a splitting function $\zeta$,
$$\begin{aligned}
v^{(s)} (x,y) & = & v^C(x) \zeta (x,y),
\\
v^{(l)} (x,y) & = & v^C(x) \left[1- \zeta (x,y) \right].
\label{potl}\end{aligned}$$
The function $\zeta$ vanishes asymptotically within the three-body sector, where $x\sim y \to \infty$, and approaches one in the two-body asymptotic region $\Omega$, where $x \ll y \to \infty$. Consequently, in the three-body sector $v^{(s)}$ vanishes and $v^{(l)}$ approaches $v^{C}$. In practice, usually the functional form $$\zeta (x,y) =
2/\left\{1+ \exp \left[ {(x/x_0)^\nu}/{(1+y/y_0)} \right] \right\},
\label{oma1}$$ is used.
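As an illustration of the splitting, the following sketch (Python; the parameter values $x_0=3$, $y_0=20$, $\nu=2.1$ are borrowed from the Results section, the code itself is not part of the original work) implements the splitting function (\[oma1\]) and checks its limiting behaviour in the two regions.

```python
import math

def zeta(x, y, x0=3.0, y0=20.0, nu=2.1):
    """Merkuriev splitting function, Eq. (oma1)."""
    t = (x / x0) ** nu / (1.0 + y / y0)
    if t > 700.0:           # avoid overflow in exp(); zeta is ~0 here anyway
        return 0.0
    return 2.0 / (1.0 + math.exp(t))

def v_short(x, y, q=-1.0):
    """Short-range part of an attractive Coulomb potential q/x."""
    return (q / x) * zeta(x, y)

def v_long(x, y, q=-1.0):
    """Long-range part; v_short + v_long reproduce the full q/x."""
    return (q / x) * (1.0 - zeta(x, y))

# two-body region (x << y -> inf): the full Coulomb tail is short-range
assert abs(zeta(1.0, 1e6) - 1.0) < 1e-5
# three-body sector (x ~ y -> inf): only the long-range part survives
assert zeta(500.0, 500.0) < 1e-6
# the split is exact everywhere
assert abs(v_short(2.0, 5.0) + v_long(2.0, 5.0) - (-1.0 / 2.0)) < 1e-12
```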
In the Hamiltonian (\[H\]) the Coulomb potential $v_3^C$, the interaction between the two electrons, is repulsive and does not support bound states. Consequently, there are no two-body channels associated with this fragmentation. Therefore the entire $v_3^C$ can be considered as a long-range potential. Then, the long-range Hamiltonian is defined as $$H^{(l)} = H^0 + v_1^{(l)}+ v_2^{(l)}+ v_3^{C},
\label{hl}$$ and the three-body Hamiltonian takes the form $$H = H^{(l)} + v_1^{(s)}+ v_2^{(s)}.
\label{hll}$$ So, the Hamiltonian (\[hll\]) appears formally as a three-body Hamiltonian with two short-range potentials. The bound-state wave function $|\Psi \rangle$ satisfies the homogeneous Lippmann-Schwinger integral equation $$|\Psi\rangle= G^{(l)} \left[ v_1^{(s)}+ v_2^{(s)} \right] |\Psi\rangle =
G^{(l)} v_1^{(s)} |\Psi\rangle + G^{(l)} v_2^{(s)} |\Psi\rangle,
\label{Psi}$$ where $G^{(l)}(z)=(z-H^{(l)})^{-1}$ is the resolvent operator of $H^{(l)}$. This induces, in the spirit of the Faddeev procedure, the splitting of the wave function $|\Psi\rangle$ into two components $$|\Psi\rangle=|\psi_1\rangle +|\psi_2\rangle,
\label{psi}$$ where the components are defined by $$|\psi_\alpha \rangle= G^{(l)} v_\alpha^{(s)} |\Psi\rangle,
\label{psidef}$$ with $\alpha=1,2$. The components satisfy the set of two-component Faddeev-Merkuriev integral equations
\[fm2comp\] $$\begin{aligned}
| \psi_1 \rangle &= | \Phi_1^{(l)} \rangle + & G_1^{(l)} v_1^{(s)}
| \psi_2 \rangle \label{fm2c1}\\
| \psi_2 \rangle &= \phantom{\ | \phi_1 \rangle + } &
G_2^{(l)} v_2^{(s)} | \psi_1 \rangle,
\label{fm2c2}\end{aligned}$$
where $G_\alpha^{(l)}$ is the resolvent operator of the channel Coulomb Hamiltonian $$H_\alpha^{(l)}=H^{(l)}+v_\alpha^{(s)}$$ and the inhomogeneous term $|\Phi_1^{(l)}\rangle$ is an eigenstate of $H^{(l)}_1$.
Before going further let us examine the spectral properties of the Hamiltonian $$H_1^{(l)}=H^{(l)}+v_1^{(s)}=H^0+v_1^C+v_2^{(l)}+v_3^C.$$ It obviously supports infinitely many two-body channels associated with the bound states of the attractive Coulomb potential $v_1^C$. The potential $v_3^C$ is repulsive and does not have bound states. The three-body potential $v_2^{(l)}$ is attractive and constructed such that $v_2^{(l)}(x_2,y_2)\to 0$ if $y_2 \to \infty$. Therefore, there are no two-body channels associated with fragmentations $2$ and $3$; the Hamiltonian $H_1^{(l)}$ has only $1$-type two-body asymptotic channels. Consequently, the corresponding $G_1^{(l)}$ Green’s operator, acting on the $v_1^{(s)} | \psi_2 \rangle$ term in (\[fm2c1\]), will generate only $1$-type two-body asymptotic channels in $|\psi_1\rangle$. A similar analysis is valid for $|\psi_2\rangle$. Thus, the Faddeev-Merkuriev procedure separates the three-body wave function into components in such a way that each component has only one type of two-body asymptotic channels. This is the main advantage of the Faddeev equations and, as this analysis shows, this property remains true also for attractive Coulomb potentials if the Merkuriev splitting is adopted.
In the $e^- e^- p$ system the particles $1$ and $2$, the two electrons, are identical and indistinguishable. Therefore, the Faddeev components $| \psi_1 \rangle$ and $| \psi_2 \rangle$, in their own natural Jacobi coordinates, should have the same functional forms $$\langle x_1 y_1 | \psi_1 \rangle = \langle x_2 y_2 | \psi_2 \rangle
= \langle x y | \psi \rangle.$$ On the other hand, by interchanging the two electrons we have $${\mathcal P} | \psi_1 \rangle = p | \psi_2 \rangle,$$ where the operator ${\mathcal P}$ describes the exchange of particles $1$ and $2$, and $p=\pm 1$ is the eigenvalue of ${\mathcal P}$. Building this information into the formalism results in the integral equation $$\label{fmp}
| \psi \rangle = | \Phi_1^{(l)} \rangle + G_1^{(l)} v_1^{(s)} p {\mathcal P}
| \psi \rangle,$$ which alone is sufficient to determine $| \psi \rangle$. We notice that so far no approximation has been made; although this Faddeev-Merkuriev integral equation has only one component, it gives a full account of the asymptotic and symmetry properties of the system.
Coulomb-Sturmian separable expansion approach
=============================================
We solve this integral equation by applying the Coulomb–Sturmian separable expansion approach. This approach has been established in a series of papers for two- [@cspse2] and three-body [@cspse3; @pzsc; @phhky] problems with Coulomb-like potentials. The Coulomb-Sturmian (CS) functions are defined by $$\langle r|n l \rangle =\left[ \frac {n!} {(n+2l+1)!} \right]^{1/2}
(2br)^{l+1} \exp(-b r) L_n^{2l+1}(2b r), \label{basisr}$$ with $n$ and $l$ being the radial and orbital angular momentum quantum numbers, respectively, and $b$ the size parameter of the basis. The CS functions $\{ |n l\rangle \}$ form a biorthonormal discrete basis in the radial two-body Hilbert space; the biorthogonal partner is defined by $\langle r |\widetilde{n l}\rangle=\langle r |{n l}\rangle/r$.
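As a numerical cross-check of the basis (illustrative Python using SciPy; the value $b=0.6$ is taken from the Results section, everything else is our own choice), the CS functions of Eq. (\[basisr\]) can be evaluated and the biorthonormality relation $\int_0^\infty \langle r|nl\rangle \langle r|n'l\rangle\,dr/r = \delta_{nn'}$ verified directly:

```python
import math
import numpy as np
from scipy.special import eval_genlaguerre
from scipy.integrate import quad

def cs(n, l, b, r):
    """Coulomb-Sturmian function <r|nl>, Eq. (basisr)."""
    norm = math.sqrt(math.factorial(n) / math.factorial(n + 2 * l + 1))
    return norm * (2 * b * r) ** (l + 1) * np.exp(-b * r) \
        * eval_genlaguerre(n, 2 * l + 1, 2 * b * r)

# biorthonormality: <~nl|n'l> = int_0^inf cs(n) cs(n') / r dr = delta_{nn'}
b, l = 0.6, 0
for n in range(4):
    for m in range(4):
        val, _ = quad(lambda r: cs(n, l, b, r) * cs(m, l, b, r) / r,
                      0, np.inf)
        assert abs(val - (1.0 if n == m else 0.0)) < 1e-6
print("CS basis is biorthonormal")
```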
Since the three-body Hilbert space is a direct product of two-body Hilbert spaces, an appropriate basis is the bipolar basis, which can be defined as the angular momentum coupled direct product of the two-body bases, $$| n \nu l \lambda \rangle_\alpha =
| n l \rangle_\alpha \otimes |
\nu \lambda \rangle_\alpha, \ \ \ \ (n,\nu=0,1,2,\ldots),
\label{cs3}$$ where $| n l \rangle_\alpha$ and $|\nu \lambda \rangle_\alpha$ are associated with the coordinates $x_\alpha$ and $y_\alpha$, respectively. With this basis the completeness relation takes the form (with angular momentum summation implicitly included) $${\bf 1} =\lim\limits_{N\to\infty} \sum_{n,\nu=0}^N |
\widetilde{n \nu l \lambda } \rangle_\alpha \;\mbox{}_\alpha\langle
{n \nu l \lambda} | =
\lim\limits_{N\to\infty} {\bf 1}^{N}_\alpha,$$ where $\langle x y | \widetilde{ n \nu l \lambda}\rangle =
\langle x y | { n \nu l \lambda}\rangle/(x y)$.
We make the following approximation on the integral equation (\[fmp\]) $$\label{fmpa}
| \psi \rangle = | \Phi_1^{(l)} \rangle + G_1^{(l)}
{\bf 1}^{N}_1 v_1^{(s)} p {\mathcal P} {\bf 1}^{N}_1 | \psi \rangle,$$ i.e. the operator $v_1^{(s)} p {\mathcal P}$ is approximated in the three-body Hilbert space by a separable form, viz. $$\begin{aligned}
v_1^{(s)}p {\mathcal P} & = & \lim_{N\to\infty}
{\bf 1}^{N}_1 v_1^{(s)} p {\mathcal P} {\bf 1}^{N}_1 \nonumber \\
& \approx & {\bf 1}^{N}_1 v_1^{(s)} p {\mathcal P} {\bf 1}^{N}_1 \nonumber \\
& \approx & \sum_{n,\nu ,n', \nu'=0}^N
|\widetilde{n\nu l \lambda}\rangle_1 \; \underline{v}_1^{(s)}
\;\mbox{}_1 \langle \widetilde{n' \nu' l' \lambda'}|, \label{sepfe}\end{aligned}$$ where $\underline{v}_1^{(s)}=\mbox{}_1 \langle n\nu l \lambda|
v_1^{(s)} p {\mathcal P} |n' \nu' l' \lambda' \rangle_1$. Utilizing the properties of the exchange operator ${\mathcal P}$ these matrix elements can be written in the form $\underline{v}_1^{(s)}= p\times (-)^{l'} \; \mbox{}_1 \langle n\nu l \lambda|
v_1^{(s)}|n' \nu' l' \lambda' \rangle_2$, and can be evaluated numerically by using the transformation of the Jacobi coordinates [@bb]. The completeness of the CS basis guarantees the convergence of the method with increasing $N$ and number of angular momentum channels.
Now, by applying the bra $\langle \widetilde{ n'' \nu'' l'' \lambda''}|$ on Eq. (\[fmpa\]) from left, the solution of the inhomogeneous Faddeev-Merkuriev equation turns into the solution of a matrix equation for the component vector $\underline{\psi}=
\mbox{}_1 \langle \widetilde{ n\nu l\lambda} | \psi \rangle$ $$\underline{\psi} = \underline{\Phi}_1^{(l)} + \underline{G}_1^{(l)}
\underline{v}_1^{(s)} \underline{\psi} , \label{fn-eq1sm}$$ where $$\underline{\Phi}_1^{(l)} = \mbox{}_1 \langle \widetilde{ n\nu l\lambda }
|\Phi_1^{(l)} \rangle$$ and $$\underline{G}_1^{(l)}=\mbox{}_1 \langle \widetilde{
n\nu l\lambda} |G_1^{(l)}|\widetilde{n' \nu' l' \lambda'}\rangle_1.$$ The formal solution of Eq. (\[fn-eq1sm\]) is given by $$\label{fep1}
\underline{\psi }= \lbrack (\underline{G}_1^{(l)})^{-1}-
\underline{v}_1^{(s)}\rbrack^{-1} (\underline{G}_1^{(l)})^{-1}
\underline{\Phi}_1^{(l)}.$$
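As a finite-dimensional sanity check (illustrative Python, with random matrices standing in for the CS matrix elements; not part of the original computation), one can verify that the formal solution (\[fep1\]) indeed satisfies the linear system (\[fn-eq1sm\]):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6                                             # truncated basis size
G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # plays G_1^(l)
v = rng.normal(size=(N, N))                                  # plays v_1^(s)
Phi = rng.normal(size=N)                                     # plays Phi_1^(l)

Ginv = np.linalg.inv(G)
# Eq. (fep1): psi = [(G)^{-1} - v]^{-1} (G)^{-1} Phi
psi = np.linalg.solve(Ginv - v, Ginv @ Phi)

# it solves the original equation psi = Phi + G v psi, Eq. (fn-eq1sm)
assert np.allclose(psi, Phi + G @ (v @ psi))
```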
Unfortunately, neither $\underline{G}_1^{(l)}$ nor $\underline{\Phi}_1^{(l)}$ is known. They are related to the Hamiltonian $H_1^{(l)}$, which is still a complicated three-body Coulomb Hamiltonian. The approximation scheme for $\underline{G}_1^{(l)}$ and $\underline{\Phi}_1^{(l)}$ is presented in Ref. [@phhky]. Starting from the resolvent relation $$G_1^{(l)}=\widetilde{G}_1 + \widetilde{G}_1 U_1 G_1^{(l)},
\label{ls1}$$ where $\widetilde{G}_1$ is the resolvent operator of the Hamiltonian $$\label{htilde}
\widetilde{H}_1 = H^{0}+v_1^C$$ and the potential $U_1$ is defined by $$U_1=v_2^{(l)}+v_3^C,$$ for the CS matrix elements $(\underline{G}^{(l)}_1)^{-1}$ we get $$(\underline{G}^{(l)}_1)^{-1}=
(\underline{\widetilde{G}}_1)^{-1} - \underline{U}_1,
\label{gleq}$$ where $$\underline{\widetilde{G}}_{1} =
\mbox{}_1\langle \widetilde{n \nu l \lambda} |
\widetilde{G}_1 | \widetilde{ n' \nu' l' \lambda'} \rangle_1
\label{gtilde}$$ and $$\underline{U}_{1} =
\mbox{}_1\langle n\nu l \lambda | U_1 | n' \nu' l' \lambda' \rangle_1.$$ These latter matrix elements can again be evaluated numerically.
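The resolvent relation (\[ls1\]) and its inverse form (\[gleq\]) can be checked on a toy model in which $\widetilde{H}_1$ and $U_1$ are replaced by small symmetric matrices (an illustration only; all names and numbers are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
Ht = rng.normal(size=(N, N)); Ht = (Ht + Ht.T) / 2   # plays H~_1
U = rng.normal(size=(N, N)); U = (U + U.T) / 2       # plays v_2^(l) + v_3^C
z = 1.5 + 0.5j                                       # off the real spectrum

Gt = np.linalg.inv(z * np.eye(N) - Ht)               # G~_1(z)
Gl = np.linalg.inv(z * np.eye(N) - Ht - U)           # G_1^(l)(z)

# Lippmann-Schwinger relation, Eq. (ls1)
assert np.allclose(Gl, Gt + Gt @ U @ Gl)
# and its inverse form, Eq. (gleq): (G^(l))^{-1} = (G~)^{-1} - U
assert np.allclose(np.linalg.inv(Gl), np.linalg.inv(Gt) - U)
```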
Similarly, the wave function $|{\Phi}_1^{(l)}\rangle$, a scattering eigenstate of $H_1^{(l)}$, satisfies the Lippmann-Schwinger equation $$|{\Phi}_1^{(l)}\rangle=|\widetilde{\Phi}_1\rangle +
\widetilde{G}_1 U_1 |{\Phi}_1^{(l)}\rangle,
\label{eqlsphil}$$ where $|\widetilde{\Phi}_1 \rangle$ is an eigenstate of $\widetilde{H}_1$. The solution is given by $$\underline{\Phi}_1^{(l)} =
[(\underline{\widetilde{G}}_1)^{-1} - \underline{U}_1]^{-1}
(\underline{\widetilde{G}}_1)^{-1} \underline{\widetilde{\Phi}}_1,
\label{eqphil}$$ where $\underline{\widetilde{\Phi}}_{1} =
\mbox{}_1\langle \widetilde{ n \nu l \lambda }|
\widetilde{\Phi}_1 \rangle$.
The three-particle free Hamiltonian can be written as a sum of two-particle free Hamiltonians $$H^0=h_{x_1}^0+h_{y_1}^0.$$ Consequently the Hamiltonian $\widetilde{H}_1$ of Eq. (\[htilde\]) appears as a sum of two two-body Hamiltonians acting on different coordinates $$\widetilde{H}_1 =h_{x_1}+h_{y_1},$$ with $h_{x_1}=
h_{x_1}^0+v_1^C(x _1)$ and $h_{y_1}=h_{y_1}^0$, which, of course, commute. Therefore the eigenstates of $\widetilde{H}_1$, in CS representation, are given by $$\mbox{}_1\langle \widetilde{ n \nu l \lambda }|
\widetilde{\Phi}_1 \rangle = \mbox{}_1\langle \widetilde{ n l}|
{\phi}_1 \rangle \times \mbox{}_1\langle \widetilde{ \nu \lambda }|
{\chi}_1 \rangle,
\label{phichi}$$ where $|\phi_1 \rangle$ and $|\chi_1 \rangle$ are bound and scattering eigenstates of $h_{x_1}$ and $h_{y_1}$, respectively. The CS matrix elements of the two-body bound and scattering states $\langle \widetilde{ n l}|
{\phi} \rangle$ and $\langle \widetilde{ \nu \lambda }|
{\chi} \rangle$, respectively, are known analytically from the two-body case [@cspse2].
The most crucial point in this procedure is the calculation of the matrix elements $\underline{\widetilde{G}}_1$. The Green’s operator $\widetilde{G}_1$ is a resolvent of the sum of two commuting Hamiltonians. Thus, according to the convolution theorem, the three-body Green’s operator $\widetilde{G}_1$ equates to a convolution integral of two-body Green’s operators, i.e. $$\widetilde{G}_1 (z)=
\frac 1{2\pi {i}}\oint_C dz' \,g_{x_1}(z-z')\;g_{y_1}(z'),
\label{contourint}$$ where $g_{x_1}(z)=(z-h_{x_1})^{-1}$ and $g_{y_1}(z)=(z-h_{y_1})^{-1}$. The contour $C$ should be taken counterclockwise around the singularities of $g_{y_1}$ in such a way that $g_{x_1}$ is analytic on the domain encircled by $C$.
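The convolution formula (\[contourint\]) can be demonstrated on a toy model in which $h_{x_1}$ and $h_{y_1}$ are replaced by small diagonal matrices (a sketch with illustrative spectra, not the actual atomic Hamiltonians); a simple circular contour around the spectrum of $h_{y_1}$ already reproduces the directly inverted resolvent:

```python
import numpy as np

# Two commuting "Hamiltonians" on a product space: H = hx (x) 1 + 1 (x) hy.
hx = np.diag([-0.5, -0.125])        # Coulomb-like bound-state energies
hy = np.diag([0.3, 0.7, 1.2])       # positive "free" energies
Ix, Iy = np.eye(2), np.eye(3)
H = np.kron(hx, Iy) + np.kron(Ix, hy)

E = -2.0                            # below all singularities of g_x(E - z')
def gx(z): return np.linalg.inv(z * Ix - hx)
def gy(z): return np.linalg.inv(z * Iy - hy)

# contour: counterclockwise circle around spec(hy) = [0.3, 1.2]; the
# singularities of gx(E - z') sit at z' = E + 0.5, E + 0.125, safely outside
c, r, M = 0.75, 1.0, 400
G = np.zeros((6, 6), dtype=complex)
for t in 2 * np.pi * np.arange(M) / M:
    zp = c + r * np.exp(1j * t)
    dz = 1j * r * np.exp(1j * t) * (2 * np.pi / M)
    G += np.kron(gx(E - zp), gy(zp)) * dz
G /= 2j * np.pi

# the contour integral reproduces the three-body resolvent (E - H)^{-1}
assert np.allclose(G, np.linalg.inv(E * np.eye(6) - H), atol=1e-10)
```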
In time-independent scattering theory the Green’s operator has a branch-cut singularity at scattering energies. In our formalism $\widetilde{G}_1 (E)$ should be understood as $\widetilde{G}_1 (E)=\lim_{\varepsilon\to 0} \widetilde{G}_1
(E +{\mathrm{i}}\varepsilon)$, with $\varepsilon > 0$ and $E < 0$, since in this work we are considering scattering below the three-body breakup threshold. To examine the analytic structure of the integrand in Eq. (\[contourint\]) let us take $\varepsilon$ finite. By doing so, the singularities of $g_{x_1}$ and $g_{y_1}$ become well separated. In fact, $g_{y_1}$ is a free Green’s operator with a branch cut on the $[0,\infty)$ interval, while $g_{x_1}(E+{\mathrm{i}}\varepsilon-z')$ is a Coulomb Green’s operator which, as a function of $z'$, has a branch cut on the $(-\infty,E+{\mathrm{i}}\varepsilon]$ interval and infinitely many poles accumulating at $E+{\mathrm{i}}\varepsilon$. Now, the branch cut of $g_{y_1}$ can easily be encircled such that the singularities of $g_{x_1}$ lie outside the encircled domain (Fig. \[fig1\]). However, this would no longer be the case in the $\varepsilon\to 0$ limit. Therefore the contour $C$ is deformed analytically such that the upper part descends onto the unphysical Riemann sheet of $g_{y_1}$, while the lower part of $C$ is detoured away from the cut (Fig. \[fig2\]). The contour in Fig. \[fig2\] is obtained by analytic deformation of the one in Fig. \[fig1\], but now, even in the $\varepsilon\to 0$ limit, it avoids the singularities of $g_{x_1}$. Thus, with the contour in Fig. \[fig2\] the mathematical conditions for the contour-integral representation of $\widetilde{G}_1$ in Eq. (\[contourint\]) are met also for scattering-state energies. The matrix elements $\underline{\widetilde{G}}_1$ can be cast into the form $$\widetilde{\underline{G}}_1 (E )= \frac 1{2\pi \mathrm{i}}\oint_C
dz' \,\underline{g}_{x_1 }(E -z')\; \underline{g}_{y_1}(z'),
\label{contourint2}$$ where the corresponding CS matrix elements of the two-body Green’s operators in the integrand are known analytically for all complex energies [@cspse2; @phhky].
In the three-potential formalism [@pzsc; @phhky] the $S$ matrix can be decomposed into three terms. The first one describes single-channel Coulomb scattering, the second one is a multichannel two-body-type scattering due to the potential $U$, and the third one is a genuine three-body scattering. In our $e^- + H$ case the target is neutral and the first term is absent. For the on-shell $T$ matrix we have $$T_{f i} = \sqrt{\frac{\mu_f \mu_i}{k_f k_i}} \left(\langle
\widetilde{\Phi }_{1 f}^{(-)}|U_1 |\Phi _{1 i}^{(l)(+)}\rangle +
\langle \Phi _{1 f}^{(l)(-)}|v_1^{(s)} | \psi _{2 i}^{(+)}\rangle\right),
\label{s3}$$ where $i$ and $f$ refer to the initial and the final states, respectively, $\mu$ is the channel reduced mass and $k$ is the channel wave number. Having the solutions $\underline{\psi}$ and $\underline{\Phi}^{(l)}$ and the matrix elements $\underline{U}_1$ and $\underline{v}^{(s)}_1$, the $T$ matrix elements can easily be evaluated. The spin-weighted cross section of the transition $i\to f$ is given by $$\sigma_{f i} = \frac{\pi a_0^2}{k_i^2} \frac{(2 S_{12}+1)(2L+1)}
{(2 l_i +1)} |T_{f i}|^2,$$ where $a_0$ is the Bohr radius, $L$ is the total angular momentum, $S_{12}$ is the total spin of the two electrons and $l_i$ is the angular momentum of the target hydrogen atom.
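As a small worked example (illustrative Python; the $T$-matrix value below is hypothetical), the spin-weighted cross-section formula reads in code:

```python
import math

def sigma(T_fi, k_i, L, S12, l_i, a0=1.0):
    """Spin-weighted partial cross section (units of a0^2, atomic units):
    sigma = pi a0^2 / k_i^2 * (2 S12 + 1)(2 L + 1)/(2 l_i + 1) |T_fi|^2."""
    return (math.pi * a0**2 / k_i**2) \
        * (2 * S12 + 1) * (2 * L + 1) / (2 * l_i + 1) * abs(T_fi)**2

# e.g. singlet (S12 = 0), S-wave (L = 0) scattering from H(1s) (l_i = 0),
# with an invented on-shell T-matrix element
print(sigma(T_fi=0.5 + 0.2j, k_i=0.3, L=0, S12=0, l_i=0))
```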
Results
=======
In the numerical calculations we use atomic units (the mass of the electrons $m_1=m_2=1$ and the mass of the proton $m_3=1836.151527$). In this paper we are concerned with total angular momenta $L=0$ and $L=1$. The formula (\[s3\]) gives some hint for the choice of the parameters in the splitting function $\zeta$. We can expect good convergence if the “size” of $v_1^{(s)}$ corresponds to the “size” of $\Phi _{1 f}^{(l)(-)}$. Therefore we may need to adjust the parameters of the splitting function as more and more channels open. Consequently, we also need to adjust the $b$ parameter of the CS basis. We found that the final results and the rate of convergence do not depend on the choice of $b$ within a rather broad interval around the optimal value.
Having the $T$ matrix we can also calculate the $K$ matrix, whose symmetry, which is equivalent to the unitarity of the $S$ matrix, provides a delicate and independent test of the method. We observed that if either the parameters of the splitting function are too far from the optimum or convergence with the basis is not achieved, the $K$ matrix fails to be symmetric. In the separable expansion we take up to $9$ bipolar angular momentum channels with CS functions up to $N=36$. This requires the solution of complex general matrix equations of maximal size $12321 \times 12321$, a problem which can be handled even on a workstation. We need a relatively small basis because in this approach only short-range-type potentials are approximated; the correct asymptotics are guaranteed by the Green’s operators.
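The symmetry test can be made concrete. Assuming the common Cayley-transform convention $S=(1+\mathrm{i}K)(1-\mathrm{i}K)^{-1}$ (our assumption; the paper does not state its convention), a real symmetric $K$ matrix is equivalent to a unitary, symmetric $S$ matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
K = rng.normal(size=(n, n)); K = (K + K.T) / 2   # real symmetric K matrix
I = np.eye(n)
S = (I + 1j * K) @ np.linalg.inv(I - 1j * K)     # Cayley transform (assumed)

# symmetric real K  <=>  unitary and symmetric S
assert np.allclose(S @ S.conj().T, I)
assert np.allclose(S, S.T)

# recover K from S and confirm the symmetry survives the round trip
K_back = np.real(-1j * np.linalg.inv(S + I) @ (S - I))
assert np.allclose(K_back, K_back.T)
assert np.allclose(K_back, K)
```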
We present first our $S$-wave results for energies below the $H(n=2)$ threshold. In this energy region we use the parameters $\nu=2.1$, $x_0=3$, $y_0=20$ and $b=0.6$. Table \[tab1\] shows elastic phase shifts at several values of the electron momentum $k_1$. Our results, which were obtained using a finite proton mass, agree very well with the variational calculations of Ref. [@schwarz], the $R$-matrix calculations of Ref. [@scholz], the finite-element method of Ref. [@botero], as well as with the results of the direct numerical solution of the Schrödinger equation of Ref. [@wang], where an infinite proton mass was adopted. We also compare our calculation with the differential-equation solution of the modified Faddeev equations [@kwh]. We observe perfect agreement with all the previous calculations.
In Table \[tab2\] we present $S$-wave partial cross sections and $K$ matrices between the $H(n=2)-H(n=3)$ thresholds at channel energy $E_1=0.81\text{Ry}$ and for $L=0$, where we have $3$ open channels. We used the parameters $\nu=2.1$, $x_0=3.5$, $y_0=20$ and $b=0.3$. For comparison we also show the results of a configuration-space Faddeev calculation [@hu]. We can report perfect agreement not only for the cross sections but also for the $K$ matrix (except for an unphysical phase factor). Our cross sections are also in good agreement with the results of Ref. [@wang].
In Table \[tab3\] we show the $S$-wave $K$ matrix between the $H(n=3)-H(n=4)$ thresholds at channel energy $E_1=0.93\text{Ry}$, where we have $6$ open channels. We used the parameters $\nu=2.1$, $x_0=4$, $y_0=20$ and $b=0.2$. We can see that the $K$ matrix is nearly perfectly symmetric. In Table \[tab4\] we present $S$-wave partial cross sections between the $H(n=3)-H(n=4)$ thresholds at channel energies $E_1=0.93\text{Ry}$, $E_1=0.91\text{Ry}$ and $E_1=0.89\text{Ry}$, respectively. In Tables \[tab5\]-\[tab8\] we present the corresponding $P$-wave $K$ matrices and cross sections.
Summary
=======
In this work we have studied the electron-hydrogen scattering problem by solving the Faddeev-Merkuriev integral equations. In this particular case, where two particles are identical, the Faddeev scheme results in a one-component equation, which nevertheless gives a full account of the asymptotic and symmetry properties of the system. We solved the integral equations by applying the Coulomb-Sturmian separable expansion method.
We calculated $S$- and $P$-wave scattering and reaction cross sections for energies up to the $H(n=4)$ threshold. Our nearly perfectly symmetric $K$ matrices indicate that in our approach all the fine details of the scattering processes are properly taken into account.
This work has been supported by NSF Grant No. Phy-0088936 and by OTKA Grants No. T026233 and No. T029003. We are thankful to the Aerospace Engineering Department of CSULB for the generous allocation of computer resources.
[99]{}
S. P. Merkuriev, Ann. Phys. NY, [**130**]{}, 395, (1980); L. D. Faddeev and S. P. Merkuriev, *Quantum Scattering Theory for Several Particle Systems*, (Kluwer, Dordrecht,1993).
A. A. Kvitsinsky, A. Wu, and C.-Y. Hu, J. Phys. B: At. Mol. Opt. Phys. [**28**]{} 275 (1995).
Z. Papp, Phys. Rev. C [**55**]{}, 1080 (1997).
Z. Papp, C.-Y. Hu, Z. T. Hlousek, B. Kónya and S. L. Yakovlev, Phys. Rev. A, (2001).
Z. Papp, J. Phys. A [**20**]{}, 153 (1987); Z. Papp, Phys. Rev. C [**38**]{}, 2457 (1988); Z. Papp, Phys. Rev. A [**46**]{}, 4437 (1992); Z. Papp, Comput. Phys. Commun. [**70**]{}, 426 (1992); Z. Papp, Comput. Phys. Commun. [**70**]{}, 435 (1992); B. Kónya, G. Lévai, and Z. Papp, Phys. Rev. C 61, 034302 (2000).
Z. Papp and W. Plessas, Phys. Rev. C [**54**]{}, 50 (1996); Z. Papp, Few-Body Systems, [**24**]{} 263 (1998).
R. Balian and E. Brézin, Nuovo Cim. B [**2**]{}, 403 (1969).
C. Schwartz, Phys. Rev. [**124**]{}, 553 (1961).
T. Scholz, P. Scott and P. G. Burke, J. Phys. B: At. Mol. Opt. Phys. [**21**]{}, L139 (1988).
J. Botero and J. Shertzer, Phys. Rev. A [**46**]{}, R1155 (1992).
Y. D. Wang and J. Callaway, Phys. Rev. A [**48**]{}, 2058 (1993); Phys. Rev. A [**50**]{}, 2327 (1994).
C-.Y. Hu, J. Phys. B: At. Mol. Opt. Phys. [**32**]{}, 3077 (1999); Phys. Rev. A [**59**]{}, 4813 (1999).
![Analytic structure of $g_{x_1}(E+{\mathrm{i}}\varepsilon-z')\;
g_{y_1}(z')$ as a function of $z'$, $\varepsilon>0$. The Green’s operator $g_{y_1}(z')$ has a branch-cut on the $[0,\infty)$ interval, while $g_{x_1}(E+{\mathrm{i}}\varepsilon-z')$ has a branch-cut on the $(-\infty,E+{\mathrm{i}}\varepsilon]$ interval and infinitely many poles accumulated at $E+{\mathrm{i}}\varepsilon$ (denoted by dots). The contour $C$ encircles the branch-cut of $g_{y_1}$. In the $\varepsilon \to 0$ limit the singularities of $g_{x_1}(E+{\mathrm{i}}\varepsilon -z')$ would penetrate into the area covered by $C$. []{data-label="fig1"}](shfig1){width="45.00000%"}
![The contour of Fig. \[fig1\] is deformed analytically such that a part of it goes on the unphysical Riemann-sheet of $g_{y_1}$ (drawn by broken line) and the other part detoured away from the cut. Now, the contour avoids the singularities of $g_{x_1}(E+{\mathrm{i}}\varepsilon-z')$ even in the $\varepsilon \to 0$ limit.[]{data-label="fig2"}](showbru1){width="45.00000%"}
[lcccccc]{} $k$& Ref. [@schwarz] & Ref. [@scholz] & Ref. [@botero] & Ref. [@wang] & Ref. [@kwh] & This work\
\
0.1 & 2.553 & 2.550 & 2.553 & 2.555 & 2.553 & 2.552\
0.2 & 2.0673& 2.062 & 2.066 & 2.066 & 2.065 & 2.064\
0.3 & 1.6964& 1.691 & 1.695 & 1.695 & 1.694 & 1.693\
0.4 & 1.4146& 1.410 & 1.414 & 1.415 & 1.415 & 1.412\
0.5 & 1.202 & 1.196 & 1.202 & 1.200 & 1.200 & 1.197\
0.6 & 1.041 & 1.035 & 1.040 & 1.041 & 1.040 & 1.037\
0.7 & 0.930 & 0.925 & 0.930 & 0.930 & 0.930 & 0.927\
0.8 & 0.886 & & 0.887 & 0.887 & 0.885 & 0.884\
\
0.1 & 2.9388& 2.939 & 2.938 & 2.939 & 2.939 & 2.938\
0.2 & 2.7171& 2.717 & 2.717 & 2.717 & 2.717 & 2.717\
0.3 & 2.4996& 2.500 & 2.500 & 2.500 & 2.499 & 2.499\
0.4 & 2.2938& 2.294 & 2.294 & 2.294 & 2.294 & 2.294\
0.5 & 2.1046& 2.105 & 2.104 & 2.104 & 2.105 & 2.104\
0.6 & 1.9329& 1.933 & 1.933 & 1.933 & 1.933 & 1.932\
0.7 & 1.7797& 1.780 & 1.780 & 1.780 & 1.779 & 1.779\
0.8 & 1.643 & & 1.645 & 1.644 & 1.641 & 1.643\
[lcccc]{} & Ch.\# & 1 & 2 & 3\
\
\
& 1 & 0.564 & 0.061 & 0.024\
$\sigma_{ij}$ & 2 & 0.817 & 8.373 & 2.588\
& 3 & 0.107 & 0.863 & 1.722\
& 1 & 1.895 & -2.036 & 1.792\
$K_{ij}$ & 2 &-2.043 & 5.230 & -4.114\
& 3 & 1.798 & -4.114 & 2.366\
\
& 1 & 0.568 & 0.061 & 0.024\
$\sigma_{ij}$ & 2 & 0.814 & 8.720 & 2.471\
& 3 & 0.105 & 0.824 & 1.697\
& 1 & 1.864 & 1.971 & -1.671\
$K_{ij}$ & 2 & 1.980 & 5.131 & -3.843\
& 3 & -1.679 & -3.843 & 2.028\
\
\
& 1 & 3.694 & 0.001 & 0.0006\
$\sigma_{ij}$ & 2 & 0.016 & 10.04 & 1.641\
& 3 & 0.003 & 0.547 & 11.85\
& 1 & 21.34 & 0.3255 & 0.6386\
$ K_{ij}$ & 2 & 0.3268 & -0.4404 & -0.4161\
& 3 & 0.6409 & -0.4161 & 1.755\
\
& 1 & 3.696 & 0.001 & 0.0006\
$\sigma_{ij}$ & 2 & 0.016 & 10.20 & 1.678\
& 3 & 0.003 & 0.560 & 11.77\
& 1 & 24.76 & -0.3823 & -0.7510\
$ K_{ij}$ & 2 & -0.3803 & -0.4441 & -0.4167\
& 3 & -0.7453 & -0.4165 & 1.737\
[cllllll]{} Ch.\# & 1 & 2 & 3 & 4 & 5 & 6\
\
1 & 1.076 & -0.647 & -0.160 & 0.229 & 0.180 & 0.074\
2 & -0.652 & 1.541 & -0.028 & 0.129 & 0.531 & 0.265\
3 & -0.160 & -0.029 & 0.766 & 0.314 & -0.757 & -0.385\
4 & 0.230 & 0.130 & 0.314 & -0.566 & -0.525 & -0.284\
5 & 0.180 & 0.534 & -0.757 & -0.526 & 0.237 & 0.760\
6 & 0.074 & 0.266 & -0.385 & -0.285 & 0.760 & 1.342\
\
1 & 9.054 & 0.507 & 0.019 & 0.666 & 0.099 & 0.028\
2 & 0.543 & -1.700 & -0.111 & -1.530 & -0.113 & -0.120\
3 & 0.025 & -0.112 & 0.155 & -0.050 & -0.926 & -0.070\
4 & 0.702 & -1.532 & -0.050 & -0.851 & -0.253 & -0.048\
5 & 0.104 & -0.114 & -0.926 & -0.253 & 0.927 & 0.449\
6 & 0.030 & -0.120 & -0.070 & -0.049 & 0.449 & -0.111\
[cllllll]{} Ch.\# & 1 & 2 & 3 & 4 & 5 & 6\
\
1 &0.44 & 0.48(-1) & 0.67(-2) & 0.28(-1) & 0.86(-2) & 0.20(-2)\
2 &0.25 & 3.02 & 0.19(-1) & 0.10 & 0.12 & 0.40(-1)\
3 &0.15 & 0.83(-1) & 4.68 & 0.71 & 2.41 & 0.86\
4 &0.49(-1) & 0.34(-1) & 0.55(-1) & 0.49 & 0.59(-1) & 0.24(-1)\
5 &0.65(-1) & 0.18 & 0.80 & 0.26 & 1.48 & 0.44\
6 &0.89(-2) & 0.35(-1) & 0.17 & 0.61(-1) & 0.27 & 2.0\
\
1 & 3.18 & 0.22(-2) & 0.43(-4) & 0.21(-2) & 0.26(-4) & 0.14(-5)\
2 & 0.12(-1) & 5.92 & 0.93(-2) & 3.77 & 0.44(-1) & 0.61(-1)\
3 & 0.97(-3) & 0.40(-1) & 7.56 & 0.35 & 11.6 & 3.34\
4 & 0.39(-2) & 1.26 & 0.26(-1) & 0.87 & 0.11(-1) & 0.19(-2)\
5 & 0.23(-3) & 0.63(-1) & 3.87 & 0.48(-1) & 9.14 & 1.07\
6 & 0.79(-5) & 0.53(-1) & 0.67 & 0.49(-2) & 0.64 & 0.34\
\
1 &0.46 & 0.45(-1) & 0.90(-2) & 0.24(-1) & 0.89(-2) & 0.18(-2)\
2 &0.26 & 3.74 & 0.24 & 0.77(-1) & 0.74(-1) & 0.15(-1)\
3 &0.38 & 1.77 & 5.46 & 1.11 & 0.90 & 1.14\
4 &0.46(-1) & 0.26(-1) & 0.50(-1) & 0.49 & 0.86(-1) & 0.30(-1)\
5 &0.13 & 0.18 & 0.30 & 0.64 & 11.5 & 0.37\
6 &0.16(-1) & 0.23(-1) & 0.23 & 0.13 & 0.22 & 1.15\
\
1 & 3.26 & 0.22(-2) & 0.23(-4) & 0.20(-2) & 0.20(-4) & 0.17(-5)\
2 & 0.13(-1) & 7.22 & 0.24(-1) & 4.07 & 0.28(-1) & 0.25(-1)\
3 & 0.96(-3) & 0.18 & 9.11 & 0.33 & 0.27 & 12.25\
4 & 0.37(-2) & 1.36 & 0.15(-1) & 0.98 & 0.10(-1) & 0.31(-2)\
5 & 0.26(-3) & 0.69(-1) & 0.89(-1) & 0.74(-1) & 44.97 & 0.44\
6 & 0.14(-4) & 0.38(-1) & 2.45 & 0.14(-1) & 0.27 & 2.81\
\
1 &0.48 & 0.47(-1) & 0.53(-2) & 0.21(-1) & 0.79(-2) & 0.26(-2)\
2 &0.30 & 4.67 & 0.32(-1) & 0.61(-1) & 0.14 & 0.12\
3 &2.98 & 2.80 & 259.8 & 19.02 & 1.58 & 7.12\
4 &0.45(-1) & 0.20(-1) & 0.72(-1) & 0.51 & 0.77(-1) & 0.13(-1)\
5 &1.48 & 3.40 & 0.53 & 6.78 & 119.8 & 2.67\
6 &0.29 & 2.04 & 1.42 & 0.70 & 1.60 & 38.90\
\
1 & 3.34 & 0.22(-2) & 0.67(-5) & 0.17(-2) & 0.90(-5) & 0.25(-5)\
2 & 0.13(-1) & 8.68 & 0.94(-2) & 4.33 & 0.17(-1) & 0.87(-1)\
3 & 0.37(-3) & 0.83 & 1321.0 & 1.75 & 152.8 & 58.26\
4 & 0.33(-2) & 1.44 & 0.66(-2) & 1.25 & 0.67(-2) & 0.12(-2)\
5 & 0.16(-2) & 0.49 & 50.93 & 0.59 & 124.6 & 56.47\
6 & 0.25(-3) & 0.15 & 11.65 & 0.63(-1) & 33.88 & 218.4\
[clllllllll]{} Ch.\# & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\
\
1 & -1.888 & -7.518 & 13.24 & 9.699 & 8.320 & 7.148 & 1.992 & 4.684 & 30.96\
2 & -7.525 & -29.70 & 51.80 & 38.14 & 32.73 & 28.88 & 7.839 & 18.28 & 121.5\
3 & 13.30 & 51.99 &-89.98 & -67.93 & -56.77 & -50.01 & -13.90 & -29.57 & -216.6\
4 & 9.665 & 37.98 & -67.40 & -48.21 & -42.35 & -36.39 & -9.947 & -24.07 & -156.7\
5 & 8.346 & 32.81 & -56.70 & -42.64 & -36.30 & -31.16 & -9.349 & -19.40 & -136.0\
6 & 7.151 & 28.87 & -49.82 & -36.54 & -31.08 & -28.11 & -7.718 & -17.41 & -117.3\
7 & 2.006 & 7.885 & -13.94 & -10.05 & -9.381 & -7.765 & -2.651 & -4.874 & -34.08\
8 & 4.755 & 18.64 & -29.92 & -24.51 & -19.64 & -17.67 & -4.915 & -8.953 & -73.21\
9 & 31.01 & 121.6 & -215.9 & -157.5 & -135.7 & -117.3 & -33.89 & -72.15 & -510.6\
\
1 & 0.454 & -0.303 & -0.051 & -0.020 & 0.080 & 0.043 & -0.017 & 0.149 & 0.128\
2 & -0.301 & -2.453 & -0.669 & 0.383 & 0.552 & 1.112 & 0.017 & 1.145 & 1.060\
3 & -0.051 & -0.672 & 0.398 & -0.465 & 1.140 & -0.371 & 0.0001 & 0.578 & 0.486\
4 & -0.020 & 0.382 & -0.464 & 0.354 & -1.133 & -0.236 & 0.883 & -0.528 & -0.110\
5 & 0.079 & 0.553 & 1.137 & -1.136 & 3.936 & -0.699 & -3.202 & 1.075 & -0.989\
6 & 0.041 & 1.113 & -0.372 & -0.236 & -0.701 & 0.289 & 0.520 & -0.769 & -0.456\
7 & -0.016 & 0.018 & 0.002 & 0.884 & -3.203 & 0.518 & 1.673 & -1.484 & -0.226\
8 & 0.148 & 1.147 & 0.576 & -0.530 & 1.075 & -0.769 & -1.483 & -0.055 & -0.278\
9 & 0.127 & 1.062 & 0.486 & -0.111 & -0.988 & -0.457 & -0.226 & -0.277 & 0.090\
[clllllllll]{} Ch.\# & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\
\
1 & 0.380(-2) & 0.104(-1) & 0.138(-2) & 0.394(-1) & 0.677(-2) & 0.125(-1) & 0.543(-2) & 0.664(-2) & 0.180(-2)\
2 & 0.530(-1) & 0.208(1) & 0.760(-2) & 0.152(1) & 0.139 & 0.135(1) & 0.284(-1) & 0.450 & 0.103\
3 & 0.319(-1) & 0.321(-1) & 0.311(2) & 0.117(1) & 0.340(1) & 0.170 & 0.459(1) & 0.191(1) & 0.282(1)\
4 & 0.679(-1) & 0.506 & 0.903(-1) & 0.157(1) & 0.104 & 0.151 & 0.213 & 0.169 & 0.796(-1)\
5 & 0.508(-1) & 0.201 & 0.113(1) & 0.450 & 0.415(1) & 0.103(1) & 0.169(1) & 0.113(1) & 0.871(-1)\
6 & 0.219(-1) & 0.448 & 0.131(-1) & 0.150 & 0.235 & 0.164(1) & 0.647(-1) & 0.399(-1) & 0.183(-2)\
7 & 0.412(-1) & 0.415(-1) & 0.153(1) & 0.928 & 0.169(1) & 0.282 & 0.335(1) & 0.105 & 0.233\
8 & 0.296(-1) & 0.391 & 0.383 & 0.440 & 0.679 & 0.105 & 0.620(-1) & 0.800(1) & 0.283\
9 & 0.807(-2) & 0.890(-1) & 0.562 & 0.208 & 0.523(-1) & 0.468(-2) & 0.139 & 0.283 & 0.116(2)\
\
1 & 0.178(1) & 0.484(-1) & 0.853(-2) & 0.158(-1) & 0.603(-2) & 0.167(-1) & 0.457(-2) & 0.191(-2) & 0.625(-3)\
2 & 0.247 & 0.235(2) & 0.390 & 0.109(1) & 0.435 & 0.294(1) & 0.163(1) & 0.171(1) & 0.182(1)\
3 & 0.193 & 0.170(1) & 0.514(2) & 0.892(1) & 0.208(1) & 0.105(2) & 0.167(2) & 0.596 & 0.381(1)\
4 & 0.277(-1) & 0.362 & 0.683 & 0.846 & 0.576 & 0.801 & 0.344(-1) & 0.924(-2) & 0.920(-1)\
5 & 0.453(-1) & 0.633 & 0.695 & 0.251(1) & 0.373(2) & 0.525 & 0.348(1) & 0.295(1) & 0.367(1)\
6 & 0.291(-1) & 0.981 & 0.810 & 0.804 & 0.121 & 0.416(1) & 0.887(-1) & 0.121 & 0.803(-1)\
7 & 0.333(-1) & 0.236(1) & 0.556(1) & 0.151 & 0.348(1) & 0.388 & 0.276(2) & 0.290(1) & 0.186(1)\
8 & 0.831(-2) & 0.149(1) & 0.119 & 0.240(-1) & 0.177(1) & 0.315 & 0.174(1) & 0.448(1) & 0.280(1)\
9 & 0.259(-2) & 0.158(1) & 0.760 & 0.241 & 0.220(1) & 0.208 & 0.111(1) & 0.280(1) & 0.935(1)\
[clllllllll]{} Ch.\# & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\
\
1 & 0.365(-2) & 0.102(-1) & 0.102(-2) & 0.437(-1) & 0.474(-2) & 0.137(-1) & 0.397(-2) & 0.521(-2) & 0.154(-2)\
2 & 0.576(-1) & 0.218(1) & 0.648(-2) & 0.185(1) & 0.395 & 0.122(1) & 0.694(-1) & 0.109 & 0.276(-1)\
3 & 0.428(-1) & 0.485(-1) & 0.963(2) & 0.213(1) & 0.380(1) & 0.262 & 0.107(1) & 0.683(1) & 0.517(1)\
4 & 0.829(-1) & 0.617 & 0.954(-1) & 0.193(1) & 0.155 & 0.245 & 0.206 & 0.989(-1) & 0.279(-1)\
5 & 0.666(-1) & 0.981 & 0.127(1) & 0.115(1) & 0.509(1) & 0.605 & 0.326(1) & 0.168(1) & 0.944\
6 & 0.261(-1) & 0.408 & 0.118(-1) & 0.246 & 0.818(-1) & 0.188(1) & 0.367(-1) & 0.139 & 0.520(-1)\
7 & 0.563(-1) & 0.174 & 0.356 & 0.153(1) & 0.326(1) & 0.271 & 0.570(1) & 0.157(1) & 0.262(1)\
8 & 0.444(-1) & 0.163 & 0.137(1) & 0.443 & 0.101(1) & 0.616 & 0.940 & 0.160(2) & 0.438\
9 & 0.133(-1) & 0.411(-1) & 0.103(1) & 0.125 & 0.567 & 0.232 & 0.157(1) & 0.438 & 0.187(2)\
\
1 & 0.182(1) & 0.438(-1) & 0.788(-2) & 0.1567(-1) & 0.605(-2) & 0.159(-1) & 0.450(-2) & 0.209(-2) & 0.633(-3)\
2 & 0.243 & 0.241(2) & 0.209(1) & 0.170(1) & 0.126(1) & 0.479(1) & 0.106(1) & 0.632 & 0.652\
3 & 0.329 & 0.156(2) & 0.166(3) & 0.888(1) & 0.146(1) & 0.851(1) & 0.334(1) & 0.471(1) & 0.797(1)\
4 & 0.296(-1) & 0.567 & 0.397 & 0.139(1) & 0.464 & 0.102(1) & 0.119 & 0.123 & 0.107\
5 & 0.846(-1) & 0.311(1) & 0.486 & 0.345(1) & 0.814(2) & 0.524 & 0.604(1) & 0.480 & 0.144(1)\
6 & 0.296(-1) & 0.160(1) & 0.382 & 0.102(1) & 0.706(-1) & 0.445(1) & 0.192 & 0.177 & 0.140\
7 & 0.632(-1) & 0.263(1) & 0.111(1) & 0.885 & 0.604(1) & 0.143(1) & 0.540(2) & 0.255 & 0.113(1)\
8 & 0.179(-1) & 0.943 & 0.942 & 0.550 & 0.2895 & 0.793 & 0.153 & 0.633(1) & 0.231(1)\
9 & 0.551(-2) & 0.971 & 0.159(1) & 0.480 & 0.8620 & 0.627 & 0.676 & 0.231(1) & 0.218(2)\
[clllllllll]{} Ch.\# & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\
\
1 & 0.342(-2) & 0.940(-2) & 0.819(-3) & 0.474(-1) & 0.262(-2) & 0.154(-1) & 0.264(-2) & 0.292(-2) & 0.849(-3)\
2 & 0.609(-1) & 0.252(1) & 0.412(-1) & 0.212(1) & 0.693(-1) & 0.108(1) & 0.976(-1) & 0.777(-1) & 0.306(-1)\
3 & 0.466 & 0.361(1) & 0.550(3) & 0.862(1) & 0.384(2) & 0.567(1) & 0.177(3) & 0.672(2) & 0.232(2)\
4 & 0.102 & 0.708 & 0.326(-1) & 0.233(1) & 0.126 & 0.422 & 0.109 & 0.136 & 0.363(-1)\
5 & 0.495 & 0.205(1) & 0.128(2) & 0.111(2) & 0.400(3) & 0.264(1) & 0.166(1) & 0.123(2) & 0.510(2)\
6 & 0.327(-1) & 0.361 & 0.213(-1) & 0.422 & 0.302(-1) & 0.223(1) & 0.468(-1) & 0.366(-1) & 0.133(-1)\
7 & 0.500 & 0.286(1) & 0.591(2) & 0.965(1) & 0.166(1) & 0.415(1) & 0.106(3) & 0.146(1) & 0.291(2)\
8 & 0.331 & 0.138(1) & 0.135(2) & 0.719(1) & 0.735(1) & 0.193(1) & 0.879 & 0.695(2) & 0.302(2)\
9 & 0.965(-1) & 0.538 & 0.464(1) & 0.192(1) & 0.306(2) & 0.706 & 0.175(2) & 0.302(2) & 0.140(3)\
\
1 & 0.186(1) & 0.408(-1) & 0.669(-2) & 0.170(-1) & 0.571(-2) & 0.158(-1) & 0.287(-2) & 0.160(-2) & 0.258(-3)\
2 & 0.267 & 0.277(2) & 0.983(-1) & 0.972 & 0.415 & 0.398(1) & 0.188(1) & 0.309(1) & 0.174(1)\
3 & 0.376(1) & 0.862(1) & 0.175(4) & 0.264(3) & 0.235(3) & 0.266(3) & 0.115(3) & 0.886(2) & 0.392(3)\
4 & 0.360(-1) & 0.324 & 0.996 & 0.345(1) & 0.563 & 0.698 & 0.542(-1) & 0.574(-1) & 0.483(-1)\
5 & 0.108(1) & 0.122(2) & 0.785(2) & 0.498(2) & 0.796(3) & 0.440(2) & 0.186(3) & 0.273(2) & 0.202(2)\
6 & 0.337(-1) & 0.133(1) & 0.101(1) & 0.698 & 0.500 & 0.707(1) & 0.227(-2) & 0.105 & 0.151\
7 & 0.554 & 0.553(2) & 0.382(2) & 0.482(1) & 0.185(3) & 0.193 & 0.233(3) & 0.694(2) & 0.754(2)\
8 & 0.189 & 0.544(2) & 0.177(2) & 0.303(1) & 0.164(2) & 0.557(1) & 0.416(2) & 0.106(3) & 0.980(2)\
9 & 0.314(-1) & 0.307(2) & 0.784(2) & 0.253(1) & 0.121(2) & 0.799(1) & 0.453(2) & 0.981(2)& 0.26(3)\
---
abstract: 'We show that quantum dots in photonic nanostructures provide a highly promising platform for the deterministic generation of entangled multiphoton states. Our approach utilizes periodic driving of a quantum-dot based emitter and an efficient light-matter interface enabled by a photonic crystal waveguide. We assess the quality of the photonic states produced from a real system by including all experimentally relevant imperfections. Importantly, the protocol is robust against the nuclear spin bath dynamics due to a naturally built-in refocusing method reminiscent of spin echo. We demonstrate the feasibility of producing Greenberger–Horne–Zeilinger and one-dimensional cluster states with fidelities and generation rates exceeding those achieved with conventional ‘fusion’ methods in current state-of-the-art experiments. The proposed hardware constitutes a scalable and resource-efficient approach towards the implementation of measurement-based quantum communication and computing.'
author:
- Konstantin Tiurev
- Martin Hayhurst Appel
- Pol Llopart Mirambell
- Mikkel Bloch Lauritzen
- Alexey Tiranov
- Peter Lodahl
- 'Anders S[ø]{}ndberg S[ø]{}rensen'
bibliography:
- 'reflist.bib'
title: 'High-fidelity multi-photon-entangled cluster state with solid-state quantum emitters in photonic nanostructures'
---
The development of efficient sources of on-demand entangled photons is an ongoing experimental endeavour. Quantum states containing large numbers of entangled photons are a desirable resource for many quantum-information processing applications, including photonic quantum computing [@RevModPhys.79.135; @PhysRevLett.93.040503; @PhysRevLett.95.010501; @Knill:2001aa; @doi:10.1063/1.5115814], quantum simulations [@Lanyon:2010aa; @Ma:2011aa], entanglement-enhanced metrology [@T_th_2014; @shettell2019graph], and long-distance quantum communication [@Azuma:2015aa; @Li:2019aa; @PhysRevX.10.021071]. Furthermore, access to high-fidelity multiphoton entanglement would have applications in fundamental tests of quantum mechanics [@Pan:2000aa; @Lu.2014; @PhysRevA.61.022109].
The creation of entangled states containing large numbers of photons is, however, a formidable challenge due to the lack of deterministic and scalable methods for the production of such states. Variations of spontaneous parametric downconversion (SPDC) [@PhysRevLett.25.84; @PhysRevLett.75.4337; @PhysRevLett.83.3103] combined with interference between generated pairs and single-photon detection [@PhysRevLett.82.1345; @PhysRevA.73.022330; @PhysRevLett.78.3031] have been implemented to scale up the number of entangled photons [@PhysRevLett.95.210502; @Zhang:2019aa; @Lu:2007aa; @Yao:2012aa; @PhysRevLett.117.210502], with a recent state-of-the-art experiment demonstrating genuine 12-photon entanglement [@PhysRevLett.121.250505]. Today, scaling up is commercially pursued by multiplexing many probabilistic SPDC sources towards photonic quantum computing [@doi:10.1063/1.4976737; @PhysRevA.95.012304; @doi:10.1063/1.5115814]. An alternative and much less investigated strategy is to apply on-demand photon emission from a single quantum emitter. In this case, a single spin in the emitter serves as the entangler of consecutively emitted photons [@PhysRevA.58.R2627; @Lindner2009; @Lee2019], and, combined with photonic nanostructures that enhance the photon-emitter coupling [@Lodahl2015], long strings of highly entangled photons could potentially be generated. A proof-of-concept experiment with quantum dots (QDs) in bulk samples recently demonstrated three-qubit linear cluster states [@Schwartz434]. However, it is an open question how these deterministic sources can be scaled up in a real experimental setting. A detailed assessment of the effect of imperfections is thus essential for developing new resource-efficient architectures for photonic quantum computation or photonic quantum networks [@PhysRevX.10.021071; @Azuma:2015aa; @Borregaard2019a].
{width="90.00000%"}
In the present Letter we develop and analyze a protocol for generating multi-photon entangled states with a QD emitter embedded in a photonic nanostructure, taking into account all relevant imperfections. We present a complete analysis of how to scale up the protocol and identify the governing physical processes and figures of merit. Our results demonstrate that recent experimental advances make QDs in photonic nanostructures highly promising sources of multiphoton entangled states, enabling deterministic generation of entangled states of a large number of photons.
Self-assembled semiconductor QDs have lately seen remarkable experimental progress, opening new possibilities for photonic quantum technologies. In particular, spin qubits realized with a single charge injected into the QD enable efficient coherent light-matter interfaces and control over the emitted photons, owing to the simultaneously achievable high photon generation rates, good optical and spin coherence properties [@Aharonovich2016; @Atature2018; @Awschalom2018; @Lodahl2015], and near-perfect spin rotations [@Bodey:2019aa]. Integration of QDs into photonic nanostructures, such as photonic crystal waveguides (PCWs), significantly improves the quality of the quantum interface by combining strong light-matter interaction [@Lodahl2015] with high photon collection efficiency [@Arcari2014; @Somaschi2016]. Experimental advances in the fabrication of light-matter interfaces have enabled demonstrations of near-perfect single-photon indistinguishability ($I$) of two consecutively emitted photons exceeding 96% [@Somaschi2016; @Kirsanske2017], an internal efficiency $\beta$ exceeding 98% [@Arcari2014], and on-demand entangled-photon sources with higher than 90% state fidelity [@Wang2019]. Recently it was demonstrated that these sources can be scaled up to reach the threshold for quantum advantage [@uppu2020scalable].
The proposed entanglement protocol, based on a QD containing a hole spin in a PCW, is illustrated in Fig. \[fig:1\]. It relies on encoding photonic qubits in separate time bins corresponding to early $(\left| e \right>)$ or late $(\left| l \right>)$ arrival times. The general idea is to repeatedly apply the pulse sequence of Fig. \[fig:1\](b) to coherently control a ground-state spin in the QD and selectively emit single photons on the targeted optical transition in the designated time bin. Initially the hole spin is placed in a superposition of the two spin states $\ket{\Downarrow}$ and $\ket{\Uparrow}$ using a $\pi/2$ pulse from the Raman field $\Omega_{\mathrm{R}}$. Within each round of the protocol, the QD is first excited to the trion state $\ket{\downarrow}$ using the optical field $\Omega_{\mathrm{O}}$ if the QD is in $\ket{\Downarrow}$. From the trion state the QD decays, emitting an early photon $\ket{e}$. Subsequently the hole spin states are flipped using a Raman $\pi$-pulse, followed by excitation with $\Omega_{\mathrm{O}}$ and emission of a late photon $\ket{l}$. This procedure creates an entangled state between the spin and the time bin of the outgoing photon, which can be extended to multiple photons by repeating the protocol with a spin rotation $R$ between each round. The nature of the entangled state is defined by the choice of $R$: $R = \pi$ creates the Greenberger–Horne–Zeilinger (GHZ) state [@greenberger2007going], while $R = \pi/2$ creates the one-dimensional cluster state [@Tiurev2019a]. A similar scheme was partially realized in Ref. [@Lee2019] using a micropillar cavity system, however, without the interferometric measurements needed to prove entanglement.
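The ideal protocol is simple enough to simulate directly. The sketch below (our own illustration, not code from any experiment; the encoding $\ket{e}\to 0$, $\ket{l}\to 1$ and the rotation convention are our choices) tracks the joint spin-photon amplitudes through the emission rounds:

```python
import numpy as np

def emit_round(state):
    """One ideal protocol round. Keys are (spin, photon_string) -> amplitude.
    |Down> (spin 0) emits an early photon (0) and the pi-pulse leaves it in |Up>;
    |Up> (spin 1) is flipped to |Down> and emits a late photon (1)."""
    new = {}
    for (spin, photons), amp in state.items():
        key = (1, photons + (0,)) if spin == 0 else (0, photons + (1,))
        new[key] = new.get(key, 0.0) + amp
    return new

def rotate_spin(state, theta):
    """Raman rotation R(theta) about y on the spin qubit."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    new = {}
    for (spin, photons), amp in state.items():
        for spin2, factor in ((0, c if spin == 0 else -s),
                              (1, s if spin == 0 else c)):
            key = (spin2, photons)
            new[key] = new.get(key, 0.0) + factor * amp
    return {k: v for k, v in new.items() if abs(v) > 1e-12}

def generate(n_photons, theta):
    # after the initial pi/2 pulse: (|Down> + |Up>)/sqrt(2)
    state = {(0, ()): 1 / np.sqrt(2), (1, ()): 1 / np.sqrt(2)}
    for k in range(n_photons):
        state = emit_round(state)
        if k < n_photons - 1:
            state = rotate_spin(state, theta)
    return state

ghz = generate(4, np.pi)                      # R = pi between rounds
strings = sorted(ph for (_, ph) in ghz)       # surviving photon strings
```

With $R=\pi$ only the all-early and all-late branches survive, i.e., a GHZ-type state up to local phases, while $R=\pi/2$ yields the one-dimensional cluster state up to local unitaries.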
The use of a PCW in our scheme offers several important advantages needed for efficient scaling [@Lodahl2015]: (i) the single-photon coupling efficiency to the waveguide can be near-unity, (ii) the photon indistinguishability can be enhanced by the Purcell effect, and (iii) the generally high coupling asymmetry of the two in-plane linear dipole transitions implies that high-quality optical cycling can be induced on the designated transition while still allowing optical spin rotations [@appel2020coherent]. In the following, we account for all experimental imperfections and evaluate the fidelity of multi-photon GHZ and cluster states. Our results demonstrate that the use of the PCW makes this approach highly promising and identify the governing parameters for further improvements.
{width="90.00000%"}
We assess the quality of the produced spin-multiphoton state by calculating its infidelity [@Tiurev2019a] $\mathcal{E}^{(N)} = 1 - \mathrm{Tr}_{\mathrm{env}} \{ \bra{\Psi} \hat{\rho}^{(N)} \ket{\Psi} \}$, where $\hat{\rho}^{(N)}$ is the density operator of the $N$-photon state affected by imperfections, $\ket{\Psi}$ is the ideal GHZ or cluster state, and $\mathrm{Tr}_{\mathrm{env}}$ denotes a trace over the emission time and unobserved degrees of freedom, such as phonons or photons lost during the operation. Conditioned on the detection of at least one photon in either the early or late time bin, the total infidelity for the generation of an entangled GHZ or cluster state containing $N$ photons and the spin is, in first-order perturbation theory, given by [@Tiurev2019a] $$\begin{aligned}
\label{eq:infidelity_total}
\mathcal{E}^{(N)}
&=
N\Big{(}
\frac{1 - I}{2}
+
\frac{\sqrt{3}\pi}{8}\frac{\gamma}{\Delta}
+
\frac{1}{2(B+1)}
\Big{)}
-
\frac{1}{4(B+1)}.
\end{aligned}$$ Here the spontaneous emission rate $\gamma$, the branching ratio $B$, the degree of indistinguishability $I$, and the detuning $\Delta$ of the off-resonant transition are parameters that will be explained below.
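The total infidelity above is straightforward to evaluate numerically. The following sketch is illustrative only: the parameter values are the estimates quoted later in the text, and the indistinguishability uses the pure-dephasing expression $I = \gamma/(\gamma+2\gamma_{\mathrm{d}})$ discussed below.

```python
import numpy as np

def infidelity(N, I, gamma, Delta, B):
    """First-order infidelity of the N-photon GHZ/cluster state:
    phonon-limited indistinguishability + off-resonant excitation
    + imperfect branching (the last two terms)."""
    per_photon = (1 - I) / 2 \
                 + np.sqrt(3) * np.pi / 8 * gamma / Delta \
                 + 1 / (2 * (B + 1))
    return N * per_photon - 1 / (4 * (B + 1))

# Estimates quoted later in the text (rates in ns^-1, so gamma/Delta is
# dimensionless): gamma = 2.2, gamma_d = 0.03, Delta = 2*pi*16, B = 50.
gamma, gamma_d, B = 2.2, 0.03, 50
I = gamma / (gamma + 2 * gamma_d)             # indistinguishability, ~0.97
Delta = 2 * np.pi * 16.0                      # ns^-1
F3 = 1 - infidelity(3, I, gamma, Delta, B)    # three-photon fidelity
```

For these numbers $F_3 \approx 0.89$, consistent with the $\approx 90\%$ three-photon fidelity quoted below.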
The ideal protocol assumes that only the vertical decay path $\ket{\downarrow} \rightarrow \ket{\Downarrow}$ in Fig. \[fig:1\](a) is allowed, such that the excitation and decay form a closed cycle. A finite probability of the diagonal transition $\ket{\downarrow} \rightarrow \ket{\Uparrow}$ will lead to an incorrect spin configuration and a reduction of the fidelity. We characterize the cyclicity with a branching parameter $B = (\beta_{\parallel} + \beta_{\parallel}^{\prime})/(\beta_{\perp} + \beta_{\perp}^{\prime})$, where $\beta_{\parallel}$ ($\beta_{\perp}$) and $\beta_{\parallel}^{\prime}$ ($\beta_{\perp}^{\prime}$) are the probabilities of the vertical (diagonal) transitions into and out of the waveguide mode, respectively. The performance of an experiment therefore relies on a high selectivity of the vertical transition, i.e., $B\gg 1$.
We propose applying an in-plane magnetic field (Voigt geometry), which intrinsically provides $B=1$ but, crucially, allows all-optical spin control. $B$ may then be increased by selectively enhancing the desired optical transition with a photonic nanostructure. In recent years, the capability of nanostructures to provide such enhancement has been demonstrated with a variety of systems, including rare-earth ions [@Raha:2020aa] and QDs coupled to photonic crystal cavities [@Carter2013; @Sun2016] or micropillar cavities [@Lee2019; @Wang2019a]. A similar effect can be reached in PCWs with a strong polarization dependence of the projected local density of states. This, in combination with the orthogonally polarised linear dipoles of a QD in the Voigt geometry, allows a greatly enhanced $B$. In Fig. \[fig:2\](a) we show calculated $\beta$-factors and branching ratios $B$ based on the simulations published in Ref. [@Javadi2018]. For a realistic group index $n_g=20$, a branching ratio of $B>50$ and an internal efficiency $\beta > 96\%$ are simultaneously achievable by placing the QD in the center of a PCW cell. To further suppress the residual contribution of the off-diagonal transitions, we consider frequency filters, which can be implemented using, e.g., one or two narrow bandpass cavities, with the latter typically filtering up to 99% of the off-resonant photons. Assuming such high-efficiency filtering, we derive [@Tiurev2019a] the first-order infidelity due to imperfect branching to be ${\mathcal{E}}^{(N)}_{\mathrm{br}} = (N-1/2)/(2(B+1))$, which corresponds to the last two terms in Eq. \[eq:infidelity\_total\] and is shown in Fig. \[fig:2\](a) for a single emitted photon. For the optimal QD position the single-photon branching infidelity can be as low as $1\%$.
Next, we consider the effect of dephasing, which is omnipresent in solid-state systems. Decoherence appears through a variety of different mechanisms characterized by widely different time scales. As discussed below, the protocol is remarkably insensitive to slowly varying processes. On the other hand, phonon scattering appears on time scales ($\sim$ ps) shorter than the lifetime of the excited trion state ($\sim$ ns) and limits the indistinguishability of the emitted photons [@PhysRevLett.120.257401; @Dree_en_2018]. Considering a pure dephasing model (a valid description of the broadening of the QD zero-phonon line) and vanishing multi-photon contributions, the indistinguishability of the emitted photons is $I = \gamma/(\gamma+2\gamma_{\mathrm{d}})$, where $\gamma$ is the photon emission rate and $\gamma_{\mathrm{d}}$ is the phonon-induced dephasing rate. By enhancing the photon emission rate into the waveguide, an indistinguishability of more than $96\%$ has been observed in experiments [@Somaschi2016; @Kirsanske2017; @Lodahl2015]. The same mechanism increases the state infidelity, which can hence be expressed through the experimentally measurable indistinguishability $I$ as $\mathcal{E}_{\mathrm{ph}}^{(N)} = N(1 - I)/2$, corresponding to the first term in Eq. \[eq:infidelity\_total\].
Further, we discuss imperfect operations during the driving pulses. Since the excited trion comprises two Zeeman states \[Fig. \[fig:1\](a)\], excitation of undesired transitions has to be suppressed. This is ensured by a large detuning $\Delta$ of the off-resonant transition $\ket{\uparrow} \leftrightarrow \ket{\Uparrow}$ compared to the emission rate $\gamma$ of the $\ket{\downarrow} \leftrightarrow \ket{\Downarrow}$ transition. The detuning can be controlled by the magnetic field, while the spontaneous emission rate $\gamma$ can be controlled via the Purcell effect of the waveguide. The probability of off-resonant excitation is strongly suppressed when the system is driven with long and low-intensity laser pulses. On the other hand, for long pulses there is a large probability for the desired $\ket{\downarrow} \leftrightarrow \ket{\Downarrow}$ transition to decay and be re-excited during the pulse. The duration of the pulse should thus be optimized to suppress these errors. We have evaluated [@Tiurev2019a] the infidelities corresponding to the optimal driving regimes for both Gaussian and square-shaped pulses. The latter allows for a simple analytical expression, $\mathcal{E}^{(N)}_{\mathrm{exc}}=N\sqrt{3}\pi\gamma/(8\Delta)$, where $\Delta$ is the detuning between the two vertical transitions in Fig. \[fig:1\](a); this also represents a good approximation for Gaussian pulses. Additional errors occur if the excitation laser drives the cross transitions $\ket{\Uparrow} \leftrightarrow \ket{\downarrow}$ and $\ket{\Downarrow} \leftrightarrow \ket{\uparrow}$, which, however, can be completely avoided by a correct laser polarisation in side-channel excitation. This is readily implementable in the waveguide geometry [@uppu2020scalable] but has not yet been implemented in micropillar [@Wang2019a] or planar cavities [@Carter2013; @Sun2016], which rely on cross-excitation schemes.
Finally, as a last source of imperfection we consider dephasing induced by slow drifts of the energy levels. A particular example arises from the hyperfine interaction between the coherent spin and the slowly fluctuating nuclear spin environment, i.e., the Overhauser noise [@PhysRevLett.102.146601; @PhysRevLett.95.076805; @PhysRevLett.88.186802], which manifests itself in relatively short ground-state spin coherence times $T_2^*$ [@HU:2002aa]. Our protocol for time-bin photon generation is highly insensitive to dephasing induced by such a mechanism, because the pulse sequence of Fig. \[fig:1\](b) flips the ground states $\ket{\Uparrow}$ and $\ket{\Downarrow}$ between the early and late time bins, effectively introducing a spin echo sequence [@PhysRevLett.100.236802] in each cycle of the protocol. The success of the spin echo sequence is linked to the proposed measurement setup in Fig. \[fig:1\](a). Time-bin qubits are analyzed by interfering pulses delayed by a time equal to the time difference between the two excitation pulses. If the central frequency of the transition drifts slowly, this will not have any influence on the interference. Furthermore, the system spends exactly the same amount of time in the excited states for the early and late parts of the protocol, which corresponds to perfect spin echo conditions. Consequently, either hole or electron spins can be used on an equal basis, even though the latter have a much shorter coherence time $T_2^*$. On longer time scales, slow fluctuations of the environment build up to so-called $T_2$ noise. This, however, typically happens on time scales [@PhysRevB.97.241413; @Press2010] two orders of magnitude longer than the length of a time bin [@Jayakumar:2014aa] and thus has a negligible effect on our generation protocol for a modest number of photons.
The insensitivity to slow fluctuations for the measurement setup in Fig. \[fig:1\](a) captures several interesting situations. For instance, the protocol of Pichler *et al.* [@Pichler11362] for universal quantum computation using cluster states relies on the emission from a single emitter, and we thus expect a similar insensitivity. Furthermore, the quantum repeater protocol of Borregaard *et al.* [@PhysRevX.10.021071] exploits a single emitter to produce entangled states containing hundreds of photons. Of these, only one photon is interfered with a different emitter, while the remaining $N-1$ photons are measured using the setup in Fig. \[fig:1\](a) and hence fulfill the effective spin echo conditions. For different scenarios, e.g., if attempting to fuse cluster states emitted by different QDs [@Segovia2019; @Economou2010], the insensitivity to slow drifts no longer applies.
All error terms in Eq. \[eq:infidelity\_total\] depend on the group index of the waveguide: a high $n_g$ increases the decay rate $\gamma$ and hence the indistinguishability, but at the same time results in stronger driving of the off-resonant transition. Furthermore, the branching ratio can also be improved by enhancing $n_g$. The waveguide can therefore be used to control the trade-off between errors and optimize the output state. As shown in Fig. \[fig:2\](b), a high $n_g$ becomes beneficial given a sufficient Zeeman splitting, i.e., for a strong magnetic field or a large $g$-factor [@doi:10.1063/1.3367707; @PhysRevLett.112.107401; @PhysRevB.91.165304]. By engineering the photonic crystal band gap and increasing the group index to higher values, the single spin-photon infidelity can be reduced to the level of $\approx 0.1\%$ for sufficiently strong magnetic fields, as shown with dashed lines in Fig. \[fig:2\](b). For more modest magnetic fields, a spin-photon entangled-state fidelity above $95\%$ can be reached.
The case of $N=3$ is of special importance since it can potentially serve as a building block for photonic quantum computation [@PhysRevA.95.012304; @PhysRevLett.115.020502; @Rudolph2017]. Such three-photon states can also be realized by fusing six single photons with a total probability of $1/32$ [@PhysRevLett.100.060502]. With state-of-the-art SPDC single-photon sources operating at MHz frequencies and an extraction efficiency of $\approx 70$% [@Kanedaeaaw8586], the theoretical three-photon GHZ-state generation rate is in the few-kHz regime. Alternatively, we estimate that by fusing single photons from the nanophotonic chip of Ref. [@uppu2020scalable], a three-photon state can potentially be produced at a rate of $\approx 3$ MHz. In comparison, using a deterministic source with the parameters of Ref. [@uppu2020scalable], we estimate a direct three-photon production rate of $\approx 20$ MHz \[see Fig. \[fig:2\](c)\], which exceeds the estimate for the SPDC-based method by four orders of magnitude. The fidelity of such three-photon states is, cf. Fig. \[fig:2\](c), $\approx 90\%$ for the realistic experimental parameters $\Delta = 2\pi\times 16$ GHz (corresponding to a magnetic field of 2 T), $\gamma_{\mathrm{d}} = 0.03$ ns$^{-1}$, $B=50$, and an emission rate of $\gamma = 2.2$ ns$^{-1}$ enhanced from the bulk decay rate $\gamma_{\mathrm{bulk}} = 1.0$ ns$^{-1}$. According to Fig. \[fig:2\](c), the proposed scheme not only benefits from a high generation rate, but also has the potential to outperform the existing state-of-the-art methods [@PhysRevLett.117.210502] in state fidelity.
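The quoted SPDC rate follows from simple back-of-the-envelope arithmetic. The model below is our own reading of the quoted numbers (all six photons must be extracted, and the fusion succeeds with total probability $1/32$):

```python
source_rate = 1e6   # SPDC sources operating at MHz frequencies
eta = 0.70          # extraction efficiency per photon (assumed independent)
p_fuse = 1 / 32     # total success probability of fusing six photons

# All six photons must be delivered simultaneously, then fused:
spdc_rate = source_rate * eta ** 6 * p_fuse   # ~3.7 kHz: the "few kHz regime"
```

This reproduces the order of magnitude of the few-kHz estimate, roughly four orders below the $\approx 20$ MHz deterministic rate quoted above.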
In conclusion, we have proposed a realistic experimental protocol for the deterministic generation of multiphoton entanglement from solid-state emitters. Our particular implementation relies on the control of photon emission by means of nanophotonic structures, such as PCWs. The exhaustive theoretical analysis provided here improves our understanding of the mechanisms governing the quality of the produced quantum states and provides a recipe for optimizing the design of multiphoton entanglement sources. Our findings predict the near-future feasibility of multiphoton sources with entanglement fidelities, generation rates, and control capabilities exceeding those of fusion-based methods.
We gratefully acknowledge financial support from Danmarks Grundforskningsfond (DNRF 139, Hy-Q Center for Hybrid Quantum Networks), the European Research Council (ERC Advanced Grant ‘SCALE’), and the European Union Horizon 2020 research and innovation programme under grant agreement No. 820445 and project name Quantum Internet Alliance.
---
abstract: 'We take a new approach to construct Quintessential models. With this approach, we first easily obtain a tracker solution that is different from those discovered before and straightforwardly find a solution of multiple attractors, i.e., a solution with more than one attractor for a given set of parameters. Then we propose a scenario of Quintessence where the field jumps out of the scaling attractor to the de-Sitter-like attractor, by introducing a field whose value changes a certain amount in a short time, leading to the current acceleration. We also calculate the change the field needs for a successful jump and suggest a possible mechanism that involves spontaneous symmetry breaking to realize the sudden change of the field value.'
author:
- 'Shuang-Yong Zhou'
title: A New Approach to Quintessence and a Solution of Multiple Attractors
---
Recent observations and experiments strongly indicate that the universe is spatially flat and currently undergoing accelerated expansion [@snae; @wmap; @sdss]. A negative-pressure energy component, termed dark energy, is suggested to be responsible for the acceleration. The simplest candidate for dark energy seems to be a positive cosmological constant, which is conventionally associated with the quantum vacuum energy. However, its value is extremely small compared with typical particle-physics scales, which is the so-called *fine-tuning problem* [@fine]. It also suffers from the so-called *coincidence problem* [@coin]. Rather than treating dark energy as a cosmological constant, various alternative routes have been proposed, which usually invoke dynamical scalar fields, such as Quintessence [@Wetterich88; @peebles; @BCN99; @SW99; @albrecht], Phantom [@Caldwell02] and Quintom [@quintom].
Quintessence invokes an evolving canonical scalar field slowly rolling down its potential (to some extent like the inflaton which drives inflation in the early universe) with equation of state $w_{\phi}>-1$. Motivated by observational data [@Corasaniti:2004sz; @ASSS], Phantom invokes a negative kinetic term with effective equation of state $w_{Ph}<-1$, which has led to many interesting phenomena [@phantom].
Among the various Quintessential models, *tracker solutions* have attracted a lot of attention. The tracker field has an equation of motion with attractor-like solutions in the sense that a very wide range of initial conditions rapidly converge to a common, cosmic evolutionary track of $\rho_{\phi}(t)$ and $w_{\phi}(t)$. The tracking behavior with $w_{\phi}<w_{m}$ occurs when $\Gamma>1$ and is nearly constant (${\mathrm{d}}(|\Gamma-1|)/{\mathrm{d}}\ln a\ll |\Gamma-1|$), where $\Gamma$ is defined as $VV''/V'^2$, with $V$ the potential and $'$ the derivative w.r.t. the field [@trcksol]. It has been found that the general inverse power-law ($V(\phi)=\sum c_{\alpha}/\phi^{\alpha}$) and exponential ($V(\phi)=V_0\exp(1/\phi)$) potentials give tracker solutions (we have chosen $\kappa^2=8\pi G=1$).
Another important class of Quintessential models are *scaling solutions* [@CLW; @van; @MP; @NM; @gong] in which the energy density of the scalar field mimics the background fluid energy density. Namely scaling solutions are characterized by the relation $\rho_{\phi}\propto\rho_{m}$, whose simplest realization is the exponential potential $V_0e^{-\mu\phi}$. As long as the scaling solution is the dynamical attractor, for any generic initial conditions, the field would sooner or later enter the scaling regime, being sub-dominant during radiation and matter dominated eras to satisfy the tight constraints from nucleosynthesis and structure formation, thereby opening up a new line of attack on the fine-tuning problem [@dy]. However, exit from the scaling regime is needed so as to give rise to recent acceleration.
The double exponential potential [@BCN99; @doubleexp] of the form $$\label{2exp} V(\phi) = V_0 \left( e^{-\mu\phi} + e^{-\nu\phi}
\right)\,,$$ provides a simple realization of the exit from the scaling regime. Such potentials can arise as a result of compactifications in superstring models. By properly choosing $\mu$, $\nu$ and initial conditions, one term in the potential dominates over the other before nucleosynthesis, giving rise to the scaling solution, while the situation reverses recently, giving rise to a de-Sitter-like acceleration. However, whether it is possible to obtain the required values of $\mu$ and $\nu$ remains a problem. In [@SW99], the authors considered the potential $$\label{coshpot} V(\phi) = V_0 \left[ \cosh( \mu\phi) -1 \right]^n
\,,$$ which has two interesting asymptotic regions. One of these ($|\mu \phi| \gg 1,~\phi<0$) gives the scaling solution, while the other ($|\mu \phi| \ll 1$), according to the virial theorem, gives the current acceleration with average equation of state $\langle w_{\phi} \rangle=(n-1)/(n+1)$. As current data favor an equation of state close to $-1$ [@wmap], $n$ should be close to $0$, which is mathematically viable but seems physically unnatural. See [@as99; @um] for two other popular models.
On the other hand, there is an attempt to search for a solution with two scaling regimes by coupling Quintessence to the matter [@cpquin]. Nonetheless, this scenario has faced severe challenges, since it has been shown that it cannot be realized for a vast class of scalar field Lagrangians [@chllg].
In this letter, we take a new approach to construct Quintessential models. With this approach, instead of proposing an interesting Quintessential potential directly, we first propose a relation between two quantities, $\Gamma$ and $\lambda$ (defined as $-V'/V$), and then recover the potential. First, we show that a tracker potential which is different from those discovered before can be easily obtained. Then we find it straightforward to get a solution of multiple attractors, that is, a solution with more than one attractor for a given set of parameters. In the particular case given in this letter, we have a scaling attractor and a de-Sitter-like attractor. We thus propose a model in which the universe first evolves to the scaling attractor, and then, by introducing a field whose value changes a certain amount in a short time, the universe jumps out to the de-Sitter-like attractor to give the current acceleration. We also calculate the change the field needs for a successful jump and justify the introduction of this kind of field.
To start, we consider the action of Quintessence ($\epsilon=1$) (or Phantom ($\epsilon=-1$)) minimally coupled to gravity, $$\label{action}
S = \int\!{\mathrm{d}}^4x\sqrt{-g}\: [-\frac{1}{2}\epsilon(\nabla\!
\phi)^2-V(\phi)]\,,$$ where we use the metric signature $(-,+,+,+)$ and $(\nabla\!
\phi)^2=g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi$. In the flat Friedmann-Robertson-Walker spacetime, the equation of state for the Quintessential field $\phi$ is given by $$w_{\phi}=\frac{p_{\phi}}{\rho_{\phi}}=\frac{\epsilon\dot{\phi}^2 -
2V(\phi)}{\epsilon\dot{\phi}^2 + 2V(\phi)}\,.$$ The variation of the action (\[action\]) with respect to $\phi$ gives $$\label{eom}
\epsilon\ddot{\phi}+3\epsilon H\dot{\phi}+V'=0\,.$$ Since we carry out cosmological dynamics of the Quintessential field $\phi$ in the presence of a barotropic fluid whose equation of state is given by $w_m=p_m/\rho_m$ (in this paper, we assume that $w_m$ is constant), Einstein equations reduce to $$\begin{aligned}
\label{fdme}
&& H^2 = \frac13\; [ \frac12 \epsilon \dot{\phi}^2 + V(\phi)
+ \rho_m ]\,, \\
\label{fdme1}
&& \dot{H} = -\frac12\;[ \epsilon
\dot{\phi}^2+(1+w_m)\rho_m ]\,.\end{aligned}$$ Introducing the following dimensionless variables $$\begin{aligned}
&& x \equiv \frac{\dot{\phi}}{\sqrt{6}H}\,,~~
y \equiv \frac{\sqrt{V}}{\sqrt{3}H}\,,\nonumber \\
\label{lamGam}
&& \lambda \equiv -\frac{V'}{V}\,,~~\;
\Gamma \equiv \frac{VV''}{V'^2}\,,\end{aligned}$$ Eq. (\[eom\]), (\[fdme\]), (\[fdme1\]) can be recast in the following form [@CLW; @Ng; @dy]: $$\begin{aligned}
\label{autoquin1} \hspace*{-1.5em} \frac{{\mathrm{d}}x}{{\mathrm{d}}N} &=&
-3x+\frac{\sqrt{6}}{2} \epsilon \lambda y^2 \nonumber \\
& & +\frac32 x[(1-w_m)\epsilon x^2 +(1+w_m)(1-y^2)]\,, \\
\label{autoquin2}
\hspace*{-1.5em} \frac{{\mathrm{d}}y}{{\mathrm{d}}N} &=&
-\frac{\sqrt{6}}{2}\lambda xy \nonumber \\
& & +\frac32 y[(1-w_m)\epsilon x^2 +(1+w_m)(1-y^2)]\,, \\
\label{autoquin3}
\hspace*{-1.5em}\frac{{\mathrm{d}}\lambda}{{\mathrm{d}}N} &=&
-\sqrt{6} \lambda^2 (\Gamma-1)x\,,\end{aligned}$$ where $N=\ln a$ ($a$ is the scale factor), together with a constraint equation $$\label{confine} \epsilon x^2+y^2+\frac{\rho_{m}}{3H^2}=1\,.$$ The equation of state $w_{\phi}$ and the fraction of the energy density $\Omega_{\phi}$ for the field $\phi$ are, respectively, $$\label{wphiquin} w_{\phi}=\frac{\epsilon x^2-y^2}{\epsilon x^2+y^2}\,,$$ $$\label{Omephiquin} \Omega_{\phi}=\epsilon x^2+y^2\,.$$ To warm up, we note that for many Quintessential (or Phantom) potentials $\Gamma$ can be written as a function of $\lambda$. Take, for example, the Phantom potential of the form $$V(\phi)=\frac{V_0}{[\cosh(\sigma\phi)]^n}\,.$$ One finds $$\label{phangam} \Gamma=1+\frac{1}{n}-\frac{n\sigma^2}{\lambda^2}\,.$$ Substituting Eq. (\[phangam\]) into Eq. (\[autoquin3\]), we can perform a three-dimensional dynamical analysis of the autonomous system. For a barotropic fluid background, there is a unique stable fixed point $(x=0,\;y=1,\;\lambda=0)$, which is a de-Sitter-like dominant attractor. For the case $n=1$, this confirms the results of [@SSN]. Note that we neglect the cases with $y<0$, as the system is symmetric under the reflection $(x,y)\to(x,-y)$ and time reversal $t\to-t$.
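As an illustration (not part of the original analysis), the autonomous system can be integrated numerically. A minimal Python sketch, assuming SciPy is available; the constant-$\lambda$ closure $\Gamma=1$ (pure exponential potential) is used here so the late-time state can be checked against the well-known scaling fixed point with $\Omega_{\phi}=3(1+w_m)/\lambda^2$ and $w_{\phi}=w_m$:

```python
import numpy as np
from scipy.integrate import solve_ivp

SQRT6 = np.sqrt(6.0)

def rhs(N, u, eps, wm, Gamma):
    """dx/dN, dy/dN, dlambda/dN for Quintessence (eps=+1) or Phantom (eps=-1)."""
    x, y, lam = u
    common = 1.5 * ((1.0 - wm) * eps * x**2 + (1.0 + wm) * (1.0 - y**2))
    dx = -3.0 * x + 0.5 * SQRT6 * eps * lam * y**2 + x * common
    dy = -0.5 * SQRT6 * lam * x * y + y * common
    dlam = -SQRT6 * lam**2 * (Gamma(lam) - 1.0) * x
    return [dx, dy, dlam]

# Pure exponential potential V = V0*exp(-lam0*phi): Gamma = 1, lambda constant,
# and for lam0^2 > 3(1+wm) the scaling attractor has Omega_phi = 3(1+wm)/lam0^2.
eps, wm, lam0 = 1.0, 1.0 / 3.0, 4.0
Gamma_const = lambda lam: 1.0
sol = solve_ivp(rhs, [0.0, 40.0], [0.1, 0.1, lam0],
                args=(eps, wm, Gamma_const), rtol=1e-8, atol=1e-10)
x, y = sol.y[0, -1], sol.y[1, -1]
Omega_phi = eps * x**2 + y**2                      # Omega_phi = eps x^2 + y^2
w_phi = (eps * x**2 - y**2) / (eps * x**2 + y**2)  # w_phi
print(Omega_phi, w_phi)   # -> 0.25 and 1/3 on the scaling attractor
```

For a radiation background ($w_m=1/3$) and $\lambda=4$, the trajectory settles on the scaling attractor with $\Omega_{\phi}=3(4/3)/16=0.25$ and $w_{\phi}=1/3$, as expected.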
The direct way to get a Quintessential (or Phantom) model is to conceive (usually fairly carefully) a potential that meets constraints from observations and experiments. However, encouraged by what has been shown above, let us take another route.
We note that the dynamical system (\[autoquin1\], \[autoquin2\], \[autoquin3\]) is autonomous except for $\Gamma$. In fact, since the potential $V(\phi)$ is a function of the field $\phi$ alone, by the definition (\[lamGam\]), $\lambda$ and $\Gamma$ can be written as $$\lambda=P(\phi)\,,~~\Gamma=Q(\phi)\,.$$ If the inverse function of $P(\phi)$ exists, then we have $$\label{para} \Gamma=Q(P^{-1}(\lambda))\equiv f(\lambda)\,.$$ It is noteworthy that in principle we can recover the potential as a function of the field $\phi$. Using the definitions of $\lambda$ and $\Gamma$, Eq. (\[para\]) can be rewritten as $$V''=f\!\left(-\frac{V'}{V}\right)\frac{V'^2}{V}\equiv F(V,V')\,.$$ Let $h=V'$; then we get $$\frac{{\mathrm{d}}h}{{\mathrm{d}}V}=\frac{1}{h}\,F(V,h)\,.$$ Having solved for $h(V)$, we can solve $V'(\phi)=h(V(\phi))$ to obtain the potential $V(\phi)$. Thus we can perform a three-dimensional dynamical analysis of the system (\[autoquin1\], \[autoquin2\], \[autoquin3\]) for a fairly large class of potentials beyond the exponential case, where the dynamical system reduces to a two-dimensional autonomous system.
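This two-step reconstruction can be checked numerically. A sketch (assuming SciPy): for the closure $\Gamma(\lambda)=1+\alpha/\lambda^2$ with $\alpha=2$, the potential $V(\phi)=e^{\alpha\phi^2/2}=e^{\phi^2}$ solves the relation (one can verify $\lambda=-2\phi$, $\Gamma=1+1/(2\phi^2)$), so the recovered $\phi(V)$ can be compared against $\phi=\sqrt{\ln V}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def phi_of_V(f, V0, h0, phi0, V_end):
    """Given Gamma = f(lambda), integrate dh/dV = f(-h/V)*h/V  (h = dV/dphi)
    together with dphi/dV = 1/h, from (V0, h0, phi0) up to V_end."""
    def rhs(V, u):
        h, phi = u
        lam = -h / V
        return [f(lam) * h / V, 1.0 / h]
    sol = solve_ivp(rhs, [V0, V_end], [h0, phi0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]   # phi at V_end

# Closure Gamma = 1 + alpha/lambda^2 with alpha = 2, whose potential is
# V(phi) = exp(phi^2).  At phi = 1: V = e, h = V' = 2*phi*V = 2e.
alpha = 2.0
f = lambda lam: 1.0 + alpha / lam**2
phi_end = phi_of_V(f, np.e, 2.0 * np.e, 1.0, np.e**4)
print(phi_end)   # ~ 2.0, since V = e^4 corresponds to phi = 2
```

The recovered value agrees with the closed-form inverse $\phi(V)=\sqrt{\ln V}$ to within the integration tolerance.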
Viewing $\Gamma$ as a function of $\lambda$ and invoking the powerful theorem presented in [@trcksol], it is easy to obtain a tracker field. As an example, we write $\Gamma$ as $$\Gamma=1+\frac{\alpha}{\lambda^2}\,,$$ which can be solved to give the potential $$V(\phi)=V_0\,e^{\frac{\alpha}{2}\phi(\phi+\beta)}\,,$$ where $V_0(>0)$ and $\beta$ are integration constants. We note that it differs from the general inverse power-law ($V(\phi)=\sum c_{\alpha}/\phi^{\alpha}$) and exponential ($V(\phi)=V_0\exp(1/\phi)$) potentials.
Obviously, $\Gamma>1$ if $\alpha>0$. To confirm that this is a genuine tracker solution, we impose the condition ${\mathrm{d}}(|\Gamma-1|)/{\mathrm{d}}N\ll |\Gamma-1|$ and obtain $$\label{trck1} \left|\frac{2}{\lambda}\frac{{\mathrm{d}}\lambda}{{\mathrm{d}}N}\right|\ll 1\,.$$ Substituting Eq. (\[autoquin3\]) into Eq. (\[trck1\]), we obtain $$|x|\ll\left|\frac{\lambda}{2\sqrt{6}\,\alpha}\right|\,.$$ Considering the tracking condition $|\lambda|\sim|1/x|$ [@trcksol], we finally get $$\frac{2\sqrt{6}\,|\alpha|}{\lambda^2}\ll 1\,.$$ Tracking behavior exists for a wide range of parameters and initial conditions, which solves the fine-tuning problem. However, due to the $w$-$\Omega$ relation that alleviates the coincidence problem, it is difficult to obtain a current equation of state $w_0<-0.8$. A numerical solution of the cosmic dynamical evolution with tracking behavior, where for simplicity we have neglected the matter-dominated era, is given in Fig. \[tracker\].
![Evolution of $w_{\phi}$ (red dashed line) and $\Omega_{\phi}$ (green solid line) of $\Gamma=1+\alpha/\lambda^2$ ($V(\phi)=V_0e^{\alpha\phi(\phi+\beta)/2}$, $\alpha$ is chosen as $2.8$) with respect to $N=\ln a$ in the background fluid with $w_m=1/3$. We choose initial conditions as $x_i=0.1$, $y_i=0.36$ and $\lambda_i=17.8$. For simplicity we have neglected the matter-dominated era.[]{data-label="tracker"}](tracker.eps){height="2.3in" width="3.3in"}
Having witnessed the utility of this approach, we would like to go further. As shown below, we find it straightforward to parametrize $\Gamma$ as a function of $\lambda$ so as to get a solution of multiple attractors, i.e., a solution with more than one attractor for a given set of parameters. In the particular case given below, we have a scaling attractor and a de-Sitter-like attractor; it is worth noting that two coexisting scaling solutions are problematic [@chllg]. Thus we are encouraged to consider a scenario in which the initial conditions of the cosmic scalar field lie in the basin of a scaling solution, so that the field first evolves toward the scaling attractor. Then, recently, the field *jumps* out to the basin of a de-Sitter-like dominant attractor, giving rise to the current acceleration. In this scenario, the mechanism of exit from the scaling regime is different from those mentioned in the introduction, which typically invoke fairly carefully conceived potentials with two asymptotic behaviors corresponding to the scaling case and the de-Sitter-like case respectively; therefore the attractors in those models are not exact. In contrast, the two attractors considered below are exact, and we suggest another physical reason to realize the exit from the scaling regime, rather than connecting the two interesting regimes with more or less contrived potentials. The physical reason is formulated as a sudden change of the field value.
Considering Eq. (\[autoquin3\]), we parametrize $\Gamma$ as $$\Gamma=1+\frac{1}{\beta}+\frac{\alpha}{\lambda}\,.$$ There are at least the following two fixed points:
- Point(*a*):\
    ($x=-\alpha\beta/\sqrt{6},\;y=\sqrt{1-\alpha^2\beta^2/6},\;
\lambda=-\alpha\beta$) is a de-Sitter-like dominant attractor, in which $w_{\phi}=-1+\alpha^2\beta^2/3$. The eigenvalues of the Jacobi matrix of the dynamical system are $$\begin{aligned}
&&\mu_1=-\alpha^2\beta\,, \nonumber \\
&&\mu_2=-3+\frac{\alpha^2\beta^2}{2}\,, \nonumber \\
&&\mu_3=-3(1+w_m)+\alpha^2\beta^2\,. \nonumber
\end{aligned}$$ It exists if $\alpha^2\beta^2<6$ and is stable if $\alpha^2\beta^2<3(1+w_m)$ and $\beta>0$.
- Point(*b*):\
($x=\!-\sqrt3(1\!+\!w_m)/\sqrt2\alpha\beta,\;y=\!\!\sqrt{3(1\!-\!w_m^2)/2\alpha^2\beta^2},\linebreak[4]
\lambda=-\alpha\beta$) is a scaling attractor, in which $w_{\phi}=w_m$ and $\Omega_{\phi}=3(1+w_m)/\alpha^2\beta^2$. The eigenvalues of the Jacobi matrix are $$\begin{aligned}
&& \mu_1=-\frac{3(1+w_m)}{\beta}\,, \nonumber \\
&& \mu_{2,3}=-\frac34(1-w_m)(1\pm\sqrt{\frac{-7-9w_m+24(1+w_m)^2}{(1-w_m)\alpha^2\beta^2}})\,. \nonumber
\end{aligned}$$ It exists if $\alpha^2\beta^2>3(1+w_m)$ and is stable if $\alpha^2\beta^2>3(1+w_m)$ and $\beta>0$.
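The quoted eigenvalues can be cross-checked by evaluating the Jacobian of the system numerically at a fixed point. A sketch (assuming NumPy), here for the de-Sitter-like point (*a*): its location follows from $\lambda=-\alpha\beta$ with the scalar-dominated relations $x=\lambda/\sqrt6$ and $x^2+y^2=1$; the parameters $\alpha=0.5$, $\beta=2$, $w_m=0$ satisfy the stated existence and stability conditions ($\alpha^2\beta^2=1$):

```python
import numpy as np

SQRT6 = np.sqrt(6.0)

def rhs(u, alpha, beta, wm):
    """Autonomous system with Gamma = 1 + 1/beta + alpha/lambda (eps = +1)."""
    x, y, lam = u
    common = 1.5 * ((1.0 - wm) * x**2 + (1.0 + wm) * (1.0 - y**2))
    dx = -3.0 * x + 0.5 * SQRT6 * lam * y**2 + x * common
    dy = -0.5 * SQRT6 * lam * x * y + y * common
    dlam = -SQRT6 * (lam**2 / beta + alpha * lam) * x   # lam^2 * (Gamma - 1)
    return np.array([dx, dy, dlam])

def jacobian(u, args, h=1e-6):
    """Central finite-difference Jacobian of rhs at u."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (rhs(u + e, *args) - rhs(u - e, *args)) / (2.0 * h)
    return J

alpha, beta, wm = 0.5, 2.0, 0.0          # alpha^2 beta^2 = 1 < 3(1 + wm)
lam_a = -alpha * beta                    # lambda = -1 at point (a)
xa = lam_a / SQRT6
ua = np.array([xa, np.sqrt(1.0 - xa**2), lam_a])
eigs = np.sort(np.linalg.eigvals(jacobian(ua, (alpha, beta, wm))).real)
# Analytic: mu1 = -alpha^2*beta = -0.5, mu2 = -3 + (alpha*beta)^2/2 = -2.5,
#           mu3 = -3(1+wm) + (alpha*beta)^2 = -2.
print(eigs)   # -> [-2.5, -2.0, -0.5]
```

The numerically obtained eigenvalues reproduce the analytic expressions $\mu_1=-\alpha^2\beta$, $\mu_2=-3+\alpha^2\beta^2/2$, $\mu_3=-3(1+w_m)+\alpha^2\beta^2$.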
![Evolution of $x$ (red solid line), $y$ (green dot-dashed line), $\lambda$ (blue dashed line) of $\Gamma=1+1/\beta+\alpha/\lambda$ with respect to $N=\ln a$ in the background fluid with $w_m=0$. $\alpha$ is chosen as $-2.6$, $\beta$ chosen as $2$. We choose initial conditions as $x_i=0.2$, $y_i=0.36$ and $\lambda_i=-1.3$ for the thin lines, and $x_i=0$, $y_i=0.06$ and $\lambda_i=-1.4$ for the thick lines. Note that the attractor is a stable spiral.[]{data-label="mulattr"}](mulattr.eps){height="2.3in" width="3.3in"}
For the stability of the fixed points *a* and *b*, we choose $\beta>0$. We note that the fixed points *a* and *b* are typical for general scaling solutions and cannot both be stable for a given set of parameters [@st; @dy].
Besides, we find that the de-Sitter-like dominant fixed point ($x=0,\;y=1,\;\lambda=0$) is stable, i.e., a de-Sitter-like dominant attractor, which we denote as Point(*c*). This cannot be seen simply from the eigenvalues of the Jacobi matrix ($\mu_1=-3(1+w_m),\;\mu_2=-3,\;\mu_3=0$), since $\mu_3=0$. However, it can be seen from numerical simulation of the dynamical system; see Fig. \[mulattr\] for a working example. It is found that when $\alpha<0$ $(\alpha>0)$, the region $\lambda<0$ $(\lambda>0)$ in the phase space of the dynamical system is the basin of the attractor ($x=0,\;y=1,\;\lambda=0$), while the remaining region is the basin of the attractor with $\lambda=-\alpha\beta$, i.e., (*a*) or (*b*). This is desirable, since the basins are divided by a plane of constant $\lambda$ in the phase space, independent of $x$ and $y$; thus the initial values of $x$ and $y$ can be arbitrary.
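The coexistence of the two attractors can be illustrated numerically. A sketch (assuming SciPy; the parameters follow Fig. \[mulattr\], $\alpha=-2.6$, $\beta=2$, $w_m=0$, while the second set of initial conditions is our own illustrative choice in the $\lambda>0$ basin):

```python
import numpy as np
from scipy.integrate import solve_ivp

SQRT6 = np.sqrt(6.0)

def rhs(N, u, alpha, beta, wm):
    """Autonomous system with Gamma = 1 + 1/beta + alpha/lambda (eps = +1)."""
    x, y, lam = u
    common = 1.5 * ((1.0 - wm) * x**2 + (1.0 + wm) * (1.0 - y**2))
    return [-3.0 * x + 0.5 * SQRT6 * lam * y**2 + x * common,
            -0.5 * SQRT6 * lam * x * y + y * common,
            -SQRT6 * (lam**2 / beta + alpha * lam) * x]

args = (-2.6, 2.0, 0.0)                    # alpha, beta, wm
run = lambda u0: solve_ivp(rhs, [0.0, 200.0], u0, args=args,
                           rtol=1e-9, atol=1e-11).y[:, -1]

x1, y1, lam1 = run([0.2, 0.36, -1.3])      # lambda < 0: basin of point (c)
x2, y2, lam2 = run([0.3, 0.30, 5.0])       # lambda > 0: basin of point (b)
print(lam1, y1)              # -> lambda ~ 0, y ~ 1 (de-Sitter-like attractor)
print(lam2, x2**2 + y2**2)   # -> lambda ~ 5.2, Omega ~ 3(1+wm)/(alpha*beta)^2
```

For the same parameter set, one trajectory ends on the de-Sitter-like attractor and the other on the scaling attractor with $\lambda=-\alpha\beta=5.2$ and $\Omega_{\phi}=3/27.04\approx 0.11$, illustrating the two basins separated by the plane $\lambda=0$.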
The corresponding potential in this case is $$V(\phi)=\frac{V_0\,e^{\alpha\beta\phi}}{\left(1+\eta\, e^{\alpha\phi}\right)^{\beta}}\,,$$ where $V_0(>0)$ and $\eta$ are integration constants. For the stability of the potential, that is, for the potential to be bounded from below, we should choose $\beta=2,4,6,\ldots$, and to obtain interesting cases we choose $\eta<0$.
Now we consider the scenario in which the field exits the scaling regime to the de-Sitter-like regime due to a sudden change of the field value. To this end, we conceive $\phi$ as $$\label{ansatz} \phi=f(t)\,\varphi\,,$$ with $$\label{chg} f(t)=\begin{cases} 1 & t<t_j\,, \\ f_j & t\geq t_j\,, \end{cases}$$ where $t$ is the cosmic time and $f_j$ is a constant, with the subscript $j$ representing some recent time when the field jumps. Note that $f(t)$ is not necessarily of the form above, but it should change its value by a certain amount in a short time, so that the field does not evolve back to the scaling attractor. We will first calculate the change the field needs for a successful jump and then suggest a possible mechanism to realize the sudden change of the field value.
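The effect of such a jump can be mimicked at the level of the dynamical variables: integrate with $\Gamma=1+1/\beta+\alpha/\lambda$ and $\alpha>0$ until the trajectory sits on the scaling attractor ($\lambda=-\alpha\beta<0$), then reset the state into the $\lambda>0$ basin and continue. A rough sketch (assuming SciPy; the reset values $x=0.1$ and $\lambda=+1$ are arbitrary illustrative choices, not values derived from the potential):

```python
import numpy as np
from scipy.integrate import solve_ivp

SQRT6 = np.sqrt(6.0)

def rhs(N, u, alpha, beta, wm):
    """Autonomous system with Gamma = 1 + 1/beta + alpha/lambda (eps = +1)."""
    x, y, lam = u
    common = 1.5 * ((1.0 - wm) * x**2 + (1.0 + wm) * (1.0 - y**2))
    return [-3.0 * x + 0.5 * SQRT6 * lam * y**2 + x * common,
            -0.5 * SQRT6 * lam * x * y + y * common,
            -SQRT6 * (lam**2 / beta + alpha * lam) * x]

alpha, beta, wm = 2.6, 2.0, 0.0   # alpha > 0: the lam > 0 region is basin of (c)
lam_b = -alpha * beta             # scaling attractor at lambda = -5.2
xb = np.sqrt(1.5) * (1.0 + wm) / lam_b
yb = np.sqrt(1.5 * (1.0 - wm**2)) / abs(alpha * beta)

# Phase 1: start near the scaling attractor and relax onto it.
s1 = solve_ivp(rhs, [0.0, 60.0], [1.1 * xb, 0.9 * yb, lam_b],
               args=(alpha, beta, wm), rtol=1e-9, atol=1e-11)
x, y, lam = s1.y[:, -1]
w_scaling = (x**2 - y**2) / (x**2 + y**2)   # ~ wm on the scaling attractor

# Phase 2: mimic the jump by resetting (x, lambda) into the lam > 0 basin.
s2 = solve_ivp(rhs, [0.0, 200.0], [0.1, y, 1.0],
               args=(alpha, beta, wm), rtol=1e-9, atol=1e-11)
x, y, lam = s2.y[:, -1]
w_final = (x**2 - y**2) / (x**2 + y**2)     # -> -1: de-Sitter-like acceleration
print(w_scaling, w_final)
```

Before the jump the field tracks the background ($w_{\phi}\simeq w_m$); after the reset the trajectory converges to the de-Sitter-like attractor with $w_{\phi}\to -1$.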
In order to calculate the change the field needs to realize the jump, we choose, without loss of generality, $\alpha>0$, so that the region $\lambda>0$ is the basin of the de-Sitter-like dominant attractor (*c*). To meet the constraints from nucleosynthesis and structure formation, we require that the Quintessential field have scaled with the background well before nucleosynthesis. So we choose $\alpha^2\beta^2>20$ (for $\Omega_{\phi}(T \sim 1~{\rm MeV}) < 0.2$). At some recent time just before the jump the field is $\phi=\phi_j=\varphi_j$, and then $\phi$ rapidly changes from $\phi_j$ to $\phi_j+\delta \phi$ (or from $\varphi_j$ to $f_j\varphi_j$). For the jump from the basin of the scaling attractor (*b*) to that of the de-Sitter-like dominant attractor (*c*) to happen, we need $$\delta\phi>\frac{1}{\alpha}\ln\left(-\frac{1}{\eta}\right)-\phi_j=\frac{1}{\alpha}\ln\frac{\lambda_j}{\lambda_j+\alpha\beta}\,,$$ where $\lambda_j=-\alpha\beta/(1+\eta e^{\alpha\phi_j})$, or $$\label{sgm} f_j\varphi_j>\frac{1}{\alpha}\ln\left(-\frac{1}{\eta}\right)\,.$$ Now we shall justify the introduction of a field whose value changes suddenly. Below, we suggest a possible mechanism, resorting to spontaneous symmetry breaking, to realize the sudden change of the field value. We first note that scalar fields are ubiquitous in supersymmetric theories of particle physics. Thus it is reasonable to assume that a few of them are relevant to the cosmic evolution. Considering $\phi$ as an effective field, we involve three fields with the Lagrangian $$\begin{aligned}
\label{hyb} {\mathcal{L}}&=&-\frac12 f^2(\sigma)\, g^{\mu\nu}\partial_{\mu}\varphi\,\partial_{\nu}\varphi-V(\varphi f(\sigma)) \nonumber \\
&&-\frac12 g^{\mu\nu}\partial_{\mu}\sigma\,\partial_{\nu}\sigma-\frac12 g^{\mu\nu}\partial_{\mu}\theta\,\partial_{\nu}\theta-V(\sigma,\theta)\,,\end{aligned}$$ where $$\begin{aligned}
V(\sigma,\theta)&=&V'_0-\frac12 m_{\sigma}^2\sigma^2+\frac14\sigma^4+\frac12 m^2\theta^2+\frac12\lambda'\theta^2\sigma^2 \nonumber \\
&=&\frac14(M^2-\sigma^2)^2+\frac12 m^2\theta^2+\frac12\lambda'\theta^2\sigma^2\,.\end{aligned}$$ We conceive $f(\sigma)$ as $$f(\sigma)=\begin{cases} 1 & \sigma^2<\sigma_s^2\,, \\ f_j & \sigma^2\geq\sigma_s^2\,, \end{cases}$$ so that $\varphi$ decouples from $\sigma$ and $\theta$ except at the points $\sigma=\pm\sigma_s$ (they surely couple to each other through the Friedmann equation; nevertheless, when radiation or matter dominates the universe, the coupling through gravity is negligible). Since it is more reasonable for $f(\sigma)$ to be a continuous function, a better choice might be a smoothed step such as $$f(\sigma)=\frac{1+f_j}{2}-\frac{1-f_j}{2}\tanh\!\left[a\left(\sigma^2-\sigma_s^2\right)\right]\,,~~a\gg 1\,.$$ We note that $V(\sigma,\theta)$ is famous for its realization of Hybrid inflation models [@hyb]. In these models, first, $\sigma$ is held at the origin while $\theta$ slowly rolls down the potential, giving the inflation; then, when $\theta$ rolls below a critical value $\theta_c$, $\sigma$ is destabilized and quickly rolls from $0$ to $\pm M$, ending the inflation. Comparing the two ways of writing the potential, we obtain $$m_{\sigma}^2=M^2\,,~~V'_0=\frac14 M^4\,.$$ The critical value of $\theta$ is $$\theta^2_c=m_{\sigma}^2/\lambda'=M^2/\lambda'\,.$$ For this potential to be viable for the current purpose, $\theta$ does not need to slow roll. Yet we do need $\sigma$ to quickly roll down from $0$ to $\pm M$ after $\theta$ rolls below $\theta_c$, which implies $m^2\theta_c^2\ll V'_0$, and we require $0<\sigma_s^2<M^2$ so that when $\sigma$ rolls down from $0$ to $\pm M$, $f(\sigma)$ changes from $1$ to $f_j$.
Note that when the field $\phi$ jumps to the de-Sitter-like regime, $V(\phi=\varphi f(\sigma))$ will increase and the kinetic term of $\varphi$ will also change. At the same time, $V(\sigma,\theta)$ should decrease so as to vanish when Quintessence begins to dominate the universe. For this scenario to be viable, we require that the decrease of $V(\sigma,\theta)$ be larger than the increase of $V(\phi=\varphi f(\sigma))$ plus the change of the kinetic term (note that when $|f_j|<1$, the kinetic term will decrease; however, it is easy to show that the decline of the kinetic term in this case is small compared to the increase of $V(\phi=\varphi f(\sigma))$). One may worry that this might spoil the foregoing analysis of the dynamics of $\phi$, as it requires the energy associated with $\sigma$ and $\theta$ to be comparable with that associated with $\varphi$ around the jump point. However, we argue that it does not, because $\varphi$ almost decouples from $\sigma$ and $\theta$, and the energy associated with $\sigma$ and $\theta$ is only comparable with that associated with $\varphi$ while radiation or matter dominates the universe, and it vanishes when the dark energy begins to dominate.
In summary, we have suggested a new approach to construct Quintessential dark energy models, in which we first propose a relation $\Gamma=f(\lambda)$ between $\Gamma=VV''/(V')^2$ and $\lambda=-V'/V$, and then recover the potential $V(\phi)$. We have shown that a tracker solution different from those discovered before can be easily obtained, and a solution of multiple attractors is also found straightforwardly. We then suggest a scenario in which the initial conditions of the cosmic scalar field lie in the basin of a scaling attractor; the field first evolves toward the scaling attractor and then, recently, jumps out to the basin of a de-Sitter-like dominant attractor, giving rise to the current acceleration. For this scenario to be realized, we introduce a field whose value changes by a certain amount in a short time. We calculate the change the field needs for a successful jump and invoke a mechanism similar to Hybrid inflation to justify the introduction of this kind of field.
We thank Yun-Song Piao, Yi Wang, Zuo-Tang Liang, Jian-Hua Gao and Ye Chen for useful discussions.
[99]{}
A. G. Riess [*et al.*]{}, Astron. J. [**116**]{}, 1009 (1998); S. Perlmutter [*et al.*]{}, Astrophys. J. [**517**]{}, 565 (1999). C. L. Bennett [*et al.*]{}, Astrophys. J. Suppl. [**148**]{}, 1 (2003); D. N. Spergel [*et al.*]{}, Astrophys. J. Suppl. [**148**]{}, 175 (2003); D. N. Spergel [*et al.*]{}, arXiv:astro-ph/0603449.
M. Tegmark [*et al.*]{}, Phys. Rev. D [**69**]{}, 103501 (2004); K. Abazajian [*et al.*]{}, Astron. J. [**128**]{}, 502 (2004); E. Hawkins [*et al.*]{}, Mon. Not. Roy. Astron. Soc. [**346**]{}, 78 (2003). S. Weinberg, Rev. Mod. Phys. [**61**]{}, 1 (1989).
I. Zlatev, L. -M. Wang, and P. J. Steinhardt, Phys. Rev. Lett., 896 (1999).
C. Wetterich, Nucl. Phys. B [**302**]{}, 668 (1988).
B. Ratra and J. Peebles, Phys. Rev. D [**37**]{}, 321 (1988).
T. Barreiro, E. J. Copeland and N. J. Nunes, Phys. Rev. D [**61**]{}, 127301 (2000).
V. Sahni and L. M. Wang, Phys. Rev. D [**62**]{}, 103517 (2000).
A. Albrecht and C. Skordis, Phys. Rev. Lett. [**84**]{}, 2076 (1999).
R. R. Caldwell, Phys. Lett. B [**545**]{}, 23-29 (2002).
B. Feng, X. L. Wang and X. M. Zhang, Phys. Lett. B [**607**]{}, 35 (2005); Z. K. Guo, Y. S. Piao, X. M. Zhang and Y. Z. Zhang, Phys.Lett. B [**608**]{}, 177 (2005); H. Wei, R. G. Cai and D. F. Zeng, Class. Quant. Grav. [**22**]{}, 3189 (2005); X. F. Zhang, H. Li, Y. S. Piao and X. M. Zhang, Mod. Phys. Lett. A [**21**]{}, 231 (2006); H. Wei and R. G. Cai, Phys. Lett. B [**634**]{}, 9 (2006).
P. S. Corasaniti, M. Kunz, D. Parkinson, E. J. Copeland and B. A. Bassett, Phys. Rev. D [**70**]{}, 083006 (2004).
U. Alam, V. Sahni, T. D. Saini and A. A. Starobinsky, Mon. Not. Roy. Astron. Soc. [**354**]{}, 275 (2004).
Z. K. Guo, Y. S. Piao and Y. Z. Zhang, Phys. Lett. B [**594**]{}, 247 (2004); T. Chiba, JCAP [**0503**]{}, 008 (2005);J. Q. Xia, B. Feng and X. M. Zhang, Mod. Phys. Lett. A [**20**]{}, 2409 (2005);
P. J. Steinhardt, L. M. Wang and I. Zlatev, Phys. Rev. D [**59**]{}, 123504 (1999).
E. J. Copeland, A. R. Liddle and D. Wands, Phys. Rev. D [**57**]{}, 4686 (1998).
R. J. van den Hoogen, A. A. Coley and D. Wands, Class. Quant. Grav. [**16**]{}, 1843 (1999).
A. de la Macorra and G. Piccinelli, Phys. Rev. D [**61**]{}, 123503 (2000).
A. Nunes and J. P. Mimoso, Phys. Lett. B [**488**]{}, 423 (2000).
Y. Gong, A. Wang and Y-Z. Zhang, Phys. Lett. B [**636**]{}, 286 (2006), arXiv: gr-qc/0603050.
Edmund J. Copeland, M. Sami and Shinji Tsujikawa, Int. J. Mod. Phys. D [**15**]{}, 1753 (2006), arXiv: hep-th/0603057.
A. A. Sen and S. Sethi, Phys. Lett. B [**532**]{}, 159 (2002); I. P. Neupane, Class. Quant. Grav. [**21**]{}, 4383 (2004); I. P. Neupane, Mod. Phys. Lett. A [**19**]{}, 1093 (2004); L. Jarv, T. Mohaupt and F. Saueressig, JCAP [**0408**]{}, 016 (2004).
L. A. Urena-Lopez and T. Matos, Phys. Rev. D [**62**]{}, 081302 (2000).
A. Albrecht and C. Skordis, Phys. Rev. Lett. [**84**]{}, 2076 (1999).
L. Amendola, Phys. Rev. D [**62**]{}, 043511 (2000).
L. Amendola, M. Quartin, S. Tsujikawa and I. Waga, Phys. Rev. D [**74**]{} (2006) 023525, arXiv:astro-ph/0605488.
S. C. C. Ng, N. J. Nunes and F. Rosati, Phys. Rev. D [**64**]{}, 083510 (2001).
P. Singh, M. Sami and N. Dadhich, Phys. Rev. D [**68**]{} 023522 (2003).
S. Tsujikawa, Phys. Rev. D [**73**]{}, 103504 (2006).
A. D. Linde, Phys. Lett. B [**259**]{}, 38 (1991).
---
abstract: 'A dynamic logic ${\mathbf B}$ can be assigned to every automaton ${\mathcal A}$ without regard if ${\mathcal A}$ is deterministic or nondeterministic. This logic enables us to formulate observations on ${\mathcal A}$ in the form of composed propositions and, due to a transition functor $T$, it captures the dynamic behaviour of ${\mathcal A}$. There are formulated conditions under which the automaton ${\mathcal A}$ can be recovered by means of ${\mathbf B}$ and $T$.'
author:
- 'Ivan Chajda[^1]'
- Jan Paseka
title: Dynamic logic assigned to automata
---
Introduction
============
The aim of the paper is to assign a certain logic to a given automaton without regard to whether it is deterministic or nondeterministic. This logic has to be dynamic in the sense that it captures the dynamics of a working automaton. We consider an [*automaton*]{} as ${\mathcal A}=(X,S,R)$, where $X$ is a non-empty set of [*inputs*]{}, $S$ is a non-empty set of [*states*]{} and $R\subseteq X\times S\times S$ is the set of [*labelled transitions*]{}. In this case we say that $R$ is a [*state-transition relation*]{} and it is considered as the dynamics of ${\mathcal A}$. Hence, the automaton ${\mathcal A}$ can be visualized as a graph whose vertices are states and whose edges denote (possibly multiple) transitions $s\xrightarrow{x} t$ from one state $s$ to another state $t$ provided an input $x$ is coming; this is visualized by a label $x$ on the edge $(s,t)$. In particular, motivated by the above considerations and e.g. by the paper [@perinotti], where a denumerable set of vertices is used in studying quantum automata to recover the Weyl, Dirac and Maxwell dynamics in the relativistic limit, we have to assume that the sets $X$ and $S$ can have arbitrarily large cardinality.
Any physical system can in some sense be considered as an automaton. Its states are then states of the automaton and the transition relation is a transition of the physical system from a given state to an admissible one. It should be noted that a quantum physical system is nondeterministic since particles can be in a so-called superposition, i.e., they may randomly select a state from the set of admissible states.
On the other hand, we often formulate certain propositions about an automaton ${\mathcal A}$ and deduce conclusions about the behaviour of ${\mathcal A}$ in the present (i.e., a [*description*]{}) or in a (near) future (i.e., a [*forecast*]{}). It is apparent that for this aim we need a certain logic which is derived from a given automaton and which enables us to formulate propositions about ${\mathcal A}$ and to deduce conclusions and consequences. Due to the mentioned dynamics of ${\mathcal A}$, our logic ${\mathbf B}$ should contain a tool capturing this dynamics. This tool will be called a [*transition functor*]{}. The transition functor will assign to every proposition $p\in {\mathbf B}$ and input $x\in X$ another proposition $q$. In a certain case, this functor can be considered as a modal functor with one more input from $X$. The above-mentioned approach makes sense if our logic ${\mathbf B}$ with a transition functor $T$ enables us to reconstruct the dynamics of a given automaton ${\mathcal A}$. One can compare our approach with the approach of [@perinotti], where an automaton is represented by an operator over a Hilbert space, or with the approaches of [@yongming] and [@mendivil], where the role of the transition functor is played by a map from $S$ to $({\mathbf M}^{S})^{X}$, where ${\mathbf M}$ is a bounded lattice of truth-values, or by a map from $S$ to $({[0,1]}^{S})^{X}$.
In what follows, we present a systematic approach to obtaining such a transition functor and a logic ${\mathbf B}$ such that the reconstruction of the state-transition relation $R$ is possible. Since the conditions of our approach are formulated in a purely algebraic way, we need to develop an algebraic background (see e.g. also [@Blyth]). It is worth noticing that the transition functor will be constructed formally in a similar way as the tense operators introduced by J. Burgess [@burges] for the classical logic and developed by the authors for several non-classical logics, see [@dyn], [@dem] and [@doa], and also the monograph [@monochapa]. Because we are not interested in outputs of the automaton ${\mathcal A}$, we will consider ${\mathcal A}$ only as a so-called [*acceptor*]{}.
It is worth noticing that certain (temporal) logics assigned to automata have already been investigated by several authors; see e.g. the seminal papers on temporal logics for programs by Vardi [@vardibuchi], [@vardilinear], the papers [@dixon; @konur] and the monograph [@fisher] for additional results and references. However, our approach is different. Namely, our logic assigned to an automaton is equipped with the so-called transition operator, which makes the logic dynamic.
Besides the above, an observer or user of an automaton can formulate propositions revealing our knowledge about it, depending on the input. The truth-values of these propositions depend on states and inputs; let us assume that these propositions can acquire only two values, namely either TRUE or FALSE. For example, if we fix an input $x\in X$, the proposition $p/x$ can be true if the automaton ${\mathcal A}$ is in the state $s$ but false if ${\mathcal A}$ is not in the state $s$. Hence, for each state $s\in S$ we can evaluate the truth-value of $p/x$; it is denoted by $p/x(s)$. As mentioned above, $p/x(s)\in \{0, 1\}$, where $0$ indicates the truth-value FALSE and $1$ indicates TRUE. Denote by $B$ the set of propositions about the automaton ${\mathcal A}$ formulated by the observer. We can introduce the order $\leq$ on $B$ as follows: $$\text{for}\ p,q\in B, p\leq q\ \text{if and only if}\
p(s)\leq q(s)\ \text{for all}\ s\in S.$$ One can immediately check that the contradiction, i.e., the proposition with constant truth-value $0$, is the least element and the tautology, i.e., the proposition with the constant truth-value $1$ is the greatest element of the partially ordered set $(B;\leq)$; this fact will be expressed by the notation ${\mathbf B}=(B;\leq, 0, 1)$ for the bounded partially ordered set of propositions about the automaton ${\mathcal A}$.
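This order can be made concrete by representing each proposition as a map from states to $\{0,1\}$. A small Python sketch (the state and proposition names are illustrative):

```python
# States of some automaton; a proposition is a map S -> {0, 1}.
S = ["s1", "s2", "s3"]

def leq(p, q):
    """p <= q  iff  p(s) <= q(s) for all states s."""
    return all(p[s] <= q[s] for s in S)

bottom = {s: 0 for s in S}   # contradiction: least element of (B; <=)
top = {s: 1 for s in S}      # tautology: greatest element of (B; <=)

p = {"s1": 1, "s2": 0, "s3": 0}   # "the system is in state s1"
q = {"s1": 1, "s2": 1, "s3": 0}   # "the system is in state s1 or s2"

print(leq(bottom, p), leq(p, q), leq(q, top))   # True True True
print(leq(q, p))                                # False
```

Every proposition sits between `bottom` and `top`, matching the description of $(B;\leq,0,1)$ as a bounded partially ordered set.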
We summarize our description as follows:
1. every automaton ${\mathcal A}$ will be identified with the triple $(B,X, S)$, where $B$ is the set of propositions about ${\mathcal A}$, $X$ is the set of possible inputs and $S$ is the set of states on ${\mathcal A}$;
2. we are given a set of labelled transitions $R\subseteq X\times S\times S$ such that, for an input $x\in X$, ${\mathcal A}$ can go from $s$ to $t$ provided $(x, s,t)\in R$;
3. the set $B$ is partially ordered by values of propositions as shown above.
If $s\xrightarrow{x} t_1$ and $s\xrightarrow{x} t_2$ imply $t_1=t_2$ for all $s, t_1, t_2\in S$ and $x\in X$, we say that ${\mathcal A}$ is a [*deterministic automaton*]{}. If ${\mathcal A}$ is not deterministic, we say that it is [*nondeterministic*]{}.
To shed light on the previous concepts, let us present the following example.
\[firef\]\[expend1\] At first, let us present a very simple automaton ${\mathcal A}$ describing a SkyLine Terminal Transfer Service at an airport between Terminals 1 and 2. The SkyLine train is housed, repaired and maintained in the engine shed, and the only way to get there is through Terminal 2.
The observer can distinguish three states as follows:
1. $s_1$ means that the SkyLine train is in Terminal 1,
2. $s_2$ means that the SkyLine train is in Terminal 2,
3. $s_3$ means that the SkyLine train is in the engine shed.
There are two possible actions:
1. $x_1$ means that the passengers entered the SkyLine train,
2. $x_2$ means that the SkyLine train has to be moved to the engine shed.
If the SkyLine train is in Terminal 1 or in Terminal 2 then, after the passengers entered it, it moves to the other terminal. If the SkyLine train is in Terminal 2 then, after the request that the SkyLine train has to be moved to the engine shed is issued, it moves to the engine shed. If the SkyLine train is in the engine shed then, regardless of what action is requested, it stays there.
The set $R$ of labelled transitions on the set $S=\{s_1, s_2, s_3\}$ of states under actions from the set $X=\{x_1, x_2\}$ is of the form $$R=\{(x_1,s_1, s_2), (x_1,s_2, s_1), (x_1,s_3,s_3), (x_2,s_2, s_3),
(x_2,s_3, s_3)\}$$ and it can be visualized as follows.
[Transition diagram: $s_1 \xrightarrow{x_1} s_2$, $s_2 \xrightarrow{x_1} s_1$, $s_2 \xrightarrow{x_2} s_3$, and loops $s_3 \xrightarrow{x_1} s_3$, $s_3 \xrightarrow{x_2} s_3$.]
The set $B=\{0, p, q, r, p', q', r', 1\}$ of possible propositions about the automaton ${\mathcal A}$ is as follows:
1. $0$ means that the SkyLine train is in no state of $S$,
2. $p$ means that the SkyLine train is in Terminal 1,
3. $q$ means that the SkyLine train is in Terminal 2,
4. $r$ means that the SkyLine train is in the engine shed,
5. $1$ means that the SkyLine train is in at least one state of $S$.
Considering ${\mathbf B}$ as a classical logic (represented by a Boolean algebra $(B; \vee, \wedge, ', 0, 1)$), we can apply logical connectives conjunction $\wedge$, disjunction $\vee$, negation $'$ and implication ${\Longrightarrow}$ to create new propositions about ${\mathcal A}$. In our case, we can get e.g. $p'=q\vee r$ which means that the SkyLine train is either in Terminal 2 or in the engine shed, etc. Altogether, we obtain eight propositions. We may identify $\mathbf B$ with the Boolean algebra $\{0, 1\}^S$ as follows:
---------------- ----------------- ----------------- ----------------
$0=(0,0,0)$, $p=(1, 0, 0)$, $q=(0, 1, 0)$, $r=(0, 0, 1)$,
$p'=(0,1, 1)$, $q'=(1, 0, 1)$, $r'=(1, 1, 0)$, $1=(1,1,1)$.
---------------- ----------------- ----------------- ----------------
The interpretation of propositions from $B$ is as follows: for any $\alpha\in B$, $\alpha$ is true in the state $s_i$ of the automaton ${\mathcal A}$ if and only if $\alpha(s_i)=1$.
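The identification of $\mathbf B$ with $\{0, 1\}^S$ can be checked mechanically. The following sketch (an illustration of ours, not part of the original development; all names are chosen for readability) represents propositions as truth-value tuples indexed by the states $s_1, s_2, s_3$ and verifies, e.g., that $p'=q\vee r$:

```python
from itertools import product

S = ["s1", "s2", "s3"]  # states of the SkyLine automaton

# A proposition is a truth-value tuple indexed by states:
# p = (1, 0, 0) is true exactly in state s1 (Terminal 1), etc.
p, q, r = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def neg(a):
    """Pointwise Boolean negation a'."""
    return tuple(1 - v for v in a)

def join(a, b):
    """Pointwise disjunction a ∨ b."""
    return tuple(max(x, y) for x, y in zip(a, b))

# p' = q ∨ r: the train is in Terminal 2 or in the engine shed.
assert neg(p) == join(q, r) == (0, 1, 1)

# The whole algebra {0,1}^S has 2^|S| = 8 elements, as listed above.
B = list(product([0, 1], repeat=len(S)))
assert len(B) == 8
```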
Algebraic tools
===============
For the above-mentioned construction of a suitable logic with a transition functor and the recovery of the given relation, we recall the necessary algebraic tools and results in this section.
Let $S$ be a non-empty set. Every subset $R\subseteq S\times S$ is called a [*relation on $S$*]{} and we say that the couple $(S, R)$ is a [*transition frame*]{}. The fact that $(s, t)\in R$ for $s, t\in S$ is expressed by the notation $s \mathrel{R} t$.
Let $A$ be a non-empty set. A relation on $A$ is called a [*partial order*]{} if it is reflexive, antisymmetric and transitive. In what follows, partial order will be denoted by the symbol $\leq$ and the pair $\mathbf A=(A;\leq)$ will be referred to as a [*partially ordered set*]{} (shortly a [*poset*]{}).
Let $(A;\leq)$ and $(B;\leq)$ be partially ordered sets, $f, g\colon A\to B$ mappings. We write $f\leq g$ if $f(a)\leq g(a)$, for all $a\in A$. A mapping $f$ is called [*order-preserving*]{} or [*monotone*]{} if $a, b \in A$ and $a \leq b$ together imply $f(a) \leq f(b)$ and [*order-reflecting*]{} if $a, b \in A$ and $f(a) \leq f(b)$ together imply $a \leq b$. A bijective order-preserving and order-reflecting mapping $f\colon A\to B$ is called an [*isomorphism*]{} and then we say that the partially ordered sets $(A;\leq)$ and $(B;\leq)$ are [*isomorphic*]{}.
Let $(A;\leq)$ and $(B;\leq)$ be partially ordered sets. A mapping $f\colon A\to B$ is called [*residuated*]{} if there exists a mapping $g\colon B\to A$ such that $f(a)\leq b\ \text{if and only if}\ a\leq g(b)$ for all $a\in A$ and $b\in B$. In this situation, we say that $f$ and $g$ form a [*residuated pair*]{} or that the pair $(f,g)$ is a (monotone) [*Galois connection*]{}. The role of Galois connections is essential for our constructions.
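As a small illustration of a residuated pair (our example, not taken from the text): on the chain of integers, the map $f(a)=2a$ is residuated with residual $g(b)=\lfloor b/2\rfloor$, and the defining equivalence can be checked exhaustively on a finite window:

```python
# f(a) = 2a on the integers is residuated, with residual g(b) = b // 2:
# 2a <= b  iff  a <= floor(b/2)  (// is floor division in Python).
def f(a): return 2 * a
def g(b): return b // 2

# Exhaustively check the Galois-connection condition on a finite window.
assert all((f(a) <= b) == (a <= g(b))
           for a in range(-10, 11) for b in range(-10, 11))
```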
If a partially ordered set $\mathbf A$ has both a bottom and a top element, it will be called [*bounded*]{}; the appropriate notation for a bounded partially ordered set is $(A;\leq,0,1)$. Let $(A;\leq,0,1)$ and $(B;\leq,0,1)$ be bounded partially ordered sets. A [*morphism*]{} $f\colon A\to B$ [*of bounded partially ordered sets*]{} is an order, top element and bottom element preserving map.
We can take the following useful result from [@dyn Observation 1].
\[obsik\] Let $\mathbf A$ and $\mathbf M$ be bounded partially ordered sets, $S$ a non-empty set, and $h_{s}\colon A\to M, s\in S$, morphisms of bounded partially ordered sets. The following conditions are equivalent:
1. $((\forall s \in S)\, h_{s}(a)\leq h_{s}(b))\implies a\leq
b$ for any elements $a,b\in A$;
2. The map $i_{{}{\mathbf A}}^{S}\colon A \to M^{S}$ defined by $i_{{}{\mathbf A}}^{S}(a)=(h_s(a))_{s\in S}$ for all $a\in A$ is order reflecting.
We then say that $\{h_{s}\colon A\to M; s\in S\}$ is a [*full set of order-preserving maps with respect to*]{} $M$. Note that we may in this case identify $\mathbf A$ with a bounded subposet of $\mathbf{M}^S$ since $i_{{}{\mathbf A}}^{S}$ is an order-reflecting morphism, i.e., an [*embedding*]{}, of bounded partially ordered sets. For any $s\in S$ and any $p=(p_t)_{t\in S}\in {M}^S$ we denote by $p(s)$ the $s$-th projection $p_s$. Note that $i_{{}{\mathbf A}}^{S}(a)(s)=h_s(a)$ for all $a\in A$ and all $s\in S$.
Transition frames and transition operators
==========================================
The aim of this section is to recall a construction of two operators on partially ordered sets derived by means of a given relation and a construction of relations induced by these operators. For more details see the paper [@transop].
In what follows, let $\mathbf{M}=(M;\leq,0, 1)$ be a bounded partially ordered set, and let the bounded subposets ${\mathbf{A}}=(A;\leq,0, 1)$ and ${\mathbf{B}}=(B;\leq,0, 1)$ of $\mathbf{M}^S$ play the role of possibly different logics of propositions pertaining to our automaton ${\mathcal A}$, a corresponding set of states $S$, and a state-transition relation $R$ on $S$. The operator $T_R\colon B\to {M}^S$ will assign to a proposition $b\in B$ about ${\mathcal A}$ a new proposition $T_R(b)\in {M}^S$ such that the truth value of $T_R(b)$ in state $s\in S$ is the greatest truth value that is less than or equal to the truth values of $b$ in all states that can be reached from $s$. If there is no such state, the truth value of $T_R(b)$ in state $s$ will be $1$. Similarly, the operator $P_R\colon A\to {M}^S$ will assign to a proposition $a\in A$ about ${\mathcal A}$ a new proposition $P_R(a)\in {M}^S$ such that the truth value of $P_R(a)$ in state $t\in S$ is the smallest truth value that is greater than or equal to the truth values of $a$ in all states from which $t$ can be reached. If there is no such state, the truth value of $P_R(a)$ in state $t$ will be $0$.
Specifically, if $M=\{ 0,1\}$ then $T_R(b)$ is true in state $s$ if and only if there is no state $t\in S$ that can be reached from $s$ in which $b$ is false, and $P_R(a)$ is false in state $t$ if and only if there is no state $s\in S$ such that $t$ can be reached from $s$ and $a$ is true in $s$.
Consider a complete lattice $\mathbf M=(M;\leq,{}0, 1)$ and let $\mathbf{A}=({A};\leq, 0,1)$ and $\mathbf{B}=({B};\leq,$ $0,1)$ be bounded partially ordered sets with a full set $S$ of morphisms of bounded partially ordered sets into a non-trivial complete lattice $\mathbf{M}$. We may assume that $\mathbf{A}$ and $\mathbf{B}$ are bounded subposets of $\mathbf{M}^{S}$. Further, let $(S,R)$ be a transition frame.
Define mappings $P_R:A\to {M}^S$ and $T_R:B\to {M}^S$ as follows: For all $b\in B$ and all $s\in S$,
$$T_R(b)(s)=\bigwedge_{M}\{b(t)\mid s R t\} \tag{$\star$}
\label{eqn:RTD}$$
and, for all $a\in A$ and all $t\in S$,
$$P_R(a)(t)=\bigvee_{M}\{a(s)\mid s R t\}. \tag{$\star\star$}
\label{eqn:RPD}$$
Then we say that ${T}_R$ ($P_R$) is an [*upper transition functor*]{} ([*lower transition functor*]{}) [*constructed by means of the transition frame*]{} $(S,R)$, respectively. We have that ${T}_R$ is an order-preserving map such that $T_R(1)=1$ and similarly, ${P}_R$ is an order-preserving map such that $P_R(0)=0$.
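The definitions $(\star)$ and $(\star\star)$ are easy to compute over the two-element lattice $M=\{0,1\}$, where the empty meet is $1$ and the empty join is $0$. A minimal sketch of ours (the state and relation names anticipate the SkyLine example below):

```python
S = ["s1", "s2", "s3"]
idx = {s: i for i, s in enumerate(S)}

def T(R, b):
    """(⋆): T_R(b)(s) = meet of b(t) over s R t; empty meet in {0,1} is 1."""
    return tuple(min((b[idx[t]] for (u, t) in R if u == s), default=1)
                 for s in S)

def P(R, a):
    """(⋆⋆): P_R(a)(t) = join of a(s) over s R t; empty join in {0,1} is 0."""
    return tuple(max((a[idx[u]] for (u, v) in R if v == t), default=0)
                 for t in S)

# The SkyLine relations of the running example.
R1 = {("s1", "s2"), ("s2", "s1"), ("s3", "s3")}
R2 = {("s2", "s3"), ("s3", "s3")}

p, q, r = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert T(R1, p) == q          # T_{R_{x1}}(p) = q
assert T(R2, (0, 0, 0)) == p  # empty meets make T_{R_{x2}}(0) = p
assert P(R2, (1, 1, 1)) == r  # P_{R_{x2}}(1) = r
```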
As an illustration of our approach we present the following example.
\[expend2\] Consider the automaton ${\mathcal A}$ and the set of propositions $B$ of Example \[firef\]. Then $R=\{x_1\}\times R_{x_1}\cup \{x_2\}\times R_{x_2}$ where $R_{x_1}=\{(s_1, s_2), (s_2, s_1), (s_3,s_3)\}\ \text{and}\
R_{x_2}=\{(s_2, s_3), (s_3, s_3)\}. $
Using our formulas $(\star)$ and $(\star\star)$, we can compute the upper transition functors $T_{R_{x_1}}$, $T_{R_{x_2}}\colon B\to 2^{S}$ and the lower transition functors $P_{R_{x_1}}$, $P_{R_{x_2}}\colon B\to 2^{S}$ as follows:
--------------------- -----------------------
$T_{R_{x_1}}(0)=0$, $T_{R_{x_1}}(1)=1$,
$T_{R_{x_1}}(p)=q$, $T_{R_{x_1}}(p')=q'$,
$T_{R_{x_1}}(q)=p$, $T_{R_{x_1}}(q')=p'$,
$T_{R_{x_1}}(r)=r$, $T_{R_{x_1}}(r')=r'$,
--------------------- -----------------------

--------------------- ----------------------
$T_{R_{x_2}}(0)=p$, $T_{R_{x_2}}(1)=1$,
$T_{R_{x_2}}(p)=p$, $T_{R_{x_2}}(p')=1$,
$T_{R_{x_2}}(q)=p$, $T_{R_{x_2}}(q')=1$,
$T_{R_{x_2}}(r)=1$, $T_{R_{x_2}}(r')=p$,
--------------------- ----------------------

--------------------- -----------------------
$P_{R_{x_1}}(0)=0$, $P_{R_{x_1}}(1)=1$,
$P_{R_{x_1}}(p)=q$, $P_{R_{x_1}}(p')=q'$,
$P_{R_{x_1}}(q)=p$, $P_{R_{x_1}}(q')=p'$,
$P_{R_{x_1}}(r)=r$, $P_{R_{x_1}}(r')=r'$,
--------------------- -----------------------

--------------------- ----------------------
$P_{R_{x_2}}(0)=0$, $P_{R_{x_2}}(1)=r$,
$P_{R_{x_2}}(p)=0$, $P_{R_{x_2}}(p')=r$,
$P_{R_{x_2}}(q)=r$, $P_{R_{x_2}}(q')=r$,
$P_{R_{x_2}}(r)=r$, $P_{R_{x_2}}(r')=r$.
--------------------- ----------------------
E.g., $T_{R_{x_1}}(q)=p$ means that if the SkyLine train is in Terminal 1 then, after any possible transition under the action that the passengers entered the SkyLine train, it will be in Terminal 2, and $T_{R_{x_1}}(q')=p'$ means that if the SkyLine train is in Terminal 2 or in the engine shed then, after any possible transition under the action that the passengers entered the SkyLine train, it will be in Terminal 1 or in the engine shed. Similarly, $T_{R_{x_2}}(1)=1$ means that if the SkyLine train is in at least one state of $S$ then, after any possible transition under the action that the SkyLine train has to be moved to the engine shed, it will be in at least one state of $S$, and $T_{R_{x_2}}(p)=p$ means that if the SkyLine train is in Terminal 1 then, after any possible transition under the action that the SkyLine train has to be moved to the engine shed (which can be done only at Terminal 2 or at the engine shed), it will stay in Terminal 1.
Let $P:A\to B$ and $T:B\to A$ be morphisms of partially ordered sets, $(A;\leq)$ and $(B;\leq)$ subposets of $\mathbf{M}^{S}$. Let us define the relations $$R_T=\{(s, t)\in S\times S\mid (\forall b\in B) (T(b)(s)\leq b(t))\} \tag{$\dagger$}
\label{eqn:RT}$$ and $$R^{P}=\{(s, t)\in S\times S\mid (\forall a\in A) (a(s)\leq P(a)(t))\}.\tag{$\dagger\dagger$}
\label{eqn:RP}$$
The relations $R_T$ and $R^{P}$ on $S$ will be called the [*upper $T$-induced relation by ${\mathbf M}$*]{} (shortly [*$T$-induced relation by ${\mathbf M}$*]{}) and [*lower $P$-induced relation by ${\mathbf M}$*]{} (shortly [*$P$-induced relation by ${\mathbf M}$*]{}), respectively.
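For a finite state set and $M=\{0,1\}$, the induced relation $(\dagger)$ can be computed by brute force over all propositions. A sketch of ours under these assumptions (the concrete functor below is the tabulation of $T_{R_{x_2}}$ from the running example on all of $B=\{0,1\}^S$):

```python
from itertools import product

S = ["s1", "s2", "s3"]
idx = {s: i for i, s in enumerate(S)}

# All eight propositions of B = {0,1}^S as truth-value tuples.
B = list(product([0, 1], repeat=len(S)))

def induced_relation(T_map):
    """(†): s R_T t iff T(b)(s) <= b(t) for every proposition b in B."""
    return {(s, t) for s in S for t in S
            if all(T_map[b][idx[s]] <= b[idx[t]] for b in B)}

# Tabulate T_{R_{x2}} of the running example on all of B.
R2 = {("s2", "s3"), ("s3", "s3")}

def T2(b):
    # Upper transition functor over {0,1}; the empty meet is 1.
    return tuple(min((b[idx[t]] for (u, t) in R2 if u == s), default=1)
                 for s in S)

T_map = {b: T2(b) for b in B}

# The induced relation recovers R_{x2}.
assert induced_relation(T_map) == R2
```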
\[expend3\] Consider the automaton ${\mathcal A}$ of Example \[expend1\]. Let $P$ be a restriction of the operator $P_{R_{x_2}}$ of Example \[expend2\] and let $T$ be a restriction of the operator $T_{R_{x_2}}$ of the same example. Let us compute $R_T$ and $R^{P}$. We have $R_T=R^{P}=\{(s_2, s_3), (s_3, s_3)\}$. Hence the transition relation $R_{x_2}$ of Example \[expend2\] coincides with our induced transition relations $R_T$ and $R^{P}$. We can see from the above that the operator $T_{R_{x_2}}$ bears the maximal amount of information about the transition relation $R_{x_2}$ on the subposet of all fixpoints of $P_{R_{x_2}}\circ T_{R_{x_2}}$. The same conclusion holds for the operator $P_{R_{x_2}}$.
Now, let $(S, R)$ be a transition frame and $T_R$, $P_R$ functors constructed by means of the transition frame $(S,R)$. We can ask under what conditions the relation $R$ coincides with the relation $R_{T_R}$ constructed as in ($\dagger$) or with the relation $R^{P_R}$ constructed as in ($\dagger\dagger$). If this is the case we say that $R$ [*is recoverable from*]{} $T_R$ or that $R$ [*is recoverable from*]{} $P_R$. We say that $R$ is [*recoverable*]{} if it is recoverable both from $T_R$ and $P_R$.
\[expend4\] Consider the automaton ${\mathcal A}$ of Example \[expend1\]. Let us put $A=B=\{0, 1\}^S$. Let $P\colon \{0, 1\}^S\to \{0, 1\}^S$ and $T\colon \{0, 1\}^S\to \{0, 1\}^S$ be morphisms of partially ordered sets given as follows:
----------- ----------- ----------- ----------- ------------- ------------- ------------- -----------
$T(0)=0$, $T(p)=q$, $T(q)=p$, $T(r)=r$, $T(p')=q'$, $T(q')=p'$, $T(r')=r'$, $T(1)=1$,
$P(0)=0$, $P(p)=q$, $P(q)=p$, $P(r)=r$, $P(p')=q'$, $P(q')=p'$, $P(r')=r'$, $P(1)=1$.
----------- ----------- ----------- ----------- ------------- ------------- ------------- -----------
Note that $P$ coincides with the operator $P_{R_{x_1}}$ of Example \[expend2\], and $T$ coincides with the operator $T_{R_{x_1}}$ of the same example. We have $R_T=R^{P}=\{(s_1, s_2), (s_2, s_1),(s_3, s_3)\}$. The transition relation $R_{x_1}$ of Example \[expend1\] coincides with our induced transition relations $R_T$ and $R^{P}$.
The connection between relations induced by means of transition functors $T$ and $P$ is shown in the following lemma and theorem.
[@transop]\[xreldreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and $S$ a non-empty set such that $\mathbf{A}$ and $\mathbf{B}$ are bounded subposets of $\mathbf{M}^{S}$. Let $P:A\to {M}^{S}$ and $T:B\to {M}^{S}$ be morphisms of partially ordered sets such that, for all $a\in A$ and all $b\in B$, $$P(a)\leq b\ {\Longleftrightarrow}\ a\leq T(b).$$
1. If $P(A)\subseteq B$ then $R_T\subseteq R^{P}$.
2. If $T(B)\subseteq A$ then $R^{P}\subseteq R_T$.
3. If $P(A)\subseteq B$ and $T(B)\subseteq A$ then $R_T= R^{P}$.
Among other things, the following theorem shows that if a given transition relation $R$ can be recovered by the upper transition functor then, under natural conditions, it can be recovered by the lower transition functor and vice versa.
[@transop]\[reldreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and $(S,R)$ a transition frame. Let $\mathbf{A}$ and $\mathbf{B}$ be bounded subposets of $\mathbf{M}^{S}$. Let $P_R:A\to {M}^{S}$ and $T_R:B\to {M}^{S}$ be functors [constructed by means of the transition frame]{} $(S,R)$. Then, for all $a\in A$ and all $b\in B$, $$P_R(a)\leq b\ {\Longleftrightarrow}\ a\leq T_R(b).$$ Moreover, the following holds.
1. Suppose that, for every $t\in S$, there exists an element $b^t\in B$ such that, for all $s\in S$ with $(s,t)\notin R$, we have $\bigwedge_{M}\{u(b^{t})\mid s R u\}\not\leq t(b^{t})\not =1$. Then $R=R_{T_R}$.
2. Suppose that, for every $s\in S$, there exists an element $a^s\in A$ such that, for all $t\in S$ with $(s,t)\notin R$, we have $\bigvee_{M}\{u(a^{s})\mid u R t\}\not\geq s(a^{s})\not =0$. Then $R=R^{P_R}$.
3. If $R=R_{T_R}$ and $T_R(B)\subseteq A$ then $R=R_{T_R}=R^{P_R}$.
4. If $R=R^{P_R}$ and $P_R(A)\subseteq B$ then $R=R_{T_R}=R^{P_R}$.
The following corollary of Theorem \[reldreprest\] shows that if the set $B$ of propositions on the system $(B,S)$ is large enough, i.e., if it contains the full set $\{0,1\}^S$ then the transition relation $R$ can be recovered by each of the transition functors.
[@transop]\[fcorreldreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and $(S,R)$ a transition frame. Let $\mathbf{B}$ be a bounded subposet of $\mathbf{M}^{S}$ such that $\{0,1\}^{S}\subseteq B$. Let $P_R:B\to {M}^{S}$ and $T_R:B\to {M}^{S}$ be functors [constructed by means of the transition frame]{} $(S,R)$. Then $R=R^{P_R}=R_{T_R}$.
The labelled transition functor characterizing the automaton
============================================================
The aim of this section is to derive the logic $\mathbf B$ with transition functors corresponding to a given automaton ${\mathcal A}=(X,S,R)$. This logic $\mathbf B$ will be represented via the partially ordered set of its propositions. In the rest of the paper, truth-values of our logic $\mathbf B$ will be considered to be from the complete lattice $\mathbf M$. Thus $\mathbf B$ will be a bounded subposet of ${\mathbf M}^S$ for the complete lattice ${\mathbf M}$ of truth-values.
Let us consider an automaton ${\mathcal A}=(X,S,R)$. Clearly, $R$ can be written in the following form $$R=\bigcup_{x\in X}\{x\}\times R_{x}$$ where $R_x\subseteq S\times S$ for all $x\in X$. Hence, for all $x\in X$, using our formulas $(\star)$ and $(\star\star)$, we obtain the upper transition functor $T_{R_x}\colon B\to M^{S}$ and the lower transition functor $P_{R_x}\colon B\to M^{S}$. It follows that we have functors $T_R=(T_{R_{x}})_{x\in X}\colon B\to (M^{S})^{X}$ and $P_R=(P_{R_{x}})_{x\in X}\colon B\to (M^{S})^{X}$. We say that $T_R$ is the [*labelled upper transition functor constructed by means of ${\mathcal A}$*]{} and $P_R$ is the [*labelled lower transition functor constructed by means of ${\mathcal A}$*]{}. Note that any mapping $T\colon B\to (M^{S})^{X}$ corresponds uniquely to a mapping $\widetilde{T}\colon X\times B\to M^{S}$ such that, for all $x\in X$, $T=(\widetilde{T}(x,-))_{x\in X}$. Hence, $T_R$ and $P_R$ will play the role of our transition functor.
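The decomposition $R=\bigcup_{x\in X}\{x\}\times R_{x}$ is simply the fibring of the labelled relation over its inputs; a short sketch of ours with the SkyLine data:

```python
# A labelled transition relation R ⊆ X × S × S (the SkyLine example).
R = {("x1", "s1", "s2"), ("x1", "s2", "s1"), ("x1", "s3", "s3"),
     ("x2", "s2", "s3"), ("x2", "s3", "s3")}

def fibre(R, x):
    """R_x = {(s, t) | (x, s, t) in R}: the transitions labelled by input x."""
    return {(s, t) for (y, s, t) in R if y == x}

# The labelled functor T_R = (T_{R_x})_{x in X} is then the X-indexed
# family of the unlabelled functors built from each fibre R_x.
assert fibre(R, "x2") == {("s2", "s3"), ("s3", "s3")}
```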
Now, let $P=(P_x)_{x\in X}:B\to ({M}^{S})^{X}$ and $T=(T_x)_{x\in X}:B\to ({M}^{S})^{X}$ be morphisms of partially ordered sets. For all $x\in X$, let $R^{P_x}$ be the lower $P_x$-induced relation by $\mathbf{M}$ and $R_{T_x}$ be the upper $T_x$-induced relation by $\mathbf{M}$. Then $R^{P}=\bigcup_{x\in X}\{ x\}\times R^{P_x}$ is called the [*lower $P$-induced state-transition relation*]{} and $R_{T}=\bigcup_{x\in X}\{ x\}\times R_{T_x}$ is called the [*upper $T$-induced state-transition relation*]{}. The automaton ${\mathcal A}^{P}=(X,S,R^{P})$ is said to be the [*lower $P$-induced automaton*]{} and the automaton ${\mathcal A}_{T}=(X,S,R_{T})$ is said to be the [*upper $T$-induced automaton*]{}.
We say that the automaton ${\mathcal A}$ [*is recoverable from*]{} $T_R$ ($P_R$) if, for all $x\in X$, $R_x$ [is recoverable from]{} $T_{R_x}$ ($P_{R_x}$), i.e., if ${\mathcal A}={\mathcal A}_{T_R}$ (${\mathcal A}={\mathcal A}^{P_R}$).
The following results follow immediately from Lemma \[xreldreprest\], Theorem \[reldreprest\] and Corollary \[fcorreldreprest\].
\[labxreldreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and $S, X$ non-empty sets such that $\mathbf{B}$ is a bounded subposet of $\mathbf{M}^{S}$. Let $P:B\to ({M}^{S})^{X}$ and $T:B\to ({M}^{S})^{X}$ be morphisms of partially ordered sets such that, for all $a, b\in B$ and all $x\in X$, $$P_{x}(a)\leq b\ {\Longleftrightarrow}\ a\leq T_{x}(b).$$
1. If $P(B)\subseteq B^{X}$ then $R_T\subseteq R^{P}$.
2. If $T(B)\subseteq B^{X}$ then $R^{P}\subseteq R_T$.
3. If $P(B)\subseteq B^{X}$ and $T(B)\subseteq B^{X}$ then $R_T= R^{P}$ and ${\mathcal A}_{T}={\mathcal A}^{P}$.
Hence, using Theorem \[labxreldreprest\], we can ask whether the functors computed by $(\star)$ and $(\star\star)$ can recover a given relation $R$ on the set of states. The answer is in the following theorem.
\[relxxxdreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and $S, X$ non-empty sets equipped with a set of labelled transitions $R\subseteq X\times S\times S$. Let $\mathbf{B}$ be a bounded subposet of $\mathbf{M}^{S}$. Let $P_R\colon B\to (M^{S})^{X}$ and $T_R:B\to (M^{S})^{X}$ be labelled transition functors [constructed by means of]{} $R$. Then, for all $a, b\in B$ and all $x\in X$, $$P_{R_x}(a)\leq b\ {\Longleftrightarrow}\ a\leq T_{R_x}(b).$$ Moreover, the following holds.
1. If $R=R_{T_R}$ and $T_R(B)\subseteq B^{X}$ then $R=R_{T_R}=R^{P_R}$.
2. If $R=R^{P_R}$ and $P_R(B)\subseteq B^{X}$ then $R=R_{T_R}=R^{P_R}$.
The following corollary illustrates the situation in the case when our partially ordered set $\mathbf{B}$ of propositions is large enough, i.e., the case when $\{0,1\}^{S}\subseteq B$.
\[labfcorreldreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and ${\mathcal A}=(X,S,R)$ an automaton. Let $\mathbf{B}$ be a bounded subposet of $\mathbf{M}^{S}$ such that $\{0,1\}^{S}\subseteq B$. Then the automaton ${\mathcal A}$ is recoverable both from $P_R$ and ${T_R}$.
We can illustrate previous results in the following example.
\[example2\]\[ex2\] Consider the automaton ${\mathcal A}$, the set of propositions $B$ and the state-transition relation $R$ of Example \[firef\]. From Example \[expend2\] we know the labelled upper transition functor $T_R=(T_{R_{x_1}}, T_{R_{x_2}})$ and the labelled lower transition functor $P_R=(P_{R_{x_1}}, P_{R_{x_2}})$ from $B$ to $(2^{S})^{X}$. Since $B=2^{S}$ we have $T_{R_{x_1}}(B)\cup T_{R_{x_2}}(B) \subseteq B$ and $P_{R_{x_1}}(B)\cup P_{R_{x_2}}(B) \subseteq B$.
Now, we use $T_R$ for computing the transition relations $R_{T_{R_{x_1}}}$ and $R_{T_{R_{x_2}}}$ (by the formula $(\dagger)$ and Example \[expend4\]) and $P_R$ for computing the transition relations $R^{P_{R_{x_1}}}$ and $R^{P_{R_{x_2}}}$ (by the formula $(\dagger\dagger)$ and Example \[expend4\]). We obtain by Corollary \[fcorreldreprest\] that $R_{T_{R_{x_1}}}=R^{P_{R_{x_1}}}=R_{x_1}$ and $R_{T_{R_{x_2}}}=R^{P_{R_{x_2}}}=R_{x_2}$. It follows that $R_{T_R}=R^{P_{R}}=\{x_1\}\times R_{T_{R_{x_1}}}\cup \{x_2\}\times R_{T_{R_{x_2}}}=R$, i.e., our given state-transition relation $R$ is simultaneously recoverable by the transition functors $T_R$ and $P_R$. Hence these functors are characteristic of the triple $(B,X,S)$.
Constructions of automata
=========================
By a [*synthesis*]{} in the theory of systems one usually means the task of constructing an automaton ${\mathcal A}$ which realizes a dynamic process at least partially known to the user. Hence, we are given a description of this dynamic process and we know the set $X$ of inputs. Our task is to set up the set $S$ of states and a relation $R$ on $S$ labelled by elements from $X$ such that the constructed automaton $(X, S, R)$ induces the logic, i.e., the partially ordered set of propositions, which corresponds to the original description.
The algebraic tools collected in the previous sections enable us to solve the mentioned task. In what follows we present a construction of $S$ and $R$, provided our logic with the transition functor representing the dynamics of our system is given. As in the previous section, our logic ${\mathbf B}$ will be considered to be a bounded subposet $\mathbf B$ of a power ${\mathbf M}^S$ where ${\mathbf M}$ is a complete lattice of truth-values. Our logic ${\mathbf B}$ is equipped with a transition functor $T:B\to (M^{S})^{X}$ where $X$ is a set of possible inputs. We ask that either $T=T_{R}$ or $T=P_{R}$. Depending on the respective type of our considered logic and on the properties of $T$ we will present some partial solutions to this task.
Automata via partially ordered sets {#autopres}
-----------------------------------
Recall that (see e.g. [@Markowsky]), for any bounded partially ordered set $\mathbf{B}=({B};\leq, 0,1)$, we have a full set $S_{\mathbf B}$ of morphisms of bounded partially ordered sets into the two-element Boolean algebra considered as a bounded partially ordered set ${\mathbf 2}=(\{0, 1\}; \leq, 0, 1)$. The elements $h_D: B\to \{0, 1\}$ of $S_{\mathbf B}$ (indexed by proper down-sets $D$ of $\mathbf{B}$) are morphisms of bounded partially ordered sets defined by the prescription ${h_{D}}(a)=0$ iff $a\in D$.
In other words, every bounded partially ordered set ${}{\mathbf B}$ can be embedded into a Boolean algebra ${\mathbf 2}^{S}$ for a certain set $S$ via the mapping $i_{{}{\mathbf B}}^{S}$.
Hence, it is natural to use the bounded partially ordered set ${\mathbf 2}=(\{0, 1\}; \leq, 0, 1)$ for the construction of our state-transition relation $R_T\subseteq X\times S_{\mathbf B} \times S_{\mathbf B}$. As mentioned at the beginning of this section, we are interested in a construction of an automaton ${\mathcal A}=(X,S,R)$ for a given set $X$ of inputs, determined by a certain partially ordered set of propositions. We cannot assume that this set of propositions is necessarily a Boolean algebra. In the previous part we supposed that this logic ${\mathbf B}$ is a bounded partially ordered set ${\mathbf B}=(B,\leq,0,1)$. Now, we are going to treat the situation when it is only a subset $C$ of $B$.
\[boolgaldreprest\] Let $\mathbf{B}=({B};\leq, 0,1)$ be a bounded partially ordered set such that $\mathbf{B}$ is a bounded subposet of $2^{S_{\mathbf B}}$. Let $(C;\leq, 1)$ be a subposet of $\mathbf{B}$ containing $1$, and $X$ a non-empty set. Let $T=(T_x)_{x\in X}$ where $T_{x}\colon{}C\to 2^{S_{\mathbf B}}$ are morphisms of partially ordered sets such that $T_x(1)=1$ for all $x\in X$. Let $R_T$ be the upper $T$-induced state-transition relation and $T_{R_T}\colon{}B\to (2^{S_{\mathbf B}})^{X}$ be the labelled upper transition functor constructed by means of the upper T-induced automaton ${\mathcal A}_T=(X, S_{\mathbf B},R_T)$. Then, for all $b\in C$, $$T(b)=T_{R_T}(b).$$
Clearly, $T_{R_T}=((T_{R_T})_{x})_{x\in X}$ where $(T_{R_T})_{x}:B\to 2^{S_{\mathbf B}}$ are morphisms of partially ordered sets for all $x\in X$. We write $R_{T}=\bigcup_{x\in X}\{ x\}\times R_{T_x}$ where $R_{T_x}$, $x\in X$ are the upper $T_x$-induced relation by $\mathbf{2}$.
Let us choose $b\in C$ and $x\in X$ arbitrarily, but fixed. We have to check that $T_x(b)=(T_{R_T})_{x}(b)$. Assume that $s\in S_{\mathbf B}$. It is enough to verify that $T_x(b)(s)= \bigwedge\{b(t)\mid s R_{T_x} t\}$.
Evidently, for all $t\in S_{\mathbf B}$ such that $s R_{T_x} t$, $T_x(b)(s) \leq b(t)$. Hence $T_x(b)(s)\leq \bigwedge\{b(t)\mid s R_{T_x} t\}$. To get the other inequality assume that $T_x(b)(s)< \bigwedge\{b(t)\mid s R_{T_x} t\}$. Then $T_x(b)(s)=0$ and $\bigwedge\{b(t)\mid s R_{T_x} t\}=1$. Put $V_{x}=\{z\in B\mid (\exists y\in C)(T_x(y)(s)=1\ \text{and}\ y\leq z)\}$. It follows that $b\notin V_x$ and $V_x$ is an upper set of ${\mathbf B}$ such that $1\in V_x$ (since $T_x(1)(s)=1(s)=1$). Let $W_x$ be a maximal proper upper set of ${\mathbf B}$ including $V_x$ such that $b\notin W_x$. Put $U_x=B\setminus W_x$. Then $U_x$ is a proper down-set, $0\in U_x$, ${h_{U_x}}(b)=0$ and ${h_{U_x}}(z)=1$ for all $z\in V_x$, i.e., ${h_{U_x}}\in S_{\mathbf B}$ is such that $T_x(a)(s)\leq a({h_{U_x}})$ for all $a\in C$. But this yields that $s R_{T_x} h_{U_x}$, i.e., $1=\bigwedge\{b(t)\mid s R_{T_x} t\}\leq b({h_{U_x}})={h_{U_x}}(b)=0$, a contradiction.
Using the relation $R^P$ instead of $R_T$, we can obtain a statement dual to Theorem \[boolgaldreprest\].
Automata via Boolean algebras {#autoboolpres}
-----------------------------
As for bounded partially ordered sets we have that, for any Boolean algebra ${\mathbf B}=(B;\vee, \wedge, {}{'}, 0,$ $1)$, there is a full set $S_{\mathbf B}^{\text{bool}}$ of morphisms of Boolean algebras into the two-element Boolean algebra $\mathbf{2}=(\{0, 1\};\vee, \wedge, {}{'}, 0, 1)$.
In what follows, we will modify our Theorem \[boolgaldreprest\] for the more special case when the considered subposet ${\mathbf C}$ is closed under finite infima.
We are now ready to show under which conditions our transition functor can be recovered.
\[fullbooldreprest\] Let $\mathbf{B}=({B};\vee, \wedge, {}{'}, 0,1)$ be a Boolean algebra such that $\mathbf{B}$ is a sub-Boolean algebra of ${\mathbf 2}^{S_{\mathbf B}^{\text{bool}}}$. Let ${\mathbf C}=(C;\leq, 1)$ be a subposet of $\mathbf{B}$ containing $1$ such that $x, y\in C$ implies $x\wedge y\in C$, and $X$ a non-empty set. Let $T=(T_x)_{x\in X}$ where $T_{x}:C\to 2^{S_{\mathbf B}^{\text{bool}}}$ are mappings preserving finite meets such that $T_x(1)=1$ for all $x\in X$. Let $R_T$ be the upper $T$-induced state-transition relation and $T_{R_T}\colon{}B\to (2^{S_{\mathbf B}^{\text{bool}}})^{X}$ be the labelled upper transition functor constructed by means of the upper $T$-induced automaton ${\mathcal A}_T=(X, S_{\mathbf B}^{\text{bool}},R_T)$. Then, for all $b\in C$, $$T(b)=T_{R_T}(b).$$
Let us choose $b\in C$ and $x\in X$ arbitrarily, but fixed. Assume that $s\in S_{\mathbf B}^{\text{bool}}$. As in Theorem \[boolgaldreprest\] it is enough to verify that $T_x(b)(s)= \bigwedge\{b(t)\mid s R_{T_x} t\}$.
By the same considerations as in the proof of Theorem \[boolgaldreprest\] we have $T_x(b)(s)\leq \bigwedge\{b(t)\mid s R_{T_x} t\}$. To get the other inequality assume that $T_x(b)(s)< \bigwedge\{b(t)\mid s R_{T_x} t\}$. Then $T_x(b)(s)=0$ and $\bigwedge\{b(t)\mid s R_{T_x} t\}=1$. Put $V_{x}=\{z\in B\mid (\exists y\in C)(T_x(y)(s)=1\ \text{and}\ y\leq z)\}$. It follows that $b\notin V_x$ and $V_x$ is a filter of ${\mathbf B}$ such that $1\in V_x$ (since $y, z\in V_x\cap C$ implies $T_x(y\wedge z)(s)=(T_x(y)\wedge T_x(z))(s)=T_x(y)(s)\wedge T_x(z)(s)=1\wedge 1=1$ and $T_x(1)(s)=1(s)=1$). Let $W_x$ be a maximal proper filter of ${\mathbf B}$ including $V_x$ such that $b\notin W_x$. Then $W_x$ is an ultrafilter of ${\mathbf B}$. The ultrafilter $W_x$ determines a map $g_{W_x}\in S_{{\mathbf B}}^{\text{bool}}$ such that ${g_{W_x}}(b)=0$ and ${g_{W_x}}(z)=1$ for all $z\in V_x$, i.e., ${g_{W_x}}\in S_{{\mathbf B}}^{\text{bool}}$ is such that $T_x(a)(s)\leq {g_{W_x}}(a)=a({g_{W_x}})$ for all $a\in C$. This yields that $s R_{T_x} g_{W_x}$, i.e., $1=\bigwedge\{b(t)\mid s R_{T_x} t\}\leq b({g_{W_x}})={g_{W_x}}(b)=0$, a contradiction.
The example below shows an application of Theorem \[fullbooldreprest\].
\[apthbool\] Consider again the set $S=\{s_1, s_2, s_3\}$ of states, the set $X=\{x_1, x_2\}$, and the set of propositions $B=2^{S}$ of Example \[firef\]. Recall that in this case $S=S_{\mathbf B}^{\text{bool}}$.
Assume that $C=\{0, r, p', q', 1\}\subseteq B$ from the logic ${\mathbf B}$ of Example \[expend1\].
Assume further that our partially known transition operator $T$ from $C$ to $(2^{S})^{X}$ is given as follows:
------------------- ---------------------
$T_{{x_1}}(0)=0$, $T_{{x_1}}(1)=1$,
$T_{{x_1}}(r)=r$, $T_{{x_1}}(p')=q'$,
$T_{{x_1}}(q')=p'$,
------------------- ---------------------

------------------- --------------------
$T_{{x_2}}(0)=p$, $T_{{x_2}}(1)=1$,
$T_{{x_2}}(r)=1$, $T_{{x_2}}(p')=1$,
$T_{{x_2}}(q')=1$.
------------------- --------------------
Note that $T$ was chosen as a restriction of the operator $T_R$ from Example \[expend2\] on the set $C$.
Then, by an easy computation, we obtain from ($\dagger$) that $R_{T}=\{x_1\}\times R_{T_{x_1}}\cup \{x_2\}\times R_{T_{x_2}}$ where $$R_{T_{x_1}}=\{(s_1, s_2), (s_2, s_1), (s_3,s_3)\}\ \text{and}\
R_{T_{x_2}}=\{(s_2, s_3), (s_3, s_3)\}.$$
From Theorem \[fullbooldreprest\] we have that $T$ is a restriction of the operator $T_{R_T}$ on the set $C$.
Moreover, we can see that our state-transition relation $R$ from Example \[firef\] coincides with the induced state-transition relation $R_T$, i.e., our partially known transition operator $T$ gives us full information about the automaton ${\mathcal A}$ from Example \[firef\].
Conclusion
==========
We have shown that to every automaton, considered as an acceptor, a certain dynamic logic can be assigned. The dynamic nature of an automaton is expressed via its transition relation labelled by inputs. The logic consists of propositions on the given automaton, and its dynamic nature is expressed by means of the so-called transition functor. This logic, in turn, enables us to derive a certain relation on the set of states which is labelled by inputs. The central question is whether the relation derived from the logic and the transition functor is faithful, i.e., whether it coincides with the original transition relation of the automaton.
In fact, we have shown that if our set of propositions is large enough this recovering of the transition relation is possible. Several examples are included.
Conversely, given a set $B$ of propositions that describe the behaviour of our intended automaton, the transition functor which expresses the dynamics of this process, and the set $X$ of inputs (coming from the environment), we presented a construction of a set of states $S$ and of a state-transition relation $R$ on $S$ such that the constructed automaton $(X,S,R)$ realizes the description given by the propositions. It is shown that for every large enough set of states the induced transition functor coincides with the original one.
We believe that this theory enables us to consider automata from a different point of view, one closer to a logical treatment, and that it enables us to make estimations and forecasts of the behaviour of an automaton, particularly in a nondeterministic mode. The next task will be to determine which type of automaton is captured by a suitable sort of logic.
Acknowledgement {#acknowledgement .unnumbered}
===============
This is a pre-print of an article published in International Journal of Theoretical Physics. The final authenticated version of the article is available online at: https://link.springer.com/article/10.1007/s10773-017-3311-0.
[^1]: [Both authors acknowledge the support by a bilateral project New Perspectives on Residuated Posets financed by Austrian Science Fund (FWF): project I 1923-N25, and the Czech Science Foundation (GAČR): project 15-34697L]{}.
|
---
abstract: 'We propose a novel adversarial multi-task learning scheme, aiming at actively curtailing the inter-talker feature variability while maximizing its senone discriminability so as to enhance the performance of a deep neural network (DNN) based ASR system. We call the scheme speaker-invariant training (SIT). In SIT, a DNN acoustic model and a speaker classifier network are jointly optimized to minimize the senone (tied triphone state) classification loss, and simultaneously mini-maximize the speaker classification loss. A speaker-invariant and senone-discriminative deep feature is learned through this adversarial multi-task learning. With SIT, a canonical DNN acoustic model with significantly reduced variance in its output probabilities is learned with no explicit speaker-independent (SI) transformations or speaker-specific representations used in training or testing. Evaluated on the CHiME-3 dataset, the SIT achieves 4.99% relative word error rate (WER) improvement over the conventional SI acoustic model. With additional unsupervised speaker adaptation, the speaker-adapted (SA) SIT model achieves 4.86% relative WER gain over the SA SI acoustic model.'
address: |
$^{1}$ Microsoft AI and Research, Redmond, WA, USA\
$^{2}$ Georgia Institute of Technology, Atlanta, GA, USA
title: 'Speaker-Invariant Training via Adversarial Learning'
---
speaker-invariant training, adversarial learning, speech recognition, deep neural networks
Introduction {#sec:intro}
============
The deep neural network (DNN) based acoustic models have been widely used in automatic speech recognition (ASR) and have achieved extraordinary performance improvement [@DNN4ASR-hinton2012; @yu2017recent]. However, the performance of a speaker-independent (SI) acoustic model trained with speech data from a large number of speakers is still affected by the spectral variations in each speech unit caused by the inter-speaker variability. Therefore, speaker adaptation methods are widely used to boost the recognition system performance [@saon2013speaker; @svd_xue_2; @xue2014fast; @miao2015speaker; @wu2015multi; @svd_zhao; @map_huang; @multi_huang; @lhuc_pawel_1; @tan2016cluster; @smarakoon2016factorized].
Recently, adversarial learning has captured great attention in the deep learning community given its remarkable success in estimating generative models [@gan]. In speech, it has been applied to noise-robust [@grl_shinohara; @grl_serdyuk; @grl_sun; @dsn_meng; @meng2018adversarial] and conversational ASR [@saon2017english] using the gradient reversal layer [@grl_ganin] or the domain separation network [@dsn]. Inspired by this, we propose *speaker-invariant training (SIT)* via adversarial learning to reduce the effect of speaker variability in acoustic modeling. In SIT, a DNN acoustic model and a DNN speaker classifier are jointly trained to simultaneously optimize the primary task of minimizing the senone classification loss and the secondary task of mini-maximizing the speaker classification loss. Through this adversarial multi-task learning procedure, a feature extractor is learned as the bottom layers of the DNN acoustic model that maps the input speech frames from different speakers into *speaker-invariant* and senone-discriminative deep hidden features, so that further senone classification is based on representations with the speaker factor already normalized out. The DNN acoustic model with SIT can be directly used to generate word transcriptions for unseen test speakers through *one-pass online* decoding. On top of the SIT DNN, further adaptation can be performed to adjust the model towards the test speakers, achieving even higher ASR accuracy. We evaluate SIT with ASR experiments on the CHiME-3 dataset; the SIT DNN acoustic model achieves 4.99% relative WER improvement over the baseline SI DNN. Further, SIT achieves 4.86% relative WER gain over the SI DNN when the same unsupervised speaker adaptation process is performed on both models.
With t-distributed stochastic neighbor embedding (t-SNE) [@maaten2008visualizing] visualization, we show that, after SIT, the deep feature distributions of different speakers are well aligned with each other, which demonstrates the strong capability of SIT in reducing speaker-variability.
Related Work {#sec:relate}
============
Speaker-adaptive training (SAT) is proposed to generate canonical acoustic models coupled with speaker adaptation. For Gaussian mixture model (GMM)-hidden Markov model (HMM) acoustic model, SAT applies unconstrained [@anastasakos1996compact] or constrained [@gales1998maximum] model-space linear transformations that separately model the speaker-specific characteristics and are jointly estimated with the GMM-HMM parameters to maximize the likelihood of the training data. Cluster-adaptive training (CAT) [@gales2000cluster] is then proposed to use a linear interpolation of all the cluster means as the mean of the particular speaker instead of a single cluster as representative of a particular speaker. However, SAT of GMM-HMM needs to have two sets of models, the SI model and canonical model. During testing, the SI model is used to generate the first pass decoding transcription, and the canonical model is combined with speaker-specific transformation to adapt to the new speaker.
For the DNN-HMM acoustic model, CAT [@tan2016cluster] and multi-basis adaptive neural networks [@wu2015multi] are proposed to represent the weight and/or the bias of the speaker-dependent (SD) affine transformation in each hidden layer of a DNN acoustic model as a linear combination of SI bases, where the combination weights are low-dimensional SD speaker representations. The canonical SI bases with reduced variances are jointly optimized with the SD speaker representations during SAT to minimize the cross-entropy loss. During unsupervised adaptation, the test speaker representations are re-estimated using alignments from the first-pass decoding of the test data with the SI DNN as the supervision and are used in the second-pass decoding to generate the transcription. The factorized hidden layer [@smarakoon2016factorized] is similar to [@tan2016cluster; @wu2015multi], but includes the SI DNN weights as part of the linear combination. In [@xue2014fast], SD speaker codes are transformed by a set of SI matrices and then directly added to the biases of the hidden-layer affine transformations. The speaker codes and SI transformations are jointly estimated during SAT. For these methods, two passes of decoding are required to generate the final transcription in the unsupervised adaptation setup, which increases the computational complexity of the system. In [@miao2015speaker; @saon2013speaker], an SI adaptation network is learned to derive speaker-normalized features from i-vectors to train the canonical DNN acoustic model. The i-vectors for the test speakers are then estimated and used for decoding after going through the SI adaptation network. In [@saon2017english], a reconstruction network is trained to predict the input i-vector, with the speech feature and its corresponding i-vector at the input of the acoustic model.
The mean-squared error loss of the i-vector reconstruction and the cross-entropy loss of the DNN acoustic model are jointly optimized through adversarial multi-task learning. Although these methods generate the final transcription with one pass of decoding, they need to go through the entire test utterance in order to estimate the i-vectors, making it impossible to perform online decoding. Moreover, the accuracy of i-vector estimation is limited by the duration of the test utterances. The estimation of an i-vector for each utterance also increases the computational complexity of the system.
SIT directly minimizes the speaker variations by optimizing an adversarial multi-task objective rather than the basic cross-entropy objective used in SAT. It forgoes the need to estimate any additional SI bases or speaker representations during training or testing. The direct use of the SIT DNN acoustic model in testing enables the generation of word transcriptions for unseen test speakers through *one-pass online* decoding. Moreover, it effectively suppresses the inter-speaker variability via a lightweight system with much reduced training parameters and computational complexity. To achieve an additional gain, unsupervised speaker adaptation can also be further conducted on the SIT model with one extra pass of decoding.
Speaker-Invariant Training {#sec:sit}
==========================
To perform SIT, we need a sequence of speech frames $X=\{x_{1}, \ldots, x_{N}\}$, a sequence of senone labels $Y=\{y_{1}, \ldots, y_{N}\}$ aligned with $X$ and a sequence of speaker labels $S=\{s_{1}, \ldots, s_{N}\}$ aligned with $X$. The goal of SIT is to reduce the variances of hidden and output units distributions of the DNN acoustic model that are caused by the inherent inter-speaker variability in the speech signal. To achieve speaker-robustness, we learn a *speaker-invariant* and *senone-discriminative* deep hidden feature in the DNN acoustic model through adversarial multi-task learning and make senone posterior predictions based on the learned deep feature. In order to do so, we view the first few layers of the acoustic model as a feature extractor network $M_f$ with parameters $\theta_f$ that maps $X$ from different speakers to deep hidden features $F=\{f_1, \ldots, f_N\}$ (see Fig. \[fig:sit\]) and the upper layers of the acoustic model as a senone classifier $M_y$ with parameters $\theta_y$ that maps the intermediate features $F$ to the senone posteriors $p_y(q|f_i; \theta_y), q\in
\mathcal{Q}$ as follows:
![The framework of speaker-invariant training via adversarial learning for unsupervised adaptation of the acoustic models[]{data-label="fig:sit"}](sit.png){width="0.9\columnwidth"}
$$\begin{aligned}
M_y(f_i) = M_y(M_f(x_i)) = p_y(q | x_i; \theta_f,
\theta_y)
\label{eqn:senone_classify}\end{aligned}$$
where $\mathcal{Q}$ is the set of all senones modeled by the acoustic model.
We further introduce a speaker classifier network $M_s$ which maps the deep features $F$ to the speaker posteriors $p_s(a |
f_i; \theta_s), a \in \mathcal{A}$ as follows: $$\begin{aligned}
M_s(M_f(x_i)) & = p_s(a | x_i; \theta_s, \theta_f)
\label{eqn:speaker_classify}\end{aligned}$$ where $a$ is one speaker in the set of all speakers $\mathcal{A}$.
To make the deep features $F$ speaker-invariant, the distributions of the features from different speakers should be as close to each other as possible. Therefore, the $M_f$ and $M_s$ are jointly trained with an adversarial objective, in which $\theta_f$ is adjusted to *maximize* the frame-level speaker classification loss $\mathcal{L}_{\text{speaker}}(\theta_f, \theta_s)$ while $\theta_s$ is adjusted to *minimize* $\mathcal{L}_{\text{speaker}}(\theta_f, \theta_s)$ below: $$\begin{aligned}
& \mathcal{L}_{\text{speaker}}(\theta_f, \theta_s) = - \sum_{i = 1}^{N} \log
p_s(s_i | x_i; \theta_f, \theta_s)\nonumber \\
& \quad \quad \quad \quad \quad \quad = - \sum_{i = 1}^{N} \sum_{a\in
\mathcal{A}} \mathbbm{1}[a =
s_i] \log M_s(M_f(x_i)) \label{eqn:loss_cond1}\end{aligned}$$ where $s_i$ denotes the speaker label for the input frame $x_i$.
This minimax competition will first increase the discriminative power of $M_s$ and the speaker-invariance of the features generated by $M_f$, and will eventually converge to the point where $M_f$ generates features so confusable that $M_s$ is unable to distinguish between the speakers.
At the same time, we want to make the deep features senone-discriminative by minimizing the cross-entropy loss between the predicted senone posteriors and the senone labels as follows: $$\begin{aligned}
\mathcal{L}_{\text{senone}}(\theta_f, \theta_y) & = -\sum_{i = 1}^N
\log p_y(y_i|x_i;\theta_f, \theta_y) \nonumber \\
&=-\sum_{i = 1}^N \sum_{q\in
\mathcal{Q}} \mathbbm{1}[q =
y_i] \log M_y(M_f(x_i))
\label{eqn:loss_senone}\end{aligned}$$
In SIT, the acoustic model network and the speaker classifier network are trained to jointly optimize the primary task of senone classification and the secondary task of speaker classification with an adversarial objective function. Therefore, the total loss is constructed as $$\begin{aligned}
&\mathcal{L}_{\text{total}}(\theta_f, \theta_y, \theta_s) =
\mathcal{L}_{\text{senone}}(\theta_f, \theta_y) -
\lambda\mathcal{L}_{\text{speaker}}(\theta_s, \theta_f)
\label{eqn:loss_total}\end{aligned}$$ where $\lambda$ controls the trade-off between the senone classification loss in Eq. \[eqn:loss\_senone\] and the speaker classification loss in Eq. \[eqn:loss\_cond1\].
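A minimal NumPy sketch of this objective may help fix ideas; all layer sizes, the random data, and the single-linear-layer $M_f$ are illustrative choices of ours, not the paper's configuration. A shared feature extractor feeds a senone head and a speaker head, and the total loss combines the two cross-entropies with the trade-off weight $\lambda$.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(post, labels):
    # mean negative log-posterior of the correct class
    return -np.mean(np.log(post[np.arange(len(labels)), labels]))

N, D, H, Q, A = 8, 20, 16, 10, 4    # frames, input dim, feature dim, senones, speakers
x = rng.normal(size=(N, D))
y = rng.integers(0, Q, size=N)      # senone labels
s = rng.integers(0, A, size=N)      # speaker labels

W_f = rng.normal(scale=0.1, size=(D, H))  # feature extractor M_f (one linear layer here)
W_y = rng.normal(scale=0.1, size=(H, Q))  # senone classifier M_y
W_s = rng.normal(scale=0.1, size=(H, A))  # speaker classifier M_s

f = np.tanh(x @ W_f)                        # deep features F
L_senone = cross_entropy(softmax(f @ W_y), y)
L_speaker = cross_entropy(softmax(f @ W_s), s)

lam = 3.0                                   # the paper fixes lambda = 3.0
L_total = L_senone - lam * L_speaker        # the total SIT loss
```

The subtraction is the whole trick: gradient descent on `L_total` with respect to the feature-extractor parameters simultaneously descends the senone loss and ascends the speaker loss.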
We need to find the optimal parameters $\hat{\theta}_y, \hat{\theta}_f$ and $\hat{\theta}_s$ such that $$\begin{aligned}
(\hat{\theta}_f, \hat{\theta}_y) = \operatorname*{arg\,min}_{\theta_y, \theta_f} \mathcal{L}_{\text{total}}(\theta_f, \theta_y, \hat{\theta}_s) \\
\hat{\theta}_s = \operatorname*{arg\,max}_{\theta_s} \mathcal{L}_{\text{total}}(\hat{\theta}_f, \hat{\theta}_y, \theta_s) \end{aligned}$$
The parameters are updated as follows via back propagation with stochastic gradient descent (SGD): $$\begin{aligned}
& \theta_f \leftarrow \theta_f - \mu \left[ \frac{\partial
\mathcal{L}_{\text{senone}}}{\partial \theta_f} - \lambda \frac{\partial
\mathcal{L}_{\text{speaker}}}{\partial
\theta_f}
\right]
\label{eqn:grad_f} \\
& \theta_s \leftarrow \theta_s - \mu \frac{\partial
\mathcal{L}_{\text{speaker}}}{\partial \theta_s}
\label{eqn:grad_s} \\
& \theta_y \leftarrow \theta_y - \mu \frac{\partial
\mathcal{L}_{\text{senone}}}{\partial \theta_y}
\label{eqn:grad_y}\end{aligned}$$ where $\mu$ is the learning rate.
Note that the negative coefficient $-\lambda$ in Eq. \[eqn:grad\_f\] induces a reversed gradient that maximizes $\mathcal{L}_{\text{speaker}}(\theta_f, \theta_s)$ in Eq. \[eqn:loss\_cond1\] and makes the deep features speaker-invariant. For easy implementation, the gradient reversal layer is introduced in [@grl_ganin], which acts as an identity transform in the forward propagation and multiplies the gradient by $-\lambda$ during the backward propagation.
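The gradient reversal layer is simple enough to state outside any particular toolkit; the class below is a hypothetical minimal implementation of the idea, not the paper's code: identity on the forward pass, gradient scaled by $-\lambda$ on the backward pass.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales the incoming gradient by -lam
    in the backward pass, producing the reversed gradient used to update
    the feature-extractor parameters."""
    def __init__(self, lam):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad_out):
        return -self.lam * grad_out

grl = GradientReversal(lam=3.0)
x = np.array([1.0, -2.0, 0.5])
assert np.array_equal(grl.forward(x), x)  # identity forward
print(grl.backward(np.ones_like(x)))      # -> [-3. -3. -3.]
```

Inserting such a layer between $M_f$ and $M_s$ lets a standard optimizer minimize all three losses with ordinary SGD while, in effect, maximizing the speaker loss with respect to $\theta_f$.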
The optimized network consisting of $M_f$ and $M_y$ is used as the SIT acoustic model for ASR on test data.
Experiments {#sec:experiment}
===========
In this work, we perform SIT on a DNN-hidden Markov model (HMM) acoustic model for ASR on CHiME-3 dataset.
CHiME-3 Dataset
---------------
The CHiME-3 dataset was released with the 3rd CHiME Speech Separation and Recognition Challenge [@chime3_barker], which incorporates Wall Street Journal corpus sentences spoken in challenging noisy environments, recorded using a 6-channel tablet-based microphone array. The CHiME-3 dataset consists of both real and simulated data. The real speech data was recorded in five real noisy environments (on buses (BUS), in cafés (CAF), in pedestrian areas (PED), at street junctions (STR) and in a booth (BTH)). To generate the simulated data, the clean speech is first convolved with the estimated impulse response of the environment and then mixed with the background noise separately recorded in that environment [@chime3_hori]. The noisy training data consists of 1999 real noisy utterances from 4 speakers, and 7138 simulated noisy utterances from 83 speakers in the WSJ0 SI-84 training set recorded in 4 noisy environments. There are 3280 utterances in the development set, including 410 real and 410 simulated utterances for each of the 4 environments. There are 2640 utterances in the test set, including 330 real and 330 simulated utterances for each of the 4 environments. The speakers in the training, development and test sets are mutually different (i.e., there are 12 different speakers in the CHiME-3 dataset). The training, development and test data sets are all recorded in 6 different channels.
In the experiments, we use 9137 noisy training utterances in the CHiME-3 dataset as the training data. The real and simulated development data in CHiME-3 are used as the test data. Both the training and test data are far-field speech from the 5th microphone channel. The WSJ 5K word 3-gram language model (LM) is used for decoding.
Baseline System {#sec:baseline}
---------------
In the baseline system, we first train an SI DNN-HMM acoustic model using 9137 noisy training utterances with cross-entropy criterion.
The 29-dimensional log Mel filterbank features together with 1st and 2nd order delta features (87-dimensional in total) for both the clean and noisy utterances are extracted by following the process in [@li2012improving]. Each frame is spliced together with 5 left and 5 right context frames to form a 957-dimensional feature. The spliced features are fed as the input of the feed-forward DNN after global mean and variance normalization. The DNN has 7 hidden layers with 2048 hidden units in each layer. The output layer of the DNN has 3012 output units corresponding to 3012 senone labels. Senone-level forced alignment of the clean data is generated using a Gaussian mixture model-HMM system. As shown in Table \[table:asr\_wer\], the WERs for the SI DNN are 17.84% and 17.72% on the real and simulated test data, respectively. Note that our experimental setup does not achieve the state-of-the-art performance on the CHiME-3 dataset (e.g., we did not perform beamforming or sequence training, or use a recurrent neural network language model for decoding), since our goal is simply to verify the effectiveness of SIT in reducing inter-speaker variability.
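The input dimensionality follows from the splicing arithmetic: $87 \times (5 + 1 + 5) = 957$. The step can be sketched as below; the edge handling by repeating the first/last frame is our assumption for illustration, since the paper does not specify its padding convention.

```python
import numpy as np

def splice(feats, left=5, right=5):
    """Stack each frame with its left/right context frames.
    feats: (T, D) array -> (T, (left + 1 + right) * D)."""
    T, D = feats.shape
    padded = np.concatenate([np.repeat(feats[:1], left, axis=0),
                             feats,
                             np.repeat(feats[-1:], right, axis=0)])
    # column block i holds padded[i : i + T], i.e. the frame at offset i - left
    return np.concatenate([padded[i:i + T] for i in range(left + 1 + right)], axis=1)

utt = np.random.randn(100, 87)   # 29 log-Mel + deltas + delta-deltas per frame
spliced = splice(utt)
print(spliced.shape)             # (100, 957)
```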
Speaker-Invariant Training for Robust Speech Recognition
--------------------------------------------------------
We further perform SIT on the baseline noisy DNN acoustic model with the 9137 noisy training utterances in CHiME-3. The feature extractor $M_f$ is initialized with the first $N_h$ layers of the DNN and the senone classifier is initialized with the remaining $(7-N_h)$ hidden layers plus the output layer. $N_h$ indicates the position of the deep hidden feature in the acoustic model. The speaker classifier $M_s$ is a feedforward DNN with 2 hidden layers and 512 hidden units in each layer. The output layer of $M_s$ has 87 units, predicting the posteriors of the 87 speakers in the training set. $M_f$, $M_y$ and $M_s$ are jointly trained with an adversarial multi-task objective as described in Section \[sec:sit\]. $N_h$ and $\lambda$ are fixed at $2$ and $3.0$ in our experiments. The SIT DNN acoustic model achieves 16.95% and 16.54% WER on the real and simulated test data respectively, which are 4.99% and 6.66% relative improvements over the SI DNN baseline.
  System   Data   BUS     CAF     PED     STR     Avg.
  -------- ------ ------- ------- ------- ------- -----------
  SI       Real   24.77   16.12   13.39   17.27   17.84
  SI       Simu   18.07   21.44   14.68   16.70   17.72
  SIT      Real   22.91   15.63   12.77   16.66   **16.95**
  SIT      Simu   16.64   20.23   13.53   15.96   **16.54**

  : The ASR WER (%) performance of SI and SIT DNN acoustic models on real and simulated development set of CHiME-3.[]{data-label="table:asr_wer"}
Visualization of Deep Features
------------------------------
We randomly select two male speakers and two female speakers from the noisy training set and extract speech frames aligned with the phoneme “ah” for each of the four speakers. In Figs. \[fig:tsne\_si\] and \[fig:tsne\_sit\], we use t-SNE to visualize the deep features $F$ generated by the SI and SIT DNN acoustic models when the “ah” frames of the four speakers are given as the input. In Fig. \[fig:tsne\_si\], the deep feature distributions in the SI model for the male speakers (in red and green) and female speakers (in black and blue) are far away from each other, and even the distributions for the speakers of the same gender are separated from each other. After SIT, by contrast, the deep feature distributions for all the male and female speakers are well aligned with each other, as shown in Fig. \[fig:tsne\_sit\]. The significant increase in the overlap among the distributions of different speakers shows that SIT remarkably enhances the speaker-invariance of the deep features $F$: the adversarial optimization of the speaker classification loss does not just serve as a regularization term for better generalization on the test data.
![t-SNE visualization of the deep features $F$ generated by the SI DNN acoustic model when speech frames aligned with phoneme “ah” from two male and two female speakers in CHiME-3 training set are fed as the input.[]{data-label="fig:tsne_si"}](tsne_si.png){width="0.95\columnwidth"}
![t-SNE visualization of the deep features $F$ generated by the SIT DNN acoustic model when the same speech frames as in Fig. \[fig:tsne\_si\] are fed as the input.[]{data-label="fig:tsne_sit"}](tsne_sit.png){width="0.95\columnwidth"}
Unsupervised Speaker Adaptation
-------------------------------
SIT aims at suppressing the effect of inter-speaker variability on DNN acoustic model so that the acoustic model is more compact and has stronger discriminative power. When adapted to the same test speakers, the SIT DNN is expected to achieve higher ASR performance than the baseline SI DNN due to the smaller overlaps among the distributions of different speech units.
In our experiment, we adapt the SI and SIT DNNs to each of the 4 speakers in the test set in an unsupervised fashion. The constrained re-training (CRT) [@erdogan2016multi] method is used for adaptation, where we re-estimate the DNN parameters of only a subset of layers while holding the remaining parameters fixed during cross-entropy training. The adaptation target (1-best alignment) is obtained through the first-pass decoding of the test data, and the second-pass decoding is performed using the SA SI and SA SIT DNNs.
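The CRT idea, re-estimating only a chosen subset of layers while freezing the rest, amounts to a masked parameter update. A minimal sketch (layer names, shapes and gradients are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
params = {f"layer{i}": rng.normal(size=(4, 4)) for i in range(7)}
grads = {k: np.ones_like(v) for k, v in params.items()}

adapt_layers = {"layer0", "layer1"}   # CRT here: adapt only the bottom 2 layers
before = {k: v.copy() for k, v in params.items()}

mu = 0.01
for k in params:
    if k in adapt_layers:             # all other parameters are held fixed
        params[k] -= mu * grads[k]

# the frozen layers are untouched; only the adapted subset moved
assert all(np.array_equal(params[k], before[k])
           for k in params if k not in adapt_layers)
```

Restricting the update to a small subset of layers keeps the adapted model close to the speaker-independent one, which matters when the adaptation targets come from noisy first-pass decoding.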
The WER results for unsupervised speaker adaptation are shown in Table \[table:crt\_wer\], in which only the bottom 2 layers of the SI and SIT DNNs are adapted during CRT. The speaker-adapted (SA) SIT DNN achieves 15.46% WER, a 4.86% relative improvement over the SA SI DNN. The CRT adaptation provides 8.91% and 8.79% relative WER gains over the unadapted SI and SIT models, respectively. The lower WER after speaker adaptation indicates that SIT has effectively reduced the high variance and overlap in an SI acoustic model caused by the inter-speaker variability.
  System   BUS     CAF     PED     STR     Avg.
  -------- ------- ------- ------- ------- -----------
  SA SI    22.76   15.56   11.52   15.37   16.25
  SA SIT   21.42   14.79   11.11   14.70   **15.46**

  : The ASR WER (%) performance of SA SI and SA SIT DNN acoustic models after CRT unsupervised speaker adaptation on real development set of CHiME-3.[]{data-label="table:crt_wer"}
Conclusions and Future Works
============================
In this work, SIT is proposed to suppress the effect of inter-speaker variability on the SI DNN acoustic model. In SIT, a DNN acoustic model and a speaker classifier network are jointly optimized to minimize the senone classification loss, and simultaneously mini-maximize the speaker classification loss. Through this adversarial multi-task learning procedure, a feature extractor network is learned to map the input frames from different speakers to deep hidden features that are both *speaker-invariant* and senone-discriminative.
Evaluated on CHiME-3 dataset, the SIT DNN acoustic model achieves 4.99% relative WER improvement over the baseline SI DNN. With the unsupervised adaptation towards the test speakers using CRT, the SA SIT DNN achieves additional 8.79% relative WER gain, which is 4.86% relatively improved over the SA SI DNN. With t-SNE visualization, we show that, after SIT, the deep feature distributions of different speakers are well aligned with each other, which verifies the strong capability of SIT in reducing speaker-variability.
SIT forgoes the need to estimate any additional SI bases or speaker representations, which are necessary in other conventional approaches such as SAT. The SIT-trained DNN acoustic model can be directly used to generate the transcription for unseen test speakers through *one-pass online* decoding. It enables a lightweight speaker-invariant ASR system with a reduced number of parameters for both training and testing. Additional gains are achievable by performing further unsupervised speaker adaptation on top of the SIT model.
In the future, we will evaluate the performance of the i-vector based speaker-adversarial multi-task learning [@saon2017english] on the CHiME-3 dataset and compare it with the proposed SIT. We will perform SIT on long short-term memory recurrent neural network acoustic models [@sak2014long; @meng2017deep] and compare the improvement with that of feedforward DNNs. Moreover, we will perform SIT on thousands of hours of data to verify its scalability to large datasets.
|
---
abstract: 'Energy is at best defined quasilocally in general relativity. Quasilocal energy definitions depend on the conditions one imposes on the boundary Hamiltonian, i.e., how a finite region of spacetime is “isolated.” Here, we propose a method to define and investigate systems in terms of their matter plus gravitational energy content. We adopt a generic construction, that involves embedding of an arbitrary dimensional world sheet into an arbitrary dimensional spacetime, to a $2+2$ picture. In our case, the closed 2-dimensional spacelike surface $\mathbb{S}$, that is orthogonal to the 2-dimensional timelike world sheet $\mathbb{T}$ at every point, encloses the system in question. The integrability conditions of $\mathbb{T}$ and $\mathbb{S}$ correspond to three null tetrad gauge conditions once we transform our notation to the one of the null cone observables. We interpret the Raychaudhuri equation of $\mathbb{T}$ as a work-energy relation for systems that are not in equilibrium with their surroundings. We achieve this by identifying the quasilocal charge densities corresponding to rotational and nonrotational degrees of freedom, in addition to a relative work density associated with tidal fields. We define the corresponding quasilocal charges that appear in our work-energy relation and which can potentially be exchanged with the surroundings. These charges and our tetrad conditions are invariant under type-III Lorentz transformations, i.e., the boosting of the observers in the directions orthogonal to $\mathbb{S}$. We apply our construction to a radiating Vaidya spacetime, a $C$-metric and the interior of a Lanczos-van Stockum dust metric. The delicate issues related to the axially symmetric stationary spacetimes and possible extensions to the Kerr geometry are also discussed.'
address: 'Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch 8140, New Zealand'
author:
- Nezihe Uzun
bibliography:
- 'references.bib'
title: Quasilocal energy exchange and the null cone
---
Introduction
============
In general relativity, there is no unique definition of matter plus gravitational energy exchange for a system. For the case of pure gravity, for example, gravitational radiation and the energy loss associated with it can be identified unambiguously only at null infinity, $\mathfrak{I}^+$, of an isolated body [@Bondietal:1962]. Essentially it is assumed that observers are sufficiently far away from the body in question so that the asymptotic metric is flat and the perturbations around it correspond to the gravitational radiation. It is also assumed that the spacetime admits the peeling property, i.e., that the Weyl scalars fall off at prescribed rates asymptotically, and that outgoing null hypersurfaces intersect $\mathfrak{I}^+$ through closed spacelike 2-surfaces whose departure from the unit sphere is small [@Lehner_Moreschi:2007]. It is known that wave extraction and the interpretation of the physically meaningful quantities are often challenging for numerical relativity simulations based on those asymptotic regions.
On the other hand, for astrophysical and larger scale investigations, we would like to know how systems behave in the strong field regime. We would like to understand the behavior of binary black hole or neutron star mergers and how those objects affect their close environment. Considering the fact that gravitational energy cannot be localized due to the equivalence principle, there have been a considerable number of attempts to understand the energy exchange mechanisms of arbitrary gravitating systems quasilocally (see [@Szabados:2004] for a detailed review), on top of the earlier global investigations [@Thorne_Hartle:1984; @Purdue:1999; @Favata:2000]. However, not all of the quasilocal energy investigations are constructed on, or translated into, the formalism that the numerical relativity community uses. In the present paper, we aim to present a method with which one can investigate the quasilocal energy exchange of a system. This involves the observables of timelike congruences; however, we also present the corresponding null cone observables once we perform a transformation between the two formalisms.
In [@Capovilla_Guven:1994] Capovilla and Guven (CG) generalize the Raychaudhuri equation so that it gives the focusing of an arbitrary dimensional timelike world sheet embedded in an arbitrary dimensional spacetime. Previously, in [@Uzun_Wiltshire:2015], we applied their formalism to a 2-dimensional timelike world sheet, $\mathbb{T}$, embedded in a 4-dimensional spherically symmetric spacetime. This allowed us to define quasilocal thermodynamic equilibrium conditions and the corresponding quasilocal thermodynamic potentials in a natural way.
In the present paper, we will consider more generic systems, which are not in equilibrium with their surroundings. Also the systems we consider here are not necessarily spherically symmetric. Our main aim is to present a method for the calculation of the energylike quantities of these systems which can be exchanged quasilocally. While doing so, we will switch from Capovilla and Guven’s notation to the notation of Newman-Penrose (NP) formalism [@Newman_Penrose:1961]. Firstly, this will ease our calculations. Secondly, the transformation of the original formalism of CG to NP poses basic questions about the null tetrad gauge invariance of numerical relativity in terms of quasilocal concerns. Namely, if one wants to investigate a system quasilocally one needs to define it consistently throughout its evolution by keeping the boost invariance of the quasilocal observers. This fixes a gauge for the complex null tetrad constructed through their local double dyad in our 2+2 approach.
The construction of the paper is as follows. In Sec. \[Mass-energy exchange\], we survey some of the local, global and quasilocal approaches in the literature to investigate matter plus gravitational mass-energy exchange. We will show just how broad the literature is in terms of energy exchange investigations. In Sec. \[Null tetrad gauge\] we start to question how best to define a $\it{quasilocal \, system}$ and introduce our choice of system definition. Section \[RaychaudhuriCG\] gives a concise summary of Capovilla and Guven’s formalism which is used to derive the Raychaudhuri equation of a world sheet [@Capovilla_Guven:1994]. In Sec. \[RaychaudhuriNP\] we present the contracted Raychaudhuri equation in the NP formalism and demonstrate how our gauge conditions affect it. Later, in Sec. \[Work-energy\], we give physical interpretations to the variables of the contracted Raychaudhuri equation in terms of the quasilocal charge densities. We define the associated quasilocal charges and end up with a work-energy relation. According to our interpretation, the contracted Raychaudhuri equation of the world sheet of the quasilocal observers gives information about how much rotational and nonrotational quasilocal energy the system possesses, in addition to the work that should be done by the tidal fields to create such a system. In Sec. \[Applications\] we present applications of our method to a radiating Vaidya spacetime, $C$-metric and interior of a Lanczos-van Stockum dust source. We present the delicate issues related to our construction in Sec. \[Delicate\] and give a summary and a discussion in Sec. \[Discussion\]. Our derivations, together with the relevant equations of the NP formalism, are presented in Appendices \[Appendix:A\], \[Appendix:B\] and \[Otherderivations\].
We use $\left(-,+,+,+\right)$ signature for our spacetime metric. Therefore one has to be careful about the definitions of the spin coefficients and curvature scalars when comparing them to Newman and Penrose’s original construction in [@Newman_Penrose:1961]. However, that is not a complication for our contracted Raychaudhuri equation as it is independent of the metric signature. Also note that we use natural units throughout the paper so that $c,G,h,k_B$ are set to 1.
Mass-energy exchange: local, global and quasilocal {#Mass-energy exchange}
==================================================
Local approaches
----------------
For local investigations of the gravitational energy flux, the Weyl tensor plays the central role. Newman and Penrose introduce five complex Weyl curvature scalars which incorporate all of the information of the Weyl tensor by [@Newman_Penrose:1961] $$\begin{aligned}
\psi _0&=&\t {C}{_\mu _\nu _\alpha _\beta}l^\mu m^\nu l^\alpha m^\beta \label{Psi0},\\
\psi _1&=&\t {C}{_\mu _\nu _\alpha _\beta}l^\mu n^\nu l^\alpha m^\beta \label{Psi1},\\
\psi _2&=&\t {C}{_\mu _\nu _\alpha _\beta}l^\mu m^\nu \c{m}^\alpha n^\beta \label{Psi2},\\
\psi _3&=&\t {C}{_\mu _\nu _\alpha _\beta}l^\mu n^\nu \c{m}^\alpha n^\beta \label{Psi3},\\
\psi _4&=&\t {C}{_\mu _\nu _\alpha _\beta}n^\mu \c{m}^\nu n^\alpha \c{m}^\beta \label{Psi4},\end{aligned}$$ where $C_{\mu \nu \alpha \beta}$ is the Weyl tensor of the spacetime, $\{l_{\mu}, n_{\mu}, m_{\mu}, \c{m}_{\mu}\}$ is the NP complex null tetrad and the only surviving inner products of the null vectors with each other are $\inner{\b{l},\b{n}}=-1$ and $\inner{\b{m},\c{\b{m}}}=1$.
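As an elementary sanity check, the stated normalization of the null tetrad can be verified numerically. The following sketch is our own illustration (flat spacetime with the trivial orthonormal frame, using the paper's $\left(-,+,+,+\right)$ signature); it builds $\{\b{l},\b{n},\b{m},\c{\b{m}}\}$ from an orthonormal tetrad and evaluates the inner products:

```python
import math

# Sanity check of the null tetrad normalization (a sketch in flat
# spacetime; conventions follow the paper's (-,+,+,+) signature).

eta = [-1.0, 1.0, 1.0, 1.0]     # diagonal Minkowski metric

def inner(x, y):
    """g(x, y); components may be complex."""
    return sum(eta[mu]*x[mu]*y[mu] for mu in range(4))

s = 1/math.sqrt(2)
E0, E1 = [1.0, 0, 0, 0], [0, 1.0, 0, 0]   # world sheet dyad
N2, N3 = [0, 0, 1.0, 0], [0, 0, 0, 1.0]   # normal dyad

# Null tetrad built from the double dyad
l    = [s*(E0[i] + E1[i]) for i in range(4)]
n    = [s*(E0[i] - E1[i]) for i in range(4)]
m    = [s*(N2[i] + 1j*N3[i]) for i in range(4)]
mbar = [s*(N2[i] - 1j*N3[i]) for i in range(4)]

print(inner(l, n), inner(m, mbar))             # -1 and 1, up to rounding
print(inner(l, l), inner(n, n), inner(l, m))   # all vanish
```

All other pairings vanish, so $\inner{\b{l},\b{n}}=-1$ and $\inner{\b{m},\c{\b{m}}}=1$ are indeed the only surviving inner products.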
The dynamics of timelike observers who live in different Petrov-type spacetimes was investigated previously by Szekeres [@Szekeres:1965]. In this method, one can assign physical meanings to the Weyl scalars. However, we note that this is only possible once we adapt our NP tetrad to the principal null direction(s) of the spacetime in question. Once we relax this condition, the Weyl curvature scalars cannot be interpreted in the way they were in Szekeres’ work.
Let us decompose the Weyl tensor into its electric and magnetic parts. One can define a super-Poynting vector through them via [@Maartens_Bassett:1997] $
\mathcal{P}_{\mu}=\t {\epsilon} {_{\mu}_{\alpha}_{\beta}}\t {\mathcal{E}}{^{\alpha}_{\nu}}\t {\mathcal{B}}{^{\beta}^{\nu}},
$ where $\t {\mathcal{E}}{_{\mu}_{\nu}}=\t {h}{^{\alpha}_{\mu}}\t {h}{^{\beta}_{\nu}}\t {C}{_{\alpha}_{\sigma}_{\beta}_{\gamma}}t^{\sigma}t^{\gamma}$ is its electric part, $\t {\mathcal{B}}{_{\mu}_{\nu}}=-\frac{1}{2}\t {h}{^{\alpha}_{\mu}}\t {h}{^{\beta}_{\nu}}\t {\epsilon}{_{\alpha}_{\sigma}_{\gamma}_{\kappa}}\t {C}{^{\gamma}^{\kappa}_{\beta}_{\rho}}t^{\sigma}t^{\rho}$ is the magnetic part, $t^{\mu}$ is the timelike vector orthogonal to the 3-dimensional spacelike hypersurfaces, $\t {h}{^{\mu}_{\nu}}$ is the corresponding projection operator and $\epsilon _{{\mu}{\nu}{\alpha}{\beta}}$ is the Levi-Civita tensor. The super-Poynting vector represents the gravitational energy flux density following its electromagnetic analogy. In [@Zhangetal:2012] it is shown that choosing a transverse tetrad, rather than a principal tetrad, aligns the gravitational wave propagation direction with the super-Poynting vector. The authors indicate that if we have a device which in principle works like Szekeres’ “gravitational compass” [@Szekeres:1965] we can detect the gravitational waves locally.[^1] This is of course applicable for a purely gravitational case.
Global approaches
-----------------
For gravitational waves, Bondi mass loss [@Bondietal:1962] is one of the most widely used expressions to determine the energy lost by the system via gravitational radiation at null infinity. For an asymptotically flat spacetime, with NP variables, the Bondi mass reads as [@Szabados:2004] $$\begin{aligned}
M_B = -\frac{1}{4\pi}\int_{\mathcal{S}}\left(\psi ^{\left(0\right)}_2 +\sigma ^{\left(0\right)} \dot{\c{\sigma}}^{\left(0\right)} \right)d\mathcal{S},\end{aligned}$$ where $\mathcal{S}$ is the closed spacelike surface located at null infinity, $\sigma = -\inner{\b{m},D_\b{m}\b{l}}$ is one of the NP spin coefficients and the superscript “$\left(0 \right)$” represents the leading order part of the object with respect to a radial expansion. The mass loss associated with the gravitational waves is determined once the “time” derivative, denoted by the overdot, of the Bondi mass is calculated in Bondi coordinates. Note that in the tetrad formalism approach of Bondi, the null tetrad is required to satisfy certain conditions. In the Bondi-Metzner-Sachs gauge one has $$\kappa=\pi=\varepsilon=0, \qquad \rho=\overline{\rho}, \qquad \tau=\overline{\alpha}+\beta,$$ which gives the symmetry group of the conformal boundary at null infinity.
In terms of other global investigations, the energy loss of a relativistic body through its interaction with the external field can be traced back to Misner, Thorne and Wheeler’s mass definition [@MTW:1974] constructed via an effective energy-momentum pseudotensor. Developed by many, including [@Thorne_Hartle:1984; @Thorne:1980; @Zhang:1986; @Flanagan:1997], the methodology for calculation of the mass-energy loss of an isolated relativistic body via its interaction with an external field is in fact very similar to the Newtonian analysis [@Purdue:1999].
One can calculate the mass-energy loss via [@Thorne_Hartle:1984; @Purdue:1999] $$-\frac{d\mathcal{M}_\mathcal{S}}{dt}=\int_{\partial \mathcal{S}} \left(-g\right)t^{0J}n_J r^2 d\Omega ,$$ where $\mathcal{M}_\mathcal{S}$ is the mass inside the 3-sphere $\mathcal{S}$ which gives the mass of the isolated object, $M$, to leading order under the slow rotation assumption; $\partial \mathcal{S}$ is the 2-dimensional boundary of $\mathcal{S}$, $-g$ is the square of the 4-metric density, $t^{\alpha \beta}$ is the Landau-Lifshitz pseudotensor [@Landau_Lifshitz:1975], $n^J =x^J/r$ are the radial vector components and $d\Omega$ is the 2-dimensional volume element. If one keeps only the $\mathcal{E} \mathcal{I}$ cross terms, where $\mathcal{E} _{JK}=\t {R}{_{J}_{0}_{K}_{0}}$, $\t {R}{_{\mu} _{\nu} _{\alpha} _{\beta}}$ is the Riemann tensor of the external field and $\mathcal{I} _{JK}$ is the mass quadrupole moment of the isolated body, one gets $$-\frac{d\mathcal{M}_\mathcal{S}}{dt}=\frac{d}{dt}\left(\frac{1}{10}\mathcal{E} ^{JK}\mathcal{I}_{JK}\right)+\frac{1}{2}\mathcal{E} ^{JK}\frac{d \mathcal{I}_{JK}}{dt},$$ in which only the zeroth and first order time derivatives and the leading order term in the perturbative expansion are considered. In this approach, the first term on the right-hand side is interpreted as the rate of change of the interaction energy of the body and the external field, whereas the second term is interpreted as the rate of work done by the external field on the body. Therefore, $$\label{dWdt_global}
\frac{dW}{dt}= -\frac{1}{2}\mathcal{E} ^{JK}\frac{d \mathcal{I} _{JK}}{dt}$$ is sometimes referred to as $\it {tidal \, heating}$ even though the energy loss/gain is not solely via the cooling/heating of the body in question [@Purdue:1999].
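To illustrate Eq. (\[dWdt\_global\]) concretely, consider a toy model of our own (not drawn from the cited references): an external tidal field and an induced quadrupole oscillating at the same frequency with a relative phase lag $\delta$, sharing a fixed symmetric trace-free polarization. Averaged over a cycle, the work rate vanishes for $\delta=0$ and is proportional to $\sin\delta$ otherwise, which is why net tidal heating requires a lag:

```python
import math

# Toy illustration (our own, not from the cited references) of
# dW/dt = -(1/2) E^{JK} dI_{JK}/dt, with
#   E_{JK}(t) = E0 sin(w t + delta) e_{JK},
#   I_{JK}(t) = I0 sin(w t) e_{JK},
# and e_{JK} a fixed symmetric trace-free polarization matrix.

E0, I0, w = 2.0, 3.0, 1.5                     # arbitrary toy values
e = [[1.0, 0, 0], [0, -1.0, 0], [0, 0, 0]]    # simple STF polarization
ee = sum(e[j][k]*e[j][k] for j in range(3) for k in range(3))  # e:e = 2

def avg_power(delta, N=100000):
    """Cycle average of dW/dt, by a Riemann sum over one full period."""
    T = 2*math.pi/w
    total = 0.0
    for i in range(N):
        t = i*T/N
        total += -0.5*ee*E0*math.sin(w*t + delta)*I0*w*math.cos(w*t)
    return total/N

print(avg_power(0.0))   # ~ 0: no lag, the cycle-averaged work is reversible
print(avg_power(0.3))   # nonzero secular power, analytically -E0*I0*w*sin(0.3)/2
```

The two printed averages confirm that the net work over a cycle is controlled entirely by the lag between the tidal field and the quadrupole.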
There have been debates about whether or not the total mass of the body, which is taken as the sum of the self energy and the interaction energy, is ambiguous in this picture [@Thorne_Hartle:1984; @Purdue:1999; @Favata:2000] [^2]. For the time being, let us bear in mind that results obtained in this approach hold only to leading order in the energy calculations, for an asymptotically flat spacetime which models a slowly rotating body in an external field, with fluxes evaluated at null infinity. Also, in general, one should be careful about using energy-momentum pseudotensors to calculate the mass-energy of a system since not all of them satisfy the conservation law with correct weight [@Szabados:2004].[^3]
Quasilocal approaches
---------------------
When quasilocal calculations of the mass-energy exchange of generic systems are considered, it is seen that the effective matter plus gravitational energy, momentum and stress energy densities can be attributed to the extrinsic or intrinsic geometry of a closed, spacelike, 2-dimensional surface in many applications. These spacelike 2-surfaces can be considered as the $t$-constant surfaces of the (2+1) timelike boundary of the spacetime. Alternatively, they can be considered as the embedded surfaces of spacelike 3-hypersurfaces or embedded surfaces of the spacetime itself [@Hawking:1968; @Hayward:1993Q; @Brown_York:1992; @Kijowski:1997; @Booth_Mann:1998; @Epp:2000; @Liu_Yau:2003].
For example, suppose $\mathcal{B}$ is a (2+1) dimensional timelike boundary of a finite spacetime domain. Brown and York [@Brown_York:1992] define $\tau _{\mu \nu}=\left(\Theta \gamma _{\mu \nu}-\Theta _{\mu \nu}\right)/\left(8 \pi \right)$ as the object that carries information about the matter plus gravitational energy content of a given system by following a Hamiltonian approach. Here $\Theta _{\mu \nu}$ is the extrinsic curvature of the world tube and $\gamma _{\mu \nu}$ is the 3-metric induced on it, which is held fixed. Then the matter plus gravitational energy flux density, $f_{BY}$, follows from the world tube derivative of the matter plus gravitational energy tensor, i.e., $$\label{flux_BY}
f_{BY}=\t {\gamma} {_\mu ^\alpha} D_\alpha \left(\tau ^{\mu \nu} t_\nu \right),$$ where $t^\mu$ is a timelike vector field which is not necessarily orthogonal to the $t$-constant spacelike surfaces $\mathcal{S}_t$, $\t {\gamma} {_\mu ^\alpha}$ is the projection operator onto the world tube and $D_\alpha $ is the spacetime covariant derivative.
In [@Booth_Creighton:2000], the authors define the rate of work done on a quasilocal system via Eq. (\[flux\_BY\]) by specifically choosing $t^\mu$ not to be a timelike Killing vector field of the world tube metric. According to Booth and Creighton, in vacuum, the rate of work done on the system by its environment is given by $$\label{dWdt_Booth&Creighton}
\frac{dW}{dt}=-\frac{1}{2}\int _{\mathcal{S}_t} d^2x \sqrt{-\gamma}\tau ^{\mu \nu} \$_t \gamma _{\mu \nu},$$ where $\$_t$ is the operator that is obtained by projecting the covariant derivative operator defined by the induced metric of $\mathcal{B}$ on the spacelike 2-surface. Equation (\[dWdt\_Booth&Creighton\]) is used to calculate the tidal heating quasilocally in the weak field limit, which serves as an excellent example to compare the quasilocal formalisms with the global ones. Their results show that the leading terms of the rate of work done are not exactly equal to those given by the global method, Eq. (\[dWdt\_global\]). It is only the so-called *irreversible* part, the portion that is expended to deform the body, that is equal to $\frac{1}{2}\mathcal{E}^{JK}d\mathcal{I}_{JK}/dt$ and hence attributed to tidal heating. However, there exists an additional portion which is stored as potential energy in the system, called the *reversible part*, which differs from the results of the global method.
In [@Eppetal:2008], Epp *et al.* go one step further and arrive at a more concrete definition of matter plus gravitational energy flux between the initial, $\mathcal{S}_i$, and final, $\mathcal{S}_f$, slices of a world tube. This approach is more concrete in the sense that the 2-surfaces have certain conditions on them. The authors define a rigid quasilocal frame by demanding that the 2-surfaces have zero expansion and shear when they are considered to be embedded in the world tube. In this approach, the energy flux density in vacuum is calculated as $\alpha_ \mu \mathcal{P}^\mu$. Here $\alpha _\mu$ is the proper acceleration of the observers projected on the 2-surface, and $\mathcal{P}^\mu$ is constructed via the normal and tangential projections of $\tau _{\mu \nu}$, as defined by Brown and York [@Brown_York:1992]. On the spacelike 2-surfaces $\mathcal{P}^\mu = \sigma ^{\mu \nu} u^\rho \tau _{\nu \rho}$, where $\sigma _{\mu \nu}$ is the metric induced on the 2-surfaces. This is a coordinate approach. However, the conditions they impose on the spacelike 2-surface can be translated into null tetrad gauge conditions once a change of formalism is applied. In the next section, we will see that our definition of a system is not as restrictive as that of Epp *et al.*
Null tetrad gauge conditions and the quasilocal calculations {#Null tetrad gauge}
============================================================
In the present paper, we do not intend to discuss the advantages and disadvantages of numerical relativity calculations at finite distances.[^4] However, we would like to keep track of the quasilocal observables and the null cone observables simultaneously as they are not always investigated in tandem in numerical relativity simulations.
Consider the case of a perturbed rotating black hole. In real astrophysical cases, our ultimate goal is to get information about the properties –such as the mass, angular momentum and their dissipation rates– of this black hole via the gravitational radiation we detect. In such a case, we have the freedom to choose a null tetrad for gravitational radiation calculations and a corresponding orthonormal tetrad for the quasilocal energy calculations. One of our aims, in this paper, is to check whether or not those tetrad choices are consistent with each other when the different formalisms are considered.
For example, there is a geometrically motivated transverse tetrad, the so-called quasi-Kinnersley tetrad [@Kinnersley:1969], which is considered to be one of the best choices to study the gravitational wave extraction from a perturbed Kerr black hole [@Nerozzietal:2005qK; @Nerozzietal:2005BS; @Nerozzi:2006aj]. In [@Zhangetal:2012], Zhang *et al.* investigate the directions of energy flow using the super-Poynting vector and show that the wave fronts of passing radiation are aligned with the quasi-Kinnersley tetrad. However, in the current section, we introduce certain null tetrad gauge conditions for a quasilocal system which are not satisfied by the quasi-Kinnersley tetrad. This might mean that even though one can measure the gravitational radiation emitted from a region properly, one might not be able to extract the quasilocal properties of its source consistently. What we mean by this will become clearer once we introduce our formalism and give a detailed discussion of this specific issue in Sec. \[Delicate\].
When the quasilocal properties are taken into consideration, one has to start the investigation with a proper definition of a *system*. This is the missing ingredient in many quasilocal approaches in the literature. In the present paper, we use a purely geometrical method to define our system. We will mainly consider a 2-dimensional timelike world sheet embedded into a 4-dimensional spacetime. The instantaneously defined 2-dimensional spacelike surface that is orthogonal to the world sheet at every point encloses the system in question.
The motivation behind the choice of such a geometric construction comes from the fact that the well-defined quasilocal energy definitions, which are made by following a Hamiltonian approach, rely on the $\it {mean \,extrinsic\,curvature}$ of a spacelike 2-surface. It is a measure of boost-invariant matter plus gravitational energy density of the system [@Lau:1996; @Kijowski:1997; @Epp:2000; @Liu_Yau:2003]. Hence the extrinsic geometry of this 2-surface, when it is embedded directly into a generic spacetime for example, is thought to have a more fundamental importance in terms of the quasilocal energy and energy exchange calculations.
In order to see how we define a system in the present paper, let us follow [@Capovilla_Guven:1994] and consider an embedding of an oriented world sheet with an induced metric, $\t {\eta}{_a_b}$, written in terms of orthonormal basis tangent vectors, $\{ \t {E}{_a} \}$, $$g({\t {E}{_a}},{\t {E}{_b}})={\t {\eta}{_a_b}},$$ where $g_{\mu \nu}$ is the 4-dimensional spacetime metric. Now consider the two unit normal vectors, $\{ \t {N}{_i} \}$, of the world sheet which are defined up to a local rotation by $$\begin{aligned}
g({\t {N}{_i}},{\t {N}{_j}})&=&{\t {\delta}{_i_j}},\\
g({\t {N}{^i}},{\t {E}{_a}})&=&0,\end{aligned}$$ where $\{a,b\}=\{\hat{0},\hat{1}\}$ and $\{i,j\}=\{\hat{2},\hat{3}\}$ are the dyad indices and the Greek indices will refer to 4-dimensional spacetime coordinates. Also note that to raise (or lower) the tangential and normal dyad indices of an object, one should use ${\t {\eta}{^a^b}}$ (or ${\t {\eta}{_a_b}}$) and $\t {\delta}{^i^j}$ (or $\t {\delta}{_i_j}$) respectively, where in an orthonormal basis $\eta _{ab}=\mathrm{diag}\left(-1,1\right)$ and $\delta _{ij}=\mathrm{diag}\left(1,1\right)$.
Let us call this embedded timelike world sheet $\mathbb{T}$, and the spacelike surface which is orthogonal to $\mathbb{T}$ at every point, $\mathbb{S}$. For a physically meaningful construction, we want the tangent spaces of these embedded surfaces to be integrable [@Capovilla_Guven:1994].
According to Frobenius theorem, involutivity is a sufficient condition for the existence of an integral manifold through each point [@Lee:2003]. In other words, let $D^k$ be a $k$-dimensional distribution on a manifold $M$, which is required to be $C^\infty$. $D^k$ is involutive if for the vector fields $\b{X}$,$\b{Y}\in D^k$ their Lie bracket satisfies $\left[\b{X},\b{Y}\right]\in D^k$ [@Szekeres:2004].
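The involutivity condition can be made concrete with a small numerical test of our own devising (not taken from the text): compute the Lie bracket of two vector fields on $\mathbb{R}^3$ by finite differences and check whether it lies in their span. The first pair below is involutive; the second is the standard contact (non-integrable) distribution:

```python
# Toy numerical illustration of the involutivity test: the Lie bracket
# [X,Y]^i = X^j d_j Y^i - Y^j d_j X^i must lie in the span of the
# distribution for Frobenius integrability to hold.

def lie_bracket(X, Y, p, h=1e-6):
    """[X, Y] at point p, with partial derivatives by central differences."""
    br = []
    for i in range(3):
        v = 0.0
        for j in range(3):
            pp, pm = list(p), list(p)
            pp[j] += h; pm[j] -= h
            v += X(p)[j]*(Y(pp)[i] - Y(pm)[i])/(2*h)
            v -= Y(p)[j]*(X(pp)[i] - X(pm)[i])/(2*h)
        br.append(v)
    return br

def in_span(v, a, b, tol=1e-8):
    """Least-squares test: is v in span{a, b}?"""
    gaa = sum(x*x for x in a); gbb = sum(x*x for x in b)
    gab = sum(a[i]*b[i] for i in range(3))
    ra = sum(v[i]*a[i] for i in range(3)); rb = sum(v[i]*b[i] for i in range(3))
    det = gaa*gbb - gab*gab
    c0 = (ra*gbb - rb*gab)/det
    c1 = (rb*gaa - ra*gab)/det
    resid = sum((v[i] - c0*a[i] - c1*b[i])**2 for i in range(3))
    return resid < tol

p = [0.7, -0.4, 1.3]

# Involutive pair: a rotation field and the dilation field ([X,Y] = 0)
X1 = lambda q: [-q[1], q[0], 0.0]
Y1 = lambda q: [q[0], q[1], q[2]]
# Non-involutive (contact) pair: [X,Y] = (0,0,1), never in the span
X2 = lambda q: [1.0, 0.0, 0.0]
Y2 = lambda q: [0.0, 1.0, q[0]]

ok1 = in_span(lie_bracket(X1, Y1, p), X1(p), Y1(p))
ok2 = in_span(lie_bracket(X2, Y2, p), X2(p), Y2(p))
print(ok1, ok2)   # True False
```

Only the first pair admits integral surfaces through each point; the second fails the bracket test everywhere.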
Therefore our tangent basis vectors $\{E_a,N_i\}$ need to satisfy $$\begin{aligned}
\left[E_a,E_b\right]=\t {f}{^c _a_b}E_c, \label{Frob_T}\\
\left[N_i,N_j\right]=\t {h}{^k _i_j}N_k.\label{Frob_S}\end{aligned}$$ Note that one can construct a complex null tetrad, $\{\b{l},\b{n},\b{m},\c{\b{m}}\}$, via an orthonormal double dyad and vice versa according to $$\begin{aligned}
\t {E}{^\mu _{\hat{0}}} &=& \frac{1}{\sqrt{2}}\left(l^{\mu}+n^{\mu}\right),
\label{eq:null_to_t-like1}\\
\t {E}{^\mu _{\hat{1}}} &=& \frac{1}{\sqrt{2}}\left(l^{\mu}-n^{\mu}\right),
\label{eq:null_to_t-like2}\\
\t {N}{^\mu _{\hat{2}}} &=& \frac{1}{\sqrt{2}} \left(m^{\mu}+\c{m}^{\mu}\right),
\label{eq:null_to_t-like3}\\
\t {N}{^\mu _{\hat{3}}} &=& -\frac{i}{\sqrt{2}}\left(m^{\mu}-\c{m}^{\mu}\right).
\label{eq:null_to_t-like4}\end{aligned}$$ Now let us see the gauge conditions that the Frobenius theorem, when applied to the tangent spaces of $\mathbb{T}$ and $\mathbb{S}$, imposes on a null tetrad constructed via the tangent vectors of $\mathbb{T}$ and $\mathbb{S}$. We can rewrite Eq. (\[Frob\_T\]) as $$\begin{aligned}
\t {E}{^\mu _a}D_{\mu}\t {E}{^\nu _b}-\t {E}{^\mu _b}D_{\mu}\t {E}{^\nu _a} = \t {f}{^c _a_b}\t {E}{^\nu _c} := \t {F}{^{\nu} _a _b}.\end{aligned}$$ Considering the only nonzero component of $ F_{ab}$, i.e., $ F_{\hat{0}\hat{1}}=-F_{\hat{1}\hat{0}}$ and expressions (\[eq:null\_to\_t-like1\])–(\[eq:null\_to\_t-like2\]) we can write $$\begin{aligned}
\t {F}{^{\nu}_{\hat{0}}_{\hat{1}}} &=& \t {E}{^\mu _{\hat{0}}}D_{\mu}\t {E}{^\nu _{\hat{1}}}-\t {E}{^\mu _{\hat{1}}}D_{\mu}\t {E}{^\nu _{\hat{0}}} \nonumber \\
&=&\t {f}{^{\hat{0}}_{\hat{0}}_{\hat{1}}}\t {E}{^\nu _{\hat{0}}} +
\t {f}{^{\hat{1}}_{\hat{0}}_{\hat{1}}}\t {E}{^\nu _{\hat{1}}}\nonumber \\
&=&\frac{1}{2}\left[\left(l^{\mu}+n^{\mu}\right) D_{\mu}\left(l^{\nu}-n^{\nu}\right) \right. \nonumber \\
&& \qquad {} \left.
- \left(l^{\mu}-n^{\mu}\right) D_{\mu}\left(l^{\nu}+n^{\nu}\right)\right]\nonumber \\
&=&\frac{1}{\sqrt{2}}\left[
\t {f}{^{\hat{0}}_{\hat{0}}_{\hat{1}}}\left(l^{\nu}+n^{\nu}\right) +
\t {f}{^{\hat{1}}_{\hat{0}}_{\hat{1}}}\left(l^{\nu}-n^{\nu}\right)\right].\end{aligned}$$ Thus, $$\begin{aligned}
\left(D_{\b{l}}n^{\nu}-D_{\b{n}}l^{\nu}\right)
&=&
-\frac{1}{\sqrt{2}}
\left[
\left(\t {f}{^{\hat{0}}_{\hat{0}}_{\hat{1}}}+
\t {f}{^{\hat{1}}_{\hat{0}}_{\hat{1}}}\right)l^{\nu}
\right. \nonumber \\
&&\qquad \left.
+\left(\t {f}{^{\hat{0}}_{\hat{0}}_{\hat{1}}}-
\t {f}{^{\hat{1}}_{\hat{0}}_{\hat{1}}}\right)n^{\nu}
\right] \label{F^nu_01}.\end{aligned}$$ Now if we take the inner product of both sides of Eq. (\[F\^nu\_01\]) with the null vector $\b{m}$ we get $$\inner{\b{m},D_{\b{l}}\b{n}}-\inner{\b{m},D_{\b{n}}\b{l}}=\c{\pi}-
\left(-\tau \right)= 0,$$ which follows from the propagation equations (\[Dnl\]) and (\[Dln\]) of the spin coefficients of the Newman-Penrose formalism [@Newman_Penrose:1961].
Likewise when we rewrite Eq. (\[Frob\_S\]) we get $$\begin{aligned}
\t {N}{^\mu _i}D_{\mu}\t {N}{^\nu _j}-\t {N}{^\mu _j}D_{\mu}\t {N}{^\nu _i}=\t {h}{^k_i_j}\t {N}{^\nu _k} := \t {H}{^\nu _i_j}.\end{aligned}$$ If we consider the nonvanishing component $H_{\hat{2}\hat{3}}$ with the expressions (\[eq:null\_to\_t-like3\])-(\[eq:null\_to\_t-like4\]) we can write $$\begin{aligned}
\t {H}{^{\nu}_{\hat{2}}_{\hat{3}}} &= \t {N}{^\mu _{\hat{2}}}D_{\mu}\t {N}{^\nu _{\hat{3}}}-\t {N}{^\mu _{\hat{3}}}D_{\mu}\t {N}{^\nu _{\hat{2}}}\nonumber \\
&=\t {h}{^{\hat{2}}_{\hat{2}}_{\hat{3}}}\t {N}{^\nu _{\hat{2}}}
+\t {h}{^{\hat{3}}_{\hat{2}}_{\hat{3}}}\t {N}{^\nu _{\hat{3}}}\nonumber \\
&= -\frac{i}{2}\left(m^\mu +\c{m}^\mu \right)D_{\mu}\left(m^\nu -\c{m}^\nu \right)
\nonumber \\
&\qquad {}
+\frac{i}{2}\left(m^\mu - \c{m}^\mu \right)D_{\mu}\left(m^\nu +\c{m}^\nu \right)\nonumber \\
&= \frac{1}{\sqrt{2}}\left[\t {h}{^{\hat{2}}_{\hat{2}}_{\hat{3}}} \left(m^\nu +\c{m}^\nu \right)-
i \t {h}{^{\hat{3}}_{\hat{2}}_{\hat{3}}}\left(m^\nu - \c{m}^\nu \right)\right].\nonumber \\\end{aligned}$$ Hence, $$\begin{aligned}
\left(D_{\b{m}}\c{m}^\nu - D_{\c{\b{m}}} m^\nu\right)
&=& -\frac{1}{\sqrt{2}}\left[m^\nu \left(\t {h}{^{\hat{3}}_{\hat{2}}_{\hat{3}}}+i \t {h}{^{\hat{2}}_{\hat{2}}_{\hat{3}}} \right)
\right. \nonumber \\
&&\qquad \left.
-\c{m}^\nu \left(\t {h}{^{\hat{3}}_{\hat{2}}_{\hat{3}}}-i \t {h}{^{\hat{2}}_{\hat{2}}_{\hat{3}}} \right)\right]\label{H^nu_ij}.\end{aligned}$$ Taking the inner product of both sides of Eq. (\[H\^nu\_ij\]) with the null vectors $\b{l}$ and $\b{n}$ respectively gives, $$\begin{aligned}
\inner{\b{l},D_{\b{m}}\c{\b{m}}}-\inner{\b{l},D_{\c{\b{m}}}\b{m}}&=&\c{\rho}-
\rho= 0,\\
\inner{\b{n},D_{\b{m}}\c{\b{m}}}-\inner{\b{n},D_{\c{\b{m}}}\b{m}}&=&
\left(-\mu \right)-\left(-\c{\mu} \right) = 0,\end{aligned}$$ which follow from the propagation equation (\[Dmm\_\]).
Therefore we will state that for quasilocal energy calculations in our 2+2 approach, the following three null gauge conditions must be satisfied, $$\begin{aligned}
\label{Gaugeconditions}
\tau + \c{\pi}=0, \qquad \rho = \c{\rho}, \qquad \mu = \c{\mu}.\end{aligned}$$
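As a consistency check of these three conditions, consider the simplest configuration: flat spacetime in spherical coordinates, static observers, and round spheres $r=\mathrm{const}$ for $\mathbb{S}$. The sketch below (an illustration of ours) evaluates the projections used in the derivation above, $\rho=\inner{\b{l},D_{\c{\b{m}}}\b{m}}$, $\mu=-\inner{\b{n},D_{\b{m}}\c{\b{m}}}$, $\tau=-\inner{\b{m},D_{\b{n}}\b{l}}$ and $\c{\pi}=\inner{\b{m},D_{\b{l}}\b{n}}$, directly from the Christoffel symbols and confirms that all three conditions hold:

```python
import math

# Flat spacetime in spherical coordinates (t, r, th, ph) with metric
# diag(-1, 1, r^2, r^2 sin^2 th); static observers, S = round spheres.

def metric_diag(x):
    _, r, th, _ = x
    return [-1.0, 1.0, r*r, (r*math.sin(th))**2]

def christoffel(x):
    """Nonzero Christoffel symbols Gamma^mu_{nu rho} of flat spherical coords."""
    _, r, th, _ = x
    G = [[[0.0]*4 for _ in range(4)] for _ in range(4)]
    G[1][2][2] = -r
    G[1][3][3] = -r*math.sin(th)**2
    G[2][1][2] = G[2][2][1] = 1.0/r
    G[2][3][3] = -math.sin(th)*math.cos(th)
    G[3][1][3] = G[3][3][1] = 1.0/r
    G[3][2][3] = G[3][3][2] = math.cos(th)/math.sin(th)
    return G

s = 1/math.sqrt(2)
l    = lambda x: [s, s, 0.0, 0.0]      # (E_0 + E_1)/sqrt(2)
n    = lambda x: [s, -s, 0.0, 0.0]     # (E_0 - E_1)/sqrt(2)
m    = lambda x: [0.0, 0.0, s/x[1], 1j*s/(x[1]*math.sin(x[2]))]
mbar = lambda x: [v.conjugate() for v in m(x)]

def D(X, Y, x, h=1e-6):
    """Directional covariant derivative (D_X Y)^mu, partials by differences."""
    G, out = christoffel(x), []
    for mu in range(4):
        v = 0j
        for nu in range(4):
            xp, xm = list(x), list(x)
            xp[nu] += h; xm[nu] -= h
            v += X(x)[nu]*(Y(xp)[mu] - Y(xm)[mu])/(2*h)
            for rho in range(4):
                v += G[mu][nu][rho]*X(x)[nu]*Y(x)[rho]
        out.append(v)
    return out

def inner(X, Yv, x):
    g = metric_diag(x)
    return sum(g[mu]*X(x)[mu]*Yv[mu] for mu in range(4))

x0 = [0.0, 2.0, 1.1, 0.4]
rho_np =  inner(l, D(mbar, m, x0), x0)   #  <l, D_mbar m> = rho
mu_np  = -inner(n, D(m, mbar, x0), x0)   # -<n, D_m mbar> = mu
tau_np = -inner(m, D(n, l, x0), x0)      # -<m, D_n l>    = tau
pib_np =  inner(m, D(l, n, x0), x0)      #  <m, D_l n>    = conj(pi)

print(abs(rho_np.imag), abs(mu_np.imag), abs(tau_np + pib_np))  # all ~ 0
```

Here $\rho$ and $\mu$ come out real and $\tau+\c{\pi}$ vanishes, as expected for a tetrad adapted to round spheres; for a generic tetrad none of this is automatic.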
It is easy to check that under a type-III Lorentz transformation of the complex null tetrad, i.e., $$\begin{aligned}
\textbf{l} & \rightarrow & a^2 \textbf{l},\\
\textbf{n} & \rightarrow & \frac{1}{a^2}\, \textbf{n},\\
\textbf{m} & \rightarrow & e^ {2i\theta} \textbf{m},\\
\overline{\textbf{m}} & \rightarrow & e^ {-2i\theta} \overline{\textbf{m}},\end{aligned}$$ the gauge conditions (\[Gaugeconditions\]) are preserved. This is because transformation of the spin coefficients $\tau,~\pi, ~\rho,~\mu$ under type-III Lorentz transformation follows as [@ODonnell:2003] $$\begin{aligned}
\tau & \rightarrow & e^ {2i\theta} \tau ,\\
\pi & \rightarrow & e^ {-2i\theta} \pi ,\\
\rho & \rightarrow & a^2 \rho ,\\
\mu & \rightarrow & \frac{1}{a^2} \mu ,\end{aligned}$$ in which $a^2$ and $2\theta$ respectively refer to the $boost$ and $spin$ parameters in the Newman-Penrose formalism. They are arbitrary real functions. Note that this transformation corresponds to $$\begin{aligned}
\t {E}{^{\mu} _{\hat{0}}} & \rightarrow & \gamma \left(\t {E}{^{\mu} _{\hat{0}}}-\beta \t {E}{^{\mu} _{\hat{1}}}\right),\\
\t {E}{^{\mu} _{\hat{1}}} & \rightarrow & \gamma \left(\t {E}{^{\mu} _{\hat{1}}}-\beta \t {E}{^{\mu} _{\hat{0}}}\right),\end{aligned}$$ where $$\beta =\frac{a^4-1}{a^4+1}\qquad \mathrm{and} \qquad \gamma=\frac{1}{\sqrt{1-\beta ^2}},$$ meaning that a type-III Lorentz transformation of the null tetrad corresponds to the boosting of the timelike observers along $\t {E}{^{\mu}_{\hat{1}}}$ on $\mathbb{T}$. This is the property we want to preserve in the definition and the investigation of our quasilocal system.
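These invariance properties, together with the relation between the boost parameter $a^2$ and the pair $(\beta,\gamma)$, can be checked numerically for arbitrary parameter values. The sketch below is our own; since the sign of $\beta$ depends on the direction convention of the boost, only the magnitude relation $\gamma=(a^2+a^{-2})/2$ is tested:

```python
import cmath, math

# Numerical check (arbitrary illustrative values) that the gauge
# conditions are preserved under a type-III Lorentz transformation,
# plus the identity gamma = 1/sqrt(1 - beta^2) = (a^2 + a^-2)/2.

a2, theta = 1.7, 0.6            # boost parameter a^2 and spin parameter theta

# Spin coefficients chosen to satisfy the gauge conditions:
pi_ = 0.3 - 0.5j                # arbitrary complex pi
tau = -pi_.conjugate()          # tau + conj(pi) = 0
rho, mu = -0.25, 0.4            # rho, mu real

# Type-III transformation of the spin coefficients
tau_p = cmath.exp(2j*theta)*tau
pi_p  = cmath.exp(-2j*theta)*pi_
rho_p = a2*rho                  # real stays real
mu_p  = mu/a2                   # real stays real

print(abs(tau_p + pi_p.conjugate()))   # 0: first condition preserved

# Boost parameters of the induced transformation of (E_0, E_1):
# beta = (a^4 - 1)/(a^4 + 1), written here in terms of a2 = a^2
beta  = (a2**2 - 1)/(a2**2 + 1)
gamma = 1/math.sqrt(1 - beta**2)
print(abs(gamma - (a2 + 1/a2)/2))      # 0 to machine precision
```

Note also that the product $\rho\mu$ is unchanged, since the boost factors $a^2$ and $a^{-2}$ cancel; this kind of boost-invariant combination is what survives in the quasilocal charges defined later.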
Raychaudhuri equation of a timelike world sheet {#RaychaudhuriCG}
===============================================
In [@Capovilla_Guven:1994], Capovilla and Guven construct a formalism to investigate the extrinsic geometry of an arbitrary dimensional timelike world sheet embedded in an arbitrary dimensional spacetime. We use their formalism to investigate the properties of a 2-dimensional world sheet, $\mathbb{T}$, embedded in a 4-dimensional spacetime as introduced in the previous section. Note that the Raychaudhuri equation of $\mathbb{T}$ carries information about how much the congruence of timelike *world sheets* — rather than world lines — expands, shears or rotates. In their construction, Capovilla and Guven define three types of covariant derivatives, whose distinction we now introduce.
Let the torsionless covariant derivative defined by the spacetime coordinate metric be ${\t {D}{_\mu}}$ and its projection onto the world sheet be denoted by ${\t {D}{_a}}={\t {E}{^\mu_a}}{\t {D}{_\mu}}$. On the world sheet $\mathbb{T}$, ${\t {\nabla}{_a}}$ is defined with respect to the intrinsic metric and ${\t {\tilde{\nabla}}{_a}}$ is defined on tensors under rotations of the normal frame, i.e., on $\mathbb{S}$. Likewise the projection of the spacetime covariant derivative on the instantaneous 2-surface $\mathbb{S}$ is ${\t {D}{_i}}={\t {N}{^\mu_i}}{\t {D}{_\mu}}$. On $\mathbb{S}$, ${\t {\nabla}{_i}}$ is defined with respect to the intrinsic metric and ${\t {\tilde{\nabla}}{_i}}$ is defined on tensors under rotations of the normal frame of $\mathbb{S}$.
To study the deformations of $\mathbb{T}$ and $\mathbb{S}$, the following extrinsic variables are introduced [@Capovilla_Guven:1994]. The extrinsic curvature, Ricci rotation coefficients and extrinsic twist of $\mathbb{T}$ are respectively defined by $$\begin{aligned}
\t {K}{_a_b^i}&=&-\t {g}{_\mu _\nu}\left(\t {D}{_a}\t {E}{^\mu _b}\right) \t {N}{^\nu ^i}=\t {K}{_b_a^i},\label{eq:K_a_b^i} \\
\t {\gamma}{_a_b_c}&=&\t {g}{_\mu _\nu}\left(\t {D}{_a}\t {E}{^\mu _b}\right) \t {E}{^\nu _c}=-\t {\gamma}{_a_c_b},\label{eq:gamma_a_b_c} \\
\t {w}{_a^i^j}&=&\t {g}{_\mu _\nu}\left(\t {D}{_a}\t {N}{^\mu ^i}\right) \t {N}{^\nu^j}=-\t {w}{_a^j^i}, \label{eq:w_a^i^j}\end{aligned}$$ while the extrinsic curvature, Ricci rotation coefficients and extrinsic twist of $\mathbb{S}$ are respectively defined by $$\begin{aligned}
\t {J}{_a^i^j}&=&\t {g}{_\mu _\nu}\left(\t {D}{^i}\t {E}{^\mu _a}\right) \t {N}{^\nu ^j}, \label{eq:J_a^i^j} \\
\t {\gamma}{_i_j_k}&=&\t {g}{_\mu _\nu}\left(\t {D}{_i}\t {N}{^\mu _j}\right) \t {N}{^\nu _k}=-\t {\gamma}{_i_k_j},\label{gamma_i_j_k} \\
\t {S}{_a_b^i}&=&\t {g}{_\mu _\nu}\left(\t {D}{^i}\t {E}{^\mu _a}\right) \t {E}{^\nu _b}=-\t {S}{_b_a^i}. \label{eq:S_a_b^i}\end{aligned}$$ By using those extrinsic variables one can investigate how the orthonormal basis $\{ \t {E}{_a},\t {N}{^i} \}$ varies when perturbed on $\mathbb{T}$ according to $$\begin{aligned}
\t {D}{_a}\t {E}{_b}&=&\t {\gamma}{_a_b^c}\t {E}{_c}-\t {K}{_a_b^i}\t {N}{_i},\\
\t {D}{_a}\t {N}{^i}&=&\t {K}{_a_b^i}\t {E}{^b}+\t {w}{_a^i^j}\t {N}{_j},\end{aligned}$$ or perturbed on $\mathbb{S}$ according to $$\begin{aligned}
\t {D}{_i}\t {E}{_a}=\t {S}{_a_b_i}\t {E}{^b}+\t {J}{_a_i_j}\t {N}{^j},\\
\t {D}{_i}\t {N}{_j}=-\t {J}{_a_i_j}\t {E}{^a}+\t {\gamma}{_i_j^k}\t {N}{_k}.\end{aligned}$$ Then the generalized Raychaudhuri equation, after being contracted with the orthogonal basis metrics ${\t {\eta}{^a^b}}$ and ${\t {\delta}{_i_j}}$ is given by $$\begin{aligned}
\label{eq:Raych_CapGuv}
\left(\t {\tilde{\nabla}}{_b}\t {J}{_a^i^j}\right){\t {\eta}{^a^b}}\t {\delta}{_i_j} &=-\left(\t {\tilde{\nabla}}{^i}\t {K}{_a_b^j}\right){\t {\eta}{^a^b}}\t {\delta}{_i_j}-\t {J}{_b^i_k}\t {J}{_a^k^j}{\t {\eta}{^a^b}}\t {\delta}{_i_j}
\nonumber \\
&\qquad {}
+g(R(\t {E}{_b},\t {N}{^i})\t {E}{_a},\t {N}{^j}){\t {\eta}{^a^b}}\t {\delta}{_i_j}
\nonumber \\
&\qquad {}
-\t {K}{_b_c^i}\t {K}{_a^c^j}{\t {\eta}{^a^b}}\t {\delta}{_i_j},\end{aligned}$$ where $\t {R}{^\alpha_\beta_\mu_\nu}$ is the Riemann tensor of the 4-dimensional spacetime [@Capovilla_Guven:1994], and $$g(R(\t {E}{_a},\t {N}{_i})\t {E}{_b},\t {N}{_j})=\t {R}{_\alpha_\beta_\mu_\nu}\t {E}{^\mu_a}\t {N}{^\nu_i}\t {E}{^\beta_b}\t {N}{^\alpha_j}.
\label{Rproj}$$ Note that $\t {w}{_b_i^k}$ transforms as a connection under the rotation of $\mathbb{S}$ and $$\label{eq:CurlyCovJ}
\t {\tilde{\nabla}}{_b}\t {J}{_a_i_j}=\underbrace{\t {\nabla}{_b}\t {J}{_a_i_j}}_\text{$\t {D}{_b}\t {J}{_a_i_j}-\t {\gamma}{_b_a^c}\t {J}{_c_i_j}$}-\,\, \t {w}{_b_i^k}\t {J}{_a_k_j}-\,\, \t {w}{_b_j^k}\t {J}{_a_i_k}.$$ Likewise, $\t {S}{_a_b^i}$ transforms as a connection under the rotation of $\mathbb{T}$ such that $$\label{eq:CurlyCovK}
\t {\tilde{\nabla}}{_i}\t {K}{_a_b^j}=\underbrace{\t {\nabla}{_i}\t {K}{_a_b^j}}_\text{$\t {D}{_i}\t {K}{_a_b^j}-\t {\gamma}{_i^j_k}\t {K}{_a_b^k}$}-\,\, \t {S}{_a_c_i}\t {K}{_b^c^j}-\,\, \t {S}{_b_c_i}\t {K}{_a^c^j}.$$ Previously, in [@Uzun_Wiltshire:2015], we interpreted Eq. (\[eq:Raych\_CapGuv\]) for spherically symmetric systems by defining a quasilocal thermodynamic equilibrium state and the associated quasilocal thermodynamic potentials. To define quasilocal thermodynamic equilibrium, we minimized the quasilocal Helmholtz free energy density which was defined via the mean extrinsic curvature of $\mathbb{S}$. This showed us that the equilibrium takes place when the system is defined by the set of quasilocal observers who are located at the apparent horizon. For further details and the natural outcomes of this interpretation one can refer to [@Uzun_Wiltshire:2015]. In the following sections we will investigate more general systems which are in nonequilibrium with their surroundings. Moreover, we will relax the condition of spherical symmetry.
Raychaudhuri equation with the Newman-Penrose formalism {#RaychaudhuriNP}
=======================================================
We use the relations (\[eq:null\_to\_t-like1\])-(\[eq:null\_to\_t-like4\]) in order to rewrite the contracted Raychaudhuri equation of our 2-dimensional timelike world sheet, Eq. (\[eq:Raych\_CapGuv\]), in the language of the NP formalism. This will allow us to compare the results of investigations of energy exchange mechanisms built on null cone variables with the notation used in quasilocal energy calculations.
Note that Eq. (\[eq:Raych\_CapGuv\]) is built on the extrinsic geometry of $\mathbb{T}$ and $\mathbb{S}$. Those extrinsic objects, like curvature, rotation and twist, are all measures of how much the dyad vectors change when they are propagated along each other. Likewise in the NP formalism, spin coefficients are defined via the changes of null vectors when they are propagated along each other with the relevant projections. A short summary of the NP formalism and the detailed calculations of our formalism transformation can be found in Appendices \[Appendix:A\] and \[Appendix:B\] respectively.
When the formalism transformation is applied, the contracted Raychaudhuri equation, (\[eq:Raych\_CapGuv\]), of $\mathbb{T}$ can be conveniently written as $$\label{Raych_simple}
\tilde{\nabla} _{\mathbb{T}}\mathcal{J}=-\tilde{\nabla} _{\mathbb{S}}\mathcal{K}-\mathcal{J}^2-\mathcal{K}^2+\mathcal{R_{\,W}},$$ where $$\begin{aligned}
\tilde{\nabla} _{\mathbb{T}}\mathcal{J} &:= \t {\eta}{^a^b}\t {\delta}{^i^j}\t {\tilde{\nabla}}{_b}\t {J}{_a_i_j} \nonumber \\
&= \left[D_{\b{n}}\left(\rho +\c{\rho }\right)-D_{\b{l}}\left(\mu +\c{\mu }\right)\right]\nonumber
\\
&\qquad {}
-\left[\left(\varepsilon +\c{\varepsilon}\right)\left(\mu +\c{\mu}\right)+\left(\gamma +\c{\gamma}\right)\left(\rho +\c{\rho}\right)\right]
\nonumber \\
&\qquad {}
+2\left[\left(\varepsilon - \c{\varepsilon}\right)\left(\mu -\c{\mu} \right)+\left(\gamma - \c{\gamma}\right)\left(\rho -\c{\rho}\right)\right],\label{DeltaJNPgen}\\
\tilde{\nabla} _{\mathbb{S}}\mathcal{K}
&:=\t {\eta}{^a^b}\t {\delta}{^i^j}\t {\tilde{\nabla}}{_i}\t {K}{_a_b_j}\nonumber \\ &=D_{\b{m}}\left(\pi -\c{\tau} \right)+D_{\c{\b{m}}}\left(\c{\pi} -\tau \right)
\nonumber \\
&\qquad {}
-\left[\left(\c{\alpha}-\beta \right)\left( \pi -\c{\tau} \right)+ \left(\alpha -\c{\beta} \right)\left( \c{\pi} -\tau \right)\right]
\nonumber \\
&\qquad {}
+ 2\left[ \left(\c{\alpha}+\beta \right)\left( \pi + \c{\tau} \right) +\left(\alpha + \c{\beta} \right)\left( \c{\pi} + \tau \right)\right],\label{DeltaKNPgen}\end{aligned}$$ $$\begin{aligned}
\mathcal{J}^2
&:=\t {J}{_b_i_k}\t {J}{_a_l_j}{\t {\eta}{^a^b}}\t {\delta}{^i^j}\t {\delta}{^l^k}
\nonumber \\
&= 2\left(\mu \c{\rho} + \c{\mu} \rho + \sigma \lambda + \c{\sigma} \c{\lambda} \right),\label{J2NPgen}\\
\mathcal{K}^2
&:=\t {K}{_b_c_i}\t {K}{_a_d_j}\t {\eta}{^a^b}\t {\eta}{^c^d} \t {\delta}{^i^j}
\nonumber \\
&= -2\left(\kappa \nu + \c{\kappa} \c{\nu} + \pi \tau + \c{\pi} \c{\tau} \right),\label{K2NPgen} \\
\mathcal{R_{\,W}}
&:=g(R(\t {E}{_b},\t {N}{_i})\t {E}{_a},\t {N}{_j}){\t {\eta}{^a^b}}\t {\delta}{^i^j}
\nonumber \\
&=D_{\b{n}}\left(\rho +\c{\rho }\right)-D_{\b{l}}\left(\mu +\c{\mu }\right)
\nonumber \\
&\qquad {}
+ D_{\b{m}}\left(\pi -\c{\tau} \right)+D_{\c{\b{m}}}\left(\c{\pi} -\tau \right)
\nonumber \\
&\qquad {}
-\left[\left(\alpha - \c{\beta} \right)\left( \c{\pi} - \tau \right)+\left(\c{\alpha}-\beta \right)\left( \pi - \c{\tau} \right)\right]
\nonumber \\
&\qquad {}
-\left[\left(\varepsilon +\c{\varepsilon}\right)\left(\mu +\c{\mu}\right)+\left(\gamma +\c{\gamma}\right)\left(\rho +\c{\rho}\right)\right]
\nonumber \\
&\qquad {}
-2\left(\kappa \nu + \c{\kappa} \c{\nu} \right)
+ 2\left(\rho \c{\mu} +\c{\rho} \mu +\lambda \sigma + \c{\lambda} \c{\sigma} \right).\label{RWNPgen}\end{aligned}$$ An alternative, more compact expression for $\mathcal{R_{\,W}}$ is $$\begin{aligned}
\mathcal{R_{\,W}}&=& -2\left(\psi _2 + \c{\psi}_2 + 4\Lambda \right).\label{RWSimpleNPgen}\end{aligned}$$ Now if we substitute the terms (\[DeltaJNPgen\])–(\[RWSimpleNPgen\]) back into Eq. (\[Raych\_simple\]) we see that the Raychaudhuri equation is not yet satisfied. This is simply because Capovilla and Guven impose the integrability condition in their formalism to define the extrinsic objects[^5] and we did not impose it after our change of formalism. We must further impose the null tetrad gauge conditions introduced in Sec. \[Null tetrad gauge\]. Thus, with $\tau +\c{\pi}=0$, $\rho =\c{\rho}$ and $\mu = \c{\mu}$ we get $$\begin{aligned}
\tilde{\nabla} _{\mathbb{T}}\mathcal{J}
&= 2\left(D_{\b{n}}\rho-D_{\b{l}}\mu \right)
\nonumber \\
&\qquad {}
-2\left[\left(\varepsilon +\c{\varepsilon}\right)\mu +\left(\gamma +\c{\gamma}\right)\rho \right]\label{DeltaJNP},\\
\tilde{\nabla} _{\mathbb{S}}\mathcal{K}
&=2\left(D_{\b{m}}\pi - D_{\c{\b{m}}}\tau \right)
\nonumber \\
&\qquad {}
- 2\left[\left(\c{\alpha}-\beta \right)\pi + \left(\alpha -\c{\beta} \right)\c{\pi}\right] \label{DeltaKNP},\\
\mathcal{J}^2
&= 4\mu \rho + 2\left( \sigma \lambda + \c{\sigma} \c{\lambda} \right)
\label{J2NP},\\
\mathcal{K}^2
&= -2\left(\kappa \nu + \c{\kappa} \c{\nu}\right) + 2\left(\pi \c{\pi} + \tau \c{\tau} \right)\label{K2NP},\\
\mathcal{R_{\,W}}
&=2 \left[D_{\b{n}}\rho - D_{\b{l}}\mu \right]
+ 2\left[D_{\b{m}}\pi - D_{\c{\b{m}}}\tau \right]
\nonumber \\
&\qquad {}
-2 \left[\left(\c{\alpha}-\beta \right)\pi + \left(\alpha -\c{\beta} \right)\c{\pi}\right]
\nonumber \\
&\qquad {}
-2\left[\left(\varepsilon +\c{\varepsilon}\right)\mu +\left(\gamma +\c{\gamma}\right)\rho \right]- 2\left(\kappa \nu + \c{\kappa} \c{\nu} \right)
\nonumber \\
&\qquad {}
+2\left(\tau \c{\tau}+\pi \c{\pi}\right) + 4 \mu \rho
+ 2\left(\sigma \lambda + \c{\sigma} \c{\lambda} \right)\label{RiemNP} ,\end{aligned}$$ and the alternative expression (\[RWSimpleNPgen\]) is unchanged. These variables now satisfy the Raychaudhuri equation as expected.
We further note that since the Einstein field equations have not yet been applied, (\[DeltaJNP\])–(\[RiemNP\]) are purely geometrical results irrespective of the underlying gravitational theory that governs the dynamics of the quasilocal observers. In order to satisfy the Einstein equations, all 16 of the field equations of the spin coefficients should be satisfied. However, we need to emphasize that this version of the contracted Raychaudhuri equation encodes the full content of two of the NP spin field equations. Let us consider the following NP spin field equations $$\begin{aligned}
\tensor{D}{_{\textbf{l}}}\,\mu - \tensor{D}{_{\textbf{m}}}\,\pi &= \mu \, \overline{\rho}-\left( \varepsilon +\overline{\varepsilon } \right) \mu + \sigma \lambda + \pi \, \overline{\pi}
\nonumber \\
&\qquad {}
- \left( \overline{\alpha }-\beta \right) \pi
-\kappa \,\nu +\psi _2 +2\Lambda,\label{eq:NP field1}\\
\tensor{D}{_{\textbf{n}}}\,\rho -\tensor{D}{_{\overline{\textbf{m}}}}\,\tau &= -\overline{\mu}\,\rho+ \left( {\it \gamma}+\overline{{\it \gamma}}\right)\rho-\sigma\,\lambda-\tau\,\overline{\tau}
\nonumber \\
&\qquad {}
-\left(\alpha-\overline{\beta}\right)\tau
+\kappa \,\nu-\psi _2-2\Lambda.\label{eq:NP field2}\end{aligned}$$ If we take $(\ref{eq:NP field1})+(\ref{eq:NP field1})^*-(\ref{eq:NP field2})-(\ref{eq:NP field2})^*$, where $*$ denotes the complex conjugate, then the result is the contracted Raychaudhuri equation of the world sheet under our gauge conditions. We will not attempt to restrict the general set of equations (\[DeltaJNP\])-(\[RiemNP\]) by further imposing the Einstein equations. Rather, we will apply it to spacetimes that are already solutions of the Einstein field equations.
A work-energy relation {#Work-energy}
======================
In this section we define quasilocal charges by using the terms that appear in the Raychaudhuri equation. Ultimately we will choose definitions so as to obtain a work-energy relation of the form $$\begin{aligned}
E_{\rm Total}=E_{\rm Dilatational}+E_{\rm Rotational}+W_{\rm Tidal}.\end{aligned}$$ In doing so, one of Kijowski’s quasilocal energy definitions will be our anchor. Let us recall the two energy definitions made by Kijowski which are derived from a gravitational action [@Kijowski:1997], $$E_{\rm K1}=-\frac{1}{16\pi}\oint_{\mathbb{S}}{d\mathbb{S} \left(\frac{H^2-k_0^2}{k_0}\right)},\label{eq:E_K1}$$ $$E_{\rm K2}=-\frac{1}{8\pi}\oint_{\mathbb{S}}{d\mathbb{S} \left(\sqrt{H^2}-k_0\right)},\label{eq:E_K2}$$ where the square of the mean extrinsic curvature, $H^2$, is the $k^2-l^2$ term that often appears in quasilocal energy definitions. The term $k_0$ is the extrinsic curvature of a spacelike 2-surface embedded into the 3-dimensional space of a reference spacetime which is chosen to be Minkowski, $\mathcal{M}^4$, in Kijowski’s work. Previously, we identified Eq. (\[eq:E\_K1\]) as internal energy [@Uzun_Wiltshire:2015] since it was associated with the quasilocal energy of a system in equilibrium which can potentially be used to do work, dissipate heat or exchange energy in other forms. The second expression (\[eq:E\_K2\]) is usually interpreted as the invariant mass energy of the system that is an analogue of a proper mass of a particle [@Epp:2000]. Therefore if we are after an expression which represents the energy that can be exchanged by the system, $H^2$ should be our central object.[^6]
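For orientation, consider the standard example of a round sphere of areal radius $r$ in a static slice of the Schwarzschild spacetime, for which $H^2=(4/r^2)(1-2M/r)$ and $k_0=2/r$; these are textbook inputs assumed here rather than derived in the text. The following sketch, assuming SymPy, evaluates Eqs. (\[eq:E\_K1\]) and (\[eq:E\_K2\]) and recovers the familiar values $E_{\rm K1}=M$ and $E_{\rm K2}=r\left(1-\sqrt{1-2M/r}\right)$:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
H2 = (4/r**2)*(1 - 2*M/r)   # assumed: squared mean curvature of the r = const sphere
k0 = 2/r                    # reference curvature of the round sphere in Minkowski space
area = 4*sp.pi*r**2         # area of the round sphere; integrands are constant on it

E_K1 = -(1/(16*sp.pi))*area*((H2 - k0**2)/k0)
E_K2 = -(1/(8*sp.pi))*area*(sp.sqrt(H2) - k0)

assert sp.simplify(E_K1 - M) == 0
assert sp.simplify(E_K2 - r*(1 - sp.sqrt(1 - 2*M/r))) == 0
```

Note that $E_{\rm K1}$ returns exactly $M$ at any radius, while $E_{\rm K2}$ only approaches $M$ as $r\to\infty$, illustrating the different boundary conditions behind the two definitions.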
Kijowski's quasilocal energy definitions $E_{\rm K1}$ and $E_{\rm K2}$ have the functional form $\left(H^2\right)^p$ with $p=1$ and $p=1/2$ respectively. This is because Kijowski applies a Legendre transform to the boundary Hamiltonian under different boundary conditions. In the case of $E_{\rm K1}$, he controls the information on the boundary of the world tube by imposing conditions on the induced 2-metric and the associated curvature. In this type of control he sets the components of the induced 2-metric of $\mathbb{S}$ to be time independent, in order to avoid extra volume inclusions. By contrast, for $E_{\rm K2}$, the entire information of the world tube is controlled by imposing conditions on its 3-metric. Those conditions require the world tube metric to have $g_{00}=1$ and $g_{0A}=0$, where $A$ refers to the indices of the spacelike boundary of the world tube. Ultimately $E_{\rm K1}$ and $E_{\rm K2}$ might be used in situations where different boundary conditions apply. However, this does not cause any problem in terms of the dimensionality of the quasilocal energies, even though the so-called reference terms, which ensure that the energy definitions are boost invariant, do not appear in the same format.
Previously, in [@Uzun_Wiltshire:2015], we defined quasilocal thermodynamic potentials at equilibrium for spherically symmetric spacetimes by using the terms that appear in the contracted Raychaudhuri equation, (\[eq:Raych\_CapGuv\]), of $\mathbb{T}$. We applied our formalism for metrics with boundary conditions $g_{00}=1$, $g_{0A}=0$ when the quasilocal observers are located at the apparent horizon. Therefore the quasilocal charges defined in [@Uzun_Wiltshire:2015] take the same form as $E_{\rm K2}$. Note that this refers to a very special state of the system in question.
In the present paper, we would like to define quasilocal charges for nonequilibrium states and to go beyond spherical symmetry. We will consider spacetimes whose induced 2-metric on $\mathbb{S}$ has time independent components, just as Kijowski did to define $E_{\rm K1}$. In order to define the quasilocal charges we will first multiply the contracted Raychaudhuri equation (\[Raych\_simple\]) by 2,[^7] and add the reference energy term, $k_0^2$, to each side. Since all of the terms that appear in Eq. (\[Raych\_simple\]) have dimension $(\text{length})^{-2}$ on account of their relationship to the Riemann tensor, to obtain a quasilocal energy expression we further divide by $k_0$ before integrating the equation over our closed 2-surface $\mathbb{S}$. Then we obtain the following quasilocal charges $$\begin{aligned}
E_{\rm Tot}&=&-\frac{1}{16\pi}\oint_{\mathbb{S}}{d\mathbb{S} \left[\frac{-\left(2\tilde{\nabla} _{\mathbb{T}}\mathcal{J} +k_0^2\right)}{k_0}\right]},\label{ETotal}\\
E_{\rm Dil}&=&-\frac{1}{16\pi}\oint_{\mathbb{S}}{d\mathbb{S} \left[\frac{2\mathcal{J}^2-k_0^2}{k_0}\right]},\label{EDilatational}\\
E_{\rm Rot}&=&-\frac{1}{16\pi}\oint_{\mathbb{S}}{d\mathbb{S} \left[\frac{2\tilde{\nabla} _{\mathbb{S}}\mathcal{K} + 2\mathcal{K}^2}{k_0}\right]},\label{Erotational}\\
W_{\rm Tid}&=&-\frac{1}{16\pi}\oint_{\mathbb{S}}{d\mathbb{S} \left[\frac{-2\mathcal{R_{\,W}}}{k_0}\right]},\label{WTidal}\end{aligned}$$ so that $$\begin{aligned}
E_{\rm Tot}&=&E_{\rm Dil}+E_{\rm Rot}+W_{\rm Tid}\end{aligned}$$ is satisfied.
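That the four charges sum correctly is a one-line consequence of Eq. (\[Raych\_simple\]); a minimal SymPy check of the integrand identity, treating the terms of (\[Raych\_simple\]) as free symbols and dropping the common factor $-d\mathbb{S}/16\pi$, reads:

```python
import sympy as sp

gradSK, J2, K2, RW, k0 = sp.symbols('gradSK J2 K2 RW k0')
gradTJ = -gradSK - J2 - K2 + RW  # contracted Raychaudhuri equation (Raych_simple)

# Bracketed integrands of (ETotal)-(WTidal)
e_tot = -(2*gradTJ + k0**2)/k0
e_dil = (2*J2 - k0**2)/k0
e_rot = (2*gradSK + 2*K2)/k0
w_tid = -2*RW/k0

# E_Tot = E_Dil + E_Rot + W_Tid holds identically
assert sp.simplify(e_tot - (e_dil + e_rot + w_tid)) == 0
```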
In the following subsections we explain the reasoning behind these quasilocal charge definitions, i.e., why we identify the charges as energies associated with dilatational and rotational degrees of freedom and as work done by the tidal fields of the system.
Energy associated with dilatational degrees of freedom
------------------------------------------------------
In spherical symmetry [@Uzun_Wiltshire:2015], we were able to write $\mathcal{J}^2:=\t {J}{_b_i_k}\t {J}{_a_l_j}{\t {\eta}{^a^b}}\t {\delta}{^i^j}\t {\delta}{^l^k}$ in terms of the square of the mean extrinsic curvature, $H^2$, of $\mathbb{S}$ via $2\mathcal{J}^2=H^2$. Note that confining the quasilocal observers to radial world lines in a spherically symmetric system results in corresponding, purely radial, null congruences that are shear-free. Indeed, for the generic case, $$\begin{aligned}
H^2 &:=&\t {J}{_a_i_k}\t {J}{_b_j_l}{\t {\eta}{^a^b}}\t {\delta}{^i^k}\t {\delta}{^j^l}=2\left(\rho + \c{\rho}\right)\left(\mu + \c{\mu}\right)\label{meanJ},\\
\mathcal{J}^2 &:=&\t {J}{_a_i_l}\t {J}{_b_j_k}{\t {\eta}{^a^b}}\t {\delta}{^i^k}\t {\delta}{^j^l}=2\left(\mu \c{\rho} + \c{\mu} \rho + \sigma \lambda + \c{\sigma} \c{\lambda} \right)\label{Raych_J}.\end{aligned}$$ Therefore with two of our null tetrad gauge conditions, $\rho = \c{\rho},~\mu = \c{\mu}$ and the shear-free case, $\sigma=0$, $$\begin{aligned}
H^2=2\mathcal{J}^2=4\left(\mu \c{\rho} + \c{\mu} \rho + \sigma \lambda + \c{\sigma} \c{\lambda} \right)=8\mu \rho .\end{aligned}$$ This is natural for radially moving observers of spherically symmetric systems. However, it is not clear which of the terms in (\[meanJ\]) and (\[Raych\_J\]) carries more information about the generic system in question.
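This reduction is immediate to verify; a small SymPy sketch, with the gauge conditions $\rho=\c{\rho}$, $\mu=\c{\mu}$ substituted and the shear set to zero:

```python
import sympy as sp

rho, mu, lam, lamb = sp.symbols('rho mu lam lamb')
sigma = sigmab = 0          # shear-free case
rhob, mub = rho, mu         # gauge conditions rho = conj(rho), mu = conj(mu)

H2 = 2*(rho + rhob)*(mu + mub)                          # (meanJ)
J2 = 2*(mu*rhob + mub*rho + sigma*lam + sigmab*lamb)    # (Raych_J)

assert sp.expand(H2 - 2*J2) == 0
assert sp.expand(H2 - 8*mu*rho) == 0
```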
According to the Goldberg-Sachs theorem, there exists a shear-free null congruence, $k^ \mu$, for a vacuum spacetime if [@Wald:1984] $$\begin{aligned}
k _{[\mu} C _{\nu]\alpha \beta [\gamma} k_{\sigma ]}k^ \alpha k^\beta =0\end{aligned}$$ is satisfied. This means that if we wish to have the shear-free property, we need to pick a principal null tetrad for our systems in vacuum. However, there is no such *a priori* necessity for our formalism to hold.
In [@Adamoetal:2012], Adamo *et al.* investigate the shear-free null geodesics of asymptotically flat spacetimes in detail. They note that shear-free or asymptotically shear-free null congruences may provide information about the asymptotic center of mass or the intrinsic magnetic dipole in certain cases. The importance of twistor theory, which is constructed entirely on shear-free null congruences, should also be noted. At this point, we should emphasize that the spacetimes we are interested in are not necessarily asymptotically flat.
In [@Ellis:2011], Ellis investigated shear-free timelike and null congruences. He concluded that imposing a shear-free condition on the null congruences restricts the way distant matter can influence the local gravitational field, so that information is lost. Note that shear is also the central concept of Bondi’s mass loss formulation: it is only if the null congruence has shear that one can define a *news function*, which is solely responsible for the mass loss via gravitational radiation at null infinity [@Bondietal:1962]. Ellis also emphasized that a nonrotating null congruence in vacuum cannot shear without expanding or contracting. Thus we cannot completely separate the effects of dilatation and shear for null congruences. We will combine them in the quasilocal charge constructed from the $\mathcal{J}^2$ term, (\[J2NP\]), and write $$\begin{aligned}
E_{\rm Dil}&=&-\frac{1}{16\pi}\oint_{\mathbb{S}}{d\mathbb{S} \left[\frac{2\mathcal{J}^2-k_0^2}{k_0}\right]}\nonumber \\
&=&-\frac{1}{16\pi}\oint_{\mathbb{S}}{d\mathbb{S} \left[\frac{8\mu \rho + 4 \left( \sigma \lambda + \c{\sigma} \c{\lambda} \right)-k_0^2}{k_0}\right]}.\label{Edil}\end{aligned}$$
Since we claim that the Raychaudhuri equation of the world sheet incorporates the physically meaningful quasilocal energy densities, one might ask how our $\mathcal{J}^2$ term, (\[Raych\_J\]), is directly connected to the boundary Hamiltonian, which is generically written in terms of the mean extrinsic curvature $H$, (\[meanJ\]). The link lies in the Gauss equation of the 2-surface $\mathbb{S}$ when it is embedded directly into spacetime [@Spivak:1979], i.e., $$\label{Gauss_S}
g(R(N_k, N_l)N_j,N_i)=\mathcal{R} _{ijkl} - \t {J}{_a_i_k}\t {J}{_b_j_l}{\t {\eta}{^a^b}} + \t {J}{_a_j_k}\t {J}{_b_i_l}{\t {\eta}{^a^b}},$$ where $\mathcal{R} _{ijkl}$ is the Riemann tensor associated with the 2-dimensional metric induced on $\mathbb{S}$. If we contract Eq. (\[Gauss\_S\]) with $\t {\delta}{^i^k}\t {\delta}{^j^l}$ we find $$\begin{aligned}
\label{Gauss_S_simple}
\mathcal{J}^2=H^2-\mathcal{R}_{\, \mathbb{S}}+2\left(\Psi _2+\c{\Psi _2}-2\Lambda -2 \Phi _{11}\right),\end{aligned}$$ in which $\mathcal{R}_{\, \mathbb{S}}:=\mathcal{R} _{ijkl}\t {\delta}{^i^k}\t {\delta}{^j^l}$ is the scalar intrinsic curvature of $\mathbb{S}$ and the derivation of $g(R(N_k, N_l)N^l,N^k)=-2\left(\Psi _2+\c{\Psi _2}-2\Lambda -2 \Phi _{11}\right)$ can be found in Appendix \[Otherderivations\]. Equation (\[Gauss\_S\_simple\]) not only allows us to connect our $\mathcal{J}^2$ term to the boundary Hamiltonian of general relativity, but it can also be used to relate different quasilocal energy definitions which are built on either the extrinsic or intrinsic curvature of $\mathbb{S}$.
Energy associated with rotational degrees of freedom
----------------------------------------------------
In the previous subsection we defined the quasilocal energy associated with the dilatational degrees of freedom by combining the real divergence and any shear of the null congruence constructed from the timelike dyad that spans the timelike surface $\mathbb{T}$. Now we will identify which spin coefficients are most significant in defining the energy associated with the rotational degrees of freedom.
Recall that by imposing the integrability conditions on our local dyad we made sure that the tangent vectors of the spacelike surface $\mathbb{S}$ always stay within the surface. Later, we transformed our construction into the NP formalism and stated that these conditions imply that the null vectors $\{\b{m}, \c{\b{m}}\}$, constructed from the spacelike dyad of $\mathbb{S}$, should satisfy certain null gauge conditions throughout the evolution of the quasilocal system. Then, under such gauge conditions, the magnitude of the change of these null vectors should be related to how much the quasilocal system rotates. Note that this interpretation makes sense only when one forces the spacelike dyad, constructed from $\{\b{m}, \c{\b{m}}\}$, to stay on $\mathbb{S}$ throughout the evolution.
Now let us define the spacetime covariant derivative via the directional covariant derivatives of the null tetrad and write $$\begin{aligned}
D_{\mu}&=-l_{\mu}D_{\b{n}}-n_{\mu}D_{\b{l}}+m_{\mu}D_{\c{\b{m}}}+\c{m}_{\mu}D_{\b{m}}.\end{aligned}$$ Then the change in components of $\{\b{m}, \c{\b{m}}\}$ follows as $$\begin{aligned}
D_{\mu}m^{\mu}&=-\inner{\b{l},D_{\b{n}}\b{m}} -\inner{\b{n},D_{\b{l}}\b{m}}
\nonumber \\
&\qquad
+\inner{\b{m},D_{\c{\b{m}}}\b{m}}
+\inner{\c{\b{m}},D_{\b{m}}\b{m}},\nonumber \\
D_{\mu}\c{m}^{\mu}&=-\inner{\b{l},D_{\b{n}}\c{\b{m}}}-\inner{\b{n},D_{\b{l}}\c{\b{m}}}
\nonumber \\
&\qquad +\inner{\b{m},D_{\c{\b{m}}}\c{\b{m}}}
+\inner{\c{\b{m}},D_{\b{m}}\c{\b{m}}}\nonumber.\end{aligned}$$ By using Eqs. (\[Dlm\])–(\[Dmm\_\]) we get $$\begin{aligned}
D_{\mu}m^{\mu}&=\left(\c{\pi} -\tau \right)+\left(\beta - \c{\alpha}\right),\nonumber \\
D_{\mu}\c{m}^{\mu}&= \left(\pi -\c{\tau} \right)+\left(\c{\beta} - \alpha \right).\nonumber\end{aligned}$$ Therefore, the spin coefficients $\{\pi,~\tau,~\alpha,~\beta \}$, their complex conjugates and their changes when one perturbs them on $\mathbb{S}$ can be used to define the energy associated with the rotational degrees of freedom. Since the terms $\tilde{\nabla} _{\mathbb{S}}\mathcal{K}$, (\[DeltaKNP\]), and $\mathcal{K}^2$, (\[K2NP\]), involve these spin coefficients and their changes we define $$\begin{aligned}
E_{\rm Rot}
&=-\frac{1}{16\pi}\oint_{\mathbb{S}}d\mathbb{S} \left[\frac{2\tilde{\nabla} _{\mathbb{S}}\mathcal{K} + 2\mathcal{K}^2}{k_0}\right] \nonumber \\
&=-\frac{1}{16\pi}\oint_{\mathbb{S}}d\mathbb{S}\frac{4}{k_0} \left[D_{\b{m}}\pi - D_{\c{\b{m}}}\tau - \pi \left(\c{\alpha}-\beta \right)
\right. \nonumber \\
&\qquad \qquad \qquad \qquad \left.
-\c{\pi} \left(\alpha -\c{\beta} \right)
+ \pi \c{\pi} + \tau \c{\tau}
\right. \nonumber \\
&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \left.
-\kappa \nu - \c{\kappa} \c{\nu}\right].\label{Erot}\end{aligned}$$ Note that the term $\left(\kappa \nu + \c{\kappa} \c{\nu}\right)$ vanishes if one picks the null vector $\b{l}$ or $\b{n}$, constructed from the timelike dyad that spans $\mathbb{T}$, to be a geodesic, i.e., $\kappa =0$ or $\nu =0$. In that case $E_{\rm Rot}$ can be written purely in terms of the spin coefficients $\{\pi,~\tau,~\alpha,~\beta \}$. However, there is no geometric or physical reason for us to demand that our null congruences be geodesic, and we will not impose the geodesic condition for the time being.
Work done by tidal distortions
------------------------------
If we want to understand the properties of a system via its energy exchange mechanisms we need to account for the different types of associated energies, especially in the nonequilibrium case. One needs to be careful about what is actually measured by the quasilocal observers. What is physical for any one observer is the tidal acceleration as measured by that observer’s local ruler and clock. The work done by tidal distortions of the whole system, however, requires the quasilocal observers to be placed in such a geometric configuration that the observers all agree on the fact that they are measuring the properties of the same system. In the previous sections, we stated that this is guaranteed by our integrability conditions.
In [@Hartle:1974], Hartle investigates the changes in the shape of an instantaneous horizon of a rotating black hole through the intrinsic scalar curvature, $\mathcal{R}_{\, \mathbb{S}}$, of a spacelike 2-surface when it is embedded into a 4-dimensional spacetime. He *chooses* a null tetrad gauge *so that* $\mathcal{R}_{\, \mathbb{S}}$ can be written in terms of a simple combination of $\Psi _2$ and the spin coefficients in vacuum. In the end, he finds $\mathcal{R}_{\, \mathbb{S}}=4\,{\rm Re}\left(-\Psi _2 +\rho \mu -\lambda \sigma \right)$. In [@Hayward:1994], Hayward provides a quasilocal version of the Bondi-Sachs mass via the Hawking mass [@Hawking:1968], in which the central object is again the complex intrinsic scalar curvature given by $\mathcal{R}_{\, \mathbb{S}}^H= -\Psi _2 +\sigma \sigma ' -\rho \rho ' +\Phi _{11}+\Pi $, in the formalism of weighted spin coefficients.
We believe that the $\mathcal{R_{\,W}}$ term that appears in Eq. (\[RiemNP\]) has a more fundamental meaning than $\mathcal{R}_{\, \mathbb{S}}$ in terms of the tidal distortion. In order to show why this should be so, previously in [@Uzun_Wiltshire:2015], we considered its analogue in the 3+1 picture. In particular, $$\label{eq:relacc}
\frac{d^2\xi ^\mu}{d\tau ^2}=\tensor{R}{^\mu_\nu _\rho _\sigma}u^{\nu}u^{\rho}\xi ^{\sigma}$$ is the relative tidal acceleration of the observers on neighboring timelike geodesics, where $\vec{\xi}$ is the spacelike separation 4-vector, $\tau$ is the proper time and $u^\mu$ are the 4-velocity vector field components. Thus one can define an object which we named *relative work density*, that mimics $W=\vec{F}\cdot \vec{x}$ by $$\label{eq:relwork}
\left(\frac{d^2\xi ^\mu}{d\tau ^2}\right)\xi_{\mu}=\tensor{R}{^\gamma _\nu _\rho _\sigma}u^{\nu}u^{\rho}\xi^{\sigma}\xi_{\gamma},$$ in which the separation vector was assumed to reside on $\mathbb{S}$. We also noted that, in the 3+1 picture, connecting the two world lines is essentially nonlocal. Eq. (\[eq:relacc\]) is applied only to neighboring world lines because the observers are trying to approximate locally the value of a quantity which is essentially quasilocal [@Uzun_Wiltshire:2015]. Therefore the quantity (\[eq:relwork\]) in the 2+2 picture, i.e., $\mathcal{R_{\,W}}=g(R(\t {E}{_b},\t {N}{_i})\t {E}{_a},\t {N}{_j}){\t {\eta}{^a^b}}\t {\delta}{^i^j}=-2\left(\psi _2 + \c{\psi}_2 + 4\Lambda \right)$, has a more fundamental importance, as it is an intrinsically quasilocal quantity. Thus, by Eq. (\[RWSimpleNPgen\]), $$\begin{aligned}
W_{\rm Tid}&=&-\frac{1}{16\pi}\oint_{\mathbb{S}}{d\mathbb{S} \left[\frac{-2\mathcal{R_{\,W}}}{k_0}\right]}\nonumber \\
&=&-\frac{1}{16\pi}\oint_{\mathbb{S}}{d\mathbb{S} \left[\frac{4\left(\psi _2 + \c{\psi}_2 + 4\Lambda \right)}{k_0}\right]}.\label{WTid}\end{aligned}$$ Note that the quasilocal tidal work of the system is written purely in terms of the Coulomb-like Weyl curvature scalar, $\psi _2$, and the Ricci scalar of the spacetime, since $\Lambda=R/24$. This interpretation does not contradict our intuition, since one would expect the quasilocal observers to measure a greater tidal distortion under a stronger Coulomb-like attraction and a higher Ricci curvature.
Total energy
------------
In [@Uzun_Wiltshire:2015] we associated the $\sqrt{2\mathcal{J}^2}$ term with the Helmholtz free energy density for spherically symmetric systems in equilibrium. Likewise $\sqrt{2\left|\tilde{\nabla} _{\mathbb{T}}\mathcal{J}\right|}$ was interpreted as the Gibbs free energy density of the system that *includes* the energy that is spontaneously exchanged with the surroundings to relax the system into its current state. However, in the present paper, we do not attempt to give a thermodynamic interpretation to the Raychaudhuri equation of Capovilla and Guven since systems far from equilibrium cannot be assigned unique thermodynamic relations even in classical thermodynamics [@Demirel:2007]. Therefore, by using the term $\tilde{\nabla} _{\mathbb{T}}\mathcal{J}$, (\[DeltaJNP\]), the total energy is represented by $$\begin{aligned}
E_{\rm Tot}&=-\frac{1}{16\pi}\oint_{\mathbb{S}}d\mathbb{S} \left[\frac{-\left(2\tilde{\nabla} _{\mathbb{T}}\mathcal{J} +k_0^2\right)}{k_0}\right] \nonumber \\
&=-\frac{1}{16\pi}\oint_{\mathbb{S}}d\mathbb{S}\frac{1}{k_0} \left\{-4\left[D_{\b{n}}\rho -D_{\b{l}}\mu \right]
\right. \nonumber \\
&\qquad \qquad \qquad \qquad \left.
+4\left[\left(\varepsilon +\c{\varepsilon}\right)\mu +\left(\gamma +\c{\gamma}\right)\rho \right]-k_0^2
\right\}.\label{Etot}\end{aligned}$$ Here the total energy combines two types of terms: (i) the quasilocal energy the system possesses, and (ii) the energy that is expended by the “internal” (tidal) forces to bring the quasilocal observers into a geometric configuration that defines $\mathbb{S}$. The first piece further splits into the energies associated with dilatational and rotational degrees of freedom. The second piece can be viewed as the energy that has already been expended by the system in order to create “room” for itself.
On the boost invariance of the quasilocal charges
-------------------------------------------------
Previously, in Sec. \[Null tetrad gauge\], it was shown that our tetrad conditions, (\[Gaugeconditions\]), are invariant under type-III Lorentz transformations which correspond to the boosting of physical observers in the only spacelike direction, $\t {E}{^{\mu} _{\hat{1}}}$, defined on $\mathbb{T}$. We also stated that for a well-defined construction, one would expect the matter plus gravitational energy of the system to be boost invariant.
In Appendix \[App:SpinBoost\] we show that all of the terms, (\[DeltaJNP\])-(\[RiemNP\]), that appear in the contracted Raychaudhuri equation are invariant under such spin-boost transformations. Therefore all of the quasilocal charges we defined in the current section are invariant under the boosting of the observers along the spacelike direction orthogonal to $\mathbb{S}$.
Applications {#Applications}
============
Radiating Vaidya spacetime
--------------------------
The Vaidya spacetime is used in investigations of radiating stars. It is associated with a spherically symmetric metric which reduces to the Schwarzschild metric when the mass function of the body is taken to be a constant. In standard coordinates with null coordinate $u$, the Vaidya metric is $$ds^2=-\left(1-\frac{2M(u)}{r}\right)du^2-2du\,dr+r^2d\theta ^2 +r^2\sin ^2 \theta d\phi ^2.$$ Let us pick the following complex null tetrad, $\{\b{l},\b{n},\b{m},\c{\b{m}}\}$, with $$\begin{aligned}
l^\mu &=& \partial _u-\left(\frac{1}{2}-\frac{M(u)}{r}\right)\partial _r,\\
n^\mu &=& \partial _r,\\
m^\mu &=& \frac{1}{\sqrt{2}}\left(\frac{1}{r}\partial _\theta +\frac{i}{r\sin{\theta}}\partial _\phi \right).\end{aligned}$$ For such a complex null tetrad, $\kappa,~\nu,~\sigma,~\lambda,~\tau, ~\pi$ all vanish so that $\pi + \c{\tau}=0$ is trivially satisfied. Also $\rho = \c{\rho}$, $\mu = \c{\mu}$ as expected. Therefore all of our integrability conditions are satisfied. When we evaluate the spin coefficients, their relevant directional derivatives and the curvature scalars, then substitute them in Eq. (\[Raych\_simple\]) we get $$\begin{aligned}
\tilde{\nabla} _{\mathbb{T}}\mathcal{J}
&=& \frac{-2}{r^2}+\frac{8M(u)}{r^3}\label{DeltaJVaidya},\\
\tilde{\nabla} _{\mathbb{S}}\mathcal{K}
&=&0 \label{DeltaKVaidya},\\
\mathcal{J}^2
&=& \frac{2}{r^2}-\frac{4M(u)}{r^3}\label{JVaidya},\\
\mathcal{K}^2
&=&0\label{K^2Vaidya},\\
\mathcal{R_{\,W}}
&=&\frac{4M(u)}{r^3}\label{R_WVaidya}.\end{aligned}$$ Here we immediately notice that the terms that have been associated with the rotational degrees of freedom, i.e., $\tilde{\nabla} _{\mathbb{S}}\mathcal{K}$ and $\mathcal{K}^2$, are zero. This is expected since Vaidya is a spherically symmetric spacetime.
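The stated properties of this tetrad can be checked symbolically. The sketch below (assuming sympy is available; the overall signs of the Newman-Penrose projections depend on one's signature conventions, but the vanishing of $\kappa$ and $\sigma$ and the reality of $\rho$ do not) computes the relevant projections of $\nabla _\nu l_\mu$ for the Vaidya metric:

```python
import sympy as sp

u, r, th, ph = sp.symbols('u r theta phi', positive=True)
M = sp.Function('M', real=True)(u)
X = [u, r, th, ph]

# Vaidya metric in (u, r, theta, phi), signature (-,+,+,+)
g = sp.Matrix([[-(1 - 2*M/r), -1, 0, 0],
               [-1, 0, 0, 0],
               [0, 0, r**2, 0],
               [0, 0, 0, r**2*sp.sin(th)**2]])
gi = g.inv()

def Gamma(a, b, c):
    """Christoffel symbol Gamma^a_{bc}."""
    return sum(gi[a, d]*(sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                         - sp.diff(g[b, c], X[d])) for d in range(4))/2

# the null tetrad legs quoted in the text (contravariant components)
l = [1, -(sp.Rational(1, 2) - M/r), 0, 0]
m = [0, 0, 1/(sp.sqrt(2)*r), sp.I/(sp.sqrt(2)*r*sp.sin(th))]
mbar = [sp.conjugate(comp) for comp in m]

l_low = [sum(g[mu, nu]*l[nu] for nu in range(4)) for mu in range(4)]

def cov_dl(mu, nu):
    """nabla_nu l_mu for the lowered vector l_mu."""
    return sp.diff(l_low[mu], X[nu]) - sum(Gamma(q, nu, mu)*l_low[q]
                                           for q in range(4))

def proj(A, B):
    return sp.simplify(sum(A[mu]*B[nu]*cov_dl(mu, nu)
                           for mu in range(4) for nu in range(4)))

kappa_proj = proj(m, l)     # proportional to NP kappa
sigma_proj = proj(m, m)     # proportional to NP sigma
rho_proj = proj(m, mbar)    # proportional to NP rho; should be real
```

Both the $\kappa$ and $\sigma$ projections vanish identically and the $\rho$ projection is real, consistent with the integrability conditions quoted above.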
In order to calculate our quasilocal charges we first need to find the so-called reference curvature $k_0$. This requires the isometric embedding of the $u=$ constant, $r=$ constant surface into Minkowski spacetime, $\mathcal{M}^4$, which is considered in the spherical coordinates $\{ \bar{r},\bar{\theta},\bar{\phi} \}$. For Vaidya, by setting $\{\bar{r}=r,\bar{\theta}=\theta,\bar{\phi}=\phi \}$ we see that the metric induced on $\mathbb{S}$ is trivially isometric to that of the 2-surface embedded in $\mathcal{M}^4$. Then $k_0$ is given by the mean extrinsic curvature of a round 2-sphere, i.e., $k_0=2/\bar{r}=2/r$. From Eqs. (\[Edil\]), (\[Erot\]), (\[WTid\]) and (\[Etot\]) we then have $$\begin{aligned}
E_{\rm Tot}
&=&\frac{-1}{16\pi}\int_{\mathbb{S}}d\mathbb{S}\frac{-\left[ 2\left(\frac{-2}{r^2}+\frac{8M(u)}{r^3}\right)+\frac{4}{r^2}\right]}{\frac{2}{r}}= 2M\left(u\right) \nonumber ,\\
E_{\rm Dil}
&=&\frac{-1}{16\pi}\int_{\mathbb{S}}{d\mathbb{S}\frac{\left[2\left(\frac{2}{r^2}-\frac{4M(u)}{r^3}\right)-\frac{4}{r^2}\right]}{\frac{2}{r}}}= M\left(u\right)\nonumber, \\
W_{\rm Tid}
&=&\frac{-1}{16\pi}\int_{\mathbb{S}}{d\mathbb{S}\frac{\left[-2\left(\frac{4M(u)}{r^3}\right)\right]}{\frac{2}{r}}}= M\left(u\right)\nonumber, \\
E_{\rm Rot}&=&0.\end{aligned}$$
Note that the null tetrad we chose in order to satisfy our gauge conditions turned out to be shear-free. Therefore $H^2=2\mathcal{J}^2$ holds in this case and thus $E_{\rm Dil}=E_{\rm K1}$. Also, the spacetime Ricci scalar, $24\Lambda$, vanishes. Therefore $\mathcal{R_{\,W}}=-2\left(\Psi _2 +\c{\Psi} _2 \right)=-4\Psi _2$ and the $\mathcal{R_{\,W}}$ term is solely determined by the Coulomb-like gravitational potential.
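These charge integrals can be reproduced symbolically. A minimal sketch (assuming sympy; the integrands are transcribed directly from the display above, with $d\mathbb{S}=r^2\sin \theta \,d\theta \,d\phi$ and $k_0=2/r$):

```python
import sympy as sp

u, r, th, ph = sp.symbols('u r theta phi', positive=True)
M = sp.Function('M')(u)

dS = r**2*sp.sin(th)       # area element of the u, r = const 2-sphere
k0 = 2/r                   # reference energy density

dTJ = -2/r**2 + 8*M/r**3   # \tilde\nabla_T J
J2 = 2/r**2 - 4*M/r**3     # J^2
RW = 4*M/r**3              # R_W

def charge(density):
    # common prefactor -1/(16 pi) and division by k0
    return sp.simplify(-sp.integrate(density*dS/k0,
                                     (th, 0, sp.pi),
                                     (ph, 0, 2*sp.pi))/(16*sp.pi))

E_tot = charge(-(2*dTJ + 4/r**2))
E_dil = charge(2*J2 - 4/r**2)
W_tid = charge(-2*RW)
```

This returns $E_{\rm Tot}=2M(u)$, $E_{\rm Dil}=M(u)$ and $W_{\rm Tid}=M(u)$, matching the values quoted above.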
To visualize a simple evolution, consider the mass function $M(u)=M_0-a\,u$, where $a$ is a positive constant. Linear mass functions of this kind have previously been used in the literature to investigate black hole evaporation (cf. [@Hiscock:1980], [@Waugh_Lake:1986], [@Podolsky_Svitek:2005]). With this choice of mass function, at $u=0$ we have the case of a Schwarzschild black hole \[see Fig. \[Fig:Schwenergy\]\] which, given enough time, eventually evaporates so that the spacetime becomes Minkowski \[see Fig. \[Fig:Minkenergy\]\]. The quasilocal charges fall off linearly with the time parameter $u$ \[see Fig. \[Vaidyatimeevo\]\].
Now let us consider $\tilde{\nabla} _{\hat{0}} E_{\rm Dil}=\tilde{\nabla} _{\hat{0}}\left(E_{\rm K1}\right)=\t {E}{^\mu _{\hat{0}}}\partial _\mu \left(E_{\rm K1}\right)$. Following relation (\[eq:null\_to\_t-like1\]) and with the choices we have made here for $\b{l}$ and $\b{n}$, $$\begin{aligned}
\tilde{\nabla} _{\hat{0}}E_{\rm Dil}&=&\frac{1}{\sqrt{2}}\left[\partial _u+\left(\frac{1}{2}+\frac{M(u)}{r}\right)\partial _r \right]M(u)\nonumber \\
&=&\frac{1}{\sqrt{2}}\frac{\partial M(u)}{\partial u}.\nonumber\end{aligned}$$
According to the Einstein field equations, $-\frac{2}{r^2}\frac{\partial M(u)}{\partial u}=8\pi \tilde{\rho}$, where $\tilde{\rho}$ is the energy density of the null dust. This shows that, for the Vaidya spacetime, the dilatational energy of the system, which could in principle be lost through work, heat or other channels, is lost purely to radiation.
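The quoted field equation can be checked by computing the Einstein tensor of the Vaidya metric directly. A sympy sketch (using the $(-,+,+,+)$ signature and standard curvature conventions; only $G_{uu}$ should survive, with $G_{uu}=-\frac{2}{r^2}\partial _u M$, as appropriate for pure null dust):

```python
import sympy as sp

u, r, th, ph = sp.symbols('u r theta phi', positive=True)
M = sp.Function('M')(u)
X = [u, r, th, ph]

g = sp.Matrix([[-(1 - 2*M/r), -1, 0, 0],
               [-1, 0, 0, 0],
               [0, 0, r**2, 0],
               [0, 0, 0, r**2*sp.sin(th)**2]])
gi = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(gi[a, d]*(sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
             - sp.diff(g[b, c], X[d])) for d in range(4))/2
         for c in range(4)] for b in range(4)] for a in range(4)]

def ricci(s, n):
    """Ricci tensor component R_{sn}."""
    return sum(sp.diff(Gam[p][n][s], X[p]) - sp.diff(Gam[p][p][s], X[n])
               + sum(Gam[p][p][q]*Gam[q][n][s] - Gam[p][n][q]*Gam[q][p][s]
                     for q in range(4))
               for p in range(4))

Ric = sp.Matrix(4, 4, lambda s, n: ricci(s, n))
Rs = sum(gi[i, j]*Ric[i, j] for i in range(4) for j in range(4))
G = sp.simplify(Ric - g*Rs/2)   # Einstein tensor, both indices down

G_uu = sp.simplify(G[0, 0])     # should equal -2 M'(u) / r^2
```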
The $C$-metric
--------------
For our second application we want to consider a nonspherically symmetric spacetime. The $C$-metric is not spherically symmetric and it has many interpretations depending on its coordinate representation. We will consider the coordinate representation which was introduced by Hong and Teo [@Hong_Teo:2004], $$\label{HongTeometric}
ds ^2=\frac{1}{H}\left(-F\,d\tau ^2 +\frac{dy^2}{F}+\frac{dx^2}{G}+G\,d\phi ^2\right),$$ with $$\begin{aligned}
H(x,y)&:=&A^2\left(x+y\right)^2 \nonumber,\\
G(x)&:=&\left(1-x^2\right)\left(1+2AMx\right)\nonumber,\\
F(y)&:=&-\left(1-y^2\right)\left(1-2AMy\right)\nonumber.\end{aligned}$$ Griffiths *et al.* [@Griffithsetal:2006] transformed this cylindrical form of the metric into spherical coordinates by applying the coordinate transformation $\{ \tau =At, x=\cos \theta , y= 1/(Ar) \}$ and gave physical interpretations to the $C$-metric. The transformed metric is written as [@Griffithsetal:2006] $$\begin{aligned}
\label{Griffithsmetric}
ds ^2 =\frac{1}{\Delta}\left(-Q dt^2+\frac{dr^2}{Q}+\frac{r^2d\theta ^2}{P}+Pr^2\sin ^2\theta d\phi ^2 \right),\end{aligned}$$ where $$\begin{aligned}
\Delta (r,\theta)&:= &\left(1+Ar\cos \theta \right)^2\nonumber,\\
Q(r)&:=&\left(1-\frac{2M}{r}\right)\left(1-A^2r^2\right)\nonumber,\\
P(\theta)&:=&1+2AM\cos \theta \nonumber,\end{aligned}$$ with $A$ and $M$ being constants. Note that at $r=2M$ and at $r=1/A$ the metric has coordinate singularities and one needs to satisfy the $A^2M^2<1/27$ condition in order to preserve the metric signature. Furthermore, Eq. (\[Griffithsmetric\]) reduces to the metric of the Schwarzschild black hole in standard curvature coordinates when one sets $A=0$. Because of this, following Griffiths *et al.* [@Griffithsetal:2006], we will interpret the $C$-metric as the metric of an accelerated black hole. At this point we note that the $C$-metric is sometimes interpreted as a metric representing two causally disconnected black holes that are joined by a strut and accelerating away from each other [@Bonnor:1983; @Bonnor:1988; @Cornish_Uttley:1995]. However, this interpretation is valid only when the metric is extended across each horizon, i.e., $r=2M$ and $r=1/A$ [@Griffithsetal:2006]. For the application of our quasilocal construction we will not consider such an extension of the metric, and the resulting quasilocal charges will correspond to the charges of a single accelerated black hole.
Let us consider the following null tetrad that is generated by the double dyad of the quasilocal observers: $$\begin{aligned}
l^\mu &=& \frac{1}{\sqrt{2}}\left[\frac{\Delta }{Q(r)}\right]^{1/2}\partial _t-\frac{1}{\sqrt{2}}\left[\Delta \,Q(r)\right]^{1/2}\partial _r ,\nonumber\\
n^\mu &=& \frac{1}{\sqrt{2}}\left[\frac{\Delta }{Q(r)}\right]^{1/2}\partial _t+\frac{1}{\sqrt{2}}\left[\Delta \,Q(r)\right]^{1/2}\partial _r ,\nonumber\\
m^\mu &=& \frac{1}{\sqrt{2}}\left[\frac{\Delta \, P(\theta )}{r^2}\right]^{1/2}\partial _\theta +\frac{i}{\sqrt{2} \sin \theta}\left[\frac{\Delta}{r^2 P(\theta )}\right]^{1/2}\partial _\phi . \nonumber\end{aligned}$$ For such a null tetrad, our integrability conditions $\{ \pi + \c{\tau}=0, \rho = \c{\rho}, \mu = \c{\mu} \}$ hold. The only vanishing spin coefficients are $\kappa,~\nu,~\lambda$ and $\sigma$, meaning that our null congruences, constructed from the timelike dyads residing on the 2-surface $\mathbb{T}$, are composed of geodesics which are shear-free. As noted earlier this last property is not a necessary condition in our formalism. With the remaining nonvanishing spin coefficients and the variables of the contracted Raychaudhuri equation given in (\[DeltaJNP\])-(\[RiemNP\]) we get $$\begin{aligned}
\tilde{\nabla} _{\mathbb{T}}\mathcal{J}
&=& \frac{1}{r^3}\left[P(\theta)\left(6r-2A^2r^3\right)-4A\cos \theta r^2
\right. \nonumber \\
&&\qquad \left.
+8\left(M-r\right)\right],\label{C-DeltaJ}\\
\tilde{\nabla} _{\mathbb{S}}\mathcal{K}
&=&\frac{2A}{r}\left[2AM\cos ^2 \theta\left(2A\cos \theta r +3\right)
\right. \nonumber \\
&&\qquad \left.
+\cos \theta \left(A\cos \theta r +2\right)
\right. \nonumber \\
&&\qquad \qquad \left.
+A\left(r-2M\right)\right],\label{C-DeltaK} \\
\mathcal{J}^2
&=& \frac{2Q(r)}{r^2}, \label{C-J^2}\\
\mathcal{K}^2
&=& 2A^2P(\theta )\sin ^2\theta,\label{C-K^2} \\
\mathcal{R_{\,W}}
&=&4M\left(\frac{1}{r}+A\cos \theta\right)^3.\label{C-Rw}\end{aligned}$$ In order to calculate the quasilocal charges we must first calculate the reference energy density, $k_0$. We isometrically embed $\mathbb{S}$ into $\mathcal{M}^4$, by setting $$\begin{aligned}
\frac{r^2d\theta ^2}{\Delta \, P(\theta) }&=&\bar{r}^2d\bar{\theta}^2,\\
\frac{P(\theta)r^2\sin ^2\theta d\phi ^2}{\Delta}&=&\bar{r}^2\sin ^2\bar{\theta}d\bar{\phi}^2,\end{aligned}$$ and demand that the observers measure the same solid angle in both coordinate systems. This is satisfied by choosing $\bar{r}=r\Delta ^{-1/2}$ and then $k_0=2/\bar{r}$. Here we should note that for a generic $C$-metric the angular coordinates are defined within $\{ 0< \theta < \pi , -C\pi < \phi < C\pi \}$ where $C$ is the remaining parameter, other than $A$ and $M$, that parametrizes the spacetime. It is closely related to the “deficit/excess angle” that tells us how much $\mathbb{S}$ deviates from the spherical symmetry. For example, repeating Griffiths *et al.*’s discussion $$\begin{aligned}
\frac{\rm circumference}{\rm radius}=
\begin{cases}
\lim_{\theta \to 0}{\frac{2\pi CP(\theta)\sin \theta}{\theta}} =2\pi C\left(1+2AM\right)\nonumber\\
\nonumber\\
\lim_{\theta \to \pi}{\frac{2\pi CP(\theta)\sin \theta}{\pi - \theta}}=2\pi C\left(1-2AM\right)\nonumber\\
\end{cases}\end{aligned}$$ shows us that setting $C=1$, as we choose to do here, will introduce excess and deficit angles on the spacelike surface $\mathbb{S}$ due to conical singularities. This, together with our choices for the coordinate functions of $\mathcal{M}^4$, guarantees that the solid angle is the same for the quasilocal observers of the physical and the reference spacetimes.
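Before integrating, one can verify that the densities (\[C-DeltaJ\])–(\[C-Rw\]) satisfy the contracted Raychaudhuri equation in the form $\tilde{\nabla} _{\mathbb{T}}\mathcal{J}+\tilde{\nabla} _{\mathbb{S}}\mathcal{K}+\mathcal{J}^2+\mathcal{K}^2-\mathcal{R_{\,W}}=0$, which is our reading of Eq. (\[Raych\_simple\]) and is also satisfied by the corresponding Vaidya and van Stockum quantities. A sympy sketch:

```python
import sympy as sp

r, th, A, M = sp.symbols('r theta A M', positive=True)

P = 1 + 2*A*M*sp.cos(th)
Q = (1 - 2*M/r)*(1 - A**2*r**2)

# densities (C-DeltaJ)-(C-Rw), transcribed from the text
dTJ = (P*(6*r - 2*A**2*r**3) - 4*A*sp.cos(th)*r**2 + 8*(M - r))/r**3
dSK = (2*A/r)*(2*A*M*sp.cos(th)**2*(2*A*sp.cos(th)*r + 3)
               + sp.cos(th)*(A*sp.cos(th)*r + 2) + A*(r - 2*M))
J2 = 2*Q/r**2
K2 = 2*A**2*P*sp.sin(th)**2
RW = 4*M*(1/r + A*sp.cos(th))**3

residual = sp.simplify(dTJ + dSK + J2 + K2 - RW)   # should vanish identically
```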
We obtain the quasilocal charges by substituting the quasilocal charge densities, in Eqs. (\[C-DeltaJ\])–(\[C-Rw\]), into the definitions (\[ETotal\])–(\[WTidal\]) and numerically integrating them. The results, for the specific choice $A=1/(\sqrt{28}M)$ made to perform the numerical integration, are presented in Fig. \[fig:C\_Energies\].
![Quasilocal charges of the $C$-metric which is parametrized with $A=\frac{1}{\sqrt{28}M}$. Those quasilocal charges are meaningful only in the region $2M<r<\sqrt{28}M \approx 5.29M$ due to the coordinate singularities.[]{data-label="fig:C_Energies"}](c_metric_charges){width="0.9\columnwidth"}
From Fig. \[fig:C\_Energies\] we immediately recognize that $E_{\rm K1}=E_{\rm Dil}$ decreases as the size of the system increases. For the case of Schwarzschild, i.e., $A=0$, we expect this curve to be flat, as in Fig. \[Fig:Schwenergy\]. For lower values of acceleration, $E_{\rm Dil}$ gets flatter as expected. This shows that in order for the black hole to be accelerated more, more energy should be input to the system by an *external* agent. In other words, the potential work that can be done *by* the system is lower. Note that after a certain size of the system, $E_{\rm Dil}$ and $E_{\rm Tot}$ take negative values. It may seem counterintuitive that quasilocal observers could measure a “negative energy.” To better understand this result, consider the metric (\[Griffithsmetric\]) and define $g_{tt}=-\left(Q(r)/\Delta\right)=-\left[1+2\Phi (r,\theta)\right]$ where $\Phi(r,\theta)$ plays the role of the “gravitational potential.” In Fig. \[fig:C\_gravpot28\] we plot $\Phi(r,\theta)$ for observers located at different polar angles.
![Radial behavior of the gravitational potential of the $C$-metric, which is parametrized with $A=\frac{1}{\sqrt{28}M}$, plotted for observers located at different polar angles. Those potentials are meaningful only in the region $2M<r<\sqrt{28}M \approx 5.29M$ due to the coordinate singularities.[]{data-label="fig:C_gravpot28"}](c_metric_pot28){width="0.9\columnwidth"}
We observe that $\Phi(r,\theta)$ is monotonic only for the observers located at $\theta =\pi$. Moreover, for observers located at $\theta > 0.75 \pi$ the gravitational potential changes sign after a certain radial distance. This shows that the effect of the external agent on the system is repulsive. Then the positive total energy $E_{\rm Dil}+W_{\rm Tid}$, which corresponds to a system that has an otherwise attractive nature, cannot overcome the repulsive effect of the external agent which causes the black hole to accelerate. The $E_{\rm Tot}=0$ point can be viewed as the minimum energy state of the system, below which it cannot exist without the energy exchange provided by an external agent.
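The sign change of the potential is straightforward to check numerically. A sketch (with $M=1$ and the acceleration parameter $A=1/(\sqrt{28}M)$ used in the figure; $\Phi =(Q/\Delta -1)/2$ follows from the definition of $g_{tt}$ above):

```python
import math

M = 1.0
A = 1.0/math.sqrt(28.0)   # acceleration parameter used in the figure

def Phi(r, theta):
    """Gravitational potential from g_tt = -(Q/Delta) = -(1 + 2*Phi)."""
    Delta = (1.0 + A*r*math.cos(theta))**2
    Q = (1.0 - 2.0*M/r)*(1.0 - A**2*r**2)
    return (Q/Delta - 1.0)/2.0

# sample the physically meaningful region 2M < r < sqrt(28) M
rs = [2.0 + 0.1*k for k in range(1, 33)]
changes_sign_at_pi = (any(Phi(r, math.pi) < 0 for r in rs)
                      and any(Phi(r, math.pi) > 0 for r in rs))
positive_anywhere_at_0 = any(Phi(r, 0.0) > 0 for r in rs)
```

For observers at $\theta =\pi$ the potential passes from negative to positive values inside the allowed radial range, while on the axis $\theta =0$ it remains negative throughout.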
Also recall that the $C$-metric is interpreted as two black holes which are accelerated *away* from each other. This is a signature of the repulsive behavior we observe here. Note that here we are investigating one of the most extreme cases for an accelerated black hole, since for acceleration parameters greater than $1/\left(\sqrt{27}M\right)$ the metric changes signature. Therefore the change in the behavior of the gravitational potential, and hence the change in the sign of the total energy of the system, is not unexpected. We do not observe such behavior for the Schwarzschild geometry, as the gravitational potential is monotonic with constant sign for a static black hole. In order to investigate how the acceleration parameter, $A$, affects the behavior of the gravitational potential, see Fig. \[fig:C\_gravpots\]. We plot $\Phi(r,\theta)$ for observers located at $\theta =\pi$, $\theta =\pi /2$ and $\theta =0$ in Figs. \[Fig:PhiSp\], \[Fig:PhiEq\] and \[Fig:PhiNp\] respectively, investigating the effect of $A$ in each case. We observe that only in the $A=0$ case does the gravitational potential not change behavior.
For a more detailed investigation of the behavior of the gravitational potential of a $C$-metric, depending on the observer position and on the acceleration parameter, see [@Farhoosh_Zimmerman:1980].
In order to understand what this means for the acceleration vector of an observer of the quasilocal system, let us set the 4-velocity of the observer to be $u^\mu = \t {E}{^\mu _{\hat{0}}} = \frac{1}{\sqrt{2}}\left(l^{\mu}+n^{\mu}\right)$. Then the acceleration vector is obtained by $a^\mu=D_{\t {E}{_{\hat{0}}}}\t {E}{^\mu _{\hat{0}}}=a_r \partial _r +a_\theta \partial
_\theta$ with $$\begin{aligned}
a_r&=&-\frac{1}{r^2}\left[A^3r^4\cos \theta \left(AM\cos \theta +1\right)
\right. \nonumber \\
&&\qquad \left.
+A^2r^2\cos ^2\theta \left(r-3M\right)+A^2r^2\left(r-2M\right)
\right. \nonumber \\
&& \qquad \left.
+Ar\cos \theta \left(r-4M\right)-M
\right],\label{C-a_r}\\
a_\theta &=& \frac{A\sin \theta}{r}P(\theta)\Delta ^{1/2} .\end{aligned}$$
As can be seen from Fig. \[fig:acceleration\], the sign of the radial component of the acceleration vector changes depending on the radial and angular position.
![Radial behavior of $a_r$ for observers at different polar angles. We consider the acceleration vector only in the region $2M<r<\sqrt{28}M \approx 5.29M$.[]{data-label="fig:a_r"}](c_metric_acc){width="0.9\columnwidth"}
In Fig. \[fig:a\_r\] we plot the radial dependence of the radial component, $a_r$, for different observer positions. We observe that for all observers, except the one located at $\theta =\pi$, the direction of the radial acceleration flips. This is due to the change in the behavior of the gravitational potential and explains why $E_{\rm Dil}$ takes negative values after a critical point.
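The flip of the radial acceleration can be confirmed numerically from Eq. (\[C-a\_r\]) (again with $M=1$ and $A=1/(\sqrt{28}M)$):

```python
import math

M = 1.0
A = 1.0/math.sqrt(28.0)

def a_r(r, theta):
    """Radial component of the acceleration vector, Eq. (C-a_r)."""
    c = math.cos(theta)
    return -(A**3*r**4*c*(A*M*c + 1.0)
             + A**2*r**2*c**2*(r - 3.0*M)
             + A**2*r**2*(r - 2.0*M)
             + A*r*c*(r - 4.0*M) - M)/r**2

rs = [2.0 + 0.1*k for k in range(1, 33)]   # sample 2M < r < sqrt(28) M
flips_at_theta0 = (any(a_r(r, 0.0) > 0 for r in rs)
                   and any(a_r(r, 0.0) < 0 for r in rs))
flips_at_theta_pi = any(a_r(r, math.pi) < 0 for r in rs)
```

In this sample $a_r$ changes sign along the $\theta =0$ axis but keeps a fixed sign for the observer at $\theta =\pi$, in line with the discussion above.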
The divergence of $E_{\rm Dil}$ and $E_{\rm Tot}$ at $r = \sqrt{28}M$ in Fig. \[fig:C\_Energies\] results from this point being the second coordinate singularity of our $C$-metric, since we chose $A=1/(\sqrt{28}M)$ and the coordinate singularities occur at $\{ r=2M, r=1/A\}$. This is expected, since beyond this point the nature of the spacetime geometry is different.
We also recognize that the system does not possess any energy which can be attributed to rotational degrees of freedom. This is not immediately obvious since the densities (\[C-DeltaK\]) and (\[C-K\^2\]) which appear in definition (\[Erot\]) are nonzero. However, what is physical for the quasilocal observers are the quasilocal charges, not the quasilocal densities. Having zero energy associated with the rotational degrees of freedom is expected since the black hole in question is nonrotating.
Finally we observe that the work that has already been done by the tidal fields, $W_{\rm Tid}$, is positive for all system sizes and takes the same value as in the case of a static black hole. This means that although the individual observers could measure tidal squeezing and tidal stretching depending on their position, the overall effect on the system corresponds to a positive quasilocal charge.
Lanczos-van Stockum dust
------------------------
For our next application we would like to consider a rotating spacetime. For this, we pick one of the simplest exact solutions of the Einstein equations: a rigidly rotating dust cylinder. This solution was first found by Lanczos [@Lanczos:1924], and later rediscovered and matched to a vacuum exterior by van Stockum [@vanStockum:1937]. Its physical and mathematical aspects have been investigated intensively in the literature [@Bonnor:1977; @Bonnor:1980; @daSilva:1996xt; @deAraujoetal:2000; @Bonnor:2005; @Zinggetal:2006; @Brateketal:2007; @Gurlebeck:2009]. More recently, rotating dust metrics have been used to model galaxies in attempts to understand general relativistic effects on galaxy rotation curves [@Cooperstock_Tieu:2006; @Cooperstock_Tieu:2007; @Balasin_Grumiller:2008].
Van Stockum's original derivation does not yield an asymptotically flat spacetime. The energy density of the dust, $\tilde{\rho}$, increases exponentially with increasing cylindrical radial coordinate, $x$, and is given by $\tilde{\rho} =\omega ^2e^{\omega ^2x^2}/(2\pi)$. This is not realistic. Later investigations in the literature therefore naturally focus on constructing more realistic models which are asymptotically flat. In such cases, the components of the line element are given by series solutions [@deAraujoetal:2000; @Cooperstock_Tieu:2006; @Cooperstock_Tieu:2007; @Brateketal:2007].
For our application in the current section, we want to focus on finding the quasilocal energy of the spacetime that is associated with the rotational degrees of freedom. We need to find an orthonormal dyad that satisfies the integrability conditions, and this is already not an easy task for axially symmetric stationary spacetimes.[^8] Therefore we will consider the simplest interior solution given by van Stockum, which has the line element $$\begin{aligned}
\label{vanStockcylmetric}
ds^2=-dt^2+a\,\left(dx^2+dz^2\right) +b\,d\psi ^2+c\,dt\,d\psi,\end{aligned}$$ where $$\begin{aligned}
a(x)&:=&e ^{-\omega ^2x^2}\nonumber,\\
b(x)&:=&\left(x^2-\omega ^2x^4\right) \nonumber,\\
c(x)&:=&2\omega x^2 \nonumber,\end{aligned}$$ and $\omega$ is a constant that is associated with the angular velocity of the dust at $x=0$ with respect to “distant stars”. Other than the singularity at $x=0$, the spacetime becomes singular at $x=1/\omega$ for the metric in (\[vanStockcylmetric\]). Note that the $g_{\psi \psi}$ component of the metric changes sign when $x>1/\omega $. This introduces closed timelike curves into the spacetime that are not physical. Therefore we will consider systems within the $0<x<1/\omega $ range.
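The quoted dust density can be recovered from the metric (\[vanStockcylmetric\]) itself. A sympy sketch (with standard curvature conventions; for comoving dust with $u^\mu =\partial _t$ one has $u_t=-1$, so $G_{tt}=8\pi \tilde{\rho}\,(u_t)^2=4\omega ^2e^{\omega ^2x^2}$):

```python
import sympy as sp

t, x, z, psi, w = sp.symbols('t x z psi omega', positive=True)
X = [t, x, z, psi]

aa = sp.exp(-w**2*x**2)
# van Stockum interior metric; the cross term gives g_{t psi} = c/2 = omega x^2
g = sp.Matrix([[-1, 0, 0, w*x**2],
               [0, aa, 0, 0],
               [0, 0, aa, 0],
               [w*x**2, 0, 0, x**2 - w**2*x**4]])
gi = g.inv()

Gam = [[[sum(gi[a, d]*(sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
             - sp.diff(g[b, c], X[d])) for d in range(4))/2
         for c in range(4)] for b in range(4)] for a in range(4)]

def ricci(s, n):
    """Ricci tensor component R_{sn}."""
    return sum(sp.diff(Gam[p][n][s], X[p]) - sp.diff(Gam[p][p][s], X[n])
               + sum(Gam[p][p][q]*Gam[q][n][s] - Gam[p][n][q]*Gam[q][p][s]
                     for q in range(4))
               for p in range(4))

Ric = sp.Matrix(4, 4, lambda s, n: ricci(s, n))
Rs = sum(gi[i, j]*Ric[i, j] for i in range(4) for j in range(4))
G = sp.simplify(Ric - g*Rs/2)               # Einstein tensor, indices down

rho_dust = sp.simplify(G[0, 0]/(8*sp.pi))   # energy density of the dust
```

The recovered `rho_dust` reproduces $\tilde{\rho} =\omega ^2e^{\omega ^2x^2}/(2\pi)$, and components sourced by no dust stress (e.g. $G_{xx}$) vanish.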
It is possible to transform the metric into toroidal coordinates at this point and search for a double dyad which satisfies our gauge conditions (\[Gaugeconditions\]). [^9] Eventually we would like to calculate our quasilocal charges. However, if we apply such a transformation, we lose the information about the actual symmetries of the system. Therefore, let us first consider a null tetrad in cylindrical coordinates which satisfies our gauge conditions, (\[Gaugeconditions\]), $$\begin{aligned}
\label{vanStocktetradcyl}
l^\mu &=& \frac{1}{\sqrt{2}}\left[\partial _t+a(x)^{-1/2}\partial _x\right], \\
n^\mu &=& \frac{1}{\sqrt{2}}\left[\partial _t-a(x)^{-1/2}\partial _x\right], \nonumber\\
m^\mu &=& \frac{i}{\sqrt{2}}\left[\omega x \partial _t+\frac{1}{x}\partial _\psi -i a(x)^{-1/2} \partial _z\right]. \nonumber\end{aligned}$$ For such a tetrad $\{\pi=0, \tau=0 \}$ so that the condition $\pi+\c{\tau}=0$ is trivially satisfied. Also $\{\mu=\c{\mu}, \rho =\c{\rho} \}$ holds. Now let us perform two transformations on the spacelike coordinates. The first coordinate transformation, $Tr_1 :=\{ X= x\cos \psi , Y= x\sin \psi , Z=z\}$, relates the cylindrical coordinates to Cartesian coordinates, $\{ X, Y, Z \}$. The second one, $Tr_2 :=\{ X= \left( R_0+r\cos \theta \right)\cos \phi , Y= \left( R_0+r\cos \theta \right)\sin \phi , Z=r\sin \theta \}$, relates the toroidal coordinates, $\{r,\theta , \phi \}$, to the Cartesian coordinates. After applying $Tr_1$ and $Tr_2^{-1}$ successively on the metric and on the null tetrad we find $$\begin{aligned}
ds ^2 = -dt^2+\zeta \left(dr^2+r^2d\theta ^2\right)+\chi d\phi ^2+ \xi dt d\phi ,\end{aligned}$$ where $$\begin{aligned}
R(r,\theta) &:=& R_0+r\cos \theta \nonumber ,\\
\zeta (r,\theta) &:=& e ^{-\omega ^2R^2} \nonumber ,\\
\chi (r,\theta)&:=& R ^2 \left(1-\omega ^2 R^2 \right) \nonumber ,\\
\xi (r,\theta)&:=& 2\omega R^2 \nonumber ,\\\end{aligned}$$ and $$\begin{aligned}
\label{vanStocktetradtor}
l^\mu &=& \frac{1}{\sqrt{2}}\left[\partial _t+ \zeta ^{-1/2} \left(\cos \theta \partial _r-\frac{\sin \theta}{r}\partial _\theta \right)\right], \\
n^\mu &=& \frac{1}{\sqrt{2}}\left[\partial _t- \zeta ^{-1/2} \left(\cos \theta \partial _r-\frac{\sin \theta}{r}\partial _\theta \right) \right], \nonumber \\
m^\mu &=& \frac{i}{\sqrt{2}}\left[\omega R \partial _t-\frac{1}{R} \partial _\phi
\right. \nonumber \\
&&\qquad \qquad \left.
- i\zeta ^{-1/2}\left(\sin \theta \partial _r + \frac{\cos \theta}{r} \partial _\theta \right)\right]. \nonumber\end{aligned}$$ For this null tetrad, after calculating the spin coefficients and by following (\[DeltaJNP\])–(\[RiemNP\]), we find the following variables that appear in the contracted Raychaudhuri equation: $$\begin{aligned}
\tilde{\nabla} _{\mathbb{T}}\mathcal{J}
&=& \frac{-\zeta ^{-1} \left(R^4\omega ^4+1\right)}{R^2} \label{DeltaJvanStock},\\
\tilde{\nabla} _{\mathbb{S}}\mathcal{K}
&=&0 \label{DeltaKvanStock}, \\
\mathcal{J}^2
&=& \frac{\zeta ^{-1} \left(R^4\omega ^4 +1\right)}{R^2} \label{JvanStock}, \\
\mathcal{K}^2
&=& -2\omega ^2 \zeta ^{-1} \label{K^2vanStock},\\
\mathcal{R_{\,W}}
&=&-2\omega ^2 \zeta ^{-1} \label{R_WvanStock}.\end{aligned}$$ In order to determine the reference energy density we isometrically embed $\mathbb{S}$ in $\mathcal{M}^4$ by setting $$\begin{aligned}
\zeta r^2d\theta ^2&=&\bar{r}^2d\bar{\theta}^2,\\
\left(1 -\omega ^2R^2\right)R^2 d\phi ^2&=& \left(\bar{R}_0+\bar{r}\cos \bar{\theta}\right)^2 d\bar{\phi}^2,\end{aligned}$$ so that the reference quasilocal observers are located at a flat 2-torus in Minkowski spacetime. In order to set the same surface area element both in the physical and in the reference spacetime, we choose $$\begin{aligned}
\bar{r}&=&r \zeta ^{1/2},\\
d\bar{\theta}&=& d\theta ,\\
d\bar{\phi}&=&\frac{R\left(1-\omega ^2 R^2\right)^{1/2}}{r \zeta ^{1/2} \cos \theta + \bar{R}_0}d\phi ,\end{aligned}$$ with $\bar{R}_0=R_0$. Then, when written in physical spacetime coordinates, the mean extrinsic curvature of the flat 2-torus, $$\begin{aligned}
k_0&=&\frac{\bar{R}_0+2\bar{r}\cos \bar{\theta}}{\bar{r}\left(\bar{R}_0+\bar{r}\cos \bar{\theta}\right)},\end{aligned}$$ can be used as the reference energy density.
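This expression for $k_0$ is the sum of the principal curvatures of a round 2-torus in Euclidean 3-space, and can be verified from the embedding itself. A sympy sketch using the first and second fundamental forms (the squared comparison avoids branch ambiguities in the square root of the area element):

```python
import sympy as sp

rb, thb, phb, R0 = sp.symbols('rbar thetabar phibar R0', positive=True)

# round 2-torus of tube radius rbar and central radius R0 in Euclidean 3-space
Xs = sp.Matrix([(R0 + rb*sp.cos(thb))*sp.cos(phb),
                (R0 + rb*sp.cos(thb))*sp.sin(phb),
                rb*sp.sin(thb)])

Xu = Xs.diff(thb); Xv = Xs.diff(phb)
E = Xu.dot(Xu); F = Xu.dot(Xv); Gf = Xv.dot(Xv)   # first fundamental form
Nv = Xu.cross(Xv)                                 # unnormalized normal
W2 = sp.simplify(E*Gf - F**2)                     # = |Nv|^2

# second fundamental form contracted with the unnormalized normal
Ln = Xu.diff(thb).dot(Nv); Mn = Xu.diff(phb).dot(Nv); Nn = Xv.diff(phb).dot(Nv)

# sum of principal curvatures: k = (E*Nn - 2*F*Mn + Gf*Ln) / W2**(3/2)
knum = sp.simplify(E*Nn - 2*F*Mn + Gf*Ln)
k0 = (R0 + 2*rb*sp.cos(thb))/(rb*(R0 + rb*sp.cos(thb)))

residual = sp.simplify(knum**2 - k0**2*W2**3)     # should vanish identically

# numeric spot check of the sign at one point
pt = {rb: 1, R0: 3, thb: sp.Rational(7, 10), phb: sp.Rational(1, 5)}
diff_at_pt = ((knum/W2**sp.Rational(3, 2)).subs(pt) - k0.subs(pt)).evalf()
```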
Now that the physical and the reference energy densities are determined, we can calculate the quasilocal charges via Eqs. (\[ETotal\])–(\[WTidal\]). Recall that the spacetime is physically meaningful in the $0<R_0+r\cos \theta <1/\omega$ range. We choose $R_0=5$ and $\omega=1/10$ for our numerical example, which introduces a coordinate singularity at $x=10$. In terms of the system size, we consider only the $0<r<2.5$ range for computational ease. The results are presented in Fig. \[fig:vanStockumEnergies\], from which we immediately recognize that $E_{\rm Tot}=E_{\rm Dil}$, which is positive for a small-sized system, diverges to $-\infty$ as the size of the system gets larger.
Let us try to understand what this result means. Previously, for asymptotically flat versions of the rotating dust, it has been argued by Bonnor that there has to be an infinitely large negative mass associated with the singularity, $x=0$, in order to cancel the effect of positive energy associated with the dust [@Bonnor:1977]. Later in [@Bonnor:2005] he argued that one can add an infinitely large negative mass layer into the spacetime to observe the same effect. Furthermore, Bratek *et al.* [@Brateketal:2007] discussed the same issue and concluded that singularities of the asymptotically flat rotating dust are associated with the “additional weird stresses” of the negative active mass.
Here our spacetime is not asymptotically flat. However, we observe a similar behavior. Note that in our solution the energy density of the dust increases with increasing $x$. In such a case one would expect the system to get ever closer to a collapsed state as its size increases. Zingg *et al.* [@Zinggetal:2006] and Gurlebeck [@Gurlebeck:2009] have argued that such a collapse is in fact expected for a Newtonian dust cylinder. We end up with a similar interpretation which agrees with their arguments. In our work, the fact that $E_{\rm Tot}=E_{\rm Dil}$ diverges to $-\infty$ as the size of the system gets larger must be attributed to the work done by external fields that are required to exist outside our system to prevent the system from collapsing.
Now let us look at the quasilocal charges associated with the rotational degrees of freedom and the tidal fields.
![Quasilocal charges of the van Stockum dust. Charges are in length units which can be written as a function of individual mass of the dust particles, $m$, and the total number density, $n$.[]{data-label="fig:vanStockumEnergies"}](vanstockum_charges_v2){width="0.9\columnwidth"}
From Fig. \[fig:vanStockumEnergies\] we observe that $W_{\rm Tid}$ is everywhere negative, corresponding to tidal stretching of the surface on which the quasilocal observers are located. As the size of the system increases, so does the energy density of the dust, according to $\tilde{\rho} =\omega ^2e^{\omega ^2x^2}/(2\pi)$. This requires greater negative work done by the tidal field. The magnitude of $W_{\rm Tid}$ is exactly equal to the energy associated with the rotational degrees of freedom, as shown in Fig. \[fig:vanStockumEnergies\]. We note that the observers who determine the quasilocal quantities are timelike geodesic observers, i.e., with acceleration $a^\mu=D_{\t {E}{_{\hat{0}}}}\t {E}{^\mu _{\hat{0}}}=0$, and furthermore they are comoving with the dust. In other words, the orbital angular velocity of the observers is zero with respect to the given coordinate system. In such a case one might expect to get zero energy associated with the rotational degrees of freedom of the system. However, for this set of observers, the vorticity of the timelike geodesics is nonzero. Indeed, the vorticity vector and vorticity scalar are given by $$\begin{aligned}
\mathcal{w}^\mu &=&\frac{1}{2}\t {\eta} {^{\mu} _{\nu \alpha \beta}}g^{\nu \gamma}g^{\alpha \rho}\t {E}{^\beta _{\hat{0}}} D_\rho \t {E} {_{\gamma \hat{0}}}\nonumber \\
&=& \frac{2\omega \sin \theta \zeta ^{-2}}{rR}\partial _r+\frac{2\omega \zeta ^{-2}\cos \theta}{r^2R}\partial _\theta\\
\mathcal{w}&=&\sqrt{\mathcal{w}^\mu \mathcal{w}_\mu}=\frac{2\omega \zeta ^{-3/2}}{rR},\end{aligned}$$ where $\t {\eta} {^{\mu} _{\nu \alpha \beta}}$ is the Levi-Civita tensor, $g_{\mu \nu}$ is the spacetime metric and we set the observer 4-velocity $u^\mu = \t {E}{^\mu _{\hat{0}}}=\partial _t$. This shows that every dust particle swirls around its own axis. Recall that vorticity is a measure of the global rotation of a spacetime. Also, it was previously shown by Chrobok *et al.* [@Chroboketal:2001] that the rotation of the local matter elements, i.e., spin, can be directly linked to the global rotation of the spacetime, i.e., vorticity. Therefore, even though the system we investigate here is defined by the set of observers with zero orbital angular velocity, we can still calculate the energy associated with the rotational degrees of freedom of the system.
As the size of the system reaches $1/\omega$, the density of the dust reaches its maximum possible value. Accordingly, one might expect $E_{\rm Rot}$ and $W_{\rm Tid}$ to diverge to $+\infty$ and $-\infty$ respectively as the system size approaches the singularity at $1/\omega$. Indeed, in Fig. \[fig:vanStockumEnergies\] we observe that $E_{\rm Rot}$ and $W_{\rm Tid}$ tend to $+\infty$ and $-\infty$ respectively as the size of the system gets larger.
The challenge of stationary, axially symmetric spacetimes {#Delicate}
=========================================================
After considering these somewhat unrealistic scenarios, one might wonder whether we can apply our formalism to more realistic cases. For example, can we calculate the quasilocal charges of a rotating black hole? The short answer is yes, we can. However, it poses an immense technical challenge.
Recall that we need to satisfy three null tetrad conditions, namely, $\{\rho =\c{\rho}, \mu=\c{\mu}, \pi + \c{\tau}=0 \}$. It is known that, in general, the divergence of a null congruence around the vector $\b{l}$ can be written as a linear combination of the expansion and the twist of the congruence, i.e., $\rho =\Theta + i \omega$. This means that we need to have nontwisting null congruences for our formalism to hold.
Let us consider the case of the Kerr spacetime [@Kerr:1963]. Circular orbits are the most studied world lines of Kerr, because the trajectories follow the Killing vector fields and this simplifies the investigations considerably. Note that in this case, the Killing vectors $\partial _t$ and $\partial _\phi$ have nonzero twist. Moreover, the Kerr metric can be obtained by taking the $r$ coordinate of Schwarzschild to $r+ia\cos \theta$ [@Newman_Janis:1965], where $a$ is the angular momentum parameter. This automatically means that for a *principal null tetrad* of a static black hole, by transforming the real divergence $\rho =-1/r$ into a complex divergence $\rho =-1/(r+ia\cos \theta)$, we obtain a rotating black hole.[^10] Our problem here is that investigations of a rotating black hole are done mostly using the principal null directions of the spacetime. We should also mention that there are other transverse tetrads, such as the quasi-Kinnersley tetrad, which is a powerful tool for exploring Kerr [@Zhangetal:2012]. However, once we focus on such null *geodesics*, that aid in the construction of a principal or transverse tetrad, then we have no hope of finding null congruences with a real divergence.
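To make the obstruction explicit: splitting the complex divergence of Kerr's principal null congruence into its real and imaginary parts gives a twist that vanishes only for $a=0$. A sympy sketch:

```python
import sympy as sp

r, a, th = sp.symbols('r a theta', positive=True)

# complex divergence of the principal null congruence of Kerr
rho = sp.expand_complex(-1/(r + sp.I*a*sp.cos(th)))
Theta = sp.simplify(sp.re(rho))   # expansion
twist = sp.simplify(sp.im(rho))   # twist
```

One finds $\Theta =-r/(r^2+a^2\cos ^2\theta)$ and a twist $a\cos \theta /(r^2+a^2\cos ^2\theta)$, so $\rho$ is real, and the congruence surface forming, only in the Schwarzschild limit $a\to 0$ (or on the equatorial plane).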
On the other hand, twist-free – i.e., surface forming – null congruences exist in *all* Lorentzian spacetimes [@Adamoetal:2012]. It is just that we do not require them to be geodesic. Brink *et al.* [@Brinketal:2013] have given a detailed investigation of axisymmetric spacetimes, focusing on the twist-free Killing vectors of the stationary axially symmetric spacetimes. We note that there are very few studies in the literature that investigate such a property. Bilge has found an exact twist-free solution whose principal null directions are not geodesic [@Bilge:1989]. It was also shown by Bilge and G[ü]{}rses that those spacetimes are not asymptotically flat and include generalized Kerr-Schild metrics [@Bilge_Gurses:1986]. Gergely and Perj[é]{}s later concluded that those solutions are actually homogeneous and anisotropic Kasner solutions [@Gergely_Perjes:1994] and thus they are not physical. Therefore Brink *et al.* conclude that “Future studies which aim to extract physical information about isolated dynamical, axisymmetric spacetimes will have to focus on general spacetimes, where none of the principal null directions are geodesics, and which do not fall within Bilge’s class of metrics."
In our case we are looking for *a* null congruence, constructed from the timelike dyad that resides on $\mathbb{T}$, which does not even have to be aligned with the principal null directions. It is not necessarily composed of geodesics and it is not required to be composed solely of Killing vectors. All we want from our null tetrad is for it to satisfy the three integrability conditions. To the best of our knowledge, for the case of Kerr, none of the null tetrads introduced in the literature satisfies those conditions.
In order to find such a desired tetrad for the case of Kerr, one might consider transformations of the quasi-Kinnersley tetrad, for example, by applying two successive Lorentz transformations to the null tetrad. First, apply a type-II Lorentz transformation around $\b{n}$ with parameter $A=a+ib$ and then a type-I Lorentz transformation around $\b{l}$ with parameter $B=c+id$, where $\{a,~b,~c,~d \}$ are all real. Then the twice-transformed spin coefficients need to satisfy $\{\rho '' =\c{\rho} '', \mu ''=\c{\mu} '', \pi ''+ \c{\tau} ''=0, \c{\pi} ''+ \tau ''=0 \}$, where $''$ denotes the fact that the spin coefficients are transformed twice. After such a procedure we end up with four complex, highly coupled, nonlinear first-order differential equations. The unknowns appear in the transformed tetrad condition equations with polynomial order up to five. This system of equations cannot be solved by any iterative method that we are aware of.
Therefore, we observe that our formalism should, in principle, be applicable for more realistic generic spacetimes than the ones we have presented here. However, the less symmetry the system possesses, the more mathematically challenging it becomes to find a null tetrad which satisfies our integrability conditions. Arbitrary nontwisting null congruences of twisting spacetimes are the key to resolving this issue.
The discussion we presented in Sec. \[Null tetrad gauge\] should now be clearer to the reader. In the case of gravitational wave detection, one’s ultimate aim is to extract information about the properties of the astrophysical objects that are the sources of radiation. Those properties, such as mass-energy and angular momentum, are at best defined quasilocally in general relativity. Therefore the local tetrads of observers should be chosen in such a manner that the quasilocal properties of the system can be well defined throughout the evolution. In [@Zhangetal:2012], Zhang *et al.* showed that the wave fronts of passing gravitational radiation are aligned with the quasi-Kinnersley tetrad. This means that observers can measure the gravitational radiation locally. However, since the quasi-Kinnersley tetrad does not satisfy the integrability conditions of $\mathbb{S}$ and $\mathbb{T}$, the quasilocal charges corresponding to the quasi-Kinnersley tetrad are not well defined. Therefore we conclude that even though one can measure gravitational radiation locally, there is no guarantee that one can consistently extract the properties of its source.
Discussion and Summary {#Discussion}
======================
According to many researchers, including the authors of Refs. [@dInverno_Smallwood:1980; @Smallwood:1983; @Torre:1985; @Yoon:2004], the 2+2 picture of general relativity might be more fundamental than the 3+1 approach. Although one might debate this point, the existence of a nonvanishing boundary Hamiltonian leads to the necessity of modifying the symplectic structure of the Arnowitt–Deser–Misner formalism in phase space to obtain a covariant formalism which can be directly linked to the quasilocal charges [@Kijowski:1997; @Nester:1991]. Energy definitions which do not conflict with the equivalence principle generically involve the extrinsic and/or intrinsic geometry of a closed spacelike 2-surface. However, defining quasilocal charges that are measures of energy and angular momentum for a generic spacetime is often a challenge.
Energy and energy flux definitions, whether made locally, globally or quasilocally, are sometimes compared and contrasted without questioning for which system those definitions are made. Actually, there exist well-defined quasilocal energy definitions that can be directly linked to the action principle of general relativity. What is ill defined is the specification of the system that is enclosed by a boundary surface on which the quasilocal charges are to be integrated.
Let us make an analogy with classical thermodynamics and consider two systems with the same number of gas molecules: (i) a constant-pressure system which is expanding and (ii) a constant-volume system whose pressure is increasing. If we use a barometer to measure the pressure values obtained within these two systems, the readings will of course be different. However, this is not because the barometer is not working properly; rather, it is because the barometer is not sensitive to the defining properties (or symmetries) of the two systems in question. In other words, the measuring agent is indifferent to how the two systems are “isolated.” Moreover, even if we find a way to define the system consistently, there exist many energies one can associate with a system. Going back to our analogy, let us say we keep track of the pressure value and make sure that we are actually investigating a system with constant pressure. We can then define the internal energy of that system, or the average kinetic energy of the particles, which is not necessarily related to the internal energy unless equilibrium holds. We can also define the work done by the system on its surroundings throughout the expansion process, etc. In that situation we would not expect all of those energies to give us the same value.
In this paper, we presented a quasilocal work-energy relation which can be applied to generic spacetimes in order to discuss quasilocal energy exchange. We identified the quasilocal charges associated with the rotational and nonrotational degrees of freedom, in addition to a work term associated with the tidal fields. This construction was possible only after we defined a quasilocal system by constraining the double dyad of the quasilocal observers, which is highly dependent on the symmetries of the spacetime in question.
Our present investigation emerged from three questions:
- Is there something inherently fundamental about the 2+2 formalism in terms of quasilocal energy definitions?\
- If the quasilocal energy resides in the intrinsic and/or extrinsic curvature of a closed 2-dimensional spacelike surface, what do the other extrinsic properties of that surface correspond to?\
- Can the Raychaudhuri and the other geodesic deviation equations, which have proved their usefulness in terms of physically relevant observables in a 3+1 formalism, be investigated in a 2+2 formalism so that they can be linked to physically meaningful quasilocal charges?
To answer these questions, we considered Capovilla and Guven’s generalized Raychaudhuri equation given in [@Capovilla_Guven:1994] for a 2-dimensional world sheet that is embedded in a 4-dimensional spacetime. Previously, for spherically symmetric systems, we investigated the Raychaudhuri equation of the world sheet at quasilocal thermodynamic equilibrium [@Uzun_Wiltshire:2015], i.e., when the observers are located at the apparent horizon. In the present paper we considered more generic spacetimes that are in nonequilibrium with their surroundings. We also relaxed the condition of spherical symmetry.
By transforming our equations from Capovilla and Guven’s formalism, which is constructed on an orthogonal double dyad, to the Newman-Penrose formalism, which is based on a complex null tetrad, we were able to present the contracted Raychaudhuri equation in terms of the combinations of spin coefficients, their relevant directional derivatives and some of the curvature scalars. We also imposed three null tetrad gauge conditions which result from the integrability conditions of the 2-dimensional timelike surface $\mathbb{T}$ and the 2-dimensional spacelike surface $\mathbb{S}$. This spacelike 2-surface is defined instantaneously and is orthogonal to $\mathbb{T}$ at every point. Our null tetrad gauge conditions are shown to be invariant under type-III Lorentz transformations which basically correspond to boosting of the quasilocal observers in the spacelike direction orthogonal to $\mathbb{S}$. Ultimately we realized that, under such gauge conditions, the contracted Raychaudhuri equation is a linear combination of two of the spin field equations of the Newman-Penrose formalism.
Later, we defined certain quasilocal charges via the geometric variables that appear in the contracted Raychaudhuri equation. Our motivation is that there exists a direct link between the mean extrinsic curvature of $\mathbb{S}$ that encloses the system and the variables of the contracted Raychaudhuri equation. Note that the mean extrinsic curvature of such a smooth, closed, spacelike 2-surface $\mathbb{S}$ is the main object of most of the quasilocal energy definitions which are derived by a Hamiltonian approach. By choosing the quasilocal energy definitions made by Kijowski [@Kijowski:1997] as our anchor, we were able to define relevant quasilocal charges for which a physical interpretation could be found. We also showed in Appendix \[App:SpinBoost\] that all of those quasilocal charges are invariant under type-III Lorentz transformations. Note that this property is desired for a well-defined quasilocal construction, as boosted observers should agree that they are measuring the charges of the same system.
We applied our formalism to a radiating Vaidya spacetime, a $C$-metric and an interior solution of the Lanczos-van Stockum dust cylinder. For the case of Vaidya we concluded that the usable energy of the system decreases purely due to radiation. For a $C$-metric we observed that the greater the acceleration of the black hole, the more energy should be provided *to* the system *by* an external agent. We concluded that the decreasing trend in the total energy is due to the nonmonotonic, repulsive gravitational potential that can be observed in the exterior region of an extremely accelerated black hole. For the Lanczos-van Stockum dust we considered a nonasymptotically flat case. As the size of the system gets larger, the usable dilatational energy of the system becomes negative. We concluded that this must be attributed to external fields doing work on the system in order to prevent it from collapsing. We were also able to obtain the quasilocal energy associated with the rotational degrees of freedom, whose magnitude is exactly equal to that of the work done by the tidal fields.
This paper can be seen as a first attempt to investigate the Raychaudhuri equation in the 2+2 picture in terms of quasilocal charges. There exist various open problems and delicate issues. To start with, at a given spacetime point one has six tetrad degrees of freedom, and we imposed only three null tetrad gauge conditions on our system. That means we have additional freedom to specify a gauge, i.e., to define the quasilocal system. Although there exists no geometrically motivated reason that we are aware of in our current approach, one can *choose* additional conditions in order to compare the quasilocal charges of different spacetimes constructed with other well-known null tetrad gauges.
Another delicate issue, which may or may not be related to our null tetrad gauge freedom, is shear. There is no *a priori* reason for us to impose the shear-free condition on the null congruences constructed from the timelike dyad that resides on $\mathbb{T}$. However, for generic spacetimes, one can find a gauge which satisfies our three gauge conditions more easily once the shear-free condition is imposed. This is primarily because our gauge conditions try to locate the set of quasilocal observers in such a configuration that the surface $\mathbb{S}$ is always orthogonal to $\mathbb{T}$. That is natural for radially moving observers of a spherically symmetric system but may hold even if the spacetime is not spherically symmetric. The shear-free condition locates the quasilocal observers as close as they can get to such a configuration. Note that shear is the fundamental concept in Bondi’s mass loss [@Bondietal:1962], without which gravitational radiation at null infinity cannot be defined. Thus, this automatically raises an issue for quasilocal observers at infinity who would like to measure the Bondi mass loss associated with gravitational radiation. Investigation of whether or not there exists a gauge which satisfies both the Bondi tetrad and our gauge conditions is left for future work.
Finally, we note that it is technically difficult to satisfy our null tetrad conditions for more realistic, axially symmetric, stationary spacetimes such as Kerr. This difficulty arises from the fact that our approach demands twist-free null congruences constructed from the tangent vectors of $\mathbb{T}$. However, finding twist-free null congruences for spacetimes whose principal null directions are twisting is a challenge. Although the nongeodesic null congruences that we are after are not physical, their existence guarantees that the quasilocal system, and the associated quasilocal charges, are all consistently defined.
Recently, a quasilocal energy for the Kerr spacetime has been calculated for stationary observers [@Liu_Tam:2016] by using the quasilocal energy definition of [@Sunetal:2003] together with the embedding method for the reference energy. Liu and Tam show that this energy is exactly equal to Brown and York’s (BY) quasilocal energy [@Brown_York:1992]. One might wonder how our construction compares to such an investigation. To start with, the null tetrad constructed from the orthonormal double dyad of the stationary observers in Boyer-Lindquist coordinates has imaginary divergence and hence does not satisfy our null tetrad gauge conditions. Recall that the tetrad conditions we introduced here guarantee the existence of well-defined, boost-invariant quasilocal charges. Also note that the BY quasilocal energy is not invariant under boosts. Therefore, the fact that Liu and Tam end up with the BY quasilocal energy for their quasilocal system, defined by stationary observers in Boyer-Lindquist coordinates, is no surprise. In our view, then, the calculations of Liu and Tam do not satisfy all the requirements of a genuine quasilocal construction. In fact, this is exactly the point that we tried to emphasize throughout the paper. Without a well-defined quasilocal system, there is no consistent definition of energy.
Acknowledgements
================
Many thanks to David L. Wiltshire for his critical suggestions and his careful reading of the manuscript.
APPENDIX {#appendix .unnumbered}
========
Newman-Penrose Formalism {#Appendix:A}
========================
For a complex null tetrad $\{ \b{l}, \b{n}, \b{m}, \c{\b{m}}\}$ , the Newman-Penrose spin coefficients are defined as [@Newman_Penrose:1961][^11] $$\begin{aligned}
\kappa = -\inner{D_{\b{l}}\b{l},\b{m}}, \qquad
\nu = \inner{D_{\b{n}}\b{n},\c{\b{m}}} \label{kappanu},\\
\rho = -\inner{D_{\c{\b{m}}}\b{l},\b{m}}, \qquad
\mu = \inner{D_{\b{m}}\b{n},\c{\b{m}}}\label{rhomu},\\
\sigma = -\inner{D_{\b{m}}\b{l},\b{m}}, \qquad
\lambda = \inner{D_{\c{\b{m}}}\b{n},\c{\b{m}}}\label{sigmalambda},\\
\tau = -\inner{D_{\b{n}}\b{l},\b{m}},\qquad
\pi = \inner{D_{\b{l}}\b{n},\c{\b{m}}}\label{taupi},\\
\varepsilon =\frac{1}{2}\left[-\inner{D_{\b{l}}\b{l},\b{n}}+\inner{D_{\b{l}}\b{m},\c{\b{m}}}\right]\label{epsilon},\\
\gamma = \frac{1}{2}\left[\inner{D_{\b{n}}\b{n},\b{l}}-\inner{D_{\b{n}}\c{\b{m}},\b{m}}\right]\label{gamma},\\
\beta = \frac{1}{2}\left[-\inner{D_{\b{m}}\b{l},\b{n}}+\inner{D_{\b{m}}\b{m},\c{\b{m}}}\right]\label{beta},\\
\alpha = \frac{1}{2}\left[\inner{D_{\c{\b{m}}}\b{n},\b{l}}-\inner{D_{\c{\b{m}}}\c{\b{m}},\b{m}}\right]\label{alpha}.\end{aligned}$$ The propagation equations are $$\begin{aligned}
D_{\b{l}}\b{l}&=&\left(\varepsilon+\c{\varepsilon}\right)\b{l}-\c{\kappa}\b{m}-\kappa \c{\b{m}}\label{Dll},\\
D_{\b{n}}\b{l}&=&\left(\gamma+\c{\gamma}\right)\b{l}-\c{\tau}\b{m}-\tau \c{\b{m}}\label{Dnl},\\
D_{\b{m}}\b{l}&=&\left(\c{\alpha}+\beta\right)\b{l}-\c{\rho}\b{m}-\sigma \c{\b{m}}\label{Dml},\\
D_{\b{l}}\b{n}&=&-\left(\varepsilon+\c{\varepsilon}\right)\b{n}+\pi \b{m}+\c{\pi} \c{\b{m}}\label{Dln},\\
D_{\b{n}}\b{n}&=&-\left(\gamma+\c{\gamma}\right)\b{n}+\nu \b{m}+\c{\nu} \c{\b{m}}\label{Dnn},\\
D_{\b{m}}\b{n}&=&-\left(\c{\alpha}+\beta \right)\b{n}+\mu \b{m}+\c{\lambda} \c{\b{m}}\label{Dmn},\\
D_{\b{l}}\b{m}&=&\c{\pi}\b{l}-\kappa \b{n}+\left(\varepsilon - \c{\varepsilon}\right)\b{m}\label{Dlm},\\
D_{\b{n}}\b{m}&=&\c{\nu}\b{l}-\tau \b{n}+\left(\gamma - \c{\gamma}\right)\b{m}\label{Dnm},\\
D_{\b{m}}\b{m}&=&\c{\lambda}\b{l}-\sigma \b{n}+\left(-\c{\alpha}+\beta\right)\b{m}\label{Dmm},\\
D_{\b{m}}\c{\b{m}}&=&\mu \b{l}-\c{\rho} \b{n}+\left(\c{\alpha}-\beta\right)\c{\b{m}}\label{Dmm_}.\end{aligned}$$ Commutation relations, $\left[\b{X},\b{Y}\right] = D_{\b{X}}\b{Y}-D_{\b{Y}}\b{X}$, for the null vectors are $$\begin{aligned}
\left[\b{l},\b{n}\right]&=-\left(\gamma + \c{\gamma}\right)\b{l} -\left(\varepsilon + \c{\varepsilon}\right)\b{n} \nonumber \\
&\qquad
+\left(\pi + \c{\tau}\right)\b{m}+\left(\c{\pi} + \tau \right)\c{\b{m}}
\label{com_ln},\\
\left[\b{l},\b{m}\right]&=\left(\c{\pi} - \c{\alpha} -\beta \right)\b{l} -\kappa \b{n}+\left(\varepsilon - \c{\varepsilon} +\c{\rho}\right)\b{m}+\sigma \c{\b{m}} \label{com_lm},\\
\left[\b{n},\b{m}\right]&=\c{\nu}\b{l} + \left( \c{\alpha} +\beta -\tau \right)\b{n}+\left(\gamma - \c{\gamma} -\mu \right)\b{m}-\c{\lambda}\c{\b{m}} \label{com_nm},\\
\left[\b{m},\c{\b{m}}\right]&= \left(\mu - \c{\mu}\right)\b{l} +\left(\rho - \c{\rho}\right)\b{n} \nonumber \\
&\qquad
+\left(\c{\beta}-\alpha \right)\b{m}+\left(\c{\alpha} - \beta \right)\c{\b{m}}
\label{com_mm_}.\end{aligned}$$ Newman and Penrose introduce two sets of curvature scalars, Weyl scalars and Ricci scalars, which carry the same information as in the Riemann curvature tensor. The Ricci scalars are defined as $$\begin{aligned}
\Phi_{00}&:=\frac{1}{2}R_{\mu \nu}l^\mu l^\nu ,\,\,\, \qquad
\Phi_{11}:=\frac{1}{4}R_{\mu \nu}(\,l^\mu n^\nu +m^\mu \bar{m}^\nu),\nonumber \\
\Phi_{01}&:=\frac{1}{2}R_{\mu \nu}l^\mu m^\nu \,, \qquad
\Phi_{10}:=\frac{1}{2}R_{\mu \nu}l^\mu \bar{m}^\nu =\overline{\Phi_{01}}\,, \nonumber \\
\Phi_{02}&:=\frac{1}{2}R_{\mu \nu}m^\mu m^\nu \,, \qquad
\Phi_{20}:=\frac{1}{2}R_{\mu \nu}\bar{m}^\mu \bar{m}^\nu =\overline{\Phi_{02}}\,, \nonumber \\
\Phi_{12}&:=\frac{1}{2}R_{\mu \nu} m^\mu n^\nu , \qquad
\Phi_{21}:=\frac{1}{2}R_{\mu \nu} \bar{m}^\mu n^\nu =\overline{\Phi_{12}}\,, \nonumber \\
\Phi_{22}&:=\frac{1}{2}R_{\mu \nu}n^\mu n^\nu \,, \qquad
\Lambda:=\frac{R}{24}, \nonumber\\\end{aligned}$$ in which $R_{\mu \nu}$ is the Ricci tensor of the spacetime, $\Phi_{00},\, \Phi_{11},\, \Phi_{22},\, \Lambda$ are real scalars, and $\Phi_{01},\, \Phi_{02},\, \Phi_{12}$, together with their conjugates $\Phi_{10},\, \Phi_{20},\, \Phi_{21}$, are complex scalars. The Weyl scalars are defined as $$\begin{aligned}
\Psi _0&=&\t {C}{_\mu _\nu _\alpha _\beta}l^\mu m^\nu l^\alpha m^\beta ,\label{Psi0}\\
\Psi _1&=&\t {C}{_\mu _\nu _\alpha _\beta}l^\mu n^\nu l^\alpha m^\beta ,\label{Psi1}\\
\Psi _2&=&\t {C}{_\mu _\nu _\alpha _\beta}l^\mu m^\nu \c{m}^\alpha n^\beta ,\label{Psi2}\\
\Psi _3&=&\t {C}{_\mu _\nu _\alpha _\beta}l^\mu n^\nu \c{m}^\alpha n^\beta ,\label{Psi3}\\
\Psi _4&=&\t {C}{_\mu _\nu _\alpha _\beta}n^\mu \c{m}^\nu n^\alpha \c{m}^\beta ,\label{Psi4}\end{aligned}$$ with $\t {C}{_\mu _\nu _\alpha _\beta}$ being the Weyl tensor.
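As an internal consistency check of the relations above, one can verify a commutation relation directly from the propagation equations; for instance, $[\b{l},\b{n}]=D_{\b{l}}\b{n}-D_{\b{n}}\b{l}$ should reproduce Eq. (\[com\_ln\]) once Eqs. (\[Dln\]) and (\[Dnl\]) are substituted. The following SymPy sketch, added here purely as an illustration, performs this bookkeeping symbolically, treating the tetrad legs as formal basis symbols and the conjugate spin coefficients as independent symbols.

```python
import sympy as sp

# Formal basis vectors and NP spin coefficients
# (barred quantities are kept as independent symbols).
l, n, m, mb = sp.symbols('l n m mb')
eps, epsb, gam, gamb = sp.symbols('epsilon epsilonb gamma gammab')
pi_, pib, tau, taub = sp.symbols('pi pib tau taub')

# Propagation equations (Dln) and (Dnl):
D_l_n = -(eps + epsb) * n + pi_ * m + pib * mb
D_n_l = (gam + gamb) * l - taub * m - tau * mb

# Commutator [l, n] = D_l n - D_n l:
commutator = sp.expand(D_l_n - D_n_l)

# Stated commutation relation (com_ln):
expected = (-(gam + gamb) * l - (eps + epsb) * n
            + (pi_ + taub) * m + (pib + tau) * mb)

assert sp.expand(commutator - expected) == 0
print("[l, n] is consistent with the propagation equations.")
```

The same bookkeeping applies to the remaining commutators, each of which follows from the corresponding pair of propagation equations.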
Type-III Lorentz transformations
--------------------------------
A type-III Lorentz transformation represents a boost along the $\b{l}$ and $\b{n}$ directions together with a rotation of $\b{m}$ and $\c{\b{m}}$, i.e., the tetrad vectors transform as $$\begin{aligned}
\b{l} &\rightarrow & a^2\b{l}\label{lIII}, \\
\b{n} &\rightarrow & \frac{1}{a^2}\b{n}\label{nIII},\\
\b{m} &\rightarrow & e^{2i\theta} \b{m}\label{mIII},\\
\c{\b{m}} &\rightarrow & e^{-2i\theta}\c{\b{m}}\label{m_III}.\end{aligned}$$ Here both $a$ and $\theta$ are real functions. Accordingly the spin coefficients transform as $$\begin{aligned}
\nu &\rightarrow & a^{-4}e^{-2i\theta} \nu \label{nuIII},\\
\tau &\rightarrow & e^{2i\theta} \tau \label{tauIII},\\
\gamma &\rightarrow & a^{-2}\left(\gamma +D_{\b{n}}\left[\ln a+i\theta \right] \right) \label{gammaIII},\\
\mu &\rightarrow & a^{-2} \mu \label{muIII},\\
\sigma &\rightarrow & a^2 e^{4i\theta}\sigma \label{sigmaIII},\\
\beta &\rightarrow & e^{2i\theta}\left(\beta +D_{\b{m}}\,\left[\ln a+i\theta \right]\right) \label{betaIII},\\
\lambda &\rightarrow & a^{-2} \,e^{-4i\theta}\lambda\label{lambdaIII},\\
\rho &\rightarrow & a^2\rho \label{rhoIII},\\
\alpha &\rightarrow & e^{-2i\theta}\left(\alpha +D_{\c{\b{m}}}\,\left[\ln a+i\theta \right]\right) \label{alphaIII},\\
\kappa &\rightarrow & a^4 e^{2i\theta}\kappa \label{kappaIII},\\
\varepsilon &\rightarrow & a^2\left(\varepsilon +D_{\b{l}}\,\left[\ln a+i\theta \right]\right)\label{varepsilonIII},\\
\pi &\rightarrow & e^{-2i\theta}\pi \label{piIII}.\end{aligned}$$ The transformations of Ricci scalars are given by $$\begin{aligned}
\Phi _{00} &\rightarrow & a^4\Phi _{00},\\
\Phi _{01} &\rightarrow & a^2 e^{2i\theta}\Phi _{01},\\
\Phi _{10} &\rightarrow & a^2 e^{-2i\theta}\Phi _{10},\\
\Phi _{02} &\rightarrow & e^{4i\theta}\Phi _{02},\\
\Phi _{20} &\rightarrow & e^{-4i\theta}\Phi _{20},\\
\Phi _{11} &\rightarrow & \Phi _{11},\\
\Phi _{12} &\rightarrow & a^{-2}e^{2i\theta}\Phi _{12} ,\\
\Phi _{21} &\rightarrow & a^{-2}e^{-2i\theta}\Phi _{21},\\
\Phi _{22} &\rightarrow & a^{-4}\Phi _{22},\end{aligned}$$ and the transformations of Weyl scalars are given by $$\begin{aligned}
\Psi _{0} &\rightarrow & a^4 e^{4i\theta}\Psi _{0}, \\
\Psi _{1} &\rightarrow & a^2 e^{2i\theta}\Psi _{1},\\
\Psi _{2} &\rightarrow & \Psi _{2} \label{Psi2III},\\
\Psi _{3} &\rightarrow & a^{-2}e^{-2i\theta}\Psi _{3},\\
\Psi _{4} &\rightarrow & a^{-4}e^{-4i\theta} \Psi _{4}.\end{aligned}$$
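A useful way to summarize these scalings is through boost and spin weights: homogeneously transforming quantities pick up a factor $a^{2b}e^{2is\theta}$, so conjugate pairs such as $\kappa\nu$, $\rho\mu$, $\sigma\lambda$ and $\tau\pi$, as well as $\Psi_2$, are type-III invariant. The following SymPy sketch, which we add as an illustration, checks these invariances directly from the factors listed above.

```python
import sympy as sp

a, theta = sp.symbols('a theta', real=True, positive=True)
boost = a**2                      # boost factor of l -> a^2 l
phase = sp.exp(2 * sp.I * theta)  # phase factor of m -> e^{2 i theta} m

# Scaling factors of the homogeneously transforming spin coefficients
# and of Psi_2, read off from the transformation rules above.  The
# inhomogeneous ones (gamma, beta, alpha, epsilon) are omitted.
scalings = {
    'kappa': boost**2 * phase,   'nu': boost**-2 / phase,
    'rho': boost,                'mu': boost**-1,
    'sigma': boost * phase**2,   'lambda': boost**-1 / phase**2,
    'tau': phase,                'pi': 1 / phase,
    'Psi2': sp.Integer(1),
}

# Conjugate pairs whose products are boost- and spin-invariant:
for x, y in [('kappa', 'nu'), ('rho', 'mu'),
             ('sigma', 'lambda'), ('tau', 'pi')]:
    assert sp.simplify(scalings[x] * scalings[y]) == 1

assert scalings['Psi2'] == 1
print("kappa*nu, rho*mu, sigma*lambda, tau*pi and Psi_2 are type-III invariant.")
```

The invariance of these products is the mechanism behind the boost invariance of the quasilocal charges discussed in Appendix \[App:SpinBoost\].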
Raychaudhuri equation in Newman-Penrose formalism {#Appendix:B}
=================================================
Useful expressions
------------------
The following expressions are used many times in our transformation to the NP formalism: $$\begin{aligned}
\t {\eta}{^a^b}\t {E}{^\rho _b}\t {E}{^\gamma _a}&=-\t {E}{^\rho _{\hat{0}}}\t {E}{^\gamma _{\hat{0}}}+\t {E}{^\rho _{\hat{1}}}\t {E}{^\gamma _{\hat{1}}}\nonumber \\
&=-\left(\frac{1}{\sqrt{2}}\right)^2\left(l^\rho +n^\rho \right)\left(l^\gamma +n^\gamma \right)
\nonumber \\
&\qquad
+\left(\frac{1}{\sqrt{2}}\right)^2\left(l^\rho -n^\rho \right)\left(l^\gamma -n^\gamma \right)\nonumber \\
&= -\left(l^\rho n^\gamma + l^\gamma n^\rho \right) \label{exp:eta_E_E}.\\
\t {\delta}{^i^j}\t {N}{^\nu _i}\t {N}{^\beta _j}&=\t {N}{^\nu _{\hat{2}}}\t {N}{^\beta _{\hat{2}}}+\t {N}{^\nu _{\hat{3}}}\t {N}{^\beta _{\hat{3}}}\nonumber \\
&= \left(\frac{1}{\sqrt{2}}\right)^2\left(m^\nu +\c{m}^\nu \right)\left(m^\beta +\c{m}^\beta \right)
\nonumber \\
&\qquad
+\left(\frac{-i}{\sqrt{2}}\right)^2\left(m^\nu -\c{m}^\nu \right)
\nonumber \\
&\qquad \qquad \qquad \times
\left(m^\beta -\c{m}^\beta \right)\nonumber \\
&= \left(m^\nu \c{m}^\beta + m^\beta \c{m}^\nu \right) \label{exp:delt_n_n}.\\
\t {\eta}{^a^b}\t {E}{^\beta _a}D_\alpha \t {E}{^\mu _b}&=-\t {E}{^\beta _{\hat{0}}}D_\alpha \t {E}{^\mu _{\hat{0}}}+\t {E}{^\beta _{\hat{1}}}D_\alpha \t {E}{^\mu _{\hat{1}}}\nonumber \\
&=-\frac{1}{2}\left(l^\beta +n^\beta \right)D_\alpha \left(l^\mu +n^\mu\right)
\nonumber \\
&\qquad
+\frac{1}{2}\left(l^\beta -n^\beta \right)D_\alpha \left(l^\mu -n^\mu\right)\nonumber \\
&=-\left(l^\beta D_\alpha n^\mu +n^\beta D_{\alpha} l^\mu\right)\label{exp:eta_E_D_E}.\\
\t {\eta}{^a^b}\t {E}{^\alpha _a}D_\alpha \t {E}{^\mu _b}&=-\left( D_\b{l} n^\mu +D_\b{n} l^\mu\right) \label{exp:eta_E_D_E_cont}.\\
\t {\delta}{^i^j}\t {N}{^{\alpha} _i}D_{\beta}\t {N}{^{\nu} _j}
&=\t {N}{^{\alpha} _{\hat{2}}}D_{\beta} \t {N}{^{\nu} _{\hat{2}}}+\t {N}{^{\alpha} _{\hat{3}}}D_{\beta} \t {N}{^{\nu} _{\hat{3}}}\nonumber \\
&=\frac{1}{2}\left(m^\alpha +\c{m}^\alpha \right)D_\beta \left(m^\nu +\c{m}^\nu \right)
\nonumber \\
&\qquad
-\frac{1}{2}\left(m^\alpha -\c{m}^\alpha \right)
\nonumber \\
&\qquad \qquad \times
D_\beta \left(m^\nu -\c{m}^\nu \right)\nonumber \\
&= m^\alpha D_\beta \c{m}^\nu + \c{m}^\alpha D_\beta m^\nu \label{exp:delt_n_D_n}.\\
\t {\delta}{^i^j}\t {N}{^{\alpha} _i}D_{\alpha}\t {N}{^{\nu} _j}&= D_{\b{m}}\c{m}^\nu + D_{\c{\b{m}}}m^\nu \label{exp:delt_n_D_n_cont}.\\
\t {\eta}{^c^d}\left(D_\rho \t {E}{^\mu _c}\right)\left(D_\gamma \t {E}{^\alpha _d}\right)
&=-\left(D_\rho \t {E}{^\mu _{\hat{0}}}\right)\left(D_\gamma \t {E}{^\alpha _{\hat{0}}}\right)
\nonumber \\
&\qquad
+\left(D_\rho \t {E}{^\mu _{\hat{1}}}\right)\left(D_\gamma \t {E}{^\alpha _{\hat{1}}}\right)\nonumber \\
&=-\frac{1}{2}\left(D_\rho l^\mu + D_\rho n^\mu \right)
\nonumber \\
&\qquad \qquad \times
\left(D_\gamma l^\alpha + D_\gamma n^\alpha \right)
\nonumber \\
&\qquad
+\frac{1}{2}\left(D_\rho l^\mu - D_\rho n^\mu \right)
\nonumber \\
&\qquad \qquad \times
\left(D_\gamma l^\alpha - D_\gamma n^\alpha \right)\nonumber \\
&= -\left[\left(D_\rho l^\mu \right)\left(D_\gamma n^\alpha\right)
\right. \nonumber \\
&\qquad \qquad \left.
+\left(D_\rho n^\mu \right)\left(D_\gamma l^\alpha \right)\right]\label{exp:eta_D_E_D_E}.\end{aligned}$$ $$\begin{aligned}
\t {\eta}{^a^b}\t {E}{^\beta _b}D_\beta D_\gamma \t {E}{^\mu _a}
&= -\t {E}{^\beta _{\hat{0}}}D_\beta D_\gamma \t {E}{^\mu _{\hat{0}}}+\t {E}{^\beta _{\hat{1}}}D_\beta D_\gamma \t {E}{^\mu _{\hat{1}}}\nonumber \\
&=-\frac{1}{2}\left(l^\beta +n^\beta \right)D_\beta D_\gamma \left(l^\mu + n^\mu \right)
\nonumber \\
&\qquad
+\frac{1}{2}\left(l^\beta - n^\beta \right)D_\beta D_\gamma \left(l^\mu - n^\mu \right)\nonumber \\
&=-\frac{1}{2}\left[D_{\b{l}}D_\gamma \left(l^\mu + n^\mu \right)
\right. \nonumber \\
&\qquad \qquad \left.
+D_{\b{n}}D_\gamma \left(l^\mu + n^\mu \right)\right]
\nonumber \\
&\qquad
+ \frac{1}{2}\left[D_{\b{l}}D_\gamma \left(l^\mu - n^\mu \right)
\right. \nonumber \\
&\qquad \qquad \left.
- D_{\b{n}}D_\gamma \left(l^\mu - n^\mu \right)\right]\nonumber \\
&= -\left(D_{\b{l}}D_\gamma n^\mu + D_{\b{n}}D_\gamma l^\mu \right)\label{exp:eta_E_D_D_E}.\end{aligned}$$
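The completeness relations above are tetrad identities and can be spot-checked numerically. The following NumPy sketch, added for illustration with a flat-space null tetrad of our own choosing, verifies Eqs. (\[exp:eta\_E\_E\]) and (\[exp:delt\_n\_n\]) for that tetrad.

```python
import numpy as np

# A concrete null tetrad in Minkowski space (signature -+++):
# l = (e_t + e_x)/sqrt(2), n = (e_t - e_x)/sqrt(2), m = (e_y + i e_z)/sqrt(2).
s = 1 / np.sqrt(2)
l = s * np.array([1, 1, 0, 0], dtype=complex)
n = s * np.array([1, -1, 0, 0], dtype=complex)
m = s * np.array([0, 0, 1, 1j])
mb = m.conj()

# Dyad legs E_0 = (l + n)/sqrt(2) (timelike), E_1 = (l - n)/sqrt(2) (spacelike).
E0 = (l + n) * s
E1 = (l - n) * s

# Eq. (exp:eta_E_E): -E0 E0 + E1 E1 = -(l n + n l).
lhs = -np.outer(E0, E0) + np.outer(E1, E1)
rhs = -(np.outer(l, n) + np.outer(n, l))
assert np.allclose(lhs, rhs)

# N_2 = (m + mb)/sqrt(2), N_3 = -i (m - mb)/sqrt(2).
N2 = (m + mb) * s
N3 = -1j * (m - mb) * s

# Eq. (exp:delt_n_n): N2 N2 + N3 N3 = m mb + mb m.
lhs2 = np.outer(N2, N2) + np.outer(N3, N3)
rhs2 = np.outer(m, mb) + np.outer(mb, m)
assert np.allclose(lhs2, rhs2)

print("Completeness relations hold for this flat-space tetrad.")
```

The derivative identities (\[exp:eta\_E\_D\_E\])–(\[exp:eta\_E\_D\_D\_E\]) follow from the same algebra applied under the covariant derivative, so the check above is representative of the whole list.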
Derivation of $\tilde{\nabla} _{\mathbb{T}}\mathcal{J}$
-------------------------------------------------------
Consider the left-hand side of the Raychaudhuri equation (\[Raych\_simple\]), and the world sheet covariant derivative of $\t {J}{_a_i_j}$ defined in relation (\[eq:CurlyCovJ\]), i.e., $$\begin{aligned}
\label{eq:tilD_J_a_i_j}
\tilde{\nabla} _{\mathbb{T}}\mathcal{J}
&:=\t {\eta}{^a^b}\t {\delta}{^i^j}\t {\tilde{\nabla}}{_b}\t {J}{_a_i_j} \nonumber \\
&=\t {\eta}{^a^b}\t {\delta}{^i^j}\left(\underbrace{\t {\nabla}{_b}\t {J}{_a_i_j}}_\text{$\t {D}{_b}\t {J}{_a_i_j}-\t {\gamma}{_b_a^c}\t {J}{_c_i_j}$}-\t {w}{_b_i^k}\t {J}{_a_k_j}-\t {w}{_b_j^k}\t {J}{_a_i_k}\right).\end{aligned}$$ By using the definition of $\t {J}{_a_i_j}$, Eq. (\[eq:J\_a\^i\^j\]), the first term of the equation (\[eq:tilD\_J\_a\_i\_j\]) becomes $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}D{_b}\t {J}{_a_i_j}
&= \t {\eta}{^a^b}\t {\delta}{^i^j}D_b \left[\t {g}{_\mu _\nu}D_i\left(\t {E}{^\mu _a}\right)\t {N}{^\nu _j}\right]\nonumber \\
&= \t {g}{_\mu _\nu} \t {\eta}{^a^b}\t {\delta}{^i^j}\left(D_b \t {N} {^\gamma _i}\right)\left(D_\gamma \t {E}{^\mu _a}\right)\t {N}{^\nu _j}
\nonumber \\
&\qquad
+\t {g}{_\mu _\nu} \t {\eta}{^a^b}\t {\delta}{^i^j} \t {N}{^\gamma _i}\left(D_b D_\gamma \t {E}{^\mu _a}\right)\t {N}{^\nu _j}
\nonumber \\
&\qquad
+\t {g}{_\mu _\nu} \t {\eta}{^a ^b}\t {\delta}{^i ^j}\t {N}{^\gamma _i}\left(D_\gamma \t {E}{^\mu _a} \right) \t {E}{^\beta _b} \left(D_\beta \t {N}{^\nu _j}\right)
\nonumber \\
&= \t {g}{_\mu _\nu} \left(\t {\delta}{^i ^j}\t {N}{^\nu _j} D_\beta \t {N}{^\gamma _i}\right)\left(\t {\eta}{^a ^b} \t {E}{^\beta _b} D_\gamma \t {E}{^\mu _a}\right)
\nonumber \\
&\qquad
+\t {g}{_\mu _\nu}\left(\t {\delta}{^i ^j} \t {N}{^\gamma _i} \t {N}{^\nu _j}\right)\left(\t {\eta}{^a ^b}\t {E}{^\beta _b}D_\beta D_\gamma \t {E}{^\mu _a} \right)
\nonumber \\
&\qquad
+ \t {g}{_\mu _\nu} \left(\t {\delta}{^i ^j}\t {N}{^\gamma _i} D_\beta \t {N}{^\nu _j}\right)\left(\t {\eta}{^a ^b}\t {E}{^\beta _b}D_\gamma \t {E}{^\mu _a} \right),\nonumber \end{aligned}$$ and by making use of Eqs. (\[exp:delt\_n\_n\]), (\[exp:eta\_E\_D\_E\]), (\[exp:delt\_n\_D\_n\]) and (\[exp:eta\_E\_D\_D\_E\]) we obtain $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}D{_b}\t {J}{_a_i_j}
&= -\t {g}{_\mu _\nu}\left(m^\nu D_\beta \c{m}^\gamma + \c{m}^\nu D_\beta m^\gamma \right)
\nonumber \\
&\qquad \qquad \times
\left(l^\beta D_\gamma n^\mu +n^\beta D_{\gamma} l^\mu\right)
\nonumber \\
& \qquad
- \t {g}{_\mu _\nu}\left(m^\gamma \c{m}^\nu + m^\nu \c{m}^\gamma \right)
\nonumber \\
&\qquad \qquad \times
\left(D_{\b{l}}D_\gamma n^\mu + D_{\b{n}}D_\gamma l^\mu \right)
\nonumber \\
& \qquad
- \t {g}{_\mu _\nu}\left(m^\gamma D_\beta \c{m}^\nu + \c{m}^\gamma D_\beta m^\nu \right)
\nonumber \\
&\qquad \qquad \times
\left(l^\beta D_\gamma n^\mu +n^\beta D_{\gamma} l^\mu\right)\nonumber.\end{aligned}$$ Then $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}D{_b}\t {J}{_a_i_j}
&=-\t {g}{_\mu _\nu}\left[\c{m}^\nu \left(D_\beta m^\gamma \right)l^\beta\left(D_\gamma n^\mu \right)
\right. \nonumber \\
&\qquad \qquad \left.
+\c{m}^\nu m^\gamma D_{\b{l}}D_\gamma n^\mu \right]
\nonumber \\
& \qquad
- \t {g}{_\mu _\nu}\left[m^\nu \left(D_\beta \c{m}^\gamma \right) l^\beta \left(D_\gamma n^\mu \right)
\right. \nonumber \\
&\qquad \qquad \left.
+ m^\nu \c{m}^\gamma D_{\b{l}}D_\gamma n^\mu\right] \nonumber \\
& \qquad
-\t {g}{_\mu _\nu}\left[ \c{m}^\nu \left(D_\beta m^\gamma \right)n^\beta \left(D_\gamma l^\mu \right)
\right. \nonumber \\
&\qquad \qquad \left.
+ \c{m}^\nu m^\gamma D_{\b{n}}D_\gamma l^\mu \right]
\nonumber \\
& \qquad
- \t {g}{_\mu _\nu}\left[m^\nu \left( D_\beta \c{m}^\gamma \right) n^\beta \left(D_\gamma l^\mu \right)
\right. \nonumber \\
&\qquad \qquad \left.
+ m^\nu \c{m}^\gamma D_{\b{n}}D_\gamma l^\mu \right]
\nonumber \\
& \qquad
-\t {g}{_\mu _\nu} \left[ \left( D_{\b{l}}\c{m}^\nu \right)\left( D_{\b{m}}n^\mu \right)
\right. \nonumber \\
&\qquad \qquad \left.
+ \left( D_{\b{n}}\c{m}^\nu \right)\left(D_{\b{m}}l^\mu \right)\right] \nonumber \\
& \qquad
-\t {g}{_\mu _\nu} \left[\left(D_{\b{l}}{m}^\nu \right)\left(D_{\c{\b{m}}}n^\mu \right)
\right. \nonumber \\
&\qquad \qquad \left.
+\left(D_{\b{n}}m^\nu \right) \left(D_{\c{\b{m}}}l^\mu \right)\right]
\nonumber \\
&= -\left[\inner{\c{\b{m}}, D_{\b{l}}D_{\b{m}}\b{n}}+
\inner{\b{m}, D_{\b{l}}D_{\c{\b{m}}}\b{n}}\right]
\nonumber \\
&\qquad
-\left[\inner{\c{\b{m}}, D_{\b{n}}D_{\b{m}}\b{l}}+
\inner{\b{m}, D_{\b{n}}D_{\c{\b{m}}}\b{l}}\right]
\nonumber \\
& \qquad
-\left[\inner{D_{\b{l}}\c{\b{m}},D_{\b{m}}\b{n}} + \inner{D_{\b{n}}\c{\b{m}}, D_{\b{m}}\b{l}}\right]
\nonumber \\
&\qquad
-\left[\inner{D_{\b{l}}\b{m},D_{\c{\b{m}}}\b{n}} + \inner{D_{\b{n}}\b{m}, D_{\c{\b{m}}}\b{l}}\right].\nonumber \end{aligned}$$ Now we can use Eqs. (\[Dml\]), (\[Dln\]), (\[Dmn\]), (\[Dlm\]) and (\[Dnm\]) to obtain $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}D{_b}\t {J}{_a_i_j}
&= \left[D_{\b{n}}\left(\rho +\c{\rho }\right)-D_{\b{l}}\left(\mu +\c{\mu }\right)\right]
\nonumber \\
&\qquad
+\left[\left(\c{\alpha}+\beta \right)\left( \pi +\c{\tau} \right)+ \left(\alpha +\c{\beta} \right)\left( \c{\pi} +\tau \right)\right]
\nonumber \\
&\qquad
-\left[\left(\varepsilon - \c{\varepsilon}\right)\left(\mu -\c{\mu} \right)+\left(\gamma - \c{\gamma}\right)\left(\rho -\c{\rho}\right)\right]
\nonumber \\
&\qquad
-\left[\left(\c{\alpha}+\beta \right)\left( \pi +\c{\tau} \right)+ \left(\alpha +\c{\beta} \right)\left( \c{\pi} +\tau \right)\right]
\nonumber \\
&\qquad
+\left[\left(\varepsilon - \c{\varepsilon}\right)\left(\mu -\c{\mu} \right)+\left(\gamma - \c{\gamma}\right)\left(\rho -\c{\rho}\right)\right]\nonumber \\
&=\left[D_{\b{n}}\left(\rho +\c{\rho }\right)-D_{\b{l}}\left(\mu +\c{\mu }\right)\right].\label{DeltaJ1}\end{aligned}$$ In order to derive the second term of Eq. (\[eq:tilD\_J\_a\_i\_j\]), we will use the definitions in Eq. (\[eq:gamma\_a\_b\_c\]) and Eq. (\[eq:J\_a\^i\^j\]). Then we get $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {\gamma}{_b_a^c}\t {J}{_c_i_j}&= \t {\eta}{^a^b}\t {\eta}{^c^d}\t {\delta}{^i^j}\left(\t {g}{_\mu _\nu}\left[\t {D}{_b}\t {E}{^\mu _a}\right] \t {E}{^\nu _d}\right)
\nonumber \\
&\qquad \qquad \times
\left( \t {g}{_\alpha _\beta}\left[\t {D}{_i}\t {E}{^\alpha _c}\right] \t {N}{^\beta _j}\right)\nonumber \\
&=\t {g}{_\mu _\nu}\t {g}{_\alpha _\beta}\left( \t {N}{^\gamma _i} \t {N}{^\beta _j}\t {\delta}{^i^j}\right)
\nonumber \\
&\qquad \qquad \times
\left( \t {\eta}{^a^b} \t {E}{^\rho _b}D_\rho \t {E}{^\mu _a}\right)
\nonumber \\
&\qquad \qquad \times
\left(\t {\eta}{^c^d} \t {E}{^\nu _d} D_\gamma \t {E}{^\alpha _c}\right).\nonumber\end{aligned}$$ Then by using relations (\[exp:delt\_n\_n\]), (\[exp:eta\_E\_D\_E\]) and (\[exp:eta\_E\_D\_E\_cont\]) we obtain $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {\gamma}{_b_a^c}\t {J}{_c_i_j}
&=
\t {g}{_\mu _\nu}\t {g}{_\alpha _\beta}\left(m^\gamma \c{m}^\beta + m^\beta \c{m}^\gamma \right)
\nonumber \\
&\qquad \qquad \times
\left( D_\b{l} n^\mu +D_\b{n} l^\mu\right)
\nonumber \\
&\qquad \qquad \times
\left(l^\nu D_\gamma n^\alpha +n^\nu D_{\gamma} l^\alpha \right)\nonumber \\
&= \t {g}{_\mu _\nu}\t {g}{_\alpha _\beta}\left(D_{\b{l}}n^\mu + D_{\b{n}} l^\mu \right)
\nonumber \\
&\qquad \qquad \times
\left(\c{m}^\beta l^\nu D_{\b{m}}n^\alpha + \c{m}^\beta n^\nu D_{\b{m}}l^\alpha
\right. \nonumber \\
&\qquad \qquad \left.
+ m^\beta l^\nu D_{\c{\b{m}}}n^\alpha + m^\beta n^\nu D_{\c{\b{m}}}l^\alpha \right) \nonumber.\end{aligned}$$ Hence, $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {\gamma}{_b_a^c}\t {J}{_c_i_j}
&= \inner{D_{\b{m}}\b{n},\c{\b{m}}}\left( \inner{D_{\b{l}}\b{n},\b{l}}
+\inner{D_{\b{n}}\b{l},\b{l}}\right)
\nonumber \\
&\qquad
+\inner{D_{\b{m}}\b{l},\c{\b{m}}}\left( \inner{D_{\b{l}}\b{n},\b{n}}+\inner{D_{\b{n}}\b{l},\b{n}}\right)
\nonumber \\
&\qquad
+ \inner{D_{\c{\b{m}}}\b{n},\b{m}}\left( \inner{D_{\b{l}}\b{n},\b{l}}+\inner{D_{\b{n}}\b{l},\b{l}}\right)
\nonumber \\
&\qquad
+\inner{D_{\c{\b{m}}}\b{l},\b{m}}\left( \inner{D_{\b{l}}\b{n},\b{n}}+\inner{D_{\b{n}}\b{l},\b{n}}\right),\nonumber \end{aligned}$$ and by using Eqs. (\[Dnl\]), (\[Dml\]), (\[Dln\]) and (\[Dmn\]) we have $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {\gamma}{_b_a^c}\t {J}{_c_i_j}
&= \left(\varepsilon +\c{\varepsilon}\right)\left(\mu +\c{\mu}\right)+\left(\gamma +\c{\gamma}\right)\left(\rho +\c{\rho}\right).\label{DeltaJ2}\end{aligned}$$ In order to derive the third term of Eq. (\[eq:tilD\_J\_a\_i\_j\]), we use the definitions in Eq. (\[eq:w\_a\^i\^j\]) and Eq. (\[eq:J\_a\^i\^j\]) and write $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {w}{_b_i^k}\t {J}{_a_k_j} &= \t {\eta}{^a^b}\t {\delta}{^i^j}\t {\delta}{^k^l}\left[\t {g}{_\mu _\nu}\left(\t {D}{_b}\t {N}{^\mu _i}\right) \t {N}{^\nu _k}\right]
\nonumber \\
&\qquad \qquad \times
\left[ \t {g}{_\alpha _\beta}\left(\t {D}{_l}\t {E}{^\alpha _a}\right) \t {N}{^\beta _j}\right]\nonumber\\
&=\t {g}{_\mu _\nu}\t {g}{_\alpha _\beta}\left(\t {\delta}{^k ^l}\t {N}{^\gamma _l}\t {N}{^\nu _k}\right)\left(\t {\delta}{^i ^j}\t {N}{^\beta _j}D_\rho \t {N}{^\mu _i}\right)
\nonumber \\
&\qquad \qquad \times
\left( \t {\eta}{^a ^b}\t {E}{^\rho _b}D_\gamma \t {E}{^\alpha _a}\right).\nonumber\end{aligned}$$ Now using Eqs. (\[exp:delt\_n\_n\]), (\[exp:eta\_E\_D\_E\]) and (\[exp:delt\_n\_D\_n\]) results in $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {w}{_b_j^k}\t {J}{_a_k_i}
&=-\t {g}{_\mu _\nu}\t {g}{_\alpha _\beta}\left(m^\gamma \c{m}^\nu + m^\nu \c{m}^\gamma \right)
\nonumber \\
&\qquad \qquad \times
\left(m^\beta D_\rho \c{m}^\mu + \c{m}^\beta D_\rho m^\mu \right)
\nonumber \\
&\qquad \qquad \times
\left(l^\rho D_\gamma n^\alpha +n^\rho D_{\gamma} l^\alpha \right)\nonumber\\
&=-\bigl[\inner{D_{\b{m}}\b{n},\b{m}}\inner{D_{\b{l}}\c{\b{m}},\c{\b{m}}}
\nonumber \\
&\qquad
+\inner{D_{\b{l}}\b{m},\c{\b{m}}}\inner{D_{\b{m}}\b{n},\c{\b{m}}}
\nonumber \\
&\qquad
+\inner{D_{\b{n}}\c{\b{m}},\c{\b{m}}}\inner{D_{\b{m}}\b{l},\b{m}}
\nonumber \\
&\qquad
+\inner{D_{\b{n}}\b{m},\c{\b{m}}}\inner{D_{\b{m}}\b{l},\c{\b{m}}}
\nonumber \\
&\qquad
+ \inner{D_{\b{l}}\b{m},\b{m}}\inner{D_{\c{\b{m}}}\b{n},\b{m}}
\nonumber \\
&\qquad
+ \inner{D_{\b{l}}\b{m},\b{m}}\inner{D_{\c{\b{m}}}\b{n},\c{\b{m}}}
\nonumber \\
&\qquad
+ \inner{D_{\b{n}}\c{\b{m}},\b{m}}\inner{D_{\c{\b{m}}}\b{l},\b{m}}
\nonumber \\
&\qquad
+ \inner{D_{\b{n}}\b{m},\b{m}}\inner{D_{\c{\b{m}}}\b{l},\c{\b{m}}}\bigr],\nonumber \end{aligned}$$ and by Eqs. (\[Dml\]), (\[Dmn\]), (\[Dlm\]) and (\[Dnm\]) we obtain $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {w}{_b_j^k}\t {J}{_a_k_i}
&=-\left[\left(\varepsilon -\c{\varepsilon}\right)\left(\mu -\c{\mu}\right)
+\left(\gamma -\c{\gamma}\right)\left(\rho -\c{\rho}\right)\right].\label{DeltaJ3}\end{aligned}$$ Similarly, the fourth term in Eq. (\[eq:tilD\_J\_a\_i\_j\]) follows from $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {w}{_b_j^k}\t {J}{_a_i_k} &= \t {\eta}{^a^b}\t {\delta}{^i^j}\t {\delta}{^k^l}\left[\t {g}{_\mu _\nu}\left(\t {D}{_b}\t {N}{^\mu _j}\right) \t {N}{^\nu _k}\right]
\nonumber \\
&\qquad \times
\left[ \t {g}{_\alpha _\beta}\left(\t {D}{_i}\t {E}{^\alpha _a}\right) \t {N}{^\beta _l}\right]\nonumber\\
&=\t {g}{_\mu _\nu}\t {g}{_\alpha _\beta}\left(\t {\delta}{^k ^l}\t {N}{^\nu _k}\t {N}{^\beta _l}\right)
\nonumber \\
&\qquad \times
\left(\t {\delta}{^i ^j}\t {N}{^\gamma _i}D_\rho \t {N}{^\mu _j}\right)
\nonumber \\
&\qquad \times
\left( \t {\eta}{^a ^b}\t {E}{^\rho _b}D_\gamma \t {E}{^\alpha _a}\right).\nonumber\end{aligned}$$ Then by using relations (\[exp:delt\_n\_n\]), (\[exp:eta\_E\_D\_E\]) and (\[exp:delt\_n\_D\_n\]), $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {w}{_b_j^k}\t {J}{_a_i_k}
&=-\t {g}{_\mu _\nu}\t {g}{_\alpha _\beta}\left(m^\nu \c{m}^\beta + m^\beta \c{m}^\nu \right)
\nonumber \\
&\qquad \times
\left(m^\gamma D_\rho \c{m}^\mu + \c{m}^\gamma D_\rho m^\mu \right)
\nonumber \\
&\qquad \times
\left(l^\rho D_\gamma n^\alpha +n^\rho D_{\gamma} l^\alpha \right)\nonumber\\
&=-\bigl[\inner{D_{\b{m}}\b{n},\b{m}}\inner{D_{\b{l}}\c{\b{m}},\c{\b{m}}}
\nonumber \\
&\qquad
+\inner{D_{\b{l}}\c{\b{m}},\b{m}}\inner{D_{\b{m}}\b{n},\c{\b{m}}}
\nonumber \\
&\qquad
+\inner{D_{\b{n}}\c{\b{m}},\b{m}}\inner{D_{\b{m}}\b{l},\c{\b{m}}}
\nonumber \\
&\qquad
+\inner{D_{\b{n}}\b{m},\b{m}}\inner{D_{\c{\b{m}}}\b{l},\c{\b{m}}}
\nonumber \\
&\qquad
+ \inner{D_{\b{l}}\b{m},\c{\b{m}}}\inner{D_{\c{\b{m}}}\b{n},\b{m}}
\nonumber \\
&\qquad
+ \inner{D_{\b{l}}\b{m},\b{m}}\inner{D_{\c{\b{m}}}\b{n},\c{\b{m}}}
\nonumber \\
&\qquad
+ \inner{D_{\b{n}}\c{\b{m}},\c{\b{m}}}\inner{D_{\b{m}}\b{l},\b{m}}
\nonumber \\
&\qquad
+ \inner{D_{\b{n}}\b{m},\c{\b{m}}}\inner{D_{\c{\b{m}}}\b{l},\b{m}}\bigr],\nonumber \end{aligned}$$ and by further using Eqs. (\[Dml\]), (\[Dmn\]), (\[Dlm\]) and (\[Dnm\]) we obtain the same result as in (\[DeltaJ3\]), i.e., $$\begin{aligned}
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {w}{_b_j^k}\t {J}{_a_i_k}
&=-\left[\left(\varepsilon -\c{\varepsilon}\right)\left(\mu -\c{\mu}\right)+\left(\gamma -\c{\gamma}\right)\left(\rho -\c{\rho}\right)\right].\label{DeltaJ4}\end{aligned}$$ Hence, substitution of the relations (\[DeltaJ1\]), (\[DeltaJ2\]), (\[DeltaJ3\]) and (\[DeltaJ4\]) into Eq. (\[eq:tilD\_J\_a\_i\_j\]) results in $$\begin{aligned}
\tilde{\nabla} _{\mathbb{T}}\mathcal{J}&= \left[D_{\b{n}}\left(\rho +\c{\rho }\right)-D_{\b{l}}\left(\mu +\c{\mu }\right)\right]
\nonumber \\
&\qquad
-\left[\left(\varepsilon +\c{\varepsilon}\right)\left(\mu +\c{\mu}\right)+\left(\gamma +\c{\gamma}\right)\left(\rho +\c{\rho}\right)\right]
\nonumber \\
&\qquad
+2\left[\left(\varepsilon - \c{\varepsilon}\right)\left(\mu -\c{\mu} \right)+\left(\gamma - \c{\gamma}\right)\left(\rho -\c{\rho}\right)\right].\end{aligned}$$
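The assembly of $\tilde{\nabla} _{\mathbb{T}}\mathcal{J}$ from the partial results (\[DeltaJ1\])–(\[DeltaJ4\]) involves only elementary cancellations, which can be checked symbolically. The sketch below (in SymPy) keeps the directional-derivative combinations as opaque symbols and assumes the sign pattern $\tilde{\nabla} _{\mathbb{T}}\mathcal{J}=(\text{\[DeltaJ1\]})-(\text{\[DeltaJ2\]})-(\text{\[DeltaJ3\]})-(\text{\[DeltaJ4\]})$ read off from Eq. (\[eq:tilD\_J\_a\_i\_j\]); all variable names are our own bookkeeping.

```python
import sympy as sp

# NP spin coefficients as generic complex symbols
eps, gam, mu, rho = sp.symbols('epsilon gamma mu rho', complex=True)
# Opaque placeholder for D_n(rho + rho_bar) - D_l(mu + mu_bar)
Dn_Dl_term = sp.symbols('Dn_Dl_term')
c = sp.conjugate

DJ1 = Dn_Dl_term                                                        # Eq. (DeltaJ1)
DJ2 = (eps + c(eps))*(mu + c(mu)) + (gam + c(gam))*(rho + c(rho))       # Eq. (DeltaJ2)
DJ3 = -((eps - c(eps))*(mu - c(mu)) + (gam - c(gam))*(rho - c(rho)))    # Eq. (DeltaJ3)
DJ4 = DJ3                                                               # Eq. (DeltaJ4)

assembled = sp.expand(DJ1 - DJ2 - DJ3 - DJ4)
quoted = sp.expand(DJ1 - DJ2
                   + 2*((eps - c(eps))*(mu - c(mu))
                        + (gam - c(gam))*(rho - c(rho))))
check_J = sp.simplify(assembled - quoted)  # expect 0
```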
Derivation of $\tilde{\nabla} _{\mathbb{S}}\mathcal{K}$
-------------------------------------------------------
Consider the first term on the right-hand side of the Raychaudhuri equation (\[Raych\_simple\]), and the covariant derivative of $\t {K}{_a_b_j}$ on the spacelike 2-surface defined in relation (\[eq:CurlyCovK\]), i.e., $$\begin{aligned}
\label{eq:tilD_K_a_b_j}
\tilde{\nabla} _{\mathbb{S}}\mathcal{K}
&:=\t {\eta}{^a^b}\t {\delta}{^i^j}\t {\tilde{\nabla}}{_i}\t {K}{_a_b_j}
\nonumber \\
&=\t {\eta}{^a^b}\t {\delta}{^i^j} \left(\underbrace{\t {\nabla}{_i}\t {K}{_a_b_j}}_\text{$\t {D}{_i}\t {K}{_a_b_j}-\t {\gamma}{_i_j_k}\t {K}{_a_b^k}$}-\t {S}{_a_c_i}\t {K}{_b^c_j}-\t {S}{_b_c_i}\t {K}{_a^c_j}\right).\end{aligned}$$ Then, by making use of the definition (\[eq:K\_a\_b\^i\]), the first term of Eq. (\[eq:tilD\_K\_a\_b\_j\]) is as follows $$\begin{aligned}
D_i \t {K}{_a_b_j}\t {\eta}{^a ^b}\t {\delta} {^i ^j}&=\t {\eta}{^a ^b}\t {\delta} {^i ^j}D_i\left[-\t {g}{_\mu _\nu}\left(D_a \t {E}{^\mu _b}\right)\t {N}{^\nu _j}\right]\nonumber \\
&= -\t {\eta}{^a ^b}\t {\delta} {^i ^j}\left[\t {N}{^\nu _j} \t {N}{^\gamma _i}D_\gamma \left(\t {g}{_\mu _\nu}\left(D_a \t {E}{^\mu _b}\right)\right)\right]
\nonumber \\
&\qquad
-\t {\eta}{^a ^b}\t {\delta} {^i ^j}\left[ \t {g}{_\mu _\nu}\left(D_a \t {E}{^\mu _b}\right)\t {N}{^\gamma _i}D_\gamma \t {N}{^\nu _j} \right]
\nonumber \\
&=-\t {g}{_\mu _\nu}\left[\left(\t {\delta}{^i ^j}\t {N}{^\nu _j}\t {N}{^\gamma _i} \right)\t {\eta}{^a^b}D_\gamma \left( \t {E}{^\beta _a}D_\beta \t {E}{^\mu _b}\right)\right]
\nonumber \\
&\qquad
-\t {g}{_\mu _\nu}\left[\left(\t {\eta}{^a^b}\t {E}{^\beta _a}D_\beta \t {E}{^\mu _b}\right)
\right. \nonumber \\
&\qquad \qquad \left.
\times \left(\t {\delta}{^i^j}\t {N}{^{\gamma} _i}D_{\gamma}\t {N}{^{\nu} _j}\right)\right].\nonumber\end{aligned}$$ By using Eqs. (\[exp:delt\_n\_n\]), (\[exp:eta\_E\_D\_E\_cont\]) and (\[exp:delt\_n\_D\_n\_cont\]) we write $$\begin{aligned}
D_i \t {K}{_a_b_j}\t {\eta}{^a ^b}\t {\delta} {^i ^j}
&= \t {g}{_\mu _\nu} \left[\left(m^\gamma \c{m}^\nu + m^\nu \c{m}^\gamma \right)
\right. \nonumber \\
&\qquad \qquad \left. \times
D_\gamma \left( D_\b{l} n^\mu +D_\b{n} l^\mu \right)\right]
\nonumber \\
&\qquad
+ \t {g}{_\mu _\nu} \left[\left( D_\b{l} n^\mu +D_\b{n} l^\mu\right)
\right. \nonumber \\
&\qquad \qquad \qquad \left. \times
\left(D_{\b{m}}\c{m}^\nu + D_{\c{\b{m}}}m^\nu \right)\right]\nonumber \\
&= \inner{\c{\b{m}}, D_{\b{m}}D_{\b{l}}\b{n}}+ \inner{\c{\b{m}}, D_{\b{m}}D_{\b{n}}\b{l}}
\nonumber \\
&\qquad
+\inner{\b{m}, D_{\c{\b{m}}}D_{\b{l}}\b{n}}+
\inner{\b{m}, D_{\c{\b{m}}}D_{\b{n}}\b{l}}
\nonumber \\
&\qquad
+\inner{D_{\b{l}}\b{n},D_{\b{m}}\c{\b{m}}} + \inner{D_{\b{l}}\b{n}, D_{\c{\b{m}}}\b{m}}
\nonumber \\
&\qquad
+\inner{D_{\b{n}}\b{l},D_{\b{m}}\c{\b{m}}} + \inner{D_{\b{n}}\b{l}, D_{\c{\b{m}}}\b{m}}, \nonumber\end{aligned}$$ and by further using Eqs. (\[Dnl\]), (\[Dln\]) and (\[Dmm\_\]) we obtain $$\begin{aligned}
D_i \t {K}{_a_b_j}\t {\eta}{^a ^b}\t {\delta} {^i ^j}
&= D_{\b{m}}\left(\pi -\c{\tau} \right)+D_{\c{\b{m}}}\left(\c{\pi} -\tau \right)
\nonumber \\
&\qquad
-\left[\left(\alpha - \c{\beta} \right)\left( \c{\pi} - \tau \right)+\left(\c{\alpha}-\beta \right)\left( \pi - \c{\tau} \right)\right]
\nonumber \\
&\qquad
-\left[\left(\varepsilon +\c{\varepsilon}\right)\left(\mu +\c{\mu}\right)+\left(\gamma +\c{\gamma}\right)\left(\rho +\c{\rho}\right)\right]
\nonumber \\
&\qquad
+\left[\left(\varepsilon +\c{\varepsilon}\right)\left(\mu +\c{\mu}\right)+\left(\gamma +\c{\gamma}\right)\left(\rho +\c{\rho}\right)\right]
\nonumber \\
&\qquad
+\left[\left(\alpha - \c{\beta} \right)\left( \c{\pi} - \tau \right)+\left(\c{\alpha}-\beta \right)\left( \pi - \c{\tau} \right)\right]\nonumber \\
&= D_{\b{m}}\left(\pi -\c{\tau} \right)+D_{\c{\b{m}}}\left(\c{\pi} -\tau \right).\label{DeltaK1}\end{aligned}$$ The second term in Eq. (\[eq:tilD\_K\_a\_b\_j\]) is obtained by using the definitions (\[eq:K\_a\_b\^i\]) and (\[gamma\_i\_j\_k\]). The derivation follows as $$\begin{aligned}
\t {\gamma}{_i_j_k}\t {K}{_a_b_l}\t {\delta}{^i ^j}\t {\delta}{^k ^l}\t {\eta}{^a ^b}
&= \left[\t {g}{_\alpha _\beta}\left(\t {D}{_i}\t {N}{^\alpha _j}\right) \t {N}{^\beta _k}\right]
\nonumber \\
&\qquad \times
\left[-\t {g}{_\mu _\nu}\left(\t {D}{_a}\t {E}{^\mu _b}\right) \t {N}{^\nu _l}\right]\t {\delta}{^i ^j}\t {\delta}{^k ^l}\t {\eta}{^a ^b}\nonumber \\
&= -\t {g}{_\alpha _\beta}\t {g}{_\mu _\nu}\left(\t {\delta}{^k ^l}\t {N}{^\beta _k}\t {N}{^\nu _l}\right)
\nonumber \\
&\qquad \times
\left(\t {\delta}{^i ^j}\t {N}{^{\rho} _i}D_{\rho}\t {N}{^{\alpha} _j}\right)\left(\t {\eta}{^a ^b}\t {E}{^\gamma _a}D_\gamma \t {E}{^\mu _b} \right).\nonumber\end{aligned}$$ Now let us use Eqs. (\[exp:delt\_n\_n\]), (\[exp:eta\_E\_D\_E\_cont\]) and (\[exp:delt\_n\_D\_n\]) to write $$\begin{aligned}
\t {\gamma}{_i_j_k}\t {K}{_a_b_l}\t {\delta}{^i ^j}\t {\delta}{^k ^l}\t {\eta}{^a ^b}
&= \t {g}{_\alpha _\beta}\t {g}{_\mu _\nu}
\left(m^\beta \c{m}^\nu + m^\nu \c{m}^\beta \right)
\nonumber \\
&\qquad \times
\left(m^\rho D_\rho \c{m}^\alpha + \c{m}^\rho D_\rho m^\alpha \right)
\nonumber \\
&\qquad \times
\left( D_\b{l} n^\mu +D_\b{n} l^\mu\right) \nonumber \\
&= \inner{D_{\b{m}}\c{\b{m}},\b{m}} \inner{D_{\b{l}}\b{n},\c{\b{m}}}
\nonumber \\
&\qquad
+\inner{D_{\c{\b{m}}}\b{m},\b{m}}\inner{D_{\b{l}}\b{n},\c{\b{m}}}
\nonumber \\
&\qquad
+\inner{D_{\b{m}}\c{\b{m}},\b{m}} \inner{D_{\b{n}}\b{l},\c{\b{m}}}
\nonumber \\
&\qquad
+\inner{D_{\c{\b{m}}}\b{m},\b{m}}\inner{D_{\b{n}}\b{l},\c{\b{m}}}
\nonumber \\
&\qquad
+\inner{D_{\b{m}}\c{\b{m}},\c{\b{m}}} \inner{D_{\b{l}}\b{n},\b{m}}
\nonumber \\
&\qquad
+\inner{D_{\c{\b{m}}}\b{m},\c{\b{m}}}\inner{D_{\b{l}}\b{n},\b{m}}
\nonumber \\
&\qquad
+\inner{D_{\b{m}}\c{\b{m}},\c{\b{m}}} \inner{D_{\b{n}}\b{l},\b{m}}
\nonumber \\
&\qquad
+\inner{D_{\c{\b{m}}}\b{m},\c{\b{m}}}\inner{D_{\b{n}}\b{l},\b{m}}.\nonumber\end{aligned}$$ By using Eqs. (\[Dnl\]), (\[Dln\]) and (\[Dmm\_\]) we obtain $$\begin{aligned}
\t {\gamma}{_i_j_k}\t {K}{_a_b_l}\t {\delta}{^i ^j}\t {\delta}{^k ^l}\t {\eta}{^a ^b}
&=\left(\c{\alpha}-\beta \right)\left( \pi -\c{\tau} \right)+ \left(\alpha -\c{\beta} \right)\left( \c{\pi} -\tau \right).\label{DeltaK2}\end{aligned}$$ Finally, we derive the third term that appears in Eq. (\[eq:tilD\_K\_a\_b\_j\]). Note that the third term is equal to the fourth term since our $\t {\eta}{_a_b}$ is diagonal. Here we make use of the definitions (\[eq:S\_a\_b\^i\]) and (\[eq:K\_a\_b\^i\]) and get $$\begin{aligned}
\t {S}{_a_c_i}\t {K}{_b_d_j}\t {\delta}{^i ^j}\t {\eta}{^a^b}\t {\eta}{^c^d}
&=\left[ \t {g}{_\mu _\nu}\left(\t {D}{_i}\t {E}{^\mu _a}\right) \t {E}{^\nu _c}\right]
\nonumber \\
&\qquad \times
\left[-\t {g}{_\alpha _\beta}\left(\t {D}{_b}\t {E}{^\alpha _d}\right) \t {N}{^\beta _j}\right]\t {\delta}{^i ^j}\t {\eta}{^a^b}\t {\eta}{^c^d}\nonumber \\
&=-\t {g}{_\mu _\nu} \t {g}{_\alpha _\beta}\left(\t {\delta}{^i ^j}\t {N}{^\gamma _i}\t {N}{^\beta _j}\right)
\nonumber \\
&\qquad \times
\left(\t {\eta}{^a ^b}\t {E}{^\rho _b}D_\gamma \t {E}{^\mu _a} \right)
\nonumber \\
&\qquad \times
\left(\t {\eta}{^c ^d}\t {E}{^\nu _c}D_\rho \t {E}{^\alpha _d} \right).\nonumber\end{aligned}$$ Also by using Eqs. (\[exp:delt\_n\_n\]) and (\[exp:eta\_E\_D\_E\]) we obtain $$\begin{aligned}
\t {S}{_a_c_i}\t {K}{_b_d_j}\t {\delta}{^i ^j}\t {\eta}{^a^b}\t {\eta}{^c^d}
&=-\t {g}{_\mu _\nu} \t {g}{_\alpha _\beta} \left(m^\gamma \c{m}^\beta + m^\beta \c{m}^\gamma \right)
\nonumber \\
&\qquad \times
\left(l^\rho D_\gamma n^\mu +n^\rho D_{\gamma} l^\mu \right)
\nonumber \\
&\qquad \times
\left(l^\nu D_\rho n^\alpha +n^\nu D_{\rho} l^\alpha \right)\nonumber \\
&=-\left[\inner{D_{\b{m}}\b{n},\b{l}}\inner{D_{\b{l}}\b{n},\c{\b{m}}}
\right. \nonumber \\
&\qquad \left.
+\inner{D_{\b{m}}\b{n},\b{n}}\inner{D_{\b{l}}\b{l},\c{\b{m}}}\right]\nonumber \\
&\qquad -\left[\inner{D_{\b{m}}\b{l},\b{l}}\inner{D_{\b{n}}\b{n},\c{\b{m}}}
\right. \nonumber \\
&\qquad \left.
+\inner{D_{\b{m}}\b{l},\b{n}}\inner{D_{\b{n}}\b{l},\c{\b{m}}}\right]\nonumber \\
&\qquad -\left[\inner{D_{\c{\b{m}}}\b{n},\b{l}}\inner{D_{\b{l}}\b{n},\b{m}}
\right. \nonumber \\
&\qquad \left.
+\inner{D_{\c{\b{m}}}\b{n},\b{n}}\inner{D_{\b{l}}\b{l},\b{m}}\right]\nonumber \\
&\qquad -\left[\inner{D_{\c{\b{m}}}\b{l},\b{n}}\inner{D_{\b{n}}\b{l},\b{m}}
\right. \nonumber \\
&\qquad \left.
+\inner{D_{\c{\b{m}}}\b{l},\b{l}}\inner{D_{\b{n}}\b{n},\b{m}}\right].\nonumber \end{aligned}$$ Then by further using Eqs. (\[Dll\]), (\[Dnl\]), (\[Dml\]), (\[Dln\]) and (\[Dnn\]) we write $$\begin{aligned}
\t {S}{_a_c_i}\t {K}{_b_d_j}\t {\delta}{^i ^j}\t {\eta}{^a^b}\t {\eta}{^c^d}
&=
-\left[ \left(\c{\alpha}+\beta \right)\left( \pi + \c{\tau} \right)
\right. \nonumber \\
&\qquad \qquad \left.
+\left(\alpha + \c{\beta} \right)\left( \c{\pi} + \tau \right) \right].
\label{DeltaK3-4}\end{aligned}$$ Therefore substitution of relations (\[DeltaK1\]), (\[DeltaK2\]) and (\[DeltaK3-4\]) into Eq. (\[eq:tilD\_K\_a\_b\_j\]) results in $$\begin{aligned}
\tilde{\nabla} _{\mathbb{S}}\mathcal{K}
&=D_{\b{m}}\left(\pi -\c{\tau} \right)+D_{\c{\b{m}}}\left(\c{\pi} -\tau \right)
\nonumber \\
&\qquad
-\left[\left(\c{\alpha}-\beta \right)\left( \pi -\c{\tau} \right)
+ \left(\alpha -\c{\beta} \right)\left( \c{\pi} -\tau \right)\right]
\nonumber \\
&\qquad
+ 2\left[ \left(\c{\alpha}+\beta \right)\left( \pi + \c{\tau} \right)
+\left(\alpha + \c{\beta} \right)\left( \c{\pi} + \tau \right)\right].\end{aligned}$$
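As with $\tilde{\nabla} _{\mathbb{T}}\mathcal{J}$, the final assembly can be verified symbolically. The SymPy sketch below assumes the sign pattern $\tilde{\nabla} _{\mathbb{S}}\mathcal{K}=(\text{\[DeltaK1\]})-(\text{\[DeltaK2\]})-2\,(\text{\[DeltaK3-4\]})$ read off from Eq. (\[eq:tilD\_K\_a\_b\_j\]) (the third and fourth terms being equal), with the derivative combination kept opaque; names are our own.

```python
import sympy as sp

alp, bet, p, t = sp.symbols('alpha beta pi tau', complex=True)
# Opaque placeholder for D_m(pi - tau_bar) + D_mbar(pi_bar - tau)
Dm_term = sp.symbols('Dm_term')
c = sp.conjugate

DK1 = Dm_term                                                          # Eq. (DeltaK1)
DK2 = (c(alp) - bet)*(p - c(t)) + (alp - c(bet))*(c(p) - t)            # Eq. (DeltaK2)
DK34 = -((c(alp) + bet)*(p + c(t)) + (alp + c(bet))*(c(p) + t))        # Eq. (DeltaK3-4)

assembled = sp.expand(DK1 - DK2 - 2*DK34)
quoted = sp.expand(DK1 - DK2
                   + 2*((c(alp) + bet)*(p + c(t))
                        + (alp + c(bet))*(c(p) + t)))
check_K = sp.simplify(assembled - quoted)  # expect 0
```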
Derivation of $\mathcal{J}^2$
-----------------------------
In order to derive the second term that appears on the right-hand side of the Raychaudhuri equation (\[Raych\_simple\]), we start with the definition (\[eq:J\_a\^i\^j\]) and write $$\begin{aligned}
\mathcal{J}^2
&:=\t {J}{_b_i_k}\t {J}{_a_l_j}{\t {\eta}{^a^b}}\t {\delta}{^i^j}\t {\delta}{^l^k}
\nonumber \\
&=\left[\t {g}{_\mu _\nu}\left(\t {D}{_i}\t {E}{^\mu _b}\right) \t {N}{^\nu _k}\right] \left[\t {g}{_\alpha _\beta}\left(\t {D}{_l}\t {E}{^\alpha _a}\right) \t {N}{^\beta _j}\right]
\nonumber \\
&\qquad \times
\t {\eta}{^a^b}\t {\delta}{^i^j}\t {\delta}{^ l ^k}\nonumber \\
&= \t {g}{_\mu _\nu}\t {g}{_\alpha _\beta} \left(\t {\delta}{^ i ^j} \t {N} {^\rho _i}\t {N}{^\beta _j}\right) \left(\t {\delta}{^ k ^l} \t {N} {^\gamma _l}\t {N}{^\nu _k}\right)
\nonumber \\
&\qquad \times
\left[\t {\eta}{^a^b}\left(D_\gamma \t {E}{^\alpha _a}\right)\left(D_\rho \t {E}{^\mu _b}\right)\right], \nonumber\end{aligned}$$ then by Eqs. (\[exp:delt\_n\_n\]) and (\[exp:eta\_D\_E\_D\_E\]), $$\begin{aligned}
\mathcal{J}^2
&=-\t {g}{_\mu _\nu}\t {g}{_\alpha _\beta} \left(m^\rho \c{m}^\beta + m^\beta \c{m}^\rho \right)\left(m^\gamma \c{m}^\nu + m^\nu \c{m}^\gamma \right)
\nonumber \\
&\qquad
\times \left[\left(D_\gamma l^\alpha \right)\left(D_\rho n^\mu \right) +\left(D_\gamma n^\alpha \right)\left(D_\rho l^\mu \right)\right]\nonumber \\
&= -\left[\inner{D_{\b{m}}\b{n},\c{\b{m}}} \inner{D_{\b{m}}\b{l},\c{\b{m}}} + \inner{D_{\b{m}}\b{n},\c{\b{m}}} \inner{D_{\b{m}}\b{l},\c{\b{m}}} \right]
\nonumber \\
&\qquad
-\left[\inner{D_{\b{m}}\b{n},\b{m}} \inner{D_{\c{\b{m}}}\b{l},\c{\b{m}}} + \inner{D_{\b{m}}\b{l},\b{m}} \inner{D_{\c{\b{m}}}\b{n},\c{\b{m}}} \right]
\nonumber \\
&\qquad
-\left[\inner{D_{\c{\b{m}}}\b{n},\c{\b{m}}} \inner{D_{\b{m}}\b{l},\b{m}} + \inner{D_{\b{m}}\b{n},\b{m}} \inner{D_{\c{\b{m}}}\b{l},\c{\b{m}}} \right]
\nonumber \\
&\qquad
-\left[\inner{D_{\c{\b{m}}}\b{l},\b{m}} \inner{D_{\c{\b{m}}}\b{n},\b{m}} + \inner{D_{\c{\b{m}}}\b{l},\b{m}} \inner{D_{\c{\b{m}}}\b{n},\b{m}} \right].\nonumber\end{aligned}$$ Finally, by using Eqs. (\[Dml\]) and (\[Dmn\]) we obtain $$\begin{aligned}
\mathcal{J}^2
&= 2\left(\mu \c{\rho} + \c{\mu} \rho + \sigma \lambda + \c{\sigma} \c{\lambda} \right).\end{aligned}$$
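The tetrad contractions used throughout, of the type (\[exp:eta\_E\_E\]) and (\[exp:delt\_n\_n\]), rest on the normalization and completeness of the null tetrad. As a numerical sanity check, the sketch below builds a standard Minkowski null tetrad in signature $(-,+,+,+)$; this particular tetrad and signature are illustrative assumptions and may differ by overall signs from the conventions fixed earlier in the text.

```python
import numpy as np

s = 1/np.sqrt(2)
l = np.array([s, 0, 0,  s], dtype=complex)   # real null vector
n = np.array([s, 0, 0, -s], dtype=complex)   # real null vector
m = np.array([0, s, 1j*s, 0])                # complex null vector
g = np.diag([-1.0, 1.0, 1.0, 1.0]).astype(complex)

def dot(u, v):
    """Metric inner product <u, v> = g_{mu nu} u^mu v^nu."""
    return u @ g @ v

# Normalizations: <l,n> = -1, <m, mbar> = 1, all other products vanish
norms = (dot(l, n), dot(m, m.conj()),
         dot(l, l), dot(n, n), dot(l, m), dot(n, m), dot(m, m))

# Completeness: g^{mu nu} = -l^mu n^nu - n^mu l^nu + m^mu mbar^nu + mbar^mu m^nu
rebuilt = (-np.outer(l, n) - np.outer(n, l)
           + np.outer(m, m.conj()) + np.outer(m.conj(), m))
ok = np.allclose(rebuilt, np.linalg.inv(g))
```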
Derivation of $\mathcal{K}^2$
-----------------------------
The third term that appears on the right-hand side of the Raychaudhuri equation (\[Raych\_simple\]) follows from the definition (\[eq:K\_a\_b\^i\]): $$\begin{aligned}
\mathcal{K}^2&:=\t {K}{_b_c_i}\t {K}{_a_d_j}\t {\eta}{^a^b}\t {\eta}{^c^d} \t {\delta}{^i^j} \nonumber \\
&= \left[-\t {g}{_\mu _\nu}\left(\t {D}{_b}\t {E}{^\mu _c}\right) \t {N}{^\nu _i}\right]\left[-\t {g}{_\alpha _\beta}\left(\t {D}{_a}\t {E}{^\alpha _d}\right) \t {N}{^\beta _j}\right]
\nonumber \\
&\qquad \times
\t {\eta}{^a^b}\t {\eta}{^c^d} \t {\delta}{^i^j}\nonumber \\
&=\t {g}{_\mu _\nu}\t {g}{_\alpha _\beta} \left(\t {\delta}{^ i ^j} \t {N} {^\nu _i}\t {N}{^\beta _j}\right)\left(\t {\eta}{^a^b}\t {E}{^\rho _b}\t {E}{^\gamma _a}\right)
\nonumber \\
&\qquad \times
\left[\t {\eta}{^c^d}\left(D_\rho \t {E}{^\mu _c}\right)\left(D_\gamma \t {E}{^\alpha _d}\right)\right].\nonumber\end{aligned}$$ Also by making use of Eqs. (\[exp:eta\_E\_E\]), (\[exp:delt\_n\_n\]) and (\[exp:eta\_D\_E\_D\_E\]) we write $$\begin{aligned}
\mathcal{K}^2
&= \left(m^\nu \c{m}^\beta + m^\beta \c{m}^\nu \right)\left(l^\rho n^\gamma + l^\gamma n^\rho \right)
\nonumber \\
&\qquad \times
\left[\left(D_\rho l^\mu \right)\left(D_\gamma n^\alpha \right) +\left(D_\rho n^\mu \right)\left(D_\gamma l^\alpha \right)\right]\nonumber \\
&=\left[\inner{D_{\b{l}}\b{l},\b{m}} \inner{D_{\b{n}}\b{n},\c{\b{m}}} + \inner{D_{\b{l}}\b{n},\b{m}} \inner{D_{\b{n}}\b{l},\c{\b{m}}} \right]
\nonumber \\
&\qquad
+\left[\inner{D_{\b{n}}\b{l},\b{m}} \inner{D_{\b{l}}\b{n},\c{\b{m}}} + \inner{D_{\b{n}}\b{n},\b{m}} \inner{D_{\b{l}}\b{l},\c{\b{m}}} \right]
\nonumber \\
&\qquad
+\left[\inner{D_{\b{l}}\b{l},\c{\b{m}}} \inner{D_{\b{n}}\b{n},\b{m}} + \inner{D_{\b{l}}\b{n},\c{\b{m}}} \inner{D_{\b{n}}\b{l},\b{m}} \right]
\nonumber \\
&\qquad
+\left[\inner{D_{\b{n}}\b{l},\c{\b{m}}} \inner{D_{\b{l}}\b{n},\b{m}} + \inner{D_{\b{n}}\b{n},\c{\b{m}}} \inner{D_{\b{l}}\b{l},\b{m}} \right].\nonumber\end{aligned}$$ Then by Eqs. (\[Dll\]), (\[Dnl\]), (\[Dln\]) and (\[Dnn\]) we obtain the final form as $$\begin{aligned}
\mathcal{K}^2
&= -2\left(\kappa \nu + \c{\kappa} \c{\nu} + \pi \tau + \c{\pi} \c{\tau} \right).\end{aligned}$$
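Since $\mathcal{J}^2$ and $\mathcal{K}^2$ are full contractions of real tensors, both closed forms must be real. This is a quick symbolic consistency check (SymPy sketch; symbol names are ours):

```python
import sympy as sp

mu, rho, sig, lam, kap, nu, p, t = sp.symbols(
    'mu rho sigma lambda kappa nu pi tau', complex=True)
c = sp.conjugate

# Closed forms quoted in the text
J2 = 2*(mu*c(rho) + c(mu)*rho + sig*lam + c(sig)*c(lam))
K2 = -2*(kap*nu + c(kap)*c(nu) + p*t + c(p)*c(t))

# A quantity equals its own conjugate iff it is real
real_J2 = sp.simplify(J2 - c(J2))  # expect 0
real_K2 = sp.simplify(K2 - c(K2))  # expect 0
```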
Derivation of $\mathcal{R_{\,W}}$
---------------------------------
Now we derive the last term on the right-hand side of the Raychaudhuri equation (\[Raych\_simple\]) in terms of the variables of the Newman-Penrose formalism, i.e., $$\begin{aligned}
\mathcal{R_{\,W}}
&:=g(R(\t {E}{_b},\t {N}{_i})\t {E}{_a},\t {N}{_j}){\t {\eta}{^a^b}}\t {\delta}{^i^j}
\nonumber \\
&=\t {R}{_\alpha_\beta_\mu_\nu}\t {E}{^\mu _b}\t {N}{^\nu _i}\t {E}{^\beta _a}\t {N}{^\alpha_j}{\t {\eta}{^a^b}}\t {\delta}{^i^j}\nonumber \\
&=\t {R}{_\alpha_\beta_\mu_\nu}\left(\t {\eta}{^a^b}\t {E}{^\mu _b}\t {E}{^\beta _a}\right)\left(\t {\delta}{^i^j} \t {N}{^\nu _i} \t {N}{^\alpha_j} \right).\nonumber \end{aligned}$$ Then by using Eqs. (\[exp:eta\_E\_E\]) and (\[exp:delt\_n\_n\]) we obtain $$\begin{aligned}
\mathcal{R_{\,W}}
&=-\t {R}{_\alpha_\beta_\mu_\nu}\left(l^\mu n^\beta + l^\beta n^\mu \right)\left(m^\nu \c{m}^\alpha + m^\alpha \c{m}^\nu \right)\nonumber \\
&=-\left[\t {R}{_{\c{\b{m}}}_{\b{n}}_{\b{l}}_{\b{m}}} + \t {R}{_{\b{m}}_{\b{n}}_{\b{l}}_{\c{\b{m}}}}+ \t {R}{_{\c{\b{m}}}_{\b{l}}_{\b{n}}_{\b{m}}}+\t {R}{_{\b{m}}_{\b{l}}_{\b{n}}_{\c{\b{m}}}}\right].\end{aligned}$$ Since the Riemann tensor is defined as $$\begin{aligned}
\t {R}{_{\b{x}}_{\b{y}}_{\b{v}}_{\b{w}}}
&= -\inner{D_{\b{x}}D_{\b{y}}\b{v},\b{w}}+\inner{D_{\b{y}}D_{\b{x}}\b{v},\b{w}}+\inner{D_{[\b{x},\b{y}]}\b{v},\b{w}},\nonumber\end{aligned}$$ we write $$\begin{aligned}
\mathcal{R_{\,W}}
&=-\left[\t {R}{_{\c{\b{m}}}_{\b{n}}_{\b{l}}_{\b{m}}} + \t {R}{_{\b{m}}_{\b{n}}_{\b{l}}_{\c{\b{m}}}}+ \t {R}{_{\c{\b{m}}}_{\b{l}}_{\b{n}}_{\b{m}}}+\t {R}{_{\b{m}}_{\b{l}}_{\b{n}}_{\c{\b{m}}}}\right]\nonumber \\
&=-\left[-\inner{D_{\c{\b{m}}}D_{\b{n}}\b{l},\b{m}}+\inner{D_{\b{n}}D_{\c{\b{m}}}\b{l},\b{m}}+\inner{D_{[\c{\b{m}},\b{n}]}\b{l},\b{m}}\right]
\nonumber \\
&\qquad
- \left[-\inner{D_{\b{m}}D_{\b{n}}\b{l},\c{\b{m}}}+\inner{D_{\b{n}}D_{\b{m}}\b{l},\c{\b{m}}}
\right. \nonumber \\
&\qquad \qquad \qquad \qquad \qquad \qquad \left.
+\inner{D_{[\b{m},\b{n}]}\b{l},\c{\b{m}}}\right]
\nonumber \\
&\qquad
-\left[-\inner{D_{\c{\b{m}}}D_{\b{l}}\b{n},\b{m}}+\inner{D_{\b{l}}D_{\c{\b{m}}}\b{n},\b{m}}
\right. \nonumber \\
&\qquad \qquad \qquad \qquad \qquad \qquad \left.
+\inner{D_{[\c{\b{m}},\b{l}]}\b{n},\b{m}}\right]
\nonumber \\
&\qquad
- \left[-\inner{D_{\b{m}}D_{\b{l}}\b{n},\c{\b{m}}}+\inner{D_{\b{l}}D_{\b{m}}\b{n},\c{\b{m}}}
\right. \nonumber \\
&\qquad \qquad \qquad \qquad \qquad \qquad \left.
+\inner{D_{[\b{m},\b{l}]}\b{n},\c{\b{m}}}\right]\label{eq:Riemm_inner}.\end{aligned}$$ Now we will make use of the commutation relations, (\[com\_lm\]) and (\[com\_nm\]), in order to write the inner products that involve the brackets in terms of the Newman-Penrose variables. In particular, $$\begin{aligned}
D_{[\c{\b{m}},\b{n}]}\b{l}
&=-\nu D_{\b{l}}\b{l}-\left(\alpha + \c{\beta} - \c{\tau}\right)D_{\b{n}}\b{l}
\nonumber \\
&\qquad \qquad
-\left(\c{\gamma} -\gamma -\c{\mu}\right) D_{\c{\b{m}}}\b{l}+\lambda D_{\b{m}}\b{l},\nonumber \\
D_{[\b{m},\b{n}]}\b{l}
&= -\c{\nu} D_{\b{l}}\b{l}-\left(\c{\alpha} + \beta - \tau\right)D_{\b{n}}\b{l}
\nonumber \\
&\qquad \qquad
-\left(\gamma -\c{\gamma} -\mu \right) D_{\b{m}}\b{l}+\c{\lambda} D_{\c{\b{m}}}\b{l},\nonumber \\
D_{[\c{\b{m}},\b{l}]}\b{n}
&= -\left(\pi -\alpha - \c{\beta}\right) D_{\b{l}}\b{n} +\c{\kappa}D_{\b{n}}\b{n}
\nonumber \\
&\qquad \qquad
-\left(\c{\varepsilon}-\varepsilon +\rho \right)D_{\c{\b{m}}}\b{n}-\c{\sigma}D_{\b{m}}\b{n},\nonumber \\
D_{[\b{m},\b{l}]}\b{n}
&= -\left(\c{\pi} -\c{\alpha} - \beta\right) D_{\b{l}}\b{n} +\kappa D_{\b{n}}\b{n}
\nonumber \\
&\qquad \qquad
-\left(\varepsilon - \c{\varepsilon} +\c{\rho} \right)D_{\b{m}}\b{n}-\sigma D_{\c{\b{m}}}\b{n}.\end{aligned}$$ At the next step of our derivation we make use of the propagation equations (\[Dll\]), (\[Dnl\]), (\[Dml\]), (\[Dln\]), (\[Dnn\]) and (\[Dmn\]). Then we obtain $$\begin{aligned}
&\inner{D_{[\c{\b{m}},\b{n}]}\b{l},\b{m}}+\inner{D_{[\b{m},\b{n}]}\b{l},\c{\b{m}}}
+\inner{D_{[\c{\b{m}},\b{l}]}\b{n},\b{m}}\nonumber \\
&+\inner{D_{[\b{m},\b{l}]}\b{n},\c{\b{m}}}
= 2\left(\kappa \nu + \c{\kappa} \c{\nu} \right)-2\left(\tau \c{\tau}+\pi \c{\pi}\right)
\nonumber \\
& \qquad \qquad \qquad \qquad
-2\left(\rho \c{\mu} +\c{\rho} \mu +\lambda \sigma + \c{\lambda} \c{\sigma} \right)
\nonumber \\
& \qquad \qquad \qquad \qquad
+\left[ \left(\c{\alpha}+\beta \right)\left( \pi + \c{\tau} \right) +\left(\alpha + \c{\beta} \right)\left( \c{\pi} + \tau \right)\right]
\nonumber \\
& \qquad \qquad \qquad \qquad
-\left[ \left(\varepsilon -\c{\varepsilon}\right)\left(\mu -\c{\mu}\right)+\left(\gamma -\c{\gamma}\right)\left(\rho -\c{\rho}\right)\right],\nonumber\end{aligned}$$ so that $$\begin{aligned}
\mathcal{R_{\,W}}
&=\left[\inner{D_{\c{\b{m}}}D_{\b{n}}\b{l},\b{m}}-\inner{D_{\b{n}}D_{\c{\b{m}}}\b{l},\b{m}}\right]
\nonumber \\
& \qquad
+ \left[\inner{D_{\b{m}}D_{\b{n}}\b{l},\c{\b{m}}}-\inner{D_{\b{n}}D_{\b{m}}\b{l},\c{\b{m}}}\right]
\nonumber \\
& \qquad
+\left[\inner{D_{\c{\b{m}}}D_{\b{l}}\b{n},\b{m}}-\inner{D_{\b{l}}D_{\c{\b{m}}}\b{n},\b{m}}\right]
\nonumber \\
& \qquad
+ \left[\inner{D_{\b{m}}D_{\b{l}}\b{n},\c{\b{m}}}-\inner{D_{\b{l}}D_{\b{m}}\b{n},\c{\b{m}}}\right]
\nonumber \\
& \qquad
- 2\left(\kappa \nu + \c{\kappa} \c{\nu} \right)+2\left(\tau \c{\tau}+\pi \c{\pi}\right)
\nonumber \\
& \qquad
+2\left(\rho \c{\mu} +\c{\rho} \mu +\lambda \sigma + \c{\lambda} \c{\sigma} \right)
\nonumber \\
& \qquad
-\left[ \left(\c{\alpha}+\beta \right)\left( \pi + \c{\tau} \right) +\left(\alpha + \c{\beta} \right)\left( \c{\pi} + \tau \right)\right]
\nonumber \\
& \qquad
+\left[ \left(\varepsilon -\c{\varepsilon}\right)\left(\mu -\c{\mu}\right)+\left(\gamma -\c{\gamma}\right)\left(\rho -\c{\rho}\right)\right].\nonumber\end{aligned}$$ Now we further use Eqs. (\[Dnl\]), (\[Dml\]), (\[Dln\]), (\[Dmn\]), (\[Dlm\]), (\[Dnm\]) and (\[Dmm\_\]) and write $$\begin{aligned}
\mathcal{R_{\,W}}
&=D_{\b{m}}\left(\pi -\c{\tau} \right)+D_{\c{\b{m}}}\left(\c{\pi} -\tau \right)
\nonumber \\
& \qquad
-\left[\left(\alpha - \c{\beta} \right)\left( \c{\pi} - \tau \right)+\left(\c{\alpha}-\beta \right)\left( \pi - \c{\tau} \right)\right]
\nonumber \\
& \qquad
-\left[\left(\varepsilon +\c{\varepsilon}\right)\left(\mu +\c{\mu}\right)+\left(\gamma +\c{\gamma}\right)\left(\rho +\c{\rho}\right)\right]
\nonumber \\
& \qquad
+\left[D_{\b{n}}\left(\rho +\c{\rho }\right)-D_{\b{l}}\left(\mu +\c{\mu }\right)\right]
\nonumber \\
& \qquad
+\left[\left(\c{\alpha}+\beta \right)\left( \pi +\c{\tau} \right)+ \left(\alpha +\c{\beta} \right)\left( \c{\pi} +\tau \right)\right]
\nonumber \\
& \qquad
-\left[\left(\varepsilon - \c{\varepsilon}\right)\left(\mu -\c{\mu} \right)+\left(\gamma - \c{\gamma}\right)\left(\rho -\c{\rho}\right)\right]
\nonumber \\
& \qquad
- 2\left(\kappa \nu + \c{\kappa} \c{\nu} \right)+2\left(\tau \c{\tau}+\pi \c{\pi}\right)
\nonumber \\
& \qquad
+ 2\left(\rho \c{\mu} +\c{\rho} \mu +\lambda \sigma + \c{\lambda} \c{\sigma} \right)
\nonumber \\
& \qquad
-\left[ \left(\c{\alpha}+\beta \right)\left( \pi + \c{\tau} \right) +\left(\alpha + \c{\beta} \right)\left( \c{\pi} + \tau \right)\right]
\nonumber \\
& \qquad
+\left[ \left(\varepsilon -\c{\varepsilon}\right)\left(\mu -\c{\mu}\right)+\left(\gamma -\c{\gamma}\right)\left(\rho -\c{\rho}\right)\right].\nonumber\end{aligned}$$ Hence, $$\begin{aligned}
\mathcal{R_{\,W}}
&=D_{\b{n}}\left(\rho +\c{\rho }\right)-D_{\b{l}}\left(\mu +\c{\mu }\right)
+ D_{\b{m}}\left(\pi -\c{\tau} \right)+D_{\c{\b{m}}}\left(\c{\pi} -\tau \right)
\nonumber \\
& \qquad
-\left[\left(\alpha - \c{\beta} \right)\left( \c{\pi} - \tau \right)+\left(\c{\alpha}-\beta \right)\left( \pi - \c{\tau} \right)\right]
\nonumber \\
& \qquad
-\left[\left(\varepsilon +\c{\varepsilon}\right)\left(\mu +\c{\mu}\right)+\left(\gamma +\c{\gamma}\right)\left(\rho +\c{\rho}\right)\right]
\nonumber \\
& \qquad
- 2\left(\kappa \nu + \c{\kappa} \c{\nu} \right)+2\left(\tau \c{\tau}+\pi \c{\pi}\right)
\nonumber \\
& \qquad
+2\left(\rho \c{\mu} +\c{\rho} \mu +\lambda \sigma + \c{\lambda} \c{\sigma} \right).\nonumber\end{aligned}$$
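The reduction from the long intermediate expression for $\mathcal{R_{\,W}}$ to the final form above involves only cancellations among spin-coefficient products, which can be verified symbolically. In the SymPy sketch below the directional-derivative combinations are held as opaque symbols; the variable names are our own bookkeeping, not notation from the text.

```python
import sympy as sp

alp, bet, eps, gam, mu, rho, p, t, kap, nu, lam, sig = sp.symbols(
    'alpha beta epsilon gamma mu rho pi tau kappa nu lambda sigma', complex=True)
# Opaque placeholders for the directional-derivative combinations
Dm_term, Dn_Dl_term = sp.symbols('Dm_term Dn_Dl_term')
c = sp.conjugate

long_expr = (Dm_term
             - ((alp - c(bet))*(c(p) - t) + (c(alp) - bet)*(p - c(t)))
             - ((eps + c(eps))*(mu + c(mu)) + (gam + c(gam))*(rho + c(rho)))
             + Dn_Dl_term
             + ((c(alp) + bet)*(p + c(t)) + (alp + c(bet))*(c(p) + t))
             - ((eps - c(eps))*(mu - c(mu)) + (gam - c(gam))*(rho - c(rho)))
             - 2*(kap*nu + c(kap)*c(nu)) + 2*(t*c(t) + p*c(p))
             + 2*(rho*c(mu) + c(rho)*mu + lam*sig + c(lam)*c(sig))
             - ((c(alp) + bet)*(p + c(t)) + (alp + c(bet))*(c(p) + t))
             + ((eps - c(eps))*(mu - c(mu)) + (gam - c(gam))*(rho - c(rho))))

final_expr = (Dn_Dl_term + Dm_term
              - ((alp - c(bet))*(c(p) - t) + (c(alp) - bet)*(p - c(t)))
              - ((eps + c(eps))*(mu + c(mu)) + (gam + c(gam))*(rho + c(rho)))
              - 2*(kap*nu + c(kap)*c(nu)) + 2*(t*c(t) + p*c(p))
              + 2*(rho*c(mu) + c(rho)*mu + lam*sig + c(lam)*c(sig)))

check_RW = sp.simplify(sp.expand(long_expr - final_expr))  # expect 0
```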
Alternative derivation of $\mathcal{R_{\,W}}$
---------------------------------------------
Here we present a derivation of $\mathcal{R_{\,W}}$ that uses the decomposition of the Riemann tensor into its fully traceless part $\t {C} {_\mu _\nu _\alpha _\beta}$, its semi-traceless part $\t{Y}{_\mu _\nu _\alpha _\beta}$, and its trace part $\t{S}{_\mu _\nu _\alpha _\beta}$. For a 4-dimensional spacetime, the decomposition is as follows [@Wald:1984]: $$\begin{aligned}
\label{Riccidecomp}
\t {R}{_\mu _\nu _\alpha _\beta}&=\t {C}{_\mu _\nu _\alpha _\beta}+\t {Y}{_\mu _\nu _\alpha _\beta}-\t {S}{_\mu _\nu _\alpha _\beta},\end{aligned}$$ where $\t {C} {_\mu _\nu _\alpha _\beta}$ is the Weyl tensor, $R$ is the Ricci scalar of the spacetime, and $$\begin{aligned}
\t {Y}{_{\mu} _{\nu} _{\alpha} _{\beta}}
&=\frac{1}{2}\left(\t {g}{_{\mu} _{\alpha}}\t {R}{_\beta _\nu}-\t {g}{_\mu _\beta}\t {R}{_\alpha _\nu}-\t {g}{_\nu _\alpha}\t {R}{_\beta _\mu}+\t {g}{_\nu _\beta}\t {R}{_\alpha _\mu} \right)\label{semitraceRiem},\\
\t {S}{_\mu _\nu _\alpha _\beta}&=\frac{R}{6}\left(\t {g}{_\mu _\alpha}\t {g}{_\beta _\nu}-\t {g}{_\mu _\beta}\t {g}{_\alpha _\nu}\right)\label{traceRiem}.\end{aligned}$$ The term we are after follows as $$\begin{aligned}
\mathcal{R_{\,W}}
&:=g(R(\t {E}{_b},\t {N}{_i})\t {E}{_a},\t {N}{_j}){\t {\eta}{^a^b}}\t {\delta}{^i^j}
\nonumber \\
&=\t {R}{_\alpha_\beta_\mu_\nu}\t {E}{^\mu _b}\t {N}{^\nu _i}\t {E}{^\beta _a}\t {N}{^\alpha_j}{\t {\eta}{^a^b}}\t {\delta}{^i^j}\nonumber \\
&= \t {R}{_\alpha_\beta_\mu_\nu}\left(\t {\eta}{^a^b}\t {E}{^\mu _b}\t {E}{^\beta _a}\right)\left(\t {\delta}{^i^j} \t {N}{^\nu _i} \t {N}{^\alpha_j} \right).\nonumber \end{aligned}$$ Now by using Eqs. (\[exp:eta\_E\_E\]) and (\[exp:delt\_n\_n\]) we obtain $$\begin{aligned}
\mathcal{R_{\,W}}
&=-\t {R}{_\alpha_\beta_\mu_\nu}\left(l^\mu n^\beta + l^\beta n^\mu \right)\left(m^\nu \c{m}^\alpha + m^\alpha \c{m}^\nu \right)\nonumber \\
&=-\left(\t {R}{_{\c{\b{m}}}_{\b{n}}_{\b{l}}_{\b{m}}} + \t {R}{_{\b{m}}_{\b{n}}_{\b{l}}_{\c{\b{m}}}}+ \t {R}{_{\c{\b{m}}}_{\b{l}}_{\b{n}}_{\b{m}}}+\t {R}{_{\b{m}}_{\b{l}}_{\b{n}}_{\c{\b{m}}}}\right).\nonumber \end{aligned}$$ The symmetries of $\t {R}{_\mu _\nu _\alpha _\beta}$ allow us to write $$\begin{aligned}
\mathcal{R_{\,W}}
&=-2\left(\t {R}{_{\c{\b{m}}}_{\b{n}}_{\b{l}}_{\b{m}}} + \t {R}{_{\c{\b{m}}} _{\b{l}}_{\b{n}}_{\b{m}}} \right),\nonumber \end{aligned}$$ and by using the decomposition (\[Riccidecomp\]), $$\begin{aligned}
\mathcal{R_{\,W}}
&= -2\left(\t {C}{_{\c{\b{m}}}_{\b{n}}_{\b{l}}_{\b{m}}} + \t {C}{_{\c{\b{m}}} _{\b{l}}_{\b{n}}_{\b{m}}}\right)
\nonumber \\
& \qquad
-2\left(\t {Y}{_{\c{\b{m}}}_{\b{n}}_{\b{l}}_{\b{m}}} + \t {Y}{_{\c{\b{m}}} _{\b{l}}_{\b{n}}_{\b{m}}}-\t {S}{_{\c{\b{m}}}_{\b{n}}_{\b{l}}_{\b{m}}} - \t {S}{_{\c{\b{m}}} _{\b{l}}_{\b{n}}_{\b{m}}} \right).\nonumber\end{aligned}$$ Here we make use of the symmetries of $\t {C}{_\mu _\nu _\alpha _\beta}$ and the definition (\[Psi2\]) to get $$\begin{aligned}
\mathcal{R_{\,W}}
&= -2\left(\Psi _2 +\c{\Psi _2}\right)
\nonumber \\
& \qquad
-2\left(\t {Y}{_{\c{\b{m}}}_{\b{n}}_{\b{l}}_{\b{m}}} + \t {Y}{_{\c{\b{m}}} _{\b{l}}_{\b{n}}_{\b{m}}}-\t {S}{_{\c{\b{m}}}_{\b{n}}_{\b{l}}_{\b{m}}} - \t {S}{_{\c{\b{m}}} _{\b{l}}_{\b{n}}_{\b{m}}} \right).\nonumber \end{aligned}$$ By using the definitions of $\t {Y}{_\mu _\nu _\alpha _\beta}$ and $\t {S}{_\mu _\nu _\alpha _\beta}$ given in (\[semitraceRiem\]) and (\[traceRiem\]) we write $$\begin{aligned}
\t {Y}{_{\c{\b{m}}} _{\b{n}}_{\b{l}}_{\b{m}}}
&=\frac{1}{2}\left(\inner{\c{\b{m}},\b{l}}\t {R}{_{\b{m}}_{\b{n}}}-\inner{\c{\b{m}},\b{m}} \t {R}{_{\b{l}}_{\b{n}}}
\right. \nonumber \\
&\qquad \qquad \qquad \left.
-\inner{\b{n},\b{l}}\t {R}{_{\b{m}}_{\c{\b{m}}}}+\inner{\b{n},\b{m}} \t {R}{_{\b{l}}_{\c{\b{m}}}}\right),\\
\t {Y}{_{\c{\b{m}}} _{\b{l}}_{\b{n}}_{\b{m}}}
&=\frac{1}{2}\left(\inner{\c{\b{m}},\b{n}}\t {R}{_{\b{m}}_{\b{l}}}-\inner{\c{\b{m}},\b{m}} \t {R}{_{\b{n}}_{\b{l}}}
\right. \nonumber \\
&\qquad \qquad \qquad \left.
-\inner{\b{l},\b{n}}\t {R}{_{\b{m}}_{\c{\b{m}}}}+\inner{\b{l},\b{m}} \t {R}{_{\b{n}}_{\c{\b{m}}}}\right),\\
\t {S}{_{\c{\b{m}}} _{\b{n}}_{\b{l}}_{\b{m}}}
&=\frac{R}{6}\left(\inner{\c{\b{m}},\b{l}} \inner{\b{m},\b{n}}-\inner{\c{\b{m}},\b{m}}\inner{\b{l},\b{n}}\right),\\
\t {S}{_{\c{\b{m}}} _{\b{l}}_{\b{n}}_{\b{m}}}
&=\frac{R}{6}\left(\inner{\c{\b{m}},\b{n}} \inner{\b{m},\b{l}}-\inner{\c{\b{m}},\b{m}}\inner{\b{n},\b{l}}\right). \end{aligned}$$ Also, since the Ricci scalar is $R=\t {g}{^\mu ^\nu}\t {R}{_\mu _\nu}=2\left(-\t {R}{_{\b{l}}_{\b{n}}}+\t {R}{_{\b{m}}_{\c{\b{m}}}}\right)$ and the Ricci tensor is symmetric, we have $$\begin{aligned}
\mathcal{R_{\,W}}&=-2\left(\Psi _2 +\c{\Psi _2}\right)-2\left(\frac{R}{2}-\frac{R}{3}\right).\end{aligned}$$ In the NP formalism one defines a variable $\Lambda =R/24$, thus we conclude that $$\begin{aligned}
\mathcal{R_{\,W}}=-2\left(\Psi _2 +\c{\Psi _2} +4\Lambda \right).\end{aligned}$$
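As a quick numerical sanity check on the sign conventions in the decomposition (\[Riccidecomp\]) and on the final expression for $\mathcal{R_{\,W}}$, one can evaluate everything for a constant-curvature model, where the Weyl part vanishes, $\Psi _2=0$, and hence $\mathcal{R_{\,W}}=-8\Lambda=-R/3$. The check is purely algebraic at a point, so constant metric components suffice; the flat metric, the null tetrad and the value of $R$ in the sketch below are illustrative choices, not taken from the text.

```python
import numpy as np

# Constant-curvature check of R = C + Y - S and of R_W = -2(Psi2 + conj(Psi2) + 4*Lambda).
# Metric components, null tetrad and the value of the Ricci scalar are illustrative.
g = np.diag([-1.0, 1.0, 1.0, 1.0])      # signature (-,+,+,+)
R = 2.4                                  # Ricci scalar, so Lambda = R/24 = 0.1

# Maximally symmetric Riemann tensor: R_{mnab} = (R/12)(g_{ma} g_{nb} - g_{mb} g_{na})
Riem = (R / 12) * (np.einsum('ma,nb->mnab', g, g) - np.einsum('mb,na->mnab', g, g))
Ric = np.einsum('ma,mnab->nb', np.linalg.inv(g), Riem)   # equals (R/4) g here

# Semi-traceless and trace parts, Eqs. (semitraceRiem) and (traceRiem)
Y = 0.5 * (np.einsum('ma,bn->mnab', g, Ric) - np.einsum('mb,an->mnab', g, Ric)
           - np.einsum('na,bm->mnab', g, Ric) + np.einsum('nb,am->mnab', g, Ric))
S = (R / 6) * (np.einsum('ma,bn->mnab', g, g) - np.einsum('mb,an->mnab', g, g))
C = Riem - Y + S                          # Weyl part; zero for constant curvature

# Null tetrad with <l,n> = -1 and <m,mbar> = +1
l = np.array([1, 1, 0, 0]) / np.sqrt(2)
n = np.array([1, -1, 0, 0]) / np.sqrt(2)
m = np.array([0, 0, 1, 1j]) / np.sqrt(2)
mb = m.conj()

A = np.outer(l, n) + np.outer(n, l)       # eta^{ab} E^mu_b E^beta_a = -A
B = np.outer(m, mb) + np.outer(mb, m)     # delta^{ij} N^nu_i N^alpha_j = B
RW = -np.einsum('abmn,mb,na->', Riem, A, B)

assert np.allclose(C, 0)                  # Psi_2 = 0 in this model
assert np.isclose(RW.real, -R / 3)        # = -2(0 + 0 + 4*Lambda)
```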
Other derivations {#Otherderivations}
=================
Gauss equation of $\mathbb{S}$
------------------------------
For a 2-dimensional spacelike surface embedded in a 4-dimensional spacetime, the Gauss equation reads [@Spivak:1979] $$\begin{aligned}
\label{Gausseq}
g(R(N_k, N_l)N_j,N_i)&=\mathcal{R} _{ijkl} - \t {J}{_a_i_k}\t {J}{_b_j_l}{\t {\eta}{^a^b}} + \t {J}{_a_j_k}\t {J}{_b_i_l}{\t {\eta}{^a^b}}.\end{aligned}$$ When we contract Eq. (\[Gausseq\]) with $\t {\delta} {^i^k} \t {\delta} {^j^l}$ we get $$\begin{aligned}
\label{Gausssimple}
g(R(N^k, N^l)N_k,N_l)&=\mathcal{R}_{\,\mathbb{S}}-H^2+\mathcal{J}^2,\end{aligned}$$ where $\mathcal{R}_{\,\mathbb{S}}$ is the intrinsic curvature scalar of $\mathbb{S}$, $H^2=\t {J}{_a_i_k}\t {J}{_b_j_l}{\t {\eta}{^a^b}}\t {\delta} {^i^k} \t {\delta} {^j^l}$ is the square of the mean extrinsic curvature scalar of $\mathbb{S}$ and $\mathcal{J}^2=\t {J}{_a_j_k}\t {J}{_b_i_l}{\t {\eta}{^a^b}} \t {\delta} {^i^k} \t {\delta} {^j^l}$ is one of the variables that appear in the contracted Raychaudhuri equation. The derivation of $g(R(N^k, N^l)N_k,N_l)$ in terms of the NP variables then proceeds as follows: $$\begin{aligned}
g(R(N_k, &N_l)N_j,N_i)\t {\delta} {^i^k} \t {\delta} {^j^l} \nonumber \\
&= \t {R}{_{\alpha} _{\beta} _{\mu} _{\nu}}\t {N}{^{\mu}_k}\t {N}{^{\nu}_l}\t {N}{^{\beta}_j}\t {N}{^{\alpha}_i}\t {\delta} {^i^k} \t {\delta} {^j^l}=\t {R} {_i_j_k_l}\t {\delta} {^i^k} \t {\delta} {^j^l}\nonumber \\
&=\t {R}{_{\alpha} _{\beta} _{\mu} _{\nu}}\left(\t {N}{^{\mu}_k} \t {N}{^{\alpha}_i} \t {\delta} {^i^k}\right)\left(\t {N}{^{\nu}_l} \t {N}{^{\beta}_j} \t {\delta} {^j^l}\right).\nonumber \end{aligned}$$ Now considering the relation (\[exp:delt\_n\_n\]) we write $$\begin{aligned}
\t {R} {_i_j_k_l} \t {\delta} {^i^k} \t {\delta} {^j^l}
&=\t {R}{_{\alpha} _{\beta} _{\mu} _{\nu}}\left(m^\mu \c{m}^\alpha + m^\alpha \c{m}^\mu \right)
\nonumber \\
&\qquad \times
\left(m^\nu \c{m}^\beta + m^\beta \c{m}^\nu \right)\nonumber \\
&=\t {R}{_{\c{\b{m}}} _{\c{\b{m}}} _{\b{m}} _{\b{m}}}+\t {R}{_{\c{\b{m}}} _{\b{m}} _{\b{m}} _{\c{\b{m}}}}
\nonumber \\
&\qquad \qquad \qquad
+\t {R}{_{\b{m}} _{\c{\b{m}}} _{\c{\b{m}}} _{\b{m}}}
+\t {R}{ _{\b{m}} _{\b{m}} _{\c{\b{m}}} _{\c{\b{m}}}}, \nonumber\end{aligned}$$ and by considering the symmetries of $\t {R}{_\mu _\nu _\alpha _\beta}$ we obtain $$\begin{aligned}
\t {R} {_i_j_k_l}\t {\delta} {^i^k} \t {\delta} {^j^l}
&=-2\t {R}{_{\c{\b{m}}} _{\b{m}} _{\c{\b{m}}} _{\b{m}}}.\nonumber\end{aligned}$$ Now let us use the decomposition (\[Riccidecomp\]) and write $$\begin{aligned}
\t {R} {_i_j_k_l}\t {\delta} {^i^k} \t {\delta} {^j^l}
&=-2\left(\t {C}{_{\c{\b{m}}} _{\b{m}} _{\c{\b{m}}} _{\b{m}}}+\t {Y}{_{\c{\b{m}}} _{\b{m}} _{\c{\b{m}}} _{\b{m}}}-\t {S}{_{\c{\b{m}}} _{\b{m}} _{\c{\b{m}}} _{\b{m}}}\right)\label{DecRm_mm_m},\end{aligned}$$ where $$\begin{aligned}
\t {C}{_{\c{\b{m}}} _{\b{m}} _{\c{\b{m}}} _{\b{m}}}
&=\Psi _2+\c{\Psi _2} \label{Cm_mm_m},\\
\t {Y}{_{\c{\b{m}}} _{\b{m}} _{\c{\b{m}}} _{\b{m}}}
&=\frac{1}{2}\left(\inner{\c{\b{m}},\c{\b{m}}}\t {R}{_{\b{m}}_{\b{m}}}
-\inner{\c{\b{m}},\b{m}} \t {R}{_{\c{\b{m}}}_{\b{m}}}
\right. \nonumber \\
&\qquad \qquad \left.
-\inner{\b{m},\c{\b{m}}}\t {R}{_{\b{m}}_{\c{\b{m}}}}+\inner{\b{m},\b{m}} \t {R}{_{\c{\b{m}}}_{\c{\b{m}}}}\right)\nonumber \\
&=-\t {R}{_{\b{m}}_{\c{\b{m}}}} \label{Ym_mm_m}, \\
\t {S}{_{\c{\b{m}}} _{\b{m}} _{\c{\b{m}}} _{\b{m}}}
&=\frac{R}{6}\left(\inner{\c{\b{m}},\c{\b{m}}} \inner{\b{m},\b{m}}-\inner{\c{\b{m}},\b{m}}\inner{\b{m},\c{\b{m}}}\right)\nonumber \\
&=-\frac{R}{6} \label{Sm_mm_m}.\end{aligned}$$ Equation (\[Cm\_mm\_m\]) follows from the fact that the Weyl tensor is traceless. To see this, consider the following. For any pair of vectors $\{\b{v},\,\b{w}\}$ one can write $$\begin{aligned}
\t {g}{^x^y}\t {C}{_x_{\b{v}}_{y}_{\b{w}}}&=0 \nonumber \\
&=-\t {C}{_{\b{l}}_{\b{v}}_{\b{n}}_{\b{w}}}-\t {C}{_{\b{n}}_{\b{v}}_{\b{l}}_{\b{w}}}+\t {C}{_{\b{m}}_{\b{v}}_{\c{\b{m}}}_{\b{w}}}+ \t {C}{_{\c{\b{m}}}_{\b{v}}_{\b{m}}_{\b{w}}}.\end{aligned}$$ Now let us set $\b{v}=\b{m},\,\b{w}=\c{\b{m}}$, then we obtain $$\begin{aligned}
0&=-\t {C}{_{\b{l}}_{\b{m}}_{\b{n}}_{\c{\b{m}}}}-\t {C}{_{\b{n}}_{\b{m}}_{\b{l}}_{\c{\b{m}}}}+\t {C}{_{\b{m}}_{\b{m}}_{\c{\b{m}}}_{\c{\b{m}}}}+ \t {C}{_{\c{\b{m}}}_{\b{m}}_{\b{m}}_{\c{\b{m}}}} \nonumber \\
&=\t {C}{_{\b{l}}_{\b{m}}_{\c{\b{m}}}_{\b{n}}}+\t {C}{_{\b{l}}_{\c{\b{m}}}_{\b{m}}_{\b{n}}}+0-\t {C}{_{\c{\b{m}}}_{\b{m}}_{\c{\b{m}}}_{\b{m}}}.\end{aligned}$$ Then by using the definition given in (\[Psi2\]) we find $$\begin{aligned}
\t {C}{_{\c{\b{m}}} _{\b{m}} _{\c{\b{m}}} _{\b{m}}}&=\Psi _2+\c{\Psi _2}.\end{aligned}$$ In order to rewrite Eq. (\[Ym\_mm\_m\]) in terms of the curvature scalars consider $$\begin{aligned}
R=2\left(-\t {R}{_{\b{l}}_{\b{n}}}+\t {R}{_{\b{m}}_{\c{\b{m}}}} \right)\qquad \text{and} \qquad
\Phi _{11}=\frac{1}{4}\left(\t {R}{_{\b{l}}_{\b{n}}}+\t {R}{_{\b{m}}_{\c{\b{m}}}}\right).\end{aligned}$$ Then we write $$\begin{aligned}
\t {R}{_{\b{m}}_{\c{\b{m}}}}&=\frac{R+8\Phi _{11}}{4}.\end{aligned}$$ Therefore, substitution of Eqs. (\[Cm\_mm\_m\]), (\[Ym\_mm\_m\]) and (\[Sm\_mm\_m\]) into the decomposition (\[DecRm\_mm\_m\]) yields $$\begin{aligned}
g(R(N_k, N_l)&N^l,N^k) \nonumber \\
&=-2\left[\left(\Psi _2+\c{\Psi _2}\right)-\left(\frac{R+8\Phi _{11}}{4}\right)+\frac{R}{6}\right]\nonumber \\
&=-2\left(\Psi _2+\c{\Psi _2}-2\Lambda -2 \Phi _{11}\right).\end{aligned}$$
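The tracelessness argument above rests on the null-tetrad completeness relation $\t {g}{^\mu ^\nu}=-\left(l^\mu n^\nu + n^\mu l^\nu\right)+\left(m^\mu \c{m}^\nu + \c{m}^\mu m^\nu\right)$. Both this relation and the final formula can be checked numerically at a point: in the constant-curvature model below $\Psi _2=\Phi _{11}=0$, so the contraction should reduce to $4\Lambda=R/6$. The metric, tetrad and value of $R$ are illustrative choices, not taken from the text.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])
l = np.array([1, 1, 0, 0]) / np.sqrt(2)
n = np.array([1, -1, 0, 0]) / np.sqrt(2)
m = np.array([0, 0, 1, 1j]) / np.sqrt(2)
mb = m.conj()

# Completeness relation used in the tracelessness argument:
# g^{mu nu} = -(l^mu n^nu + n^mu l^nu) + (m^mu mbar^nu + mbar^mu m^nu)
g_inv = -(np.outer(l, n) + np.outer(n, l)) + (np.outer(m, mb) + np.outer(mb, m))
assert np.allclose(g_inv, np.linalg.inv(g))

# Constant-curvature cross-check of R_{ijkl} delta^{ik} delta^{jl}
# = -2(Psi2 + conj(Psi2) - 2*Lambda - 2*Phi11), with Psi2 = Phi11 = 0 here:
R = 2.4                                   # illustrative Ricci scalar; Lambda = 0.1
Riem = (R / 12) * (np.einsum('ma,nb->mnab', g, g) - np.einsum('mb,na->mnab', g, g))
B = np.outer(m, mb) + np.outer(mb, m)     # delta^{ij} N^nu_i N^alpha_j
contr = np.einsum('abmn,ma,nb->', Riem, B, B)
assert np.isclose(contr.real, 4 * R / 24)  # = 4*Lambda = R/6
```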
Boost invariance of quasilocal charges {#App:SpinBoost}
--------------------------------------
### Transformation of $\tilde{\nabla} _{\mathbb{T}}\mathcal{J}$ under type-III Lorentz transformations:
Under a type-III Lorentz transformation, the null vectors $\b{l}$ and $\b{n}$ transform according to the relations (\[lIII\]) and (\[nIII\]) respectively. The transformed spin coefficients, $\gamma ^\prime , \, \mu ^\prime, \, \rho ^\prime$ and $\varepsilon ^\prime$ can be obtained via the relations (\[gammaIII\]), (\[muIII\]), (\[rhoIII\]) and (\[varepsilonIII\]) so that the transformation of the term $\tilde{\nabla} _{\mathbb{T}}\mathcal{J}$ in Eq. (\[DeltaJNP\]) follows as $$\begin{aligned}
\tilde{\nabla} _{\mathbb{T}}\mathcal{J} ^\prime
&= 2\left(D_{\b{n} ^\prime}\rho ^\prime-D_{\b{l} ^\prime}\mu ^\prime \right)
-2\left[\left(\varepsilon ^\prime+\c{\varepsilon ^\prime}\right)\mu ^\prime+\left(\gamma ^\prime+\c{\gamma} ^\prime\right)\rho ^\prime\right]\nonumber \\
&=2\left[\frac{1}{a^2}D_{\b{n}}\left(a^2\rho \right)-a^2D_{\b{l}}\left(\frac{1}{a^2}\mu\right)\right]
\nonumber \\
& \qquad
-2\Big\{ a^2 \left[\varepsilon + D_{\b{l}} \left(\ln a +i\theta \right)\right]
\nonumber \\
&\qquad \qquad \qquad
+a^2 \left[\c{\varepsilon} + D_{\b{l}} \left(\ln a -i\theta \right)\right]
\Big\}\frac{1}{a^2}\mu
\nonumber \\
& \qquad
-2\Big\{ \frac{1}{a^2} \left[\gamma + D_{\b{n}} \left(\ln a +i\theta \right)\right]
\nonumber \\
&\qquad \qquad \qquad
+ \frac{1}{a^2} \left[\c{\gamma} + D_{\b{n}} \left(\ln a -i\theta \right)\right]
\Big\}a^2\rho \nonumber \\
&= 2\left(D_{\b{n}}\rho -D_{\b{l}}\mu \right)-2\left[\left(\varepsilon +\c{\varepsilon }\right)\mu +\left(\gamma +\c{\gamma}\right)\rho \right].\end{aligned}$$ Therefore $\tilde{\nabla} _{\mathbb{T}}\mathcal{J}$ is invariant under a type-III Lorentz transformation.
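The cancellation above can also be verified symbolically. In the minimal sympy sketch below, $D_{\b{l}}$ and $D_{\b{n}}$ are modelled as partial derivatives with respect to formal parameters and the conjugate spin coefficients are treated as independent functions; this setup is an illustrative assumption, not part of the text.

```python
import sympy as sp

# Symbolic check that nabla_T J is unchanged under l' = a^2 l, n' = a^{-2} n.
# u, v are formal parameters along l and n; conjugates are independent functions.
u, v = sp.symbols('u v', real=True)
a = sp.Function('a')(u, v)
th = sp.Function('theta')(u, v)
rho, mu_, eps, epsb, gam, gamb = [sp.Function(s)(u, v)
                                  for s in ('rho', 'mu', 'epsilon', 'epsilonb',
                                            'gamma', 'gammab')]
Dl = lambda f: sp.diff(f, u)          # D_l
Dn = lambda f: sp.diff(f, v)          # D_n
I = sp.I

original = 2*(Dn(rho) - Dl(mu_)) - 2*((eps + epsb)*mu_ + (gam + gamb)*rho)

# Transformed quantities, following the relations quoted in the text:
transformed = (2*(Dn(a**2*rho)/a**2 - a**2*Dl(mu_/a**2))
    - 2*(a**2*(eps + Dl(sp.log(a) + I*th))
         + a**2*(epsb + Dl(sp.log(a) - I*th)))*mu_/a**2
    - 2*((gam + Dn(sp.log(a) + I*th))/a**2
         + (gamb + Dn(sp.log(a) - I*th))/a**2)*a**2*rho)

assert sp.simplify(transformed - original) == 0   # invariance
```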
### Transformation of $\tilde{\nabla} _{\mathbb{S}}\mathcal{K}$ under type-III Lorentz transformations:
By using Eq. (\[DeltaKNP\]), the transformed $\tilde{\nabla} _{\mathbb{S}}\mathcal{K}$ can be written as $$\begin{aligned}
\tilde{\nabla} _{\mathbb{S}}\mathcal{K} ^\prime
&=2\left(D_{\b{m} ^\prime}\pi ^\prime- D_{\c{\b{m}} ^\prime}\tau ^\prime\right)
\nonumber \\
& \qquad
- 2\left[\left(\c{\alpha} ^\prime-\beta ^\prime\right)\pi ^\prime+ \left(\alpha ^\prime-\c{\beta} ^\prime \right)\c{\pi} ^\prime\right],\end{aligned}$$ in which the transformations of the complex null vectors $\b{m}$ and $\c{\b{m}}$ are given in relations (\[mIII\]) and (\[m\_III\]) respectively. Also, the transformed spin coefficients $\tau ^\prime,\, \beta ^\prime,\, \alpha ^\prime$ and $\pi ^\prime$, are obtained via the relations (\[tauIII\]), (\[betaIII\]), (\[alphaIII\]) and (\[piIII\]) so that we have $$\begin{aligned}
\tilde{\nabla} _{\mathbb{S}}\mathcal{K} ^\prime
&=2\left[e^{2i\theta}D_{\b{m}}\left(e^{-2i\theta}\pi\right)-
e^{-2i\theta}D_{\c{\b{m}}}\left(e^{2i\theta}\tau \right)\right]
\nonumber \\
& \qquad
-2\Big\{e^{2i\theta} \left[\c{\alpha} + D_{\b{m}} \left(\ln a -i\theta \right)\right]
\nonumber \\
&\qquad \qquad \qquad
-e^{2i\theta} \left[\beta + D_{\b{m}} \left(\ln a +i\theta \right)\right]\Big \}e^{-2i\theta}\pi
\nonumber \\
&\qquad \qquad \qquad
-2\Big\{e^{-2i\theta} \left[\alpha + D_{\c{\b{m}}} \left(\ln a +i\theta \right)\right]
\nonumber \\
&\qquad \qquad \qquad
-e^{-2i\theta} \left[\c{\beta} + D_{\c{\b{m}}} \left(\ln a -i\theta \right)\right]\Big \}e^{2i\theta}\c{\pi}.\end{aligned}$$ Now by further imposing our null tetrad condition, $\tau +\c{\pi}=0$ on the above equation we obtain $$\begin{aligned}
\tilde{\nabla} _{\mathbb{S}}\mathcal{K} ^\prime
&= 2\left[D_{\b{m}}\pi - D_{\c{\b{m}}}\tau \right] - 2\left[\left(\c{\alpha}-\beta \right)\pi + \left(\alpha -\c{\beta} \right)\c{\pi}\right].\end{aligned}$$ Then, $\tilde{\nabla} _{\mathbb{S}}\mathcal{K}$ transforms invariantly under the spin-boost transformation of the null tetrad.
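A companion symbolic check makes the role of the tetrad condition explicit: before imposing $\tau +\c{\pi}=0$, the transformed expression differs from the original by $-4i\left(\tau+\c{\pi}\right)D_{\c{\b{m}}}\theta$. In the sympy sketch below, $D_{\b{m}}$ and $D_{\c{\b{m}}}$ are modelled as partials with respect to formal parameters and conjugates are independent functions (an illustrative setup, not from the text).

```python
import sympy as sp

# Symbolic check of the nabla_S K transformation; w1, w2 are formal parameters
# along m and mbar. Conjugate spin coefficients are independent functions.
w1, w2 = sp.symbols('w1 w2')
names = 'a theta pi0 pi0b tau0 alpha0 alpha0b beta0 beta0b'
a, th, pi_, pib, tau, al, alb, be, beb = [sp.Function(s)(w1, w2)
                                          for s in names.split()]
Dm = lambda f: sp.diff(f, w1)         # D_m
Dmb = lambda f: sp.diff(f, w2)        # D_mbar
I, E = sp.I, sp.exp

original = 2*(Dm(pi_) - Dmb(tau)) - 2*((alb - be)*pi_ + (al - beb)*pib)

transformed = (2*(E(2*I*th)*Dm(E(-2*I*th)*pi_) - E(-2*I*th)*Dmb(E(2*I*th)*tau))
    - 2*(E(2*I*th)*(alb + Dm(sp.log(a) - I*th))
         - E(2*I*th)*(be + Dm(sp.log(a) + I*th)))*E(-2*I*th)*pi_
    - 2*(E(-2*I*th)*(al + Dmb(sp.log(a) + I*th))
         - E(-2*I*th)*(beb + Dmb(sp.log(a) - I*th)))*E(2*I*th)*pib)

residual = sp.simplify(transformed - original)
# residual = -4*I*(tau + conj(pi))*D_mbar(theta): it vanishes precisely under
# the tetrad condition tau + conj(pi) = 0 imposed in the text.
assert sp.simplify(residual + 4*I*(tau + pib)*Dmb(th)) == 0
```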
### Transformation of $\mathcal{J}^2$ under type-III Lorentz transformations:
The transformation of $\mathcal{J}^2$ follows from the definition (\[J2NP\]) plus the transformation relations (\[muIII\]), (\[sigmaIII\]), (\[lambdaIII\]) and (\[rhoIII\]) of the spin coefficients $\mu ^\prime,\, \sigma ^\prime,\, \lambda ^\prime$ and $\rho ^\prime$. Then we write $$\begin{aligned}
\mathcal{J}^{2 \prime}
&= 4\mu ^\prime\rho ^\prime+ 2\left( \sigma ^\prime\lambda ^\prime+ \c{\sigma} ^\prime\c{\lambda} ^\prime \right)\nonumber \\
&=4\left(a^{-2}\mu \right)\left(a^{2}\rho \right)
\nonumber \\
&\qquad
+2\left[\left(a^{2}e^{4i\theta}\sigma \right)\left(a^{-2}e^{-4i\theta}\lambda \right)
\right. \nonumber \\
& \left. \qquad \qquad \qquad \qquad
+\left(a^{2}e^{-4i\theta}\c{\sigma} \right)\left(a^{-2}e^{4i\theta}\c{\lambda} \right)\right]\nonumber \\
&= 4\mu \rho + 2\left( \sigma \lambda + \c{\sigma} \c{\lambda}\right).\end{aligned}$$ Therefore $\mathcal{J}^2$ transforms invariantly under the spin-boost transformation of the null tetrad.
### Transformation of $\mathcal{K}^2$ under type-III Lorentz transformations:
By using Eq. (\[K2NP\]) for the definition of $\mathcal{K}^2$ and considering relations (\[nuIII\]), (\[tauIII\]), (\[kappaIII\]) and (\[piIII\]) for the transformations of the spin coefficients $\nu ^\prime,\, \tau ^\prime,\, \kappa ^\prime$ and $\pi ^\prime$ we write $$\begin{aligned}
\mathcal{K}^{2 \prime}
&=-2\left(\kappa ^\prime\nu ^\prime+ \c{\kappa} ^\prime \c{\nu} ^\prime\right) + 2\left(\pi ^\prime\c{\pi} ^\prime+ \tau ^\prime\c{\tau} ^\prime\right)\nonumber \\
&= -2\left[\left(a^{4}\,e^{2i\theta} \kappa \right) \left(a^{-4}\,e^{-2i\theta} \nu \right)+
\right. \nonumber \\
& \left. \qquad \qquad \qquad \qquad
\left(a^{4}\,e^{-2i\theta} \c{\kappa} \right) \left(a^{-4}\,e^{2i\theta} \c{\nu} \right)\right]
\nonumber \\
& \qquad
+2\left[\left(e^{-2i\theta}\pi \right)\left(e^{2i\theta}\c{\pi} \right)+
\left(e^{2i\theta}\tau \right)\left(e^{-2i\theta}\c{\tau} \right)\right]\nonumber \\
&=-2\left(\kappa \nu + \c{\kappa} \c{\nu}\right) + 2\left(\pi \c{\pi} + \tau \c{\tau} \right).\end{aligned}$$ Thus $\mathcal{K}^2$ is also invariant under spin-boost transformations.
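Since no derivatives of $a$ or $\theta$ enter, the invariance of $\mathcal{J}^2$ and $\mathcal{K}^2$ is pure algebra; a sympy sketch (conjugate spin coefficients treated as independent symbols, an illustrative convention):

```python
import sympy as sp

# Algebraic check that J^2 and K^2 are unchanged under a spin-boost.
a = sp.symbols('a', positive=True)
th = sp.symbols('theta', real=True)
mu, rho, sig, sigb, lam, lamb = sp.symbols('mu rho sigma sigmab lam lamb')
kap, kapb, nu, nub, pi_, pib, tau, taub = sp.symbols(
    'kappa kappab nu nub pi0 pi0b tau taub')
I, E = sp.I, sp.exp

J2 = 4*mu*rho + 2*(sig*lam + sigb*lamb)
K2 = -2*(kap*nu + kapb*nub) + 2*(pi_*pib + tau*taub)

# Transformed coefficients as quoted in the text:
J2p = (4*(mu/a**2)*(a**2*rho)
       + 2*((a**2*E(4*I*th)*sig)*(E(-4*I*th)*lam/a**2)
            + (a**2*E(-4*I*th)*sigb)*(E(4*I*th)*lamb/a**2)))
K2p = (-2*((a**4*E(2*I*th)*kap)*(E(-2*I*th)*nu/a**4)
           + (a**4*E(-2*I*th)*kapb)*(E(2*I*th)*nub/a**4))
       + 2*((E(-2*I*th)*pi_)*(E(2*I*th)*pib)
            + (E(2*I*th)*tau)*(E(-2*I*th)*taub)))

assert sp.simplify(J2p - J2) == 0
assert sp.simplify(K2p - K2) == 0
```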
### Transformation of $\mathcal{R_{\,W}}$ under type-III Lorentz transformations:
The Weyl scalar $\Psi _2$ transforms invariantly under spin-boost transformations according to the relation (\[Psi2III\]). Moreover, the parameter $\Lambda =R/24$ is invariant under such a transformation since the Ricci scalar is unchanged. Therefore, following Eq. (\[RWSimpleNPgen\]), it is easy to see that $$\begin{aligned}
\mathcal{R_{\,W}} ^\prime &=-2\left(\Psi _2 ^\prime + \c{\Psi _2} ^\prime+ 4\Lambda ^\prime \right) =-2\left(\Psi _2 + \c{\Psi _2} + 4\Lambda \right),\end{aligned}$$ and $\mathcal{R_{\,W}}$ is invariant under spin-boost transformations.
[^1]: In fact, it has recently been announced that gravitational waves have been detected by local measurements of the two LIGO interferometers [@Abbottetal:2016].
[^2]: The discussion began with Thorne and Hartle’s statement that there exists an ambiguity in the total mass energy of the body [@Thorne_Hartle:1984]. Later, Purdue concluded that there is no ambiguity at least in the rate of work done on the system up to leading order [@Purdue:1999]. Furthermore, Favata considered different “localisations” of gravitational energy and concluded that the total mass-energy of the system does not depend on the choice of the energy-momentum pseudotensor and is thus unambiguous [@Favata:2000].
[^3]: Let $t^{\mu \nu}_{(2k)}$ be a gravitational stress-energy pseudotensor with $k \in \mathbb{R}$. Some of the well known pseudotensors in general relativity can be defined via $2|g|^{k+1}\left(8\pi G\,t^{\alpha \beta}_{(2k)}-G^{\alpha \beta}\right)$ $:=\partial _\mu \partial _\nu \left(|g|^{k+1}\left[g^{\alpha \beta}g^{\mu \nu} -g^{\alpha \nu}g^{\beta \mu}\right]\right)$. Then Einstein field equations imply that $\partial _\alpha \left(|g|^{k+1}\left[t^{\alpha \beta}_{(2k)} +T^{\alpha \beta}\right]\right)=0$ where $T^{\alpha \beta}$ is the matter stress energy tensor. This shows that there is only one pseudotensor, $t^{\mu \nu}_{(-2)}$, which satisfies the conservation of the “total” stress-energy tensor with the correct weight.
[^4]: For example see G[ó]{}mez and Winicour’s discussion on this issue [@Gomez_Winicour:1992]. Also see [@Friedrich:1981] for a construction of a conformal method and see [@Husa:2002] for a pedagogical review of conformal methods in numerical relativity.
[^5]: This can be seen by checking the symmetries of the extrinsic objects introduced at the previous section.
[^6]: The literature is divided into two camps in terms of the definition of the extrinsic curvature scalars $k$ and $l$. For example, let us consider $k := \sigma ^{\mu \nu}k_{\mu \nu}=\sigma ^{\mu \nu} \left(\t {\sigma}{^\alpha_\mu}\t {\sigma}{^\beta_\nu}D_\alpha \,n_\beta\right)$, where $\sigma _{\mu \nu}$ is the induced 2-metric on the closed spacelike surface, $\mathbb{S}$, and $\b{n}$ is the unit vector orthogonal to $\mathbb{S}$ when we consider its embedding in a spacelike 3-volume. For this definition, $k_0=+\frac{2}{r}$ for a round 2-sphere. This notation was used in Epp’s [@Epp:2000], Liu and Yau’s [@Liu_Yau:2003] and in Szabados’s review article [@Szabados:2004]. On the other hand, Brown and York [@Brown_York:1992] and Kijowski [@Kijowski:1997] follow the formal notation for the extrinsic curvature with an extra minus sign. Accordingly $k_0=-\frac{2}{r}$ for a round 2-sphere in their notation. In this paper, we follow the notation used by the first camp since the “positivity” theorem was first presented in this notation [@Liu_Yau:2003]. Moreover, we suspect most researchers refer to Szabados’ review article to compare and contrast various quasilocal energy definitions. Therefore, in Kijowski’s original paper [@Kijowski:1997], $E_{\rm K1}$ and $E_{\rm K2}$ are given in different forms than the ones presented in Eqs. (\[eq:E\_K1\]) and (\[eq:E\_K2\]) respectively.
[^7]: The reason behind this factor of 2 will be more clear in the following sections.
[^8]: We discuss this in more detail in the next section.
[^9]: The reason for choosing toroidal coordinates is that it simplifies the process of defining a smooth, closed, spacelike 2-surface in order to integrate the quasilocal densities. By using this toroidal surface, one can bypass the coordinate singularity at $x=0$. Note that without the existence of such a closed surface, quasilocal energies are not defined. This is closely related to the Stokes’ Theorem which comes up in the derivation of the non-vanishing boundary Hamiltonian from an action principle of general relativity in a covariant formulation.
[^10]: See [@Adamo_Newman:2014] for a recent review.
[^11]: Note that we are using the $\{-,+,+,+\}$ signature for the spacetime metric throughout the paper. Therefore our spin coefficients and the curvature scalars have an extra negative sign when compared to Newman-Penrose’s original notation in [@Newman_Penrose:1961].
---
address: 'Trevor Sweeting is Professor of Statistical Science, Department of Statistical Science, University College London, London WC1E 6BT, United Kingdom.'
author:
-
title: 'Discussion of “Objective Priors: An Introduction for Frequentists” by M. Ghosh'
---
The paper by Ghosh provides a useful introduction to the main ideas underlying objective priors and how these ideas might profitably be used by frequentist statisticians, both at a theoretical and practical level. The aspects likely to be of most interest to this group of statisticians are those concerning probability matching, allowing valid frequentist procedures to be derived via a formal Bayesian analysis. But they should also be interested in priors that arise from decision-theoretic considerations, not least since the consideration of risk criteria, such as mean squared error for estimation or operating characteristic function for testing, is ubiquitous in the frequentist approach. As pointed out by the author, at a theoretical level the shrinkage argument, which I have also used extensively in the past, provides a neat way of deriving frequentist asymptotic results.
My discussion will focus on an examination of the main criteria that have been used to obtain objective priors and, partly related to this, the extent to which the theory and practical application can be extended to more complex scenarios. Before launching into this I would just like to comment on the commonly used term “objective” in the present context. As soon becomes apparent in this field, there is an array of possible criteria available for the development of objective priors, some of which depend on a specific choice of parameterization, and there may be no unique solution even for a given criterion. Thus the choice quickly ceases to be purely objective. My own preference is to use the term “nonsubjective,” which indicates that the prior is detached from subjective beliefs about parameters but which does not impart such a strong sense of broad agreement as to what the prior should be in any particular case.
Comparison of Criteria
======================
First, a general point about alternative criteria for the development of objective priors. I have a strong preference for criteria that would lead to the use of properly calibrated subjective priors whenever they are available, so that the consideration of objective priors in some sense generalizes a property of a fully subjective Bayesian approach. In a sense this is true of probability matching since this leads to (approximately) correct coverage of posterior regions in hypothetical repeated sampling. This in turn implies that these regions will also be calibrated over repeated [*use*]{}, as would automatically be the case if a properly elicited subjective prior were to be used. The same cannot be said for moment matching in the sense described in Section 5.2; there seems nothing in this criterion that would lead one to use a subjective prior when available.
Similarly, consideration of a proper scoring rule in a decision-theoretic approach would indicate the use of an elicited subjective prior whenever one is available. As a consequence, I would be uneasy using a decision-theoretic criterion that was not based on a proper scoring rule. For example, it does seem surprising that, even in the scalar parameter case, Jeffreys’ prior turns out not to be optimal under the distance measure (3.13) with $\beta=-1$. The problem is that, unlike the Bernardo criterion that arises when $\beta=0$ (see later), none of these distance measures corresponds to an average regret based on some primitive loss function that produces a (negative) score when data $x$ are observed and a prior predictive distribution $\pi(x)$ is adopted. So there seems to be no obvious sense in which we would recover a subjective prior distribution whenever one is available.
Although there is some reference to predictive probability matching in Sections 5 and 6, the paper is largely a review of objective priors obtained via parametric criteria, which usually require a focus on one or more specified parameters of interest. This has certainly been the most popular area of study and, as a technical device for obtaining frequentist procedures, it performs a useful function. However, the focus on parameters is a cause for concern for many Bayesian statisticians. Such approaches normally require a specific choice of parameters of interest, such as in quantile probability matching or the construction of group reference priors. The idea that an analysis should be redone when the spotlight turns to alternative sets of parameters is disturbing. In particular, in complex real-world applications there will potentially be many parametric functions of interest. An alternative to quantile matching is higher-order matching for highest posterior density or other regions, which may not require a specific choice of interest parameters. However, there is an infinite variety of ways in which a region can be chosen. Indeed, in the scalar parameter case, given *any* prior it is possible to choose the region in such a way that higher-order matching is achieved (Severini, [-@Sev93]; Sweeting, [-@Swe99]).
An alternative approach is to study the behavior of predictive distributions. This is appealing as the parameterization then becomes irrelevant. Just as in the parametric case one can consider predictive probability matching (Datta, Ghosh and Mukerjee, [-@DatGhoMuk00]; Severini, Mukerjee and Ghosh, [-@SevMukGho02]) and predictive risk (Komaki, [-@Kom96]; Sweeting, Datta and Ghosh, [-@SweDatGho06]), and Ghosh has contributed to both of these areas. In the former case the criterion (4.23) is replaced by the following. Let $Y$ be a future observation from the model and let $y(\pi,\alpha)$ denote the $(1-\alpha)$-quantile of the predictive distribution of $Y$ based on the prior $\pi$. If it is also the case that $$\operatorname{pr}\{Y>y(\pi,\alpha)|\theta\}=\alpha+O(n^{-r}),\vspace*{2pt}$$ then we have predictive probability matching; typically $r$ will be 2 here. In the latter case we can consider the regret when the prior $\pi$ is adopted and $\theta$ is the true parameter value. Adopting the logarithmic scoring rule $-\log\pi(y|x)$, which is the unique local proper scoring rule, this has the general form $$\label{yx}
d_{Y|X}(\theta,\pi)={\mathrm{E}}^\theta\biggl[\log\biggl\{\frac{f(Y|X,\theta)}{\pi(Y|X)}\biggr\}\biggr].$$ Priors that attempt to control this risk might be considered to be more ‘general purpose’ than priors that require the specification of certain parametric functions.
Having used a sensible broad criterion to obtain a prior, one could then go on to investigate its parametric properties. For example, there may be more than one prior that produces the same (low) predictive risk and the choice between these priors might be made on the basis of a particular interest parameterization. In Examples 1 and 2 of the paper the right Haar prior $\pi(\mu,\sigma)\propto\sigma^{-1}$ is exactly predictive probability matching and also arises as a minimax prior under (\[yx\]) (Liang and Barron, [-@LiaBar04]). We can then see that, for example, it is exactly probability matching when the interest parameter is $\mu$ or $\sigma$ and second-order probability matching when $\theta=\mu/\sigma$ is the interest parameter, as shown in Example 2 (continued).
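The exact predictive matching of the right Haar prior in the normal model is easy to confirm by simulation: under $\pi(\mu,\sigma)\propto\sigma^{-1}$ the posterior predictive distribution of $Y$ is $\bar{x}+s\sqrt{1+1/n}\,t_{n-1}$, so the frequentist probability that $Y$ exceeds its predictive $(1-\alpha)$-quantile is exactly $\alpha$. A minimal Monte Carlo sketch, in which the sample size, true parameter values and seed are arbitrary illustrative choices:

```python
import numpy as np
from scipy.stats import t

# Monte Carlo check of exact predictive probability matching for the right Haar
# prior pi(mu, sigma) ∝ 1/sigma in the normal model.
rng = np.random.default_rng(0)
n, alpha, reps = 10, 0.1, 200_000
mu_true, sigma_true = 3.0, 2.0          # arbitrary "true" values

x = rng.normal(mu_true, sigma_true, size=(reps, n))
y = rng.normal(mu_true, sigma_true, size=reps)          # future observation Y
xbar, s = x.mean(axis=1), x.std(axis=1, ddof=1)

# Predictive (1 - alpha)-quantile: xbar + s * sqrt(1 + 1/n) * t_{n-1, 1-alpha}
q = xbar + s * np.sqrt(1 + 1/n) * t.ppf(1 - alpha, df=n - 1)
coverage = np.mean(y > q)               # frequentist exceedance probability
assert abs(coverage - alpha) < 0.01     # matches alpha up to Monte Carlo error
```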
It is instructive to compare the above predictive risk criterion with the basic reference prior approach of Bernardo ([-@Ber79], [-@Ber05]). The reference prior criterion in Section 3.1 is maximization of the Kullback–Leibler divergence between the prior and posterior distributions. As shown by Clarke and Barron ([-@ClaBar94]), this is equivalent to finding the minimax solution under the regret $$\label{x}
d_X(\theta,\pi)={\mathrm{E}}^\theta\biggl[\log\biggl\{\frac{f(X|\theta)}{\pi(X)}\biggr\}\biggr].$$ Note that (\[x\]) is based on the proper scoring rule $-\log\pi(x)$. This may be contrasted with (\[yx\]), which is based on the proper scoring rule $-\log\pi(y|x)$, as suggested by Geisser in his discussion of Bernardo ([-@Ber79]). The former is based on scoring the prior predictive distribution, which is arguably less relevant than the posterior predictive distribution on which the latter is based. We are not so much interested in predicting the data already observed as new data yet to be observed. This distinction is reminiscent of model fitting, where it is the fit to as yet unobserved data that is more relevant than the fit to observed data. Note also that working in terms of the posterior predictive distribution avoids problems of impropriety of the prior, requiring only that $\pi(x)<\infty$. Thus, to continue the discussion of Example 1 in the paper, in contrast to the predictive criterion (\[yx\]), Jeffreys’ prior emerges as the minimax solution under (\[x\]), whereas it is inadmissible under (\[yx\]).
In more complex examples (\[yx\]) involves a complicated function that includes components of skewness and curvature of the model. However, it is argued in Sweeting, Datta and Ghosh ([-@SweDatGho06]) that it is more appropriate to consider the regret $$\label{yxtau}
d_{Y|X}(\tau,\pi)={\mathrm{E}}\biggl[\log\biggl\{\frac{\tau(Y|X)}{\pi(Y|X)}\biggr\}\biggr],$$ where the expectation is taken over the joint distribution of $X$ and $Y$ under the prior $\tau$. This is because we are not so much interested in comparing the performance of $\pi$ with that in a lower-dimensional submodel at a fixed parameter value as comparing its performance with that of other nondegenerate prior distributions for the current model. Moreover, when an elicited prior $\tau$ is available criterion (\[yxtau\]) will lead us to use this prior. An asymptotic analysis of (\[yxtau\]) and the adoption of a minimax criterion, for example, produces sensible priors in specific examples. Another appealing aspect is that the asymptotic predictive criterion does not depend on the amount of prediction.
More Complex Models
===================
Some of the most important and challenging applications of the day, such as environmental science, biomedicine, neuroscience and genomics, demand large, sophisticated and often high-dimensional models. The results in Section 4 of the paper on first- and second-order matching priors are mathematically attractive, but there is clearly a need to explore the extent to which these results can be profitably used in more complex models. As the author points out in Section 6, objective priors have been successfully developed for a number of more complex problems. However, there remains a need for semi-automated procedures so that suitable “safe” default priors can be developed rapidly for arbitrary model structures. Major obstacles include the difficulty or impossibility of obtaining a closed-form expression for Fisher’s information and, even if this is possible, of solving the required partial differential equations. Levine and Casella ([-@LevCas03]) proposed an algorithm for the implementation of probability matching priors for a single interest parameter in the presence of a single nuisance parameter. However, the implementation requires a substantial amount of computing time. An alternative approach is outlined in Sweeting ([-@Swe05]), where it is shown that suitable data-dependent priors can be developed in some cases. Staicu and Reid ([-@StaRei08]) proposed an elegant analytic solution based on higher-order approximation of the marginal posterior distribution. It seems to me, however, that some form of data-driven approach will be the only viable way to extend probability matching ideas to general frameworks.
Apart from computational difficulties, the major theoretical difficulty of all the approaches to objective prior construction that rely on sample size asymptotics is the potential breakdown of the theory in high-dimensional parameter spaces. In some cases it may be possible to identify directions in the parameter space about which the data are relatively uninformative. This can be conveniently explored, for example, via an eigenanalysis of the observed information matrix. Although the model is high-dimensional, most of the variation of the likelihood may take place on a lower-dimensional manifold of the parameter space. This means, of course, that the model is close to being non-identifiable, which causes difficulties if the parameters themselves are of direct interest. However, this may be amenable to analysis using a predictive approach. If a parameter only enters weakly in the model, then the predictive distribution should not depend critically on the prior chosen for that parameter and asymptotic theory should apply in such cases.
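A minimal sketch of the eigenanalysis idea: given an observed information matrix (the matrix below is a hypothetical stand-in for a 4-parameter fit, not taken from any real model), eigenvalues far below the largest flag directions along which the likelihood is nearly flat and the prior, rather than the data, will drive the posterior.

```python
import numpy as np

# Eigenanalysis of a hypothetical observed information matrix: two well-identified
# directions and two nearly flat ones, weakly coupled. Illustrative values only.
J_obs = np.array([[50.0, 10.0, 5e-3, 2e-3],
                  [10.0, 40.0, 3e-3, 1e-3],
                  [5e-3, 3e-3, 1e-3, 1e-4],
                  [2e-3, 1e-3, 1e-4, 2e-3]])

eigvals, eigvecs = np.linalg.eigh(J_obs)   # ascending eigenvalues
rel = eigvals / eigvals.max()
weak = rel < 1e-3                          # nearly flat directions of the likelihood
# Along the eigenvectors eigvecs[:, weak] the data are relatively uninformative;
# sample-size asymptotics for matching priors are unreliable there.
assert int(weak.sum()) == 2 and (eigvals > 0).all()
```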
Although versions of probability matching priors and reference priors in nonregular cases have been investigated by Ghosal ([-@Gho97], [-@Gho99]) and Berger, Bernardo and Sun ([-@BerBerSun09]), it will be a major challenge to develop multidimensional priors in an automatic way when some aspects of the model are regular and others nonregular.
I suspect that the application of objective priors for high-dimensional problems will be of greater interest to Bayesian than to frequentist statisticians. Given the difficulties of deriving such priors in these cases, the frequentist may well abandon this route and explore alternative simulation-based approaches. On the other hand, a suitable high-dimensional prior is essential for the Bayesian statistician to operate at all. Yet the greater the dimension of the model the less likely it is that reliable prior information will be available on all the parameters, let alone on their mutual dependencies. Furthermore, as noted earlier, it is less likely that there will be just one or two parameters of interest, so I believe that the quest will focus more on the identification of safe, general purpose priors that allow the inclusion of subjective information when available, rather than on priors tailored to specific parameters. If this ambition is realized, then the resulting priors should be thought of as no more than “reference” priors, in the broad sense of the word, and should not replace the need for sensitivity analysis.
Some Other Difficulties
=======================
Many Bayesian statisticians remain sceptical about the need for objective priors to represent ignorance, and a common practice is to utilize proper but diffuse priors instead. However, care has to be taken that the tail behavior of such priors is not too thin, otherwise the prior may have the unexpected effect of dominating the likelihood. Consider a random sample from $N(\mu,\sigma^2)$. Suppose that $\mu$ and $\sigma^2$ are taken to be a priori independent with normal and inverse Gamma distributions, respectively. How diffuse should these distributions be and how sensitive are the results to these choices? Specifically, suppose that $X_i\sim N(\mu, \phi^{-1})$, where $\phi$ is the precision parameter, and $\mu,\phi$ are a priori independent with $\mu\sim N(0,c^{-1}), \phi\sim\operatorname{Gamma}(a,b)$. Suppose we observe data $529.0, 530.0, 532.0, 533.1, 533.4, 533.6, 533.7, 534.1, 534.8, 535.3$. Take $a=b=c={\varepsilon}$. What is the effect of the choice of ${\varepsilon}$? The value $c=0.001$ is not small enough: the “noninformative prior” dominates the likelihood and the mean of the marginal posterior of $\mu$ is close to zero. Effectively, this happens because the normal tail of the prior for $\mu$ is thinner than the Student $t$-tail of the integrated likelihood of $\mu$. The value $c=0.0002$ is also not sufficiently small, although if a Gibbs sampler starting near the sample values is run, then it will not detect the problem at all until after a large number of iterations, and it will appear from trace plots as if the sampler has converged. A value of $c$ less than 0.0001 is needed for the likelihood to dominate the prior. If we run into such problems in simple models like this, then there has to be a great deal of concern for higher-dimensional models. So objective priors do matter; it is virtually impossible to reliably elicit a high-dimensional prior distribution and there are pitfalls associated with using vague but proper priors.
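The prior-domination effect can be checked directly. Integrating the precision $\phi$ out of the normal likelihood against its $\operatorname{Gamma}(a,b)$ prior gives, up to an additive constant, $\log \pi(\mu\,|\,x) = -(n/2+a)\log\left(b+S(\mu)/2\right) - c\mu^2/2$ with $S(\mu)=\sum_i (x_i-\mu)^2$. The sketch below holds $a=b=0.001$ fixed while varying $c$ (an illustrative choice; the text takes $a=b=c=\varepsilon$) and compares this log density at $\mu=0$ with its value at the sample mean:

```python
import numpy as np

x = np.array([529.0, 530.0, 532.0, 533.1, 533.4,
              533.6, 533.7, 534.1, 534.8, 535.3])

def log_post_mu(mu, c, a=1e-3, b=1e-3):
    """Log marginal posterior of mu (up to a constant) after integrating
    out the precision phi ~ Gamma(a, b):
       -(n/2 + a) * log(b + S(mu)/2) - c * mu**2 / 2."""
    S = np.sum((x - mu) ** 2)
    return -(len(x) / 2 + a) * np.log(b + S / 2) - c * mu ** 2 / 2

# c = 0.001: the thin normal prior tail costs exp(-c * 533**2 / 2) near the
# data, so the posterior density is higher at mu = 0 than at the sample mean.
assert log_post_mu(0.0, c=0.001) > log_post_mu(x.mean(), c=0.001)
# With c below 0.0001 the likelihood dominates again.
assert log_post_mu(x.mean(), c=5e-5) > log_post_mu(0.0, c=5e-5)
```

The comparison makes the mechanism explicit: the normal prior penalty $c\mu^2/2$ at $\mu\approx 533$ overwhelms the polynomial Student-$t$ tail of the integrated likelihood unless $c$ is taken very small.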
Yet another difficulty arises when the likelihood does not tend to zero at the boundary of the parameter space. In that case an improper prior may lead to an improper posterior, forcing the use of a proper prior. The objective selection of such a prior is likely to be problematic. An example is the dispersion parameter in a Dirichlet process mixture model. Some authors simply set the hyperparameters in a Gamma prior to be very small, but clearly this requires great care as we know that in the limit we will obtain an improper posterior.
Concluding Remarks
==================
I do think that frequentist interest in Bayesian statistics should be rather more than simply its potential use as a device to obtain valid frequentist procedures. When there is some concern about the priors adopted, Bayesians will often “look over their shoulder” at frequentist properties, if only to check that the prior is not producing some anomalous behavior (cf. Example 3 in the paper). Likewise, frequentist statisticians should find it useful to do the same, possibly to provide an indication that they are not falling seriously foul of the conditionality principle, or possibly to see to what extent their confidence statements have direct probability interpretations. Finally, I would like to thank the author for his interesting review of this area and for stimulating me to think a little more about the basis for the construction of objective priors and the challenges that confront this field of research.
---
abstract: 'With planets orbiting stars, a planetary mass function should not be seen as a low-mass extension of the stellar mass function; instead, a proper formalism needs to take care of the fact that the statistical properties of planet populations are linked to the properties of their respective host stars. This can be accounted for by describing planet populations by means of a differential planetary mass-radius-orbit function, which together with the fraction of stars with given properties that are orbited by planets and the stellar mass function allows one to derive all statistics for any considered sample. These fundamental functions provide a framework for comparing statistics that result from different observing techniques and campaigns, which all have their very specific selection procedures and detection efficiencies. Moreover, recent results both from gravitational microlensing campaigns and radial-velocity surveys of stars indicate that planets tend to cluster in systems rather than being the lonely child of their respective parent star. While planetary multiplicity in an observed system becomes obvious with the detection of several planets, its quantitative assessment comes with the challenge of excluding the presence of further planets. Current exoplanet samples begin to give us first hints at the population statistics, whereas pictures of planet parameter space in its full complexity call for samples that are 2–4 orders of magnitude larger. In order to derive meaningful statistics, planet detection campaigns need to be designed in such a way that well-defined, fully deterministic target selection, monitoring, and detection criteria are applied. The probabilistic nature of gravitational microlensing makes this technique an illustrative example of all the encountered challenges and uncertainties.'
author:
- |
M. Dominik,$^{1}$[^1][^2]\
$^{1}$SUPA, University of St Andrews, School of Physics & Astronomy, North Haugh, St Andrews, KY16 9SS, United Kingdom
bibliography:
- 'planetMF.bib'
title: Planetary mass function and planetary systems
---
planetary systems — gravitational lensing.
Introduction
============
More than 450 planets orbiting stars other than the Sun have been detected to date by means of four different techniques: Doppler-wobble stellar radial-velocity measurements, planetary transits, gravitational microlensing, and the direct detection of emitted or reflected light. Observing campaigns now need to evolve from the pure detection of planets to studies that allow one to infer the statistical properties of the underlying populations that are being probed. In order to achieve such a goal, deterministic procedures for the selection of targets and the identification of planetary signals are required [@SIGNALMEN; @ARTEMiS; @RoboNet-II; @OT1; @MiNDSTEp2008].
While many studies on planet populations based on data from radial-velocity surveys have been carried out [e.g. @Marcy:abundance1; @Udry:abundance; @Marcy:abundance2; @Mayor:abundance; @OT2], there has been a long silence on extracting planetary abundances from gravitational microlensing campaigns since the twin papers on the first five years of the PLANET campaign[^3] [@PLANET:fiveshort; @PLANET:fivelong], and those on the OGLE-II [@OGLE2:limits] and the first year of the OGLE-III survey[^4] [@OGLE2002]. Only very recently, there appears to be a sudden inflation [@Sumi:planet; @Gould:abundance].
The various techniques currently used for studying planet populations (and any future ones as well) have their very own preferred regions of planet parameter space that they are sensitive to, while being blind to others. Consequently, these do not directly probe [*the*]{} planetary mass function, but some bits and pieces, for which the detection efficiency of the campaign and the selection biases need to be determined carefully. Any quoted planet abundance needs to come with a complete description of which region of planet parameter space it refers to, and how it has been averaged. In particular, one can anticipate substantial differences depending on whether one talks about Solar-type stars or M dwarfs, hot or cool planets, bulge or disk stars (with their different metallicities).
In order to get around such difficulties, and ease the comparison between findings that arise from different campaigns and/or techniques, a general framework of differential planetary mass functions with respect to their fundamental parameters is suggested in this paper.
Rather than planets being distributed just randomly amongst stars, it seems that they tend to cluster in planetary systems. Such a conjecture is underpinned by recent observational evidence on both outer gas-giant planets [@DoubleCatch; @Marois:planet] as well as Super-Earths and planets with Neptune-class masses in closer orbits [@Mayor:abundance; @HARPS:abundance2]. Therefore, the detection of planets in an experiment does not correspond to independent draws from a population, but the probability of detecting a planet around a star that is known to host planets is larger than that of finding one around a randomly chosen star.
While Sect. \[sec:formalism\] presents a theoretical framework for describing planet populations in view of clustering in planetary systems and specific regions of interest or sensitivity, Sect. \[sec:measureMF\] provides rough estimates for the size of planet samples required to assess the fundamental functions that describe these populations. Sect. \[sec:multiplicity\] is devoted to planetary multiplicity, while Sect. \[sec:abundance\] looks into planet abundance estimates arising from gravitational microlensing observations, their uncertainties, and the involved challenges. Sect. \[sec:outlook\] finally concludes the paper with a short summary and outlook.
Fundamental functions describing planetary systems {#sec:formalism}
==================================================
Only celestial bodies that are in orbit around a star or stellar remnant are ‘planets’.[^5] This means that planets cannot be seen in isolation from these, and consequently planets are not well described by just extending the stellar mass function [@Salpeter; @Scalo:IMFreview; @Kroupa:Science] to lower masses; instead, a mass function describing planets needs to link to their host stars (or remnants).
Let us consider explicitly the dependence of planetary abundance on stellar mass $M_\star$, metallicity $Z$, age $\tau$, and spin rate $\Omega$ and therefore define a differential stellar mass function $\xi(M_\star,Z,\tau,\Omega)$,[^6] so that for a population with density functions $p_Z(Z)$, $p_\tau(\tau)$, and $p_\Omega(\Omega)$ for metallicity, age, or spin rate, respectively, one obtains a mass function $$\Xi(M_\star) = \int \xi(M_\star,Z,\tau,\Omega)\,p_Z(Z)\,\mathrm{d}Z\,p_\tau(\tau)\,\mathrm{d}\tau\,p_\Omega(\Omega)\,\mathrm{d}\Omega\,,$$ where the number density of stars becomes $$N_\star = \int\,\Xi(M_\star)\,\rmn{d}[\lg(M_\star/M_\odot)]\,,$$ where $M_{\odot}$ denotes the mass of the Sun.
For the stars that host planets, the properties of planets can then be described by a differential planetary mass-radius-orbit function $\varphi(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)$, where $m_\rmn{p}$, $r_\rmn{p}$, $a$, and $\varepsilon$ denote the mass, radius, orbital semi-major axis, and orbital eccentricity of the planet, respectively, and further parameters might be added. This implies a mass function for planetary systems around stars with $(M_\star,Z,\tau,\Omega)$ given by $$\begin{aligned}
& & \hspace*{-2em}
\Phi(m_\rmn{p};M_\star, Z, \tau,\Omega) = \int
\varphi(m_\rmn{p},r_\rmn{p},a,\varepsilon;M_\star,Z,\tau,\Omega)\;\times
\nonumber \\
& & \times\;
\rmn{d}[\lg(r_\rmn{p}/r_\oplus)]\,\rmn{d}[\lg(a/a_\oplus)]\,\rmn{d}\varepsilon\,,\end{aligned}$$ with $r_\oplus$ being the Earth’s radius and $a_\oplus = 1~\mbox{au}$, so that the average number of planets in such systems reads $$n_\mathrm{p}(M_\star, Z, \tau,\Omega) = \int \Phi(m_\rmn{p};M_\star, Z, \tau,\Omega)
\,\rmn{d}[\lg(m_\rmn{p}/M_\oplus)]\,,$$ where $M_\oplus$ is the mass of the Earth.
With $f_\mathrm{p}(M_\star,Z,\tau,\Omega)$ denoting the fraction of stars that host planets, the number density of planets for a stellar population becomes $$\begin{aligned}
& & \hspace*{-2em}
N_\mathrm{p} = \int f_\mathrm{p}(M_\star,Z,\tau,\Omega)\;
\xi(M_\star,Z,\tau,\Omega)\;
n_\mathrm{p}(M_\star,Z,\tau,\Omega)\;\times \nonumber \\
& & \times\;p_Z(Z)\,
\mathrm{d}Z\,p_\tau(\tau)\,
\mathrm{d}\tau\,p_\Omega(\Omega)\,
\mathrm{d}\Omega\,\mathrm{d}[\lg(M_\star/M_\odot)]\,.\end{aligned}$$
Moreover, one finds a population-integrated planetary mass-radius-orbit function $$\begin{aligned}
& & \hspace*{-2em} \psi(m_\rmn{p},r_\rmn{p},a,\varepsilon) = \int f_\rmn{p}(M_\star,Z,\tau,\Omega)\;\xi(M_\star,Z,\tau,\Omega)\;\times \nonumber\\
& & \times\;\varphi(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)\;\times \nonumber \\
& & \times\;\,p_Z(Z)\,\rmn{d}Z\,p_\tau(\tau)\,
\rmn{d}\tau\,p_\Omega(\Omega)\,
\rmn{d}\Omega\,\rmn{d}[\lg(M_\star/M_\odot)] \,,\end{aligned}$$ and a corresponding planetary mass function results as $$\Psi(m_\rmn{p}) = \int \psi(m_\rmn{p},r_\rmn{p},a,\varepsilon)\,
\rmn{d}[\lg(r_\rmn{p}/r_\oplus)]\,\rmn{d}[\lg(a/a_\oplus)]\,\rmn{d}\varepsilon\,,$$ so that one finds the number density of planets again as $$N_\mathrm{p} = \int \Psi(m_\rmn{p})\,\rmn{d}[\lg(m_\rmn{p}/M_\oplus)]\,.$$
Provided that experiments in the hunt for extra-solar planets follow deterministic criteria, a mass function can be extracted that refers to the selected host stars and planetary orbits that the applied technique is sensitive to, i.e. averages are taken over the stellar population and the orbital parameters. However, in order to answer fundamental questions such as ‘How frequent are planets of a given mass range in the Solar neighbourhood?’, ‘What fraction of stars in the Milky Way do have planetary systems?’, or ‘How many planets that could host life are there in the Universe?’, one needs to trace back the description of planetary systems to more fundamental functions such as the differential mass-radius-orbit function $\varphi(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)$, the fraction of stars with planetary systems $f_\mathrm{p}(M_\star, Z, \tau,\Omega)$, and the differential stellar mass function $\xi(M_\star,Z,\tau,\Omega)$.
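The chain from the fundamental functions to the number density of planets can be made concrete with a minimal numerical sketch. All densities below are toy assumptions of mine (a Salpeter-like slope, constant $f_\rmn{p}$ and $n_\rmn{p}$, with metallicity, age, and spin rate already marginalized out); the point is only the structure of the integral over $\lg(M_\star/M_\odot)$:

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (kept explicit to avoid NumPy version
    differences between np.trapz and np.trapezoid)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

lg_m = np.linspace(-1.0, 0.5, 301)      # lg(M_star / M_sun)
xi = 10.0 ** (-1.35 * lg_m)             # toy Salpeter-like stellar mass function
f_p = np.full_like(lg_m, 0.3)           # toy: 30 per cent of stars host planets
n_p = np.full_like(lg_m, 2.0)           # toy: 2 planets per system on average

n_planets = trapezoid(f_p * xi * n_p, lg_m)   # number density of planets
n_stars = trapezoid(xi, lg_m)                 # number density of stars
# For these constant toy values, planets per star = f_p * n_p = 0.6.
```

Replacing the toy arrays with measured $\xi$, $f_\rmn{p}$, and $n_\rmn{p}$ (and restoring the metallicity, age, and spin-rate integrals) recovers the full expression for $N_\mathrm{p}$ above.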
Measuring planetary mass functions {#sec:measureMF}
==================================
In order to obtain an estimate of how well we can measure a planetary mass function, let us consider dividing the parameter space into multi-dimensional bins. If, rather than aiming for a precision measurement, one sets the goal at an ‘astronomical’ accuracy of 50 per cent, the assumption of Poisson statistics yields the requirement of each bin to contain at least 4 planets. With $p$ denoting the number of considered parameters, and $b$ the number of considered parameter ranges, the minimal number of planets needed to provide the desired result is $N_\rmn{p} = 4\,b^{p}$, which would correspond to letting the choice of parameter ranges follow the observed distribution of detected planets, in such a way that each bin contains exactly the minimum of 4 planets. With $\kappa$ denoting the desired relative accuracy, one finds more generally $N_\rmn{p} = \kappa^{-2}\,b^{p}$. Table \[tab:nplanets\] shows the requirements for some selected cases with relative accuracies of 50 per cent or 20 per cent, 2-, 4- or 6-parameter functions, and various numbers of bins ranging from 2 to 10. Roughly, one gets an idea of the distribution of the planet abundances with $b \geq 3$, but one can realistically only start talking about a “planetary mass function” for $b \geq 5$. While a planetary mass-radius-separation function $\varphi_{m_\rmn{p},r_\rmn{p},a}(m_\rmn{p},r_\rmn{p},a;M_\star,Z,\tau)$ depending on the stellar mass, metallicity, and age involves 6 parameters, less detailed 4-parameter functions are e.g. the planetary mass-separation function $\varphi_{m_\rmn{p},a}(m_\rmn{p},a;M_\star,Z)$ or the mass-radius function $\varphi_{m_\rmn{p},r_\rmn{p}}(m_\rmn{p},r_\rmn{p};M_\star,Z)$ depending on stellar mass and metallicity, or a planetary mass-radius-separation function depending on stellar mass only, and 2-parameter functions would e.g. be the planetary mass function $\varphi_{m_\rmn{p}}(m_\rmn{p};M_\star)$ depending on stellar mass only, or the planetary mass-separation function $\varphi_{m_\rmn{p},a}(m_\rmn{p},a)$ irrespective of the stellar properties. We now have a total sample of about 450 planets orbiting stars other than the Sun, where it took about 10 years to detect the first 150, then about 3 years to detect the next 150, and then just about 1 year to detect a further 150. Table \[tab:nplanets\] shows how long campaigns with a constant detection rate of 150 planets per year would have to last in order to obtain the respective functions with desired accuracies.
Right now, the collected data allow us to measure 1-parameter functions, find the basic structure ($b \geq 10$) of 2-parameter functions, see basic trends ($b \geq 3$) in 4-parameter functions, and get some hints on the dependency of the planet abundance on further parameters. With 150 planets per year, or more realistically, a fair fraction of this rate, rough ideas ($b \geq 5$) of 4-parameter planetary mass functions and an indication of trends ($b \geq 3$) for 6-parameter planetary mass functions are obtainable within foreseeable time frames, but the numbers call for more aggressive searches.
$\kappa$ $p$ $b$ $N_\rmn{p}$ $T_{150}$
---------- ----- ----- ------------------------------ ---------------
0.5 2 2 $4\times 2^2 = 16$ 1.3 months
0.5 2 3 $4\times 3^2 = 36$ 2.9 months
0.5 2 5 $4\times 5^2 = 100$ 8 months
0.5 2 10 $4\times 10^2 = 400$ 3 years
0.5 4 2 $4\times 2^4 = 64$ 5 months
0.5 4 3 $4\times 3^4 = 324$ 2 years
0.5 4 5 $4\times 5^4 = 2500$ 17 years
0.5 4 10 $4\times 10^4 = 40,000$ 270 years
0.5 6 2 $4\times 2^6 = 256$ 1.7 years
0.5 6 3 $4\times 3^6 = 2916$ 19 years
0.5 6 5 $4\times 5^6 = 62,500$ 420 years
0.5 6 10 $4\times 10^6 = 4,000,000$ 27,000 years
0.2 2 2 $25\times 2^2 = 100$ 8 months
0.2 2 3 $25\times 3^2 = 225$ 1.5 years
0.2 2 5 $25\times 5^2 = 625$ 4 years
0.2 2 10 $25\times 10^2 = 2500$ 17 years
0.2 4 2 $25\times 2^4 = 400$ 2.7 years
0.2 4 3 $25\times 3^4 = 2025$ 13.5 years
0.2 4 5 $25\times 5^4 = 15,625$ 100 years
0.2 4 10 $25\times 10^4 = 250,000$ 1700 years
0.2 6 2 $25\times 2^6 = 1600$ 11 years
0.2 6 3 $25\times 3^6 = 18,225$ 120 years
0.2 6 5 $25\times 5^6 = 390,625$ 2600 years
0.2 6 10 $25\times 10^6 = 25,000,000$ 170,000 years
: Minimal number of planets $N_\rmn{p}$ required to sample a descriptive statistic with $p$ parameters with $b$ bins to a relative accuracy $\kappa$, and time $T_{150}$ required to acquire such a sample for a planet detection rate of 150 per year.[]{data-label="tab:nplanets"}
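The scaling behind Table \[tab:nplanets\] is straightforward to reproduce (the helper names below are mine): a relative accuracy per bin of 50 or 20 per cent requires $4 = 0.5^{-2}$ or $25 = 0.2^{-2}$ planets in each bin under Poisson statistics, and a $p$-parameter function with $b$ bins per parameter has $b^p$ bins in total.

```python
def planets_needed(kappa, p, b):
    """Minimal sample size for relative accuracy kappa per bin,
    p parameters, and b bins per parameter: kappa**-2 * b**p."""
    return round(kappa ** -2) * b ** p

def years_needed(n_planets, rate_per_year=150):
    """Campaign duration at a constant detection rate."""
    return n_planets / rate_per_year

# Two rows of the table:
assert planets_needed(0.5, 4, 5) == 2500      # about 17 years at 150/yr
assert planets_needed(0.2, 6, 3) == 18225     # about 120 years
```

The exponential growth in $p$ is what drives the table's most demanding entries into the realm of millennia.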
Planetary multiplicity {#sec:multiplicity}
======================
While stars with and without planets have been distinguished by referring to the fraction $f_\rmn{p}(M_\star,Z,\tau,\Omega)$ of stars that host planets and defining the differential planetary mass-radius-orbit function $\varphi(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)$ to relate to these only, a further statistic is the distribution of the number of planets amongst all planetary systems. With multiplicity indices $\zeta_k$ that denote the fraction of planetary systems containing $k$ planets, where $$\sum_{k=1}^{\infty} \zeta_k = 1\,,$$ the planetary mass-radius-orbit function can be decomposed as $$\begin{aligned}
& & \hspace*{-2em}
\varphi(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega) \nonumber \\
& & = \sum_{k=1}^{\infty} k\,\zeta_k\,\hat{\varphi}_k(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)\,,\end{aligned}$$ where $$\begin{aligned}
& & \hspace*{-2em}
\int \hat{\varphi}_k(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)\;\times \nonumber \\
& & \times\;\rmn{d}[\lg(m_\rmn{p}/M_\oplus)]\,\rmn{d}[\lg(r_\rmn{p}/r_\oplus)]\,\rmn{d}[\lg(a/a_\oplus)]\,\rmn{d}\varepsilon = 1\,.\end{aligned}$$
In general, all $\hat{\varphi}_k(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)$ may be different. Together with the multiplicity indices $\zeta_k$, one would be left with an infinite number of parameters. This however can be meaningfully avoided by adopting a functional dependence of $\zeta_k$ and $\hat{\varphi}_k$ on $k$ that is described by a small finite number of parameters.
In particular, one might want to distinguish stars with a single planet from multiple-planet systems, described by $\zeta_1$ (with $\zeta_\rmn{mult} = 1-\zeta_1$), $\hat{\varphi}_1(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)$, and $$\begin{aligned}
& & \hspace*{-2em}
\hat{\varphi}_\rmn{mult}(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega) \nonumber \\
& & = \sum_{k=2}^{\infty} k\,\zeta_k\,\hat{\varphi}_k(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)\nonumber \\
& & = \varphi(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)\;- \nonumber \\
& & \hspace*{2em} -\;\zeta_1\,\hat{\varphi}_1(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)\,.\end{aligned}$$ In fact, @Wright have argued that there is evidence for $\hat{\varphi}_1$ being different from $\hat{\varphi}_\rmn{mult}$.
The assessment of planetary multiplicity however poses a huge challenge for properly interpreting the observational data, given that our knowledge of the absence of further planets in observed systems is quite limited. If Hot Jupiters are considered lonely, whereas Neptune-mass planets are frequently found in multiple systems [@Mayor:abundance; @HARPS:abundance2], how much does this have to be attributed to the fact that observational techniques that report Hot Jupiters are insensitive to less massive planets, whereas if the sensitivity extends down to lower masses, other such planets are spotted rather easily? It is intriguing to see that observations of transit timing variations led to the suggestion of the presence of a 15 Earth-mass planet in the WASP-3 system [@MacPlanet], which was already known to host a Hot Jupiter [@WASP3]. Planets reported by microlensing in particular cannot be claimed to be the only ones in the system; they were just the only ones that revealed their presence during a transient event. @390further explicitly found that the acquired data do not exclude the presence of gas-giant planets at any separation orbiting the lens star that caused event OGLE-2005-BLG-390, which is known to host a cool Super-Earth [@PLANET:planet]. It is easier to detect a planet than to establish that there are no other planets orbiting the same star, and if one aims at quantifying multiplicity, this needs to be addressed.
Planetary multiplicity however becomes an obvious phenomenon with the detection of respective systems, such as the pair of gas-giant planets orbiting OGLE-2006-BLG-109L [@DoubleCatch], which resemble a half-scale version of the Jupiter-Saturn part of the Solar system. Interestingly, the planets found to orbit HR 8799 look like the complementary double-scale version [@Marois:planet]. It is particularly striking that very early opportunities to detect such systems by gravitational microlensing or direct imaging, respectively, were successful, while one needs to keep in mind that planets with an orbital period similar to Saturn cannot be detected from radial-velocity surveys so far (given a 10–15 year history of respective campaigns), and observing planetary transits is further disfavoured by the small transit probability. However, for Super-Earths and planets with Neptune-class masses in closer orbits, radial-velocity surveys find a very high level of multiplicity as well [@Mayor:abundance; @HARPS:abundance2].
The detection of the pair of Jupiter- and Saturn-like planets orbiting OGLE-2006-BLG-109L [@DoubleCatch] is often hailed because of the striking similarity with the Solar system, albeit that there is basically nothing that can be said about potential inner rocky planets other than that such cannot be excluded. There is however another important result arising from this discovery: outer gas-giant planets are not of the lonesome type. How does one arrive at such a conclusion? Regardless of the large detection efficiency for such planets in events with a peak magnification as large as that of OGLE-2006-BLG-109 ($A_0 \sim 290$), the planetary abundance is moderate or small. If we consider an abundance of 5 per cent, the probability of a double catch would be just 0.25 per cent if the planets were drawn independently from the population. This would mean an expected detection of $\sim\,1/30$ systems amongst the 13 events comprising the systematic sample reported by @Gould:abundance, so that we would have been very lucky to find the detected pair. Therefore, it appears more likely that the two detections were not the result of independent draws, but instead the probability for a planet to orbit a star is larger if one considers a star that is known to host planets as compared to an arbitrarily chosen star that might host planets or not. This however means that it is not appropriate to consider a planetary mass function with planets randomly drawn from it; instead one needs to distinguish between stars with or without planets, as the formalism suggested in the previous section does. These arguments however get weaker if the planetary abundance were as large as 20 per cent, because this would mean a probability of 4 per cent for a pair, or 1/2 expected to be detected for 13 events as compared to the one found.
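The independence argument reduces to a one-line calculation (the function name is mine): if gas giants were drawn independently with abundance $q$ per star, a 'double catch' in a single event occurs with probability $q^2$, so amongst $N$ events one expects $N\,q^2$ two-planet detections.

```python
def expected_pairs(q, n_events):
    """Expected number of two-planet detections amongst n_events events
    if planets are drawn independently with per-star abundance q."""
    return n_events * q ** 2

low = expected_pairs(0.05, 13)    # ~0.03, i.e. ~1/30 systems expected
high = expected_pairs(0.20, 13)   # ~0.5, i.e. about 1/2 expected
```

At a 5 per cent abundance the single observed pair is a $\sim\,30\sigma$-lucky draw under independence, whereas at 20 per cent it is unremarkable, which is exactly why the conclusion hinges on the assumed abundance.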
Planet abundance estimates from microlensing observations {#sec:abundance}
=========================================================
The statistical analysis of microlensing events in order to derive planet abundance estimates provides an illustrative example of the challenges one is facing. If a planet orbits the lens star, a detectable signal will only arise with a finite probability. This finite detection efficiency for planets of given mass and orbital separation from their host star is of relevance not only for assessing abundances by means of detections but also for drawing conclusions from the absence of planetary signals. Moreover, the host stars of planets detected by gravitational microlensing arise stochastically from the underlying population of stars that intervene along the lines of sight towards the observed targets; with current experiments, most of the masses of the lens stars are only known up to a broad probability distribution, although the mass of the lens star is frequently known for events in which planetary signals have been detected [@Gould:abundance]. The lack of information about the planet’s host star is troublesome, since it is important to distinguish planet population statistics in the range of stellar masses between 0.1 and 0.8 $M_\odot$, covered by microlensing, given that current planet-formation models predict substantial differences, in particular for the abundance of gas-giant planets [@IdaLin].
For the rather small planet samples acquired so far, let us however neglect this issue for the time being, and just compare planet abundance estimates that refer to the sample of probed lens stars. Recently, there have been some discussions about planetary mass functions that can be extracted from microlensing observations. While the discussion by @Sumi:planet is not based on a well-defined criterion for selecting the considered 10 planet detections from the so far published 24 candidates towards the Galactic bulge [@Dominik:review], and moreover no relation has been given between these ‘detections’ and the efficiency of the full observing campaigns, @Gould:abundance in contrast adopted selection criteria that lead to a well-defined event sample, and evaluated the detection efficiencies properly. However, they refer to a planetary mass function described by means of the planet-to-star mass ratio, whose value is highly questionable, given that rather obviously one does not expect the same number of half-massive planets to form around half-massive stars. In particular, coagulation and accretion processes depend on the masses of the bodies involved and their spatial density, but not on the mass of the star. Nevertheless, the sample drawn by @Gould:abundance allows for an insightful further look.
@Gould:abundance refer to 13 events with a peak magnification $A_0 > 200$, densely monitored by MicroFUN[^7] (and other campaigns) from 2005 to 2008. Amongst those events, 2 provided a signal that indicates the presence of a massive gas-giant planet above 150 $M_\oplus$ ($0.5~M_\rmn{jup}$), namely OGLE-2006-BLG-109 and MOA-2007-BLG-400. For such planets, the detection efficiency for orbital separations that correspond to the ‘lensing zone’[^8] can broadly be assumed to be of the order of 100 per cent [@GS98]. One would therefore estimate the abundance of such planets to be about 15 per cent.
Rather than just focussing on events with large peak magnifications, the PLANET collaboration [@PLANET:first; @PLANET:EGS] has acquired data on a much larger sample of about 50 events per year with $A_0 > 2$ from 2002 to 2007, with sampling intervals of around 2 hrs or better, where the average detection efficiency for Jupiter-mass planets in the ‘lensing zone’ for such a sample is about 15 to 20 per cent [@GL92]. Only one respective planet has been reported: OGLE-2005-BLG-071Lb [@OB71; @OB71:Dong], as compared to expected 45–60 if those reside around each of the lens stars. This gives a rough abundance estimate of 1.5–2 per cent, which looks substantially smaller than what one guesses from the densely monitored events with $A_0 > 200$. The PLANET team earlier claimed an upper abundance limit (at 95 per cent confidence) of 33 per cent on Jupiter-mass planets in the same orbital range based on the absence of any detection amongst 42 events well-covered from 1995 to 1999 [@PLANET:fiveshort; @PLANET:fivelong].
The results based on the MicroFUN and PLANET data however appear to be statistically compatible, and one finds that the small-number statistics imply large uncertainties. In fact, an Agresti-Coull confidence interval for the planetary abundance based on the underlying binomial distribution [@AgrestiCoull] at 95 per cent probability extends from 3 per cent to 43 per cent for the MicroFUN result, and from less than 0.01 per cent to 10 or 13 per cent for the PLANET result. This also gives some indication of the unquantified uncertainties of the statistical results quoted by @Gould:abundance.
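The quoted intervals can be reproduced from the standard Agresti-Coull construction, $\tilde{n} = n + z^2$, $\tilde{p} = (x + z^2/2)/\tilde{n}$, interval $\tilde{p} \pm z\sqrt{\tilde{p}(1-\tilde{p})/\tilde{n}}$. The sketch below assumes 13 trials at $\sim$100 per cent efficiency for the MicroFUN-like case and $\sim$45–60 effective trials (300 events times 15–20 per cent efficiency) for the PLANET-like case, as described in the text:

```python
from math import sqrt

def agresti_coull(x, n, z=1.96):
    """Agresti-Coull interval for a binomial proportion
    (approximately 95 per cent coverage for z = 1.96)."""
    n_tilde = n + z * z
    p_tilde = (x + z * z / 2.0) / n_tilde
    half = z * sqrt(p_tilde * (1.0 - p_tilde) / n_tilde)
    return max(0.0, p_tilde - half), min(1.0, p_tilde + half)

# MicroFUN-like numbers: 2 detections amongst 13 events -> ~3 to ~43 per cent.
lo, hi = agresti_coull(2, 13)
# PLANET-like numbers: 1 detection amongst ~45 effective trials
# -> from essentially 0 up to ~13 per cent (up to ~10 per cent for 60 trials).
lo2, hi2 = agresti_coull(1, 45)
```

The lower bound of the PLANET-like interval is clipped at zero, consistent with the 'less than 0.01 per cent' phrasing above.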
Apart from the binomial statistics, there are some systematic uncertainties. One might in fact wonder whether there are further planets just waiting to be detected in the PLANET data that have not been spotted yet due to absence of a comprehensive systematic analysis. This may not be too unlikely, given that e.g. the event OGLE-2008-BLG-513 was initially considered to be due to a stellar binary, before a planetary model had been suggested [@Gould:abundance]. On the other hand, @puzzle have pointed to a puzzle regarding the properties of the high-magnification events that casts doubt on whether we really understand the mechanism responsible for producing these. Namely, a highly statistically significant correlation has been found between the event peak magnification and the metallicity of the observed source star (i.e. not the planet’s host star). This sample bias is so far not understood.
If the planetary abundance turns out to be small, its determination becomes more difficult. If one focuses on high-magnification peaks, for an abundance of 15 per cent, 5 per cent, or 2 per cent, the monitoring of 90, 320, or 800 microlensing events, respectively, would be required in order to make the half-width of a symmetric 95 per cent confidence interval match half the abundance. With an average detection efficiency of 15–20 per cent for less favourable, but useful ($A_0 \geq 2$), hourly-sampled events one would require 5–6 times as many events, but given that the number of events with $A_0 > 200$ is about 100 times smaller, such a strategy looks more feasible, in particular since with the current detection rate of the microlensing surveys, the monitoring of $\sim\,200$ suitable events per year is possible.
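The quoted event numbers are consistent with a back-of-the-envelope Wald-type calculation: requiring the half-width $z\sqrt{p(1-p)/n}$ of a symmetric 95 per cent interval to equal $p/2$ gives $n = 4z^2(1-p)/p$. A minimal sketch (which yields roughly 90, 290, and 750 events for the three abundances; the quoted figures presumably rest on a more careful binomial treatment):

```python
def events_needed(p, z=1.959964):
    """Number of monitored events n such that the Wald half-width
    z*sqrt(p(1-p)/n) of a symmetric 95 per cent interval equals
    half the abundance p/2, i.e. n = 4 z^2 (1-p)/p."""
    return 4 * z**2 * (1 - p) / p

for p in (0.15, 0.05, 0.02):
    print(f"abundance {p:.0%}: ~{events_needed(p):.0f} events")
```

The required sample size grows roughly as $1/p$, which is why a small abundance is so costly to pin down.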
While the detection efficiency for Jupiter-mass planets is a rather robust number, the sensitivity of microlensing campaigns to planets between 1 and 10 $M_{\oplus}$ (“Super-Earths”) is a strong function of planet mass and orbital separation. Therefore, the interpretation becomes substantially dependent on the choice of the considered region of planet parameter space and the adopted averaging. The sparsity of data makes a meaningful assessment quite difficult. The least massive planet in the sample adopted by @Gould:abundance, and the only one below 50 $M_{\oplus}$, was found to have a mass of 13 $M_{\oplus}$ with a substantial uncertainty, so that it may or may not fall into the Super-Earth mass regime. Moreover, detection efficiencies in this region vary substantially amongst the 13 events that comprise the sample, with a prominent peak close to where the angular separation of the planet equals the angular Einstein radius of its host star. The situation is slightly better for the results of the PLANET campaign: OGLE-2005-BLG-390Lb has been estimated to have a mass between 3 and 10 $M_{\oplus}$ with a probability of 68 per cent [@PLANET:planet; @Do:Estimate2], and one is within the right order of magnitude by assuming a detection efficiency of about 1–2 per cent for ‘lensing zone’ planets [c.f. @Bennett:Earth; @390further] for an average over 300 observed events. One detection then leads to a maximum-likelihood point estimate for the respective abundance of 17–33 per cent, about 10–20 times larger than that obtained for planets above 0.5 $M_\rmn{jup}$ from the same set of observations. However, none of the derived estimates should be considered to be correct to within less than a factor 3–4, but on the other hand, they are not substantially worse either. While microlensing observations show that it is implausible that low-mass planets are rare, firm conclusions are prevented by the current low-number statistics.
In particular, @Gould:abundance found that their sample size is insufficient for reliably determining a power-law index in the mass function with respect to the planet-to-star mass ratio. However, they estimate the local planet number density in orbital separation $d = \theta_\rmn{p}/\theta_\rmn{E}$ and mass ratio $q$ at $d = 1$ and $q = 5 \times 10^{-4}$ to be $0.36$ per decade in each of the quantities. With power laws considered to range from $\propto q^{-0.2}$ to $\propto q^{-0.6}$, and assuming a ‘typical’ stellar mass of $0.3~M_{\odot}$, one finds for the middle of the decade from $1~M_{\oplus}$ to $10~M_{\oplus}$, roughly at $m_\rmn{p} = 3~M_{\oplus}$, local number densities of $0.6$ or $1.9$, respectively. For comparison, let us account for the fact that the lensing zone covers 0.42 decades, and therefore multiply the derived abundance of 17–33 per cent by 2.4, which results in values $0.4$–$0.8$, not much different from the result found by @Gould:abundance, where one also needs to consider that the local density is not equal to the average density in the considered surrounding region. Applying the same procedure for transferring the local number density according to @Gould:abundance to a planet mass $m_\rmn{p} = 1~M_{\rmn jup}$ yields values of $0.25$ or $0.12$, for the two different power-law indices, respectively. The range of power laws gives abundance density ratios between $m_\rmn{p} = 3~M_{\oplus}$ and $m_\rmn{p} = 1~M_{\rmn jup}$ in the range from $2.5$ to $16$ (as compared to 10–20 guessed from PLANET observations).
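The number-density scalings in this paragraph can be checked with a short script. The Earth-to-Sun and Jupiter-to-Earth mass ratios below are standard values, not taken from the text, and the functional form is the stated power law anchored at $q_0 = 5\times10^{-4}$:

```python
M_EARTH_PER_MSUN = 3.003e-6   # Earth mass in solar masses (standard value)
M_JUP_IN_MEARTH = 317.8       # Jupiter mass in Earth masses (standard value)

def local_density(m_p_earth, index, M_star_sun=0.3,
                  density0=0.36, q0=5e-4):
    """Scale the local planet number density density0 (per decade of q
    and d, quoted at q0) to a planet of mass m_p_earth via the power
    law q**index, assuming a 'typical' host mass M_star_sun."""
    q = m_p_earth * M_EARTH_PER_MSUN / M_star_sun
    return density0 * (q / q0) ** index

# 3 Earth masses and 1 Jupiter mass, indices -0.2 and -0.6
for m in (3.0, M_JUP_IN_MEARTH):
    print(m, local_density(m, -0.2), local_density(m, -0.6))
```

Running this reproduces the quoted values: about $0.6$ and $1.9$ at $3~M_{\oplus}$, and about $0.25$ and $0.12$ at $1~M_\rmn{jup}$.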
Summary and outlook {#sec:outlook}
===================
A planetary mass function is not the extension of the stellar mass function to lower masses, given that planets orbit stars. Instead, planetary mass functions more or less strongly depend on the characteristic properties of the respective host stars, such as the stellar mass $M_\star$, metallicity $Z$, age $\tau$, and spin rate $\Omega$. As long as the planet mass $m_\rmn{p}$, planet radius $r_\rmn{p}$, orbital semi-major axis $a$, and orbital eccentricity $\varepsilon$ are considered as descriptive parameters, all population statistics can be derived from three fundamental functions, namely the differential mass-radius-orbit function $\varphi(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)$, the fraction of stars with planetary systems $f_\mathrm{p}(M_\star, Z, \tau,\Omega)$, and the differential stellar mass function $\xi(M_\star,Z,\tau,\Omega)$. In principle, the fragmentation of the planetary system into $k$ planetary bodies gives us an infinite number of multiplicity indices $\zeta_k$ that correspond to the fraction of planetary systems with exactly $k$ planets, as well as respective specific mass-radius-orbit functions $\hat{\varphi}_k(m_\rmn{p},r_\rmn{p},a,\varepsilon; M_\star, Z, \tau,\Omega)$. Adopting a functional dependence on $k$ however allows for a description with a finite number of parameters. A first-order step would be to distinguish single-planet and multiple-planet systems. However, the determination of their respective fractions for all stars that host planets requires a proper assessment of the constraints on the presence of further planets in the studied systems.
It cannot be stressed enough that the formation and evolution of planets is a crucial step towards the development of life, but it will [*not*]{} be understood by focusing interest on habitable planets rather than embracing planet populations in their amazing diversity. Moreover, rather than just optimizing planet searches for a large detection rate, the critical design feature is to follow well-defined monitoring and detection criteria that allow simulations to be carried out, so that meaningful statistics can be derived.
Given the probabilistic nature of the alignment of stars suitable to provide gravitational microlensing events, and similarly of a planet revealing its presence around the foreground (generally unobserved) lens star, studying planet populations by microlensing comes with all the difficulties and challenges that one might encounter. All conclusions that can be drawn to date suffer not only from small-number statistics, but moreover from difficulties in the assessment of the detection efficiency and the related need to refer to strictly deterministic procedures for the monitoring strategy [e.g. @MiNDSTEp2008; @Gould:abundance]. @Gould:abundance have recently presented a first statistically meaningful, but very small, sample of 13 events, comprising only those events for which source and lens stars are so closely aligned as to yield a peak magnification $A_0 \geq 200$. While some fundamental statistics can be derived to within a factor 3–4, current data do not allow one to do substantially better. The fact that a maximum-likelihood point estimate for the abundance of planets above $0.5~M_{\rmn jup}$ from this sample comes out 10 times larger than what one would guess from 6 years of PLANET observations that also include events with smaller peak magnifications, while both values are statistically compatible, demonstrates the current uncertainties. Moreover, one might wonder whether any systematics that are not fully understood affect this outcome. The relatively large number of gas-giant planets in the sample adopted by @Gould:abundance, as compared to the PLANET observations with an about 3–5 times [*larger*]{} total detection efficiency, is somewhat surprising, in particular given that such planets are readily detected at high efficiency already for $A_0 \geq 10$ [@GS98]. In fact, for measuring planetary abundances, focussing on the few events with very large $A_0$ does not appear to be a promising strategy, due to the rarity of such events.
The prospects for obtaining a measurement of the planetary abundance with a desired accuracy strongly depend on the abundance itself, given that a small abundance will imply a small number of detections. This is a serious limiting factor for any planet detection campaign, and the measurement of the abundance of ‘second Earths’ by NASA’s Kepler mission is not immune to the problem of small-number statistics either. Rather than trying to estimate an abundance from a small region near the sensitivity limit, a more robust estimate for the abundance of habitable planets would arise from an interpolation between hotter and cooler planets, making use of a larger number of detected objects, and assuming that planet formation is not radically different just for the habitable zone. The improvement of the statistics by a more powerful mission such as PLATO (PLAnetary Transits and Oscillations)[^9] [@PLATO] appears much desired.
The dependency of the planet population statistics on the properties of the host star implies that a quite substantial amount of data will need to be collected for properly measuring the planetary mass function, and even more for determining the full mass-radius-orbit function. With the 450 reported planets so far, we can now assess 1- and 2-parameter functions (if and only if we understand any detection bias), but it requires a much larger detection rate than the recent 150 planets per year in order to understand the distribution of planets in the Universe. From 1995 to 2009, the time interval required to acquire 150 planets was twice cut by a factor of three (10 years, then 3 years, then 1 year), so that we see a substantial acceleration, and the Kepler mission is already contributing to boosting the planet detection rate further. Nevertheless, drawing pictures of planet parameter space in its full complexity calls for samples that are 2–4 orders of magnitude larger than those we have now. Looking back at the history of exoplanet detections however tells us something else: we gained a lot of insight from a few individual detections that came as a surprise and challenged the prevailing understanding. Shouldn’t we expect to be surprised again when embarking on exploring further uncharted territory? Our understanding of planetary formation and evolution is not probed uniformly by a planetary mass-radius-orbit function, but certain regions of planet parameter space may prove more critical in the power to discriminate between alternative theories or to measure crucial parameters. Therefore, efficiency could be gained from observational campaigns delivering specific characteristic statistics that can be robustly determined from rather small samples.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank Andrew Collier Cameron, Markus Hundertmark, Christine Liebig, Sohrab Rahvar, and Yiannis Tsapras for helpful comments on the manuscript, and Michel Mayor for advice on some relevant literature.
[^1]: Royal Society University Research Fellow
[^2]: E-mail: md35@st-andrews.ac.uk
[^3]: [http://www.planet-legacy.org]{}
[^4]: [http://ogle.astrouw.edu.pl]{}
[^5]: following the “Position Statement on the Definition of a ‘Planet’[”]{} (in the revision dated 28 February 2003) by the IAU Working Group on Extrasolar Planets (WGESP)
[^6]: For the vast majority of stars, the spin rate $\Omega$ essentially becomes a function of stellar mass $M_\star$ and age $\tau$ [e.g. @CCLi; @Barnes], so that this parameter can be neglected.
[^7]: [http://www.astronomy.ohio-state.edu/~microfun/]{}
[^8]: Angular planet-star separations $0.618\,\theta_\rmn{E} \leq \theta_\rmn{p} \leq 1.618\,\theta_\rmn{E}$, where $\theta_\rmn{E} = \{[(4GM)/c^2]\,(D_\rmn{L}^{-1}-D_\rmn{S}^{-1})\}^{1/2}$, with $D_\rmn{L}$ and $D_\rmn{S}$ the distances of the lens and source star from the observer, respectively, typically a range of 1.5 to 4 au.
[^9]: [http://sci.esa.int/plato]{}
---
author:
- 'Hugo A. Akitaya[^1]'
- 'Leonie Ryvkin[^2]'
- 'Csaba D. Tóth [^3]'
bibliography:
- 'bibliography.bib'
title: 'Rock Climber Distance: Frogs versus Dogs[^4]'
---
[^1]: Department of Computer Science, Tufts University, Medford, MA, USA.
[^2]: Department of Mathematics, Ruhr University Bochum, Germany
[^3]: Department of Mathematics, California State University Northridge, Los Angeles, CA, USA.
[^4]: Research supported in part by the NSF awards CCF-1422311 and CCF-1423615.
---
abstract: 'Minor modifications are given to prove the Main Theorem under the Blaschke (instead of Carleson) condition as well as a small historical comment.'
author:
- 'F. Peherstorfer, and P. Yuditskii'
date:
-
-
title: 'Remark on the paper “Asymptotic behavior of polynomials orthonormal on a homogeneous set”'
---
Because reference [@DKS] refers to our paper [@PY03], a certain historical comment is needed.
In [@PY03] we generalized H. Widom’s Theorem [@W], based on an absolutely new idea dealing with a one-dimensional perturbation of a given Jacobi matrix. As is well known, if a Jacobi matrix $J$ has the compact set $E=[b_0,a_0]\setminus \cup_{j\ge 1}(a_j,b_j)$ as its spectral set, then its one-dimensional perturbation may have, in addition to $E$, spectral points in the gaps (one point in each gap). Since in our case the set $E$ is possibly a Cantor-type set, an [*infinite*]{} number of spectral points $X$ has to be added to the support of the spectral measure. A homogeneous set $E$ possesses the following very nice property [@JM85]. Let $B(z,z_0)=B(z,z_0;\Omega)$ be the Blaschke factor in the domain $\Omega=\overline{ \mathbb C}\setminus E$ with zero at $z_0 \in \Omega$. From each interval $(a_j,b_j)$ let us pick arbitrarily exactly one $x_j$. Then $$\label{hr1}
\inf_k\prod_{j\not =k}|B(x_k,x_j)|>0.$$ Note that the convergence of the product $\prod_{j}|B(z_0,x_j)|>0$ (the Blaschke condition) corresponds to the so-called Widom property of the domain $\Omega$. So, a domain with a homogeneous boundary has an even better property: the more restrictive, so-called Carleson condition holds for the given Blaschke product.
Thus, to use our idea of a one-dimensional perturbation, we were forced to work with spectral measures supported on a homogeneous set $E$ but also having a possibly infinite set of mass points supported on an arbitrary (real) set $X$, satisfying (similarly to the condition above) the Carleson condition $$\label{hr2}
\inf_{x\in X}\prod_{y\in X, y\not =x}|B(x,y)|>0.$$
When the results of [@PY03] were presented and the manuscript was submitted for publication, we recognized that the math community is much more interested in an infinite number of mass points than in Cantor-type spectral sets. We wrote [@PY01], considering $E$ just as a single interval. It became almost immediately clear that the Carleson condition was too restrictive, and a simple trick [@PY01 (2.9)] allows one to use only the Blaschke condition.
Unfortunately, the first paper was finally published later than the second one (in this way we have, formally, a more recent paper with the more restrictive condition [@PY03 (6.1)] on the set $X$).
[*To prove the Main Theorem under the Blaschke condition*]{} one has to estimate [@PY03 (6.9)] in the way of [@PY01 (2.9)], of course following the standard strategy of this paper (Lemma 4.2, Lemma 5.3, etc.):
1\. For a fixed $\epsilon>0$ use the finite covering of $\Gamma^*$ (see [@PY03 p.139]), $$\label{hr3}
\Gamma^*=\cup_{j=1}^{l(\epsilon)}\left\{\beta:
\text{dist}(\beta,\beta_j)\le\eta(\epsilon)\right\}.$$
2\. For the exhaustion $\{X_N\}$ of $X$ by finite sets [@PY03 p.138], let $M_N$ be the character of the Blaschke product $B_N$ with zeros at $X\setminus X_N$. Since $B_N(0)\to 1$ and $M_N\to 1_{\Gamma^*}$ we can choose $N$ so big that $$\label{hr4}
1-B_N(0) \le \epsilon, \ \ 1-\Delta^{M_N^{-1}}(0) \le \epsilon$$ (for the definition of $\Delta^{\alpha}$ see [@PY03 p.125]).
3\. Having a finite number of the reproducing kernels (characters $\beta_j$) and a finite number of points ($X_N=\{\text{zeros of}\
B/B_N\}$), choose $n$ due to the estimation $$\label{hr5}
\begin{split}
&\left|\sum_{X_N}\left\{b^{n+1}K^{\beta_j}\left(\frac{-z'}{\phi'}\right)P_n\right\}\right|\\
\le(\sup_{X_N}|b|)^n
&\sqrt{\sum_{X_N}|P_n|^2\sigma_l}
\sqrt{\sum_{X_N}\left
|K^{\beta_j}\frac{b}{\psi}\right|^2\sigma_l}.
\end{split}$$
Let us mention that the key Lemmas 1.1, 2.2, 2.4, 5.2 were proved under the Blaschke condition [@PY03 (2.5)].
[40]{}
D. Damanik, R. Killip, and B. Simon, [*Perturbation of orthogonal polynomials with periodic recursion coefficients*]{}, Preprint.
P. W. Jones and D. E. Marshall, [*Critical points of Green’s function, harmonic measure, and the corona problem*]{}, Ark. Mat. [**23**]{}, 281–314 (1985).
F. Peherstorfer and P. Yuditskii, [*Asymptotics of orthonormal polynomials in the presence of a denumerable set of mass points*]{}, Proc. Amer. Math. Soc. [**129**]{} (2001), 3213–3220.
F. Peherstorfer and P. Yuditskii, [*Asymptotic behavior of polynomials orthonormal on a homogeneous set*]{}, J. Analyse Math. [**89**]{}, 113–154 (2003).
H. Widom, [*Extremal polynomials associated with a system of curves in the complex plane*]{}, Adv. Math. [**3**]{} (1969), 127–232.
---
abstract: 'The Gaussian process (GP) covariance function is proposed as a matching tool in GPMatch within a full Bayesian framework under relatively weaker causal assumptions. The matching is accomplished by utilizing the GP prior covariance function to define a matching distance. We show that GPMatch provides a doubly robust estimate of the averaged treatment effect (ATE), much like G-estimation: the ATE is correctly estimated when either condition is satisfied: 1) the GP mean function correctly specifies the potential outcome $Y^{(0)}$; or 2) the GP covariance function correctly specifies the matching structure. Simulation studies were carried out without assuming any known matching structure or functional form of the outcomes. The results demonstrate that GPMatch enjoys well-calibrated frequentist properties and outperforms many widely used methods, including Bayesian Additive Regression Trees. The case study compares the effectiveness of early aggressive use of biological medication in treating children with newly diagnosed Juvenile Idiopathic Arthritis, using data extracted from electronic medical records.'
author:
- Bin Huang PhD
- Chen Chen PhD
- Jinzhong Liu PhD
bibliography:
- 'bibliography.bib'
title: ' GPMatch: A Bayesian Doubly Robust Approach to Causal Inference with Gaussian Process Covariance Function As a Matching Tool '
---
Introduction
==============
Data from nonrandomized experiments, such as registries and electronic records, are becoming indispensable sources for answering causal inference questions in health, social, political, economic and many other disciplines. Under the assumptions of ignorable treatment assignment and distinct model parameters governing the science and treatment assignment mechanisms, [@Rubin1978] showed that Bayesian inference of the causal treatment effect can be approached by direct outcome modeling, treating it as a missing potential outcome problem. Direct modeling is able to utilize the many Bayesian regression modeling techniques to address complex data types and data structures, such as the examples in [@Hirano2000], [@Zajonc2012], [@Imbens1997] and [@Baccini2017].
Parameter-rich Bayesian modeling techniques are particularly appealing as they do not presume a known functional form, and thus may help mitigate potential model mis-specification issues. [@Hill2011] suggested that Bayesian additive regression trees (BART) can be used for causal inference, and showed that BART produced more accurate estimates of average treatment effects compared to propensity score matching, inverse propensity weighted estimators, and regression adjustment in the nonlinear setting, and performed as well in the linear setting. Others have used Gaussian processes in conjunction with Dirichlet process priors, e.g. [@roy2016bayesian] and [@Xu2016]. [@roy2017bayesian] devised enriched Dirichlet process priors tackling missing covariate issues. However, naive use of regression techniques can lead to substantial bias in estimating causal effects, as demonstrated in [@Hahn2018].
The search for ways of incorporating the propensity of treatment selection into Bayesian causal inference is long-standing. Including the propensity score (PS) as a covariate in the outcome model may seem a natural way. However, joint modeling of the outcome and treatment selection models leads to a “feedback” issue, and a two-stage approach was suggested by [@mccandless2010cutting], [@zigler2013model] and many others. Discussions about whether the uncertainty of the first-step propensity score modeling should be taken into account when obtaining the final result in the second step can be found in [@hill2006interval], [@Ho2007], [@Rubin2006], and [@Rubin1996]. [@Saarela2016] proposed an approximate Bayesian approach incorporating inverse probability treatment assignment probabilities as importance-sampling weights in Monte Carlo integration. It offers a Bayesian version of the augmented inverse probability treatment weighting (AIPTW) estimator. [@Hahn2017] suggested incorporating the estimated treatment propensity into the regression to explicitly induce a covariate-dependent prior in the regression model. These methods all require a separate step of treatment propensity modeling, and thus may suffer if the propensity model is mis-specified.
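For concreteness, the (frequentist) AIPTW estimator referred to above combines an estimated propensity model with outcome-model predictions under each arm. A minimal numpy sketch, not the notation of any particular cited paper, with a synthetic-data sanity check in which both models are correctly specified:

```python
import numpy as np

def aiptw_ate(y, a, e_hat, m0_hat, m1_hat):
    """Augmented inverse-probability-of-treatment-weighting (AIPTW)
    estimate of the ATE, given estimated propensities e_hat and
    outcome-model predictions m0_hat, m1_hat under control/treatment."""
    mu1 = np.mean(a * (y - m1_hat) / e_hat + m1_hat)
    mu0 = np.mean((1 - a) * (y - m0_hat) / (1 - e_hat) + m0_hat)
    return mu1 - mu0

# Sanity check on synthetic data with known models (true ATE = 2):
rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
e = 1 / (1 + np.exp(-x))                 # true propensity
a = rng.binomial(1, e)
m0, m1 = x, x + 2.0                       # true outcome surfaces
y = np.where(a == 1, m1, m0) + rng.normal(scale=0.5, size=n)
est = aiptw_ate(y, a, e, m0, m1)
```

The estimator is consistent if either the propensity model or the outcome model is correct, which is the frequentist counterpart of the double robustness discussed for GPMatch later.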
Matching is one of the most sought-after methods for designing observational studies to answer causal questions. Matching experimental units on their pre-treatment-assignment characteristics helps to remove bias by ensuring the similarity, or balance, between the experimental units of the two treatment groups. Matching methods impute the missing potential outcome with the value from the nearest match, or the weighted average of the values within a nearby neighborhood defined by a (chosen) caliper. Matching on multiple covariates can be challenging when the dimension of the covariates is large. For this reason, matching is often performed using the estimated propensity score (PS) or the Mahalanobis distance (MD). The idea is that, under the no-unmeasured-confounder setting, matching induces balance between the treated and untreated groups. Therefore, it serves to transform a nonrandomized study into a pseudo-randomized study. There are many different matching techniques; a comprehensive review is provided in [@Stuart2010]. A recent study by [@226731] compared PS matching with MD matching and suggests that PS matching can result in more biased and less accurate estimates of the averaged causal treatment effect as the precision of matching improves, while MD matching shows improved accuracy. Common to matching methods, the data points without a match are discarded. Such a practice may lead to a sample that is no longer representative of the target population. A user-specified caliper is often required, but different calipers can lead to very different results. Furthermore, matching on a mis-specified PS could lead to invalid causal inference results.
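As an illustration of the MD matching discussed above, here is a minimal 1:1 nearest-neighbour matching sketch; matching with replacement and with no caliper are simplifying assumptions for brevity:

```python
import numpy as np

def mahalanobis_match(X_treated, X_control):
    """1:1 nearest-neighbour matching of treated units to control units
    on Mahalanobis distance (with replacement, no caliper).

    Returns, for each treated row, the index of its matched control row.
    """
    # Pooled covariance over all units defines the metric
    V_inv = np.linalg.inv(np.cov(np.vstack([X_treated, X_control]).T))
    matches = []
    for xt in X_treated:
        d = xt - X_control                       # (n_control, p)
        dist2 = np.einsum('ij,jk,ik->i', d, V_inv, d)
        matches.append(int(np.argmin(dist2)))
    return matches
```

A treated unit is thus assigned its single closest control in the covariance-scaled metric; in GPMatch terms, this hard 1:1 assignment is what the GP covariance function replaces with smooth, distance-decaying weights.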
[@rubin1973use] suggested that the combination of matching and regression is a better approach than using either of them alone. [@Ho2007] advocated matching as nonparametric preprocessing for reducing dependence on parametric modeling assumptions. [@Gutman2017] examined different strategies of combining the preprocessed matching with a regression modeling of the outcome through extensive simulation studies. They demonstrated that some commonly used causal inference methods have poor operating characteristics, and suggested regression modeling after pre-processed matching works better. To our knowledge, no existing method can accomplish matching and regression modeling in a single step.
Gaussian process (GP) priors have been widely used to describe biological, social, financial and physical phenomena, due to their ability to model highly complex dynamic systems and their many desirable mathematical properties. Recent literature, e.g. [@Choi2013] and [@Choi2007], has established posterior consistency for Bayesian partially linear GP regression models. Bayesian modeling with a GP prior can be viewed as a marginal structural model where the potential outcome under the non-intervention condition $Y^{(0)}$ is modeled non-parametrically. It allows the missing response to be predicted by a weighted sum of observed data, with larger weights assigned to those in closer proximity and smaller weights to those further away, much like a matching procedure. This motivated us to consider using the GP prior covariance function as a matching tool for Bayesian causal inference.
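This matching-like behaviour of the GP posterior mean can be made explicit: with training inputs $X$, outcomes $y$, kernel matrix $K$ and noise variance $\sigma^2$, the prediction at $x_*$ is $k(x_*,X)(K+\sigma^2 I)^{-1}y$, a weighted sum of the observed outcomes whose weights decay with distance. A minimal sketch with a squared-exponential kernel (the hyperparameter values are illustrative, not those of GPMatch):

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential covariance between the row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean_weights(X, x_star, noise_var=0.1, lengthscale=1.0):
    """Weights w such that the GP posterior mean at x_star equals w @ y."""
    K = rbf(X, X, lengthscale) + noise_var * np.eye(len(X))
    k_star = rbf(x_star[None, :], X, lengthscale).ravel()   # (n,)
    return np.linalg.solve(K, k_star)   # K symmetric, so this is K^{-1} k
```

For a query point near one training input and far from the others, nearly all the weight falls on the nearest neighbour, i.e. the GP behaves like a soft, caliper-free matching rule.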
The idea of utilizing a GP prior in a Bayesian approach to causal inference is not new. Examples can be found in [@roy2016bayesian] for addressing heterogeneous treatment effects, in [@Xu2016] for handling dynamic treatment assignment, and in [@roy2017bayesian] for tackling missing data. While these studies demonstrated that GP priors can achieve flexible modeling and tackle complex settings, none has considered GP as a matching tool. This study adds to the literature in several ways. First, we offer a principled approach to Bayesian causal inference utilizing the GP prior covariance function as a matching tool, which accomplishes matching and flexible outcome modeling in a single step. Second, we provide causal assumptions that are more relaxed than the widely adopted assumptions from the landmark paper by [@rosenbaum1983]. By admitting additional random errors in the outcomes and in the treatment assignment, these new assumptions fit more naturally within the Bayesian framework. Under these weaker causal assumptions, the GPMatch method offers a doubly robust approach in the sense that the averaged causal treatment effect is correctly estimated when either one of the following conditions is met: 1) the mean function correctly specifies $Y^{(0)}$; or 2) the covariance function matrix correctly specifies the matching structure. Lastly, the proposed method has been implemented in an easy-to-use, publicly available on-line application (<https://pcats.research.cchmc.org/>).
The rest of the presentation is organized as follows. Section 2 describes the methods, where we present the problem setup, causal assumptions, and model specifications. The utility of the GP covariance function as a matching tool is presented in Section 3, followed by a discussion of its double robustness property. Simulation studies are presented in Section 4. The simulations are designed to represent the real-world setting where the true functional form is unknown, including the well-known simulation design suggested by [@Kang2007]. We compared the GPMatch approach with some commonly used causal inference methods, i.e. linear regression with PS adjustment, AIPTW, and BART, without assuming any knowledge of the true data generating models. The results demonstrate that GPMatch enjoys well-calibrated frequentist properties, and outperforms many widely used methods under the dual mis-specification setting. Section 5 presents a case study examining the comparative effectiveness of early aggressive use of biological medication in treating children with recently diagnosed juvenile idiopathic arthritis (JIA). Section 6 presents a summary, discussion, and future directions.
Method
======
Problem Setup and Notations
---------------------------
![The Directed Acyclic Graphic (DAG) Presentation of the Problem Setup[]{data-label="fig:0"}](dagitty-model.png){width="10cm"}
The problem setup is depicted in the directed acyclic graph (DAG), where rectangular nodes are measured and oval nodes are latent or unmeasured variables. The $\boldsymbol{X}$ and $\boldsymbol{V}$ are observed covariates, and $\boldsymbol{Y}$ is the observed outcome. The treatment assignment ($A = 0/1$) is a binary indicator, where 0 indicates the comparator or naturally occurring condition and 1 indicates the intervention. Correspondingly, the potential outcomes $(Y^{(0)}, Y^{(1)})$ are two latent variables. The unmeasured covariates are denoted by $U_0, U_1, U_2$, representing three types of unmeasured covariates for $Y^{(0)}, Y^{(1)}$ and $A$ correspondingly. The potential outcome $Y^{(0)}$ under the control condition is determined jointly by $\boldsymbol{X}$, a p-dimensional, and $\boldsymbol{V}$, a q-dimensional vector of observed covariates, plus an unmeasured covariate $U_0$. Thus, $(\boldsymbol{X}, \boldsymbol{V}, U_0)$ are prognostic variables. Similarly, the potential outcome $Y^{(1)}$ under the intervention condition is determined jointly by the observed covariates $(\boldsymbol{X}, \boldsymbol{V})$ and the unobserved covariates $(U_0, U_1)$. The observed outcome $Y$ is a noisy version of the corresponding potential outcome, with an error term $\epsilon$. The treatment is assigned according to an unknown propensity score, which is determined by the baseline covariates $\boldsymbol{V}$ and $U_2$. The observed baseline covariates $\boldsymbol{X}$ and $\boldsymbol{V}$ could be overlapping; different symbols are used to distinguish their roles in the science mechanism and the treatment assignment process, respectively. For example, X could include patient age, gender, genetic makeup, family disease history, past and current medication use, as well as laboratory results and other disease characteristics, which are directly related to the prognosis of the disease.
The V could include the above X variables, as well as other considerations in the treatment decision, including insurance, the socioeconomic status of the patient's family, education, and clinical centers. Most of these important X and V covariates are available in a patient registry and electronic medical records, and thus are observable. Other factors could play a role in treatment decisions, such as patient and clinician personal preferences, cultural beliefs and past experiences; however, they are almost never recorded. These factors are collectively referred to as $U_2$. The residual terms of the responses $(\epsilon, U_0, U_1)$ can be overlapping or correlated; the corresponding links are omitted in Figure 1 for better visual presentation.
The DAG can be expressed by a set of structural equation models: $$\begin{array}{l}
Y_i = A_i Y_i^{(1)}+(1-A_i) Y_i^{(0)} + \epsilon_i, \label{eq:a1}\\
Y_i^{(a)}=f^{(0)}(\boldsymbol{x_i},\boldsymbol{v_i})+ a (\tau (\boldsymbol{x_i}) + u_{1i}) + u_{0i}\\
Pr(A_i) = \pi(v_i, u_{2i}) \\
\end{array}$$ where $E(\epsilon_i) =0$ and $E(u_{ki})=0$ for $k=0,1,2.$ To ensure that the causal treatment effect can be estimated without bias, the following conditions must hold: $\epsilon \bot (Y^{(0)},Y^{(1)})$, $ (U_0, U_1) \bot A|\boldsymbol{X,V}$, $ U_2 \bot \epsilon$ and $ U_2 \bot Y | A, \boldsymbol{X},\boldsymbol{V}$. Violation of any of these conditional independence conditions can open up a back-door path from $Y$ to $A$ ([@pearl2009causality]). The $f^{(0)}(\cdot)$, $f^{(1)}(\cdot)$ and $\pi(\cdot)$ are unknown functions that describe the potential outcome science mechanism and the treatment assignment process. The sample average of the individual-level effects $ \tau_i = \tau(x_i) + u_{1i}$, namely $\tau=\frac{1}{n} \sum_i \tau_i$, is the parameter of interest, referred to as the averaged treatment effect (ATE).
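A small simulation following the structural equations above illustrates why the naive treated-minus-control contrast fails to recover the ATE when treatment assignment depends on the prognostic covariate $\boldsymbol{V}$. All functional forms and parameter values below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
v = rng.normal(size=n)                 # covariate driving treatment
x = rng.normal(size=n)                 # prognostic-only covariate

f0 = np.sin(x) + v                     # assumed Y^(0) surface (illustrative)
tau_i = 1.0 + 0.5 * x + rng.normal(scale=0.2, size=n)   # tau(x) + u_1
pi = 1 / (1 + np.exp(-2 * v))          # treatment depends on the confounder v
a = rng.binomial(1, pi)

y0 = f0 + rng.normal(scale=0.3, size=n)         # u_0 folded into noise
y1 = y0 + tau_i
y = np.where(a == 1, y1, y0) + rng.normal(scale=0.3, size=n)  # epsilon

ate = tau_i.mean()                     # the estimand: sample ATE (about 1.0)
naive = y[a == 1].mean() - y[a == 0].mean()     # confounded contrast
```

Because $v$ enters both the outcome surface and the treatment propensity, the naive contrast is inflated well above the true ATE, which is exactly the back-door path the conditional independence conditions rule out.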
The Causal Assumptions
----------------------
The causal assumptions are necessary to ensure an unbiased estimate of the causal treatment effect. They are presented in the DAG and the structural equations (1). Notice that the DAG includes three types of unmeasured covariates: $U_0$ captures the unknown correlation between the pair of potential outcomes, $U_1$ is a potential lurking variable, and $U_2$ is a potential confounding variable. Under the conditional independence conditions, the observed covariates $(\boldsymbol{X}, \boldsymbol{V})$ form a minimally sufficient set for identifying the causal treatment effect. Further, assuming distinct model parameters, it is relatively straightforward to see that the posterior of the potential outcomes can be derived directly by $$[Y^{(0)},Y^{(1)} | A, \boldsymbol{X,V}, Y] = \frac{[Y, Y^{(0)}, Y^{(1)} | A, \boldsymbol{X,V}]}{[Y|A,\boldsymbol{X,V}]}.$$ Compared with the widely adopted causal assumptions laid out in the landmark paper by [@rosenbaum1983] (RR), the DAG presents a weaker version of the causal assumptions:
1. \[CA1\] Instead of the stable unit treatment value assumption (SUTVA), we assume stable unit treatment value expectation assumption (SUTVEA). Specifically,
    1. The consistency assumption of RR requires that the observed outcome be an exact copy of the potential outcome, i.e. $Y_i=Y_i^{(0)}(1-A_i)+Y_i^{(1)} A_i$. Instead, only $Y \bot \boldsymbol{X,V} | (A, Y^{(0)}, Y^{(1)})$ is required. In other words, we consider the observed outcome a noisy copy of the potential outcome, with expectation $E(Y_i)=Y_i^{(0)}(1-A_i)+Y_i^{(1)} A_i.$
    2. The no-interference assumption of RR requires that the potential outcomes of one experimental unit not be influenced by those of another unit, i.e. $Y_i^{(a)} \bot Y_j^{(b)}$. Instead, we assume the observed outcomes from different units are conditionally independent given the observed covariates, $Y_i \bot Y_j | A, \boldsymbol{X},\boldsymbol{V}$.
The SUTVEA assumption acknowledges the existence of residual random error in the outcome measure. The observed outcomes may differ from the corresponding true potential outcomes due to measurement error. In addition, the observed outcomes could differ when the treatment received deviates from its intended version, for example by the timing of the treatment, the pre-surgery preparation procedure, or concomitant medication. We also allow the potential outcomes from different experimental units to be correlated, with correlations determined by the covariates. Since only one outcome can be observed out of all potential outcomes, causal inference presents a highly structured missing data setup in which the correlations between $(Y_i^{(1)}, Y_i^{(0)})$ are not directly identifiable. By admitting residual errors and allowing for explicit modeling of the covariance structure, the new assumptions can facilitate better statistical inference.
2. \[CA2\] As in RR, we assume the ignorable treatment assignment assumption $[Y^{(a)} | A=1, \boldsymbol{X,V}] =[Y^{(a)} | A=0, \boldsymbol{X,V}]$, for $a = 0,1$. That is, the marginal distribution of a potential outcome can be obtained by modeling the observed covariates only, independent of the treatment assignment. As depicted in the DAG, the presence of an unmeasured confounder is admissible as long as the back-door path from $Y$ to $A$ is blocked by the observed covariates. In practice, it is almost never possible to capture all the considerations factored into a treatment decision, such as personal preferences and past experiences. However, it is reasonable to consider that the unaccounted-for residual error in treatment assignment, conditional on the observed covariates (e.g. patient demographics, insurance, disease characteristics, laboratory and medical diagnostic tests), is not related to the potential outcomes.
3. Positivity assumption. As in RR, we assume every sample unit has a nonzero probability of being assigned to either one of the treatment arms, i.e. $0<Pr(A_i|\boldsymbol{x_i,v_i})<1$ for all $(\boldsymbol{x_i, v_i})$. This assumption ensures the equipoise of the causal inference.
The GPMatch Model Specifications
--------------------------------
The marginal structural model (MSM) is a widely adopted modeling approach to causal inference and serves as a natural framework for Bayesian causal inference. The MSM specifies $$Y_i^{(1)} = Y_i^{(0)}+A_i\tau_i.$$ Without prior knowledge about the true functional form, we let $ Y_i^{(0)} \sim GP(\boldsymbol{\mu_f}, \boldsymbol{K}) $, where the mean function $\mu_f$ may be modeled by a parametric regression equation, and $\boldsymbol{K}$ defines the covariance function of the GP prior. Specifically, GPMatch is proposed as a partially linear Gaussian process regression fit to the observed outcomes, $$\label{eq:d1}
Y_i=f_i(\boldsymbol{x_i}, \boldsymbol{v_i})+A_i\tau(\boldsymbol{x_i})+\epsilon_i,$$ where $$\begin{array} {lr}
f_i(\boldsymbol{x_i}, \boldsymbol{v_i}) = \mu_f(\boldsymbol{x_i}) + \eta(\boldsymbol{v_i}),\\
\eta_i(\boldsymbol{v_i}) \sim GP(0, \boldsymbol{K}),\\
\epsilon_i \sim N(0, \sigma_0),\\
\epsilon_i \perp \eta_i.
\end{array}$$ Here, we may let $\boldsymbol{\mu_f}=\left((1,\boldsymbol{X_i}')\boldsymbol{\beta}\right)_{n\times 1}$, where $\boldsymbol{\beta}$ is a $(1+p)$-dimensional vector of regression coefficients for the mean function. This allows any existing knowledge about the prognostic determinants of the outcome to be incorporated. Similarly, let $\boldsymbol{\tau} = \left( (1 , \boldsymbol{X_i}') \boldsymbol{\alpha} \right)_{n\times 1}$ to allow for a potentially heterogeneous treatment effect, where $\boldsymbol{\alpha}$ is a $(1+p)$-dimensional vector of regression coefficients for the treatment effect.
Let $\boldsymbol{Y_{n}}=(Y_{i})_{n\times 1}$; the model can be re-expressed in the multivariate representation $$\boldsymbol{Y_{n}}|\boldsymbol{A,X,V,\gamma} \sim MVN(\boldsymbol{Z'\gamma,\Sigma }),\label{eq:d2}$$ where $\boldsymbol{Z}'=(1, \boldsymbol{X_i}', A_i, A_i\times \boldsymbol{X_i}')_{n\times (2+2 p)}$, $\boldsymbol{\gamma}=(\boldsymbol{\beta, \alpha})$, and $\boldsymbol{\Sigma}=(\sigma_{ij})_{n\times n}$ with $\sigma_{ij}=K(\boldsymbol{v_i},\boldsymbol{v_j})+\sigma_0^2\delta_{ij}$. The $\delta_{ij}$ is the Kronecker delta: $\delta_{ij}=1$ if $i=j$, and 0 otherwise.
A Gaussian process can be considered a distribution over functions. The covariance function $\boldsymbol{K}$, where $k_{ij} = Cov(\boldsymbol{\eta_i},\boldsymbol{\eta_j})$, plays a critical role in GP regression. It can be used to reflect the prior belief about the functional form, determining its shape and degree of smoothness. In the next section, we show that for data from an experimental design where the matching structure is known, the GP covariance can be formulated to reflect that structure. Often the exact matching structure is not available; a natural choice for the GP prior covariance function $\boldsymbol{K}$ is then the squared-exponential (SE) function, $$K(v_i,v_j)=\sigma_f^2 \exp \left( -\sum_{k=1}^q \frac{|v_{ki}-v_{kj}|^2}{\phi_k}\right), \label{eq:d4}$$ for $i,j=1,...,n$. The $(\phi_1,\phi_2,...,\phi_q)$ are the length scale parameters for each of the covariates $\boldsymbol{V}$.
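The SE covariance function above can be sketched in a few lines. The function name `se_cov` and the default hyperparameter values are our own illustration; in GPMatch, $\sigma_f^2$ and the $\phi_k$ are estimated from the data.

```python
import numpy as np

def se_cov(V, sigma_f2=1.0, phi=None):
    """Squared-exponential covariance:
    K(v_i, v_j) = sigma_f^2 * exp(-sum_k |v_ki - v_kj|^2 / phi_k)."""
    V = np.atleast_2d(V)                     # n x q matrix of covariates
    n, q = V.shape
    phi = np.ones(q) if phi is None else np.asarray(phi, float)
    # Pairwise squared distances, one length scale per covariate
    d2 = ((V[:, None, :] - V[None, :, :]) ** 2) / phi
    return sigma_f2 * np.exp(-d2.sum(axis=-1))

V = np.array([[0.0, 1.0], [0.0, 1.0], [3.0, -2.0]])
K = se_cov(V, phi=[1.0, 2.0])
# Identical covariate vectors give correlation 1 ("matched");
# distant points give correlation near 0 ("unmatched").
```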
There are several considerations in choosing the SE covariance function. GP regression with SE covariance can be considered a Bayesian linear regression model with infinitely many basis functions, which is able to fit a smooth response surface. Because the GP can learn the length-scale and covariance parameters from the training data, GP regression does not require cross-validation, unlike other flexible models such as splines or the support vector machine (SVM) ([@Rasmussen2006]). Moreover, the SE covariance function provides a distance metric similar to the Mahalanobis distance, and thus can serve as a matching tool.
The model specification is completed by specification of the rest of priors. $$\begin {array}{lr}
\boldsymbol{\gamma} \sim MVN \left( \boldsymbol{0}, \omega \sigma_{lm}^2 ( \boldsymbol{(Z Z')} )^{-1} \right),\\
\sigma_0^2 \sim IG(a_0, b_0 ),\\
\sigma_f^2 \sim IG(a_f, b_f ),\\
\phi_k \sim IG(a_\phi, b_\phi ).
\end{array}$$ We set $\omega = 10^6$, $a_\phi = b_\phi = 1$, $a_0 = a_f = 2$, and $b_0 = b_f = \sigma_{lm}^2/2$, where $\sigma_{lm}^2$ is the estimated variance from a simple linear regression of $Y$ on $A$ and $X$; this choice aids computational efficiency.
The posterior of the parameters can be obtained by a Gibbs sampling algorithm: first, sample the covariance function parameters from their posterior distribution $[\boldsymbol{\Sigma} | Data,\boldsymbol{\alpha,\beta}]$; then, sample the regression coefficients associated with the mean function from their conditional posterior distribution $[\boldsymbol{\alpha,\beta}| Data,\boldsymbol{\Sigma}]$, which is a multivariate normal distribution. The individual-level treatment effect is estimated by $\hat{\tau}(\boldsymbol{x_i}) = (1 , \boldsymbol{X_i}') \hat{\boldsymbol{\alpha}}$ and the average treatment effect by $\widehat{ATE} = \sum_{i=1}^n \frac{\hat{\tau}(\boldsymbol{x_i})}{n}$.
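The conditional draw of the mean-function coefficients can be sketched as a single Gibbs step, assuming the covariance matrix $\boldsymbol{\Sigma}$ has already been sampled. The helper name `draw_gamma` and the toy check are our own illustrations, not the paper's implementation.

```python
import numpy as np

def draw_gamma(y, Z, Sigma, omega=1e6, sigma_lm2=1.0, rng=None):
    """One Gibbs step: draw gamma | Sigma, Data for Y ~ MVN(Z gamma, Sigma)
    under the g-type prior gamma ~ MVN(0, omega * sigma_lm2 * (Z'Z)^{-1})."""
    rng = np.random.default_rng() if rng is None else rng
    Si = np.linalg.inv(Sigma)
    prior_prec = (Z.T @ Z) / (omega * sigma_lm2)
    post_prec = Z.T @ Si @ Z + prior_prec      # posterior precision
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (Z.T @ Si @ y)      # GLS-type posterior mean
    return rng.multivariate_normal(post_mean, post_cov)

# Toy check: with Sigma = I, the draws center on the least-squares fit.
rng = np.random.default_rng(0)
n = 200
A = np.repeat([0.0, 1.0], n // 2)
Z = np.column_stack([np.ones(n), A])           # columns: intercept, treatment
y = 2.0 + 3.0 * A + 0.1 * rng.normal(size=n)
draws = np.array([draw_gamma(y, Z, np.eye(n), rng=rng) for _ in range(200)])
```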
Estimate ATE: Connections with Matching and G-estimation
========================================================
Design the GP Covariance Function as a Matching Tool
----------------------------------------------------
To demonstrate the utility of the GP covariance function as a matching tool, let us first consider designing a covariance function for a known matching structure. In other words, we assume that for any given sample unit, we know which units are its matches. For simplicity, we fit the data with a simple nonparametric version of GPMatch, $$\boldsymbol{Y_n} \sim MVN(\mu \boldsymbol{1}_n+\tau \boldsymbol{A}_n, \boldsymbol{\Sigma}),\label{eq:d5}$$ where $\boldsymbol{\Sigma = K} + {\sigma_0}^2 \boldsymbol{I_n}$.
With a known matching structure, the GP covariance function may encode it by letting $\boldsymbol{K}=(k_{ij})_{n\times n}$, where $k_{ij}=1$ indicates that the pair is completely matched, and $k_{ij}=0$ if unmatched. In a common matched-data setting, the sample can be divided into blocks within which the matched data points are grouped together. Subsequently, we may rewrite the covariance function of the nonparametric GP model as a block diagonal matrix whose $l^{th}$ block takes the form $$\boldsymbol{\Sigma_l} = \sigma^2 \left[(1-\rho)\boldsymbol{I}_{n_l}+\rho \boldsymbol{J}_{n_l}\right],$$ where $\sigma^2 =1+\sigma_0^2$, $\rho = 1/\sigma^2$, and $\boldsymbol{J}_{n_l}$ denotes the matrix of ones. The regression parameter estimates are given by $$\left(\begin{array}{c}
\hat{\mu} \\
\hat{\tau}
\end{array}\right) = \left[\left(\begin{array}{c}
\boldsymbol{1}_n' \\
\boldsymbol{A}_n'
\end{array}\right) \boldsymbol{\Sigma}^{-1} \left(\begin{array}{lr}
\boldsymbol{1}_n & \boldsymbol{A}_n
\end{array}\right) \right]^{-1} \left(\begin{array}{c}
\boldsymbol{1}_n' \\
\boldsymbol{A}_n'
\end{array}\right) \boldsymbol{\Sigma}^{-1} \boldsymbol{Y_n}.$$ It follows that the estimated average treatment effect is $$\hat{\tau}=\frac{\boldsymbol{1}_n' \boldsymbol{\Sigma}^{-1} \boldsymbol{1}_n \boldsymbol{A}_n' \boldsymbol{\Sigma}^{-1} \boldsymbol{Y_n}-\boldsymbol{A}_n' \boldsymbol{\Sigma}^{-1} \boldsymbol{1}_n \boldsymbol{1}_n' \boldsymbol{\Sigma}^{-1} \boldsymbol{Y_n} }{\boldsymbol{1}_n' \boldsymbol{\Sigma}^{-1} \boldsymbol{1}_n \boldsymbol{A}_n' \boldsymbol{\Sigma}^{-1} \boldsymbol{A_n}-\boldsymbol{A}_n' \boldsymbol{\Sigma}^{-1} \boldsymbol{1}_n \boldsymbol{1}_n' \boldsymbol{\Sigma}^{-1} \boldsymbol{A_n}}.$$ Applying the Woodbury-Sherman-Morrison formula, $\boldsymbol{\Sigma}^{-1}$ is a block diagonal matrix with blocks $$\boldsymbol{\Sigma_l}^{-1}=\frac{1}{\sigma^2 (1-\rho) (1-\rho +n_l \rho)}\left[ (1+(n_l-1)\rho) \boldsymbol{I}_{n_l}-\rho \boldsymbol{J}_{n_l}\right].$$ Let $\bar{Y}_{l(a)}$ denote the sample mean of the outcome and $n_{l(a)}$ the number of observations for the control $(a=0)$ and treatment $(a=1)$ groups within the $l^{th}$ subclass, $l=1,2,...,L$. The treatment effect can be expressed as a weighted sum of two terms $$\hat{\tau}=\lambda \hat{\tau}_1+(1-\lambda) \hat{\tau}_0,$$ where $\lambda =\frac{\rho D1}{\rho D1 +(1-\rho) D2}$, $\hat{\tau}_1=\frac{C1}{D1}$, $\hat{\tau}_0=\frac{C2}{D2}$, $$\begin {array}{lr}
C1=\sum q_l n_l \times \sum q_l n_{l(1)} n_{l(0)} \left( \bar{Y}_{l(1)}-\bar{Y}_{l(0)} \right),\\
C2=\sum q_l n_{l(0)} \times \sum q_l n_{l(1)} \bar{Y}_{l(1)}- \sum q_l n_{l(1)} \times \sum q_l n_{l(0)} \bar{Y}_{l(0)},\\
D1=\sum q_l n_l \times \sum q_l n_{l(1)} n_{l(0)} ,\\
D2=\sum q_l n_{l(1)} \times \sum q_l n_{l(0)},
\end{array}$$ $q_l=(1-\rho+\rho n_l)^{-1}$, $n_l=n_{l(0)}+n_{l(1)}$, and the summations are over $l=1,...,L$. To gain better insight into this estimator, it helps to consider two special matching cases. The first is a matched twin experiment, where for each treated unit there is an untreated twin. Here, we have a $2n\times 2n$ block diagonal matrix $\boldsymbol{\Sigma_{2n}}=\boldsymbol{I_n \otimes J_2}+\sigma_0^2 \boldsymbol{I_{2n}}$. Thus, $\sigma^2=1+\sigma_0^2$, $\rho=\frac{1}{1+\sigma_0^2}$, $n_l=2$, and $n_{l(0)}=n_{l(1)}=1$. Substituting these into the treatment effect formula derived above, we recover the 1:1 matching estimator of the treatment effect, $\hat{\tau}=\bar{Y}_1-\bar{Y}_0$.
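The matched-twin special case can be checked numerically: building the block-diagonal covariance and computing the GLS estimator recovers the 1:1 matching estimator exactly. This is a small sketch with arbitrary simulated outcomes; the pair count and noise variance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs, sigma0_sq = 30, 0.5

# Sigma = I_n (x) J_2 + sigma0^2 I_2n: compound-symmetry block per matched pair
Sigma = np.kron(np.eye(n_pairs), np.ones((2, 2))) + sigma0_sq * np.eye(2 * n_pairs)
A = np.tile([1.0, 0.0], n_pairs)             # one treated, one control per pair
y = rng.normal(size=2 * n_pairs) + 2.0 * A   # arbitrary outcomes, effect 2

# GLS fit of (mu, tau) with the matched-pair covariance
X = np.column_stack([np.ones(2 * n_pairs), A])
Si = np.linalg.inv(Sigma)
mu_hat, tau_hat = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

# tau_hat equals the simple difference of treated and control means
matching_est = y[A == 1].mean() - y[A == 0].mean()
```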
The second example is a stratified randomized experiment, where the true propensity of treatment assignment is known. Suppose the strata are of equal size; then $\boldsymbol{\Sigma}$ is a block diagonal matrix $\boldsymbol{I_L \otimes J_n}+\sigma_0^2 \boldsymbol{I_N}$, where $L$ is the total number of strata and the total sample size is $N=Ln$. It is straightforward to derive $\sigma^2=1+\sigma_0^2$, $\rho=\frac{1}{1+\sigma_0^2}$, and $n_l=n$ for $l=1,...,L$. Then the treatment effect is a weighted sum of $\hat{\tau}_0=\bar{Y}_1-\bar{Y}_0$ and $\hat{\tau}_1= \frac{\sum n_{l(0)} n_{l(1)} \left( \bar{Y}_{l(1)}-\bar{Y}_{l(0)} \right)}{\sum n_{l(0)} n_{l(1)}}$, where the weight $\lambda=\frac{N\sum n_{l(0)} n_{l(1)}}{n_1 n_0 \sigma_0^2+N\sum n_{l(0)} n_{l(1)}}$ is a function of the sample sizes and $\sigma_0^2$. When $\sigma_0^2 \rightarrow 0$, $\lambda \rightarrow 1$ and $\hat{\tau} \rightarrow \hat{\tau}_1$. That is, when the outcomes are measured without error, the treatment effect is a weighted average of the within-stratum group mean differences $\bar{Y}_{l(1)}-\bar{Y}_{l(0)}$. As $\sigma_0^2$ increases, $\lambda$ decreases, and the estimate of $\tau$ puts more weight on $\hat{\tau}_0$. In other words, the GP estimate of the treatment effect is a shrinkage estimator: it shrinks the stratum-level treatment effects toward the overall sample mean difference when the outcome variance is larger.
More generally, instead of a 0/1 match, the sample units may be matched to various degrees. Letting the covariance function take the squared-exponential form offers a way to specify distance matching that closely resembles Mahalanobis distance matching. For a pair of “matched” individuals, i.e. sample units with the same set of confounding variables $\boldsymbol{v_i} = \boldsymbol{v_j}$, the model specifies $Corr(Y_i^{(0)}, Y_j^{(0)}) = 1$. In other words, the “matched” individuals are expected to be exchangeable. As the data points move further apart in the covariate space $\Omega_v$, their correlation becomes smaller. When the distance is sufficiently large, the model specifies $Corr(Y_i^{(0)}, Y_j^{(0)}) \approx 0$, i.e. “unmatched”. Distinct length scale parameters allow some confounders to play more important roles than others in matching. By manipulating the values of $v_i$ and the corresponding length scale parameters, one can formulate the SE covariance matrix to reflect a known 0/1 or graded matching structure. In practice, however, the matching structure is usually unknown and is estimated within the GPMatch model, informed by the observed data.
Doubly Robust Estimator of ATE
------------------------------
Let the true treatment effect be $\tau^*$. The GPMatch estimator is an unbiased estimate of the average treatment effect, i.e. $E(\tau_i) = \tau^*$ for $i = 1, ...,n$, when either one of the following conditions holds: i) the GP mean function is correctly specified, i.e. $E(Z_i^\prime \hat{\gamma}) =Y_i^{(0)}$; or ii) the GP covariance function is correctly specified, in the sense that, from the weight-space view of GP regression, the weighted sum of treatment assignments $\tilde{A_i}$ equals the true treatment propensity $\pi_i = Pr(A_i =1)$.
The first part is relatively straightforward. From the GPMatch model $\boldsymbol{Y_n} \sim MVN(\boldsymbol{Z^\prime \gamma, \Sigma})$, when the linear regression model fits the potential outcome correctly, i.e. $E(Z_i^\prime \hat{\gamma}) =Y_i^{(0)}$, then $\boldsymbol{\Sigma}$ degenerates to a diagonal matrix, suggesting all units are exchangeable. It follows that $E(\hat{\tau}) = \tau^*$, i.e. the treatment effect is correctly estimated.
The second part proceeds as follows. From the weight-space point of view, the GPMatch model predicts the potential outcomes using a weighted sum of the observed outcomes, $$\begin {array}{lr}
\hat{Y_i}^{(a)} = \sum_{j=1}^n {w_{ij} (Y_{j}- A_{j} \hat{\tau} )} + a \hat{\tau} = \tilde{Y_i} + (a - \tilde{A_i}) \hat{\tau}, \label{eq:d5a}
\end{array}$$ where $\tilde{Y_i}=\sum_{j=1}^n w_{ij} Y_j$ and $\tilde{A}_i=\sum_{j=1}^n w_{ij} A_j$, for $i=1,...,n$. The weight is $ w_{ij} = \frac{\kappa_{ij}}{\sum_j\kappa_{ij}} $, where $\kappa_{ij}$ is the $j$-th element of $\boldsymbol{k}(\boldsymbol{v_i})'\boldsymbol{\Sigma}^{-1}$, with $\boldsymbol{k}(\boldsymbol{v_i})=\left( k(\boldsymbol{v_i},\boldsymbol{v_j}) \right)_{n\times 1}$. Thus, $\tilde{Y_i}$ and $\tilde{A_i}$ can be considered Nadaraya-Watson-type estimators of the observed outcome and treatment assignment for the $i$-th unit. The estimate of the treatment effect is obtained by solving $ \frac{\partial{\sum_{i=1}^n \left( Y_i - \hat{Y_i}^{(A_i)} \right)^2}}{\partial{\tau}} = 0 $. We can see that, given a known GP covariance function, the GPMatch treatment effect $\hat{\tau}$ is an M-estimator satisfying $\sum_i \Psi_i(\hat{\tau}) = 0$, where $$\label{eq:d6}
\begin {array}{lr}
\Psi_i(\tau) = \left( Y_i-\tilde{Y_i}-\tau (A_i-\tilde{A}_i) \right)(A_i-\tilde{A}_i).
\end{array}$$ Let the true propensity be $\pi_i = Pr(A_i=1)$. Given SUTVEA, we have $Y_i = A_i Y_i^{(1)} + (1-A_i) Y_i^{(0)} + \epsilon_i$. Given the true treatment effect $\tau^*$, it can be derived that $Y_i^{(a)} = E(Y_i) + (a-\pi_i) \tau^*$. When $\tilde{A_i} = \pi_i$ holds, we have $\Psi_i(\tau) = [E(Y_i)-\tilde{Y_i} + (A_i-\pi_i) (\tau - \tau^*) +\epsilon_i](A_i - \pi_i)$. Thus, the GPMatch estimator is an M-estimator of the ATE whose estimating function is conditionally unbiased, i.e. $E(\Psi_i({\tau^*})) = 0$ for $i = 1, ...,n$, when the GP covariance function is correctly specified in the sense that $\tilde{A_i} = \pi_i$.
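Given a fixed covariance function, the estimating equation has the closed-form solution $\hat{\tau}=\sum_i (Y_i-\tilde{Y_i})(A_i-\tilde{A}_i)/\sum_i (A_i-\tilde{A}_i)^2$, which can be sketched as follows. The function `gp_match_tau`, its hyperparameter defaults, and the toy data are our own illustrations, not the paper's implementation.

```python
import numpy as np

def gp_match_tau(y, A, V, sigma_f2=1.0, phi=1.0, sigma0_sq=0.1):
    """Solve sum_i Psi_i(tau) = 0, with
    Psi_i = (Y_i - Ytilde_i - tau (A_i - Atilde_i)) (A_i - Atilde_i)."""
    d2 = ((V[:, None, :] - V[None, :, :]) ** 2 / phi).sum(-1)
    K = sigma_f2 * np.exp(-d2)                  # SE covariance on V
    Si = np.linalg.inv(K + sigma0_sq * np.eye(len(y)))
    W = K @ Si                                  # rows of kappa_ij
    W = W / W.sum(axis=1, keepdims=True)        # normalize to weights w_ij
    y_sm, a_sm = W @ y, W @ A                   # Nadaraya-Watson-type smoothers
    r_a = A - a_sm                              # residualized treatment
    return float(((y - y_sm) @ r_a) / (r_a @ r_a))

# Toy check: smooth outcome surface plus an additive effect tau = 2.
rng = np.random.default_rng(0)
n = 200
v = rng.uniform(-1, 1, size=(n, 1))
A = rng.binomial(1, 0.5, size=n).astype(float)
y = np.sin(2 * v[:, 0]) + 2.0 * A + 0.1 * rng.normal(size=n)
tau_hat = gp_match_tau(y, A, v)
```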
Several remarks are worth noting. First, the estimating equation is the empirical correlation of the residuals from the outcome model and the residuals from the propensity of treatment assignment. Thus, the GPMatch method attempts to induce independence between two sets of residuals, one from the treatment selection process and one from the outcome model, just as the G-estimation equation suggested in [@robins2000] and later in [@vansteelandt2014]. Unlike the moment-based G-estimator, which requires fitting two separate models for the outcome and the propensity score, the GPMatch approach estimates the covariance parameters at the same time as the treatment and mean function parameters, all within a full Bayesian framework.
Second, some data points may have treatment propensity close to 0 or 1. Such data are usually a cause for concern in causal inference. In a naive regression model, they may cause unstable estimation without added regularization. In IPTW-type methods, a few such data points may exert undue influence on the estimation of the treatment effect. In matching methods, these data points are often discarded, a practice that can leave the sample no longer representative of the target population. As in G-estimation, we see from the estimating equation that these data points receive zero or near-zero values of $(A_i - \tilde{A_i})$, exerting very little influence on the estimation of the treatment effect. Thus GPMatch shares the same added robustness as G-estimation against lack of overlap.
Finally, the GPMatch model with a parametric mean function can predict the potential outcomes for any new unit. Given the model setup, two regression surfaces are predicted, and the distance between them represents the treatment effect. By including treatment-by-covariate interactions, the model can offer the conditional treatment effect as a function of patient characteristics. Although the model specifications in section 2.3 use a parametric linear regression equation for the treatment effect $\tau(x_i)$, it is always difficult to know whether higher-order terms should be included. One may instead introduce a few fixed basis functions; estimates of the corresponding regression coefficients could then indicate any nonlinear or heterogeneous treatment effect.
Simulation Studies
==================
To empirically evaluate GPMatch in a real-world setting where neither the matching structure nor the functional form of the outcome model is known, we conducted three sets of simulation studies. The first set evaluated the frequentist performance of GPMatch; the second compared GPMatch against MD matching; and the last used the widely adopted Kang and Schafer design to compare GPMatch against several commonly used methods.
In all simulation studies, the GPMatch approach used the squared-exponential covariance function, including only the treatment indicator in the mean function and all observed covariates in the covariance function, unless otherwise noted. The results were compared with the following widely used causal inference methods: sub-classification by PS quantile (QNT-PS), AIPTW, linear model with PS adjustment (LM-PS), linear model with spline-fit PS adjustment (LM-sp(PS)), and BART. Cubic B-splines with knots based on quantiles of the PS were used for LM-sp(PS). We also considered a direct linear regression model (LM) as a comparison. The ATE estimates were obtained by averaging over 5,000 posterior MCMC draws, after 5,000 burn-in. For each scenario, three sample sizes were considered: $N$ = 100, 200, and 400. The standard error and the 95% symmetric interval estimate of the ATE for each replicate were calculated from the 5,000 MCMC draws. For comparing the performances of different methods, all results were summarized over $N=100$ replicates by the root mean square error RMSE $=\sqrt{\sum (\hat{\tau}_i-\tau)^2/N}$, the median absolute error MAE $=median\mid \hat{\tau}_i-\tau\mid$, the coverage rate Rc = (the number of 95% symmetric posterior intervals that include $\tau)/N$, the averaged standard error estimate $SE_{avg}=\sum \hat{\sigma}_i/N$, where $\hat{\sigma}_i$ is the square root of the estimated variance of $\hat{\tau}_i$, and the empirical standard error of the ATE calculated from the 100 replicates, $SE_{emp}=\sqrt{\sum (\hat{\tau}_i-\bar{\hat{\tau_i}})^2/(N-1)}$.
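The replicate-level summaries can be sketched as below. The helper name `summarize` is our own; the paper uses the symmetric posterior interval, which this sketch approximates by a normal $\pm 1.96\, SE$ interval.

```python
import numpy as np

def summarize(tau_hat, se_hat, tau_true):
    """Replicate-level summaries used in the tables: RMSE, MAE, bias,
    95% coverage rate (Rc), averaged SE, and empirical SE."""
    tau_hat, se_hat = np.asarray(tau_hat, float), np.asarray(se_hat, float)
    err = tau_hat - tau_true
    lo, hi = tau_hat - 1.96 * se_hat, tau_hat + 1.96 * se_hat
    return {
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.median(np.abs(err)),
        "Bias": err.mean(),
        "Rc": np.mean((lo <= tau_true) & (tau_true <= hi)),
        "SE_avg": se_hat.mean(),
        "SE_emp": tau_hat.std(ddof=1),
    }

res = summarize([1.0, 3.0], [1.0, 1.0], tau_true=2.0)
```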
Well Calibrated Frequentist Performances
----------------------------------------
Let the observed covariate $x \sim N(0,1)$ and the unobserved covariates $\{U_0, U_1, U_2, \epsilon\} \sim^{iid} N(0,1)$. The potential outcomes were generated by $y^{(a)} = e^x+(1+\gamma_1 U_1)\times a + \gamma_0 U_0$ for $a=0,1$, so the true treatment effect was $1+\gamma_1 U_{1i}$ for the $i$-th individual unit. The $(U_0,U_1)$ are unobserved covariates. Treatment was assigned to each individual following $logit(P(A=1|X))=-0.2+(1.8X)^{1/3} + \gamma_2 U_2^2$. The observed outcome was generated by $y | x,a = y^{(a)} + \gamma_3 \epsilon$. Four parameter settings were considered for $\{\gamma_0, \gamma_1 , \gamma_2, \gamma_3 \}$: $\{0.5, 0, 0, \sqrt{0.75}\}$, $\{1, 0.15, 0, 0\}$, $\{0.5, 0, 0.7, \sqrt{0.75}\}$, and $\{1, 0.15, 0.7, 0\}$. In the $1^{st}$ and $3^{rd}$ settings, $\tau_i = 1$. In the $2^{nd}$ and $4^{th}$ settings, the treatment effect $\tau_i \sim N(1, \gamma_1^2)$ varies among individual units. Except for the first setting, the simulation settings include unmeasured confounders $U_1$ and/or $U_2$.
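This data-generating process can be sketched directly; the helper name `simulate` is our own, and setting 1 is shown.

```python
import numpy as np

def simulate(n, g0, g1, g2, g3, rng):
    """Draw one replicate of the single-covariate design:
    y^(a) = e^x + (1 + g1*U1)*a + g0*U0, noisy observation y = y^(A) + g3*eps."""
    x = rng.normal(size=n)
    U0, U1, U2, eps = rng.normal(size=(4, n))
    y0 = np.exp(x) + g0 * U0
    tau_i = 1 + g1 * U1                      # individual-level treatment effect
    y1 = y0 + tau_i
    # logit P(A=1|x) = -0.2 + (1.8 x)^(1/3) + g2 * U2^2 (cube root of signed value)
    lin = -0.2 + np.cbrt(1.8 * x) + g2 * U2 ** 2
    A = rng.binomial(1, 1 / (1 + np.exp(-lin)))
    y = np.where(A == 1, y1, y0) + g3 * eps
    return x, A, y, tau_i

rng = np.random.default_rng(0)
# Setting 1: {gamma_0, gamma_1, gamma_2, gamma_3} = {0.5, 0, 0, sqrt(0.75)}
x, A, y, tau_i = simulate(400, 0.5, 0.0, 0.0, np.sqrt(0.75), rng)
```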
![Distribution of the GPMatch Estimate of ATE, by Different Sample Sizes under the Single Covariate Simulation Study Settings[]{data-label="fig:2"}](Fig2_OneX_Histogram){width="14cm"}
[rccccccc]{}
\
**Method** & **Sample Size** & **RMSE** & **MAE** & **Bias** & **Rc** & **$\boldsymbol{SE_{avg}}$** & **$\boldsymbol{SE_{emp}}$**\
\
\
Gold & 100 & 0.243 & 0.165 & -0.066 & 0.93 & 0.216 & 0.235\
& 200 & 0.149 & 0.109 & 0.027 & 0.94 & 0.150 & 0.147\
& 400 & 0.123 & 0.087 & -0.007 & 0.93 & 0.107 & 0.123\
& & & & & & &\
GPMatch & 100 & 0.260 & 0.160 & -0.038 & 0.93 & 0.242 & 0.258\
& 200 & 0.161 & 0.116 & 0.033 & 0.97 & 0.167 & 0.159\
& 400 & 0.122 & 0.085 & -0.005 & 0.96 & 0.118 & 0.123\
\
Gold & 100 & 0.220 & 0.134 & -0.011 & 0.92 & 0.213 & 0.221\
& 200 & 0.159 & 0.098 & 0.001 & 0.94 & 0.151 & 0.159\
& 400 & 0.107 & 0.077 & -0.003 & 0.95 & 0.107 & 0.108\
& & & & & & &\
GPMatch & 100 & 0.237 & 0.152 & 0.013 & 0.97 & 0.244 & 0.238\
& 200 & 0.175 & 0.114 & 0.007 & 0.94 & 0.169 & 0.175\
& 400 & 0.117 & 0.084 & 0.001 & 0.96 & 0.117 & 0.118\
\
Gold & 100 & 0.228 & 0.137 & -0.016 & 0.92 & 0.214 & 0.228\
& 200 & 0.154 & 0.099 & 0.005 & 0.94 & 0.151 & 0.155\
& 400 & 0.113 & 0.078 & 0.001 & 0.94 & 0.107 & 0.114\
& & & & & & &\
GPMatch & 100 & 0.231 & 0.156 & 0.009 & 0.96 & 0.237 & 0.232\
& 200 & 0.166 & 0.107 & -0.003 & 0.93 & 0.164 & 0.167\
& 400 & 0.115 & 0.088 & 0.003 & 0.96 & 0.114 & 0.115\
\
Gold & 100 & 0.209 & 0.148 & 0.015 & 0.96 & 0.215 & 0.209\
& 200 & 0.136 & 0.098 & 0.008 & 0.97 & 0.152 & 0.136\
& 400 & 0.095 & 0.076 & -0.002 & 0.98 & 0.107 & 0.095\
& & & & & & &\
GPMatch & 100 & 0.226 & 0.140 & 0.022 & 0.97 & 0.238 & 0.226\
& 200 & 0.164 & 0.105 & 0.024 & 0.98 & 0.169 & 0.163\
& 400 & 0.104 & 0.073 & 0.009 & 0.96 & 0.114 & 0.104\
RMSE = root mean square error; MAE = median absolute error; Bias = estimate $-$ truth; Rc = coverage rate of the 95% interval estimate; $SE_{avg}$ = average of the standard error estimates over all replicates; $SE_{emp}$ = standard error of the ATE estimates over all replicates.
Gold: Using the true outcome generating model;
GPMatch: Bayesian marginal structural model with Gaussian process prior, only treatment effect is included in the mean function; covariance function includes $X$.
The simulation results are summarized in histograms of the posterior means over the 100 replicates across the three sample sizes in Figure \[fig:2\]. Table \[table:1\] presents the results for GPMatch and the gold standard, where the gold standard was obtained by fitting the true outcome-generating model. Under all settings, GPMatch presented well-calibrated frequentist properties with nominal coverage rates and only slightly larger RMSE. The averaged bias, RMSE, and MAE improve quickly as the sample size increases, performing as well as the gold standard at a sample size of 400. Comparisons of the RMSE and MAE with the results of other causal inference methods are presented in Figures S1 - S4.
Compared to Mahalanobis Distance Matching
------------------------------------------
To compare the performances between the MD matching and GPMatch, we considered a simulation study with two independent covariates $x_1$, $x_2$ from the uniform distribution $U(-2,2)$, treatment was assigned by letting $A_i\sim Ber(\pi_i)$, where $$logit \pi_i=-x_1-x_2.$$ The potential outcomes were generated by $$\begin{array}{cr}
y_i^{(a)} = 3+5a+x_{1i}^3,\\
Y_i|X_i, A_i \sim N(y_i^{(A_i)}, 1).
\end{array}$$ The true treatment effect is 5. Three different sample sizes were considered N= 100, 200 and 400. For each setting, 100 replicates were performed and the results were summarized.
![Simulation Study Results comparing GPMatch with the Mahalanobis Distance Matching Method. The circles are the averaged biases of the ATE estimates using Mahalanobis matching with the corresponding calipers. The vertical lines indicate the ranges between the 5th and 95th percentiles of the biases. The horizontal lines are the averaged bias (short dashed line) and the 5th and 95th percentiles (long dashed lines) of the biases of the GPMatch estimates.[]{data-label="fig:1"}](Fig1_Maha_all){width="10cm"}
We estimated the ATE by applying Mahalanobis distance matching and GPMatch. The MD matching considered calipers varying from 0.125 to 1 with step size 0.025, matching on both $X_1$ and $X_2$ using the function Match in the R package Matching by [@sekhon2007multivariate]. The averaged bias and its 5th and 95th percentiles are presented as vertical lines at the corresponding calipers in Figure \[fig:1\]. To be directly comparable to the matching approach, GPMatch estimated the ATE by including the treatment effect only in the mean function, with both $X_1$ and $X_2$ in the covariance function. The posterior results were generated with 5,000 MCMC samples after 5,000 burn-in. The averaged bias (short dashed horizontal line) and the 5th and 95th percentiles of the ATE estimates (long dashed horizontal lines) are presented in Figure \[fig:1\] for each of the sample sizes, together with the bias, median absolute error (MAE), root mean square error (RMSE), and coverage rate (Rc) summarized over the 100 GPMatch replicates. The bias from the matching method increases with the caliper; the width of the interval estimate varies with sample size and caliper, shrinking with increased caliper for the sample size of 100 but growing with increased caliper for the sample size of 400. In contrast, GPMatch produced a much more accurate and efficient estimate of the ATE for all sample sizes, with an unbiased ATE estimate and nominal coverage rate. The 5th-to-95th percentile ranges of the GPMatch ATE estimates are always narrower than those from the matching methods for all settings considered, suggesting better efficiency.
Performance under Dual Misspecification
----------------------------------------
Following the well-known simulation design suggested by [@Kang2007], covariates $z_1,z_2,z_3,z_4$ were independently generated from the standard normal distribution $N(0,1)$. Treatment was assigned by $A_i \sim Ber(\pi_i)$, where $$logit \pi_i = -z_{i1} + 0.5z_{i2} - 0.25z_{i3} - 0.1z_{i4}.$$ The potential outcomes were generated for $a=0,1$ by $$\begin{array}{cl}
y_i^{(a)} = 210+5a+27.4z_{i1}+13.7z_{i2}+13.7z_{i3}+13.7z_{i4},\\
Y_i|A_i, X_i \sim N(y^{(A_i)}, 1).
\end{array}$$ The true treatment effect is 5. To assess the performance of the methods under dual misspecification, the transformed covariates $x_1 = exp(z_1/2)$, $x_2 = z_2/(1 + exp(z_1)) + 10$, $x_3 = \left(\frac{z_1 z_3}{25}+ 0.6\right)^3$, and $x_4 = (z_2 + z_4 + 20)^2$ were used in the models instead of the $z$'s.
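The transformed-covariate design can be sketched as follows; the helper name `kang_schafer` is our own, and the analyst-facing design matrix contains only $x_1$ through $x_4$.

```python
import numpy as np

def kang_schafer(n, rng):
    """One draw from the Kang & Schafer (2007)-style design with tau = 5;
    the analyst observes only the nonlinear transformations x1..x4,
    inducing dual misspecification of outcome and propensity models."""
    z = rng.normal(size=(n, 4))
    logit_pi = -z[:, 0] + 0.5 * z[:, 1] - 0.25 * z[:, 2] - 0.1 * z[:, 3]
    A = rng.binomial(1, 1 / (1 + np.exp(-logit_pi)))
    mean0 = 210 + 27.4 * z[:, 0] + 13.7 * (z[:, 1] + z[:, 2] + z[:, 3])
    y = rng.normal(mean0 + 5 * A, 1.0)
    # Observed (misspecified) covariates
    x1 = np.exp(z[:, 0] / 2)
    x2 = z[:, 1] / (1 + np.exp(z[:, 0])) + 10
    x3 = (z[:, 0] * z[:, 2] / 25 + 0.6) ** 3
    x4 = (z[:, 1] + z[:, 3] + 20) ** 2
    return y, A, np.column_stack([x1, x2, x3, x4])

y, A, X = kang_schafer(200, np.random.default_rng(0))
```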
Two GPMatch models were considered: GPMatch1 modeled the treatment effect only, while GPMatch2 also included all four covariates $X_1$-$X_4$ in the mean function. Both included $X_1$-$X_4$ in the covariance function with four distinct length scale parameters. The PS was estimated in two ways: a logistic regression model on $X_1$-$X_4$, and the covariate balancing propensity score method (CBPS, [@imai2014]) applied to $X_1$-$X_4$. Results corresponding to both versions of the PS are presented. Summaries over all replicates are presented in Table \[table:2\], and the RMSE and MAE are plotted in Figure \[fig:3\] for all methods considered. As a comparison, the gold standard, which uses the true outcome-generating model of $Y \sim Z_1-Z_4$, is also presented. Both GPMatch1 and GPMatch2 clearly outperform all the other causal inference methods in terms of bias, RMSE, MAE, and Rc, and their $SE_{avg}$ closely matches $SE_{emp}$. The ATE and corresponding SE estimates improve quickly as sample size increases for GPMatch. In contrast, the QNT-PS, AIPTW, LM-PS, and LM-sp(PS) methods show little improvement with increased sample size, as does the simple LM. The improvements of GPMatch over the existing methods are clearly evident, with more than five-fold gains in RMSE and MAE compared to all the other methods except BART. Even compared to BART, the improvement in MAE is nearly two-fold for GPMatch2 and about 1.5-fold for GPMatch1, with similar gains in RMSE and averaged bias. The lower-than-nominal coverage rate is mainly driven by the remaining bias, which quickly diminishes as sample size increases. Additional results are presented in Figure S5.
[rccccccc]{}
\
**Method** & **Sample Size** & **RMSE** & **MAE** & **Bias** & **Rc** & **$\boldsymbol{SE_{avg}}$** & **$\boldsymbol{SE_{emp}}$**\
\
Gold & 100 & 0.224 & 0.150 & 0.011 & 0.95 & 0.225 & 0.225\
& 200 & 0.171 & 0.125 & -0.015 & 0.94 & 0.163 & 0.171\
& 400 & 0.102 & 0.063 & -0.015 & 0.96 & 0.112 & 0.102\
GPMatch1 & 100 & 2.400 & 1.606 & -1.254 & 0.92 & 2.158 & 2.057\
& 200 & 1.663 & 1.309 & -1.051 & 0.86 & 1.213 & 1.295\
& 400 & 0.897 & 0.587 & -0.564 & 0.86 & 0.673 & 0.701\
GPMatch2 & 100 & 1.977 & 1.358 & -0.940 & 0.91 & 1.672 & 1.748\
& 200 & 1.375 & 1.083 & -0.809 & 0.82 & 0.980 & 1.117\
& 400 & 0.761 & 0.484 & -0.432 & 0.87 & 0.567 & 0.629\
QNT\_PS$^a$ & 100 & 7.574 & 6.483 & -6.234 & 0.970 & 7.641 & 4.324\
& 200 & 7.408 & 6.559 & -6.615 & 0.860 & 5.199 & 3.353\
& 400 & 7.142 & 6.907 & -6.797 & 0.500 & 3.576 & 2.203\
QNT\_PS$^b$ & 100 & 8.589 & 7.360 & -7.177 & 0.970 & 7.541 & 4.744\
& 200 & 8.713 & 8.121 & -7.964 & 0.720 & 5.214 & 3.550\
& 400 & 8.909 & 7.980 & -8.399 & 0.300 & 3.607 & 2.987\
LM & 100 & 6.442 & 5.183 & -5.556 & 0.65 & 3.571 & 3.277\
& 200 & 6.906 & 6.226 & -6.375 & 0.28 & 2.547 & 2.668\
& 400 & 7.005 & 6.649 & -6.702 & 0.04 & 1.796 & 2.048\
AIPTW$^a$ & 100 & 5.927 & 4.402 & -4.330 & 0.72 & 3.736 & 4.067\
& 200 & 19.226 & 5.262 & -7.270 & 0.59 & 4.874 & 17.888\
& 400 & 29.405 & 5.603 & -9.676 & 0.36 & 6.115 & 27.908\
AIPTW$^b$ & 100 & 5.410 & 4.243 & -3.659 & 0.77 & 3.780 & 4.005\
& 200 & 5.780 & 5.075 & -4.950 & 0.52 & 2.712 & 2.999\
& 400 & 6.204 & 5.482 & -5.652 & 0.24 & 2.105 & 2.569\
LM\_PS$^a$ & 100 & 5.103 & 3.832 & -4.091 & 0.74 & 3.420 & 3.066\
& 200 & 5.392 & 4.648 & -4.793 & 0.53 & 2.452 & 2.483\
& 400 & 5.091 & 5.128 & -4.787 & 0.19 & 1.706 & 1.741\
LM\_PS$^b$ & 100 & 5.451 & 4.156 & -4.528 & 0.72 & 3.427 & 3.051\
& 200 & 5.891 & 4.981 & -5.278 & 0.46 & 2.466 & 2.631\
& 400 & 5.585 & 5.452 & -5.272 & 0.13 & 1.726 & 1.852\
LM\_sp(PS)$^a$ & 100 & 4.809 & 3.161 & -3.598 & 0.79 & 3.165 & 3.207\
& 200 & 4.982 & 4.152 & -4.266 & 0.52 & 2.250 & 2.587\
& 400 & 4.470 & 4.038 & -4.127 & 0.23 & 1.559 & 1.727\
LM\_sp(PS)$^b$ & 100 & 4.984 & 3.619 & -3.806 & 0.77 & 3.095 & 3.233\
& 200 & 5.237 & 4.374 & -4.507 & 0.51 & 2.248 & 2.681\
& 400 & 4.856 & 4.484 & -4.494 & 0.18 & 1.585 & 1.851\
BART & 100 & 3.148 & 2.504 & -2.491 & 0.79 & 2.163 & 1.935\
& 200 & 2.176 & 1.870 & -1.726 & 0.74 & 1.308 & 1.332\
& 400 & 1.283 & 0.942 & -0.997 & 0.71 & 0.757 & 0.812\
$^a$ Propensity score estimated using logistic regression on $X_1-X_4$.
$^b$ Propensity score estimated using CBPS on $X_1-X_4$.
RMSE = root mean square error; MAE = median absolute error; Bias = Estimate $-$ True; Rc = coverage rate of the 95% interval estimate; $SE_{avg}$ = average of the standard error estimates over all replicates; $SE_{emp}$ = standard error of the ATE estimates over all replicates.
GPMatch1-2: Bayesian structural models with Gaussian process prior. GPMatch1 includes only the treatment effect, and GPMatch2 includes both the treatment effect and $X_1-X_4$ in the mean function; both include $X_1-X_4$ in the covariance function.
QNT\_PS: Propensity score sub-classification by quintiles.
AIPTW: augmented inverse probability of treatment weighting;
LM: linear regression modeling $Y \sim X_1-X_4$;
LM\_PS: linear regression modeling with propensity score adjustment.
LM\_sp(PS): linear regression modeling with spline fit propensity score adjustment.
BART: Bayesian additive regression tree.
![The RMSE and MAE of ATE Estimates using Different Methods under the Kang and Shafer Simulation Study Setting. GPMatch1-2: Bayesian structural model with Gaussian Process prior. GPMatch1 including only treatment effect, and GPMatch2 including both treatment effect and $X_1-X_4$ in the mean function; and $X_1-X_4$ are included in the covariance function. QNT\_PS: Propensity score sub-classification by quintiles. AIPTW: augmented inverse probability of treatment weighting; LM: linear regression modeling $Y \sim X_1-X_4$; LM\_PS: linear regression modeling with propensity score adjustment. LM\_sp(PS): linear regression modeling with spline fit propensity score adjustment[]{data-label="fig:3"}](Fig3_KS_MAE_RMSE){width="13cm"}
A Case Study
============
JIA is a chronic inflammatory disease, the most common autoimmune disease affecting the musculoskeletal organ system, and a major cause of childhood disability. The disease is relatively rare, with an estimated incidence rate of 12 per 100,000 child-years ([@harrold2013]). There are many treatment options. Currently, the two common approaches are the non-biologic disease modifying anti-rheumatic drugs (DMARDs) and the biologic DMARDs. Limited clinical evidence suggests that early aggressive use of biologic DMARDs may be more effective ([@wallace2014]). Utilizing data collected from a completed, prospectively followed inception cohort research study ([@seid2014]), a retrospective chart review collected medication prescription records for study participants captured in the electronic health record system. This comparative study is aimed at understanding whether an early aggressive combination of non-biologic and biologic DMARDs is more effective than the more commonly adopted non-biologic DMARD monotherapy in treating children with recently (<6 months) diagnosed polyarticular course JIA. The study was approved by the investigators' institutional IRB.
![Balance Check Results for the Case Study[]{data-label="fig:4"}](Fig4_Case_study_love_plot){width="12cm"}
The primary outcome is the Juvenile Arthritis Disease Activity Score (JADAS) after 6 months of treatment, a disease severity score calculated as the sum of four core clinical measures: physician's global assessment of disease activity (0-10), patient's self-assessment of overall wellbeing (0-10), erythrocyte sedimentation rate (ESR, standardized to 0-10), and number of active joints (AJC, truncated to 0-10). It ranges from 0 to 40, with 0 indicating no disease activity. Of the 75 patients receiving either non-biologic DMARDs or the early combination of biologic and non-biologic DMARDs at baseline, 52 were treated with non-biologic DMARDs and 23 with the early aggressive combination. Patients with longer disease duration, positive rheumatoid factor (RF), higher pain visual analog scale (VAS), lower baseline functional ability as measured by the childhood health assessment questionnaire (CHAQ), greater loss of range of motion (LROM), and higher JADAS score were more likely to receive a biologic DMARD prescription. The propensity score was derived using the CBPS method applied to the pre-determined important baseline confounders. The derived PS achieved the desired covariate balance, within 0.2 absolute standardized mean difference (Figure \[fig:4\]), and comparable distributions of the important confounders (Figure S6).
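The JADAS composite described above can be sketched in code. The ESR standardization rule used here, $(\mathrm{ESR}-20)/10$ clipped to $[0,10]$, is a commonly used convention and an assumption on our part; the paper states only that ESR is standardized to 0-10:

```python
def jadas(physician_global, patient_wellbeing, esr_mm_hr, active_joint_count):
    """JADAS-like composite: the sum of four core measures, each on 0-10.

    The ESR component uses the common rule (ESR - 20)/10 clipped to [0, 10];
    this exact standardization is an assumption, not taken from the paper.
    """
    esr_component = min(max((esr_mm_hr - 20) / 10, 0.0), 10.0)
    ajc_component = min(active_joint_count, 10)  # active joint count truncated at 10
    return physician_global + patient_wellbeing + esr_component + ajc_component
```

For example, a patient with physician global 5, wellbeing 5, ESR of 20 mm/hr, and 12 active joints scores 5 + 5 + 0 + 10 = 20 on this scale.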
![Case Study Trace Plot and Histogram[]{data-label="fig:5"}](Fig5_Case_study_trace_plot_histogram){width="12cm"}
------------ --------- -------- --------- ---------
**Method**   **ATE**   **SD**   **LL**    **UL**
------------ --------- -------- --------- ---------
Naïve         -0.338    1.973    -4.205     3.529
QNT\_PS       -0.265    0.792    -1.817     1.286
AIPTW         -0.639    2.784    -6.094     4.817
LM            -2.550    1.981    -6.432     1.332
LM\_PS        -2.844    2.002    -6.767     1.079
LM\_sp(PS)    -1.664    2.159    -5.896     2.568
BART          -2.092    1.629    -5.282     1.155
GPMatch       -2.902    1.912    -6.650     0.789
------------ --------- -------- --------- ---------

: Results of Case Study ATE Estimates with Non-Matching Methods[]{data-label="table:3"}
SD = standard deviation; LL = lower limit; UL = upper limit;
Naïve: Student-t two-group comparison;
QNT\_PS: Propensity score sub-classification by quintiles.
AIPTW: augmented inverse probability of treatment weighting;
LM: linear regression modeling $Y \sim X$;
LM\_PS: linear regression modeling with propensity score adjustment.
LM\_sp(PS): linear regression modeling with spline fit propensity score adjustment;
BART: Bayesian additive regression tree;
GPMatch: Bayesian structural model with Gaussian process prior.
The GPMatch model included the baseline JADAS, CHAQ, time since diagnosis at baseline, and the time interval between baseline and the six-month follow-up visit in the covariance function. These four covariates, along with the binary treatment indicator and an indicator of a positive rheumatoid factor test, were used in the partially linear mean function of GPMatch. Applying the proposed method, GPMatch obtained an average treatment effect of -2.90 with a standard error of 1.91 and a 95% credible interval of (-6.65, 0.79). Figure \[fig:5\] presents the trace plot and histogram of the posterior distribution of the ATE estimate. The results suggest that the early aggressive combination of non-biologic and biologic DMARDs as the first line of treatment is more effective, leading to a nearly 3-point reduction in JADAS six months after treatment, compared with non-biologic DMARD treatment in children with newly diagnosed disease. The ATE estimates from GPMatch, the naive two-group comparison, and the other existing causal inference methods are presented in Table \[table:3\]. The LM, LM\_PS, LM\_sp(PS) and AIPTW models include the same five covariates along with the treatment indicator; BART used the treatment indicator and those covariates. While all results suggested effectiveness of early aggressive use of biologic DMARDs, the naive comparison, PS sub-classification by quintiles, and AIPTW suggested a much smaller ATE. BART and the PS-adjusted linear regressions produced results closer to GPMatch, suggesting a 2- to 3-point reduction in the JADAS score under the early aggressive combination DMARD therapy. None of the results were statistically significant at the 2-sided 0.05 level.
We also applied the covariate matching method to the same dataset based on the same four baseline covariates. Table \[table:4\] presents the results for different calipers. As expected, as the caliper narrows, the number of discarded observations increases. Since only 10 patients were RF positive, matching on RF was no longer possible once the caliper was set to 1 or smaller; thus, for calipers below 1, all subjects with positive RF were excluded. With a caliper of 0.5, about half of the observations were discarded. With a caliper of 0.2, 62 of 73 observations were discarded, leaving results based on only 11 observations. The ATE estimate was sensitive to the choice of caliper, ranging from -6.59 to -3.12, making the study results difficult to interpret.
------------------- ------------------ -------- -------- -------- -------- -------- --------
                    **Before match**   **After match, by caliper (widest to narrowest)**
------------------- ------------------ -------- -------- -------- -------- -------- --------
ATE                                     -3.117   -4.043   -4.035   -5.577   -6.592   -3.864
SE                                       2.232    2.075    1.701    1.459    1.092    0.536
\# of obs dropped                        1        10       21       34       49       62
JADAS0               0.675              0.215    0.078    0.079    0.035    0.079   -0.090
Time diagnosed       0.233              0.013    0.020   -0.006   -0.010    0.041    0.048
CHAQ                 0.281              0.083    0.079    0.072    0.079   -0.054   -0.057
RF positive          0.643              0.000    0.000    NA\*     NA\*     NA\*     NA\*
------------------- ------------------ -------- -------- -------- -------- -------- --------
: Results of Case Study ATE Estimates with the Matching Method[]{data-label="table:4"}
Note: \* When the caliper is less than 1, all of the observations with positive RF are excluded.
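The caliper sensitivity described above can be illustrated with a toy greedy matching routine. This is a hypothetical one-covariate sketch, not the matching procedure used in the study (which matched on four covariates); the data values in the usage note are made up:

```python
def caliper_match(x_treated, x_control, caliper):
    """Greedy 1-nearest-neighbor matching on a scalar covariate with a caliper.

    Returns, for each treated unit, the index of its matched control
    (or None if no control lies within the caliper), illustrating how
    narrower calipers discard more treated observations.
    """
    available = set(range(len(x_control)))
    matches = []
    for xt in x_treated:
        best, best_d = None, caliper
        for j in available:
            d = abs(xt - x_control[j])
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            available.discard(best)  # matching without replacement
        matches.append(best)
    return matches
```

For instance, `caliper_match([0.0, 5.0], [0.1, 0.2, 10.0], caliper=0.5)` matches the first treated unit to the control at 0.1 and discards the second treated unit, since no control lies within 0.5 of it.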
Conclusions and Discussions
===========================
Bayesian approaches to causal inference commonly treat it as a missing data problem. However, as suggested in [@Ding2018], causal inference presents additional challenges beyond those of missing data alone. Approaches that do not carefully address these unique challenges are vulnerable to model mis-specification and can lead to seriously biased results. When treatment-by-indication confounding is not considered, naive Bayesian regression approaches can suffer from "regularization-induced bias" ([@Hahn2018]). Because no more than one potential outcome can be observed for a given individual unit, the correlation of $(Y_i^{(1)}, Y_i^{(0)})$ is not directly identifiable, leading to the "inferential quandary" described in [@Dawid2000]. Extensive simulations presented in [@Kang2007; @Gutman2017; @Hahn2018b] revealed poor operating characteristics in many widely adopted causal inference methods.
The proposed GPMatch method offers a fully Bayesian causal inference approach that can effectively address the unique challenges inherent in causal inference. First, by using the GP prior covariance function to model the covariance of the observed data, GPMatch estimates the missing potential outcomes much like a matching method, yet it avoids the pitfalls of many matching methods: no data are discarded and no arbitrary caliper is required. Instead, the model lets the data speak for themselves through the estimated length scale and variance parameters. The SE covariance function of the GP prior offers an alternative distance metric, one that closely resembles the Mahalanobis distance. It matches data points with a degree of matching proportional to the SE distance, without requiring the specification of a caliper. For this reason, GPMatch uses the information in the data better than matching procedures do. Distinct length scale parameters are used for the different covariates defining the SE covariance function; this allows the data to select the most important covariates to match on, acknowledging that some variables matter more than others. While the idea of using a GP prior for Bayesian causal inference is not new, utilizing the GP covariance function as a matching device is a unique contribution of this study. The matching utility of the GP covariance function is presented analytically by considering a setting in which the matching structure is known. We show that GPMatch enjoys a doubly robust property, in the sense that it correctly estimates the average treatment effect when either of the following conditions holds: 1) the mean function of GPMatch correctly specifies the prognostic function of the potential outcome $Y^{(0)}$; or 2) the GP prior covariance function correctly specifies the matching structure.
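The SE covariance function with per-covariate length scales can be sketched as follows. The exponent is a diagonal Mahalanobis-like squared distance, which is what gives the GP prior its soft-matching interpretation: units close in covariate space get strongly tied outcomes. This is an illustrative parameterization, not necessarily the paper's exact one:

```python
import numpy as np

def se_kernel(X1, X2, length_scales, sigma2=1.0):
    """Squared-exponential covariance with one length scale per covariate.

    k(x, x') = sigma2 * exp(-0.5 * sum_d ((x_d - x'_d) / l_d)^2),
    a diagonal Mahalanobis-type distance in the exponent. Small length
    scales make a covariate matter more for the implicit matching.
    """
    ls = np.asarray(length_scales, dtype=float)
    d = X1[:, None, :] / ls - X2[None, :, :] / ls   # pairwise scaled differences
    sq_dist = np.sum(d ** 2, axis=-1)
    return sigma2 * np.exp(-0.5 * sq_dist)
```

Two units at scaled distance 1 receive covariance $\sigma^2 e^{-1/2}$; as the distance grows, the prior tie between their outcomes decays smoothly rather than being cut off by a hard caliper.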
We show that GPMatch estimates the treatment effect by inducing independence between two residuals, the residual from the treatment propensity estimate and the residual from the outcome estimate, much like the G-estimation method. Unlike two-staged G-estimation, however, GPMatch estimates the parameters of the covariance function and the mean function simultaneously. Therefore, the GPMatch regression approach integrates the benefits of regression modeling and matching, and offers a natural way for Bayesian causal inference to address the challenges unique to causal inference problems. The robust and efficient properties of GPMatch are well supported by the simulation results, which were designed to reflect the most realistic settings, i.e., settings in which no knowledge of the matching structure or of the functional form of the outcome model is available.
The validity of causal inference by GPMatch rests on a weaker version of the causal assumptions depicted in the Fig. 1 DAG. Although previous literature has questioned the SUTVA assumption (see the stochastic consistency suggested by [@Cole2009] and [@VanderWeele2009], and the treatment variation discussed in [@Rubin1978]), no approach to our knowledge has explicitly acknowledged it as such. Rather, most methods impose the overly rigid assumptions that a treatment in the real world has exactly the same meaning as one in a randomized and tightly controlled experiment, and that the observed outcome is an exact copy of the corresponding potential outcome. Here, our causal assumptions reflect the more realistic setting in which the outcome may be measured with error, and the treatment received by different individuals may vary even when the treatment prescribed is identical. The assumption of ignorable treatment assignment is often used interchangeably with the assumption of no unmeasured confounders in the current literature. The ignorable treatment assignment assumption is necessary to ensure the validity of causal inference from the observed data. In the Fig. 1 DAG, we show that unmeasured confounders are admissible under the ignorable treatment assumption. Specifically, it allows for the existence of $U_1$ and $U_2$ (both correlated with $A$ and $Y$), as long as they do not open a back-door path from $Y$ to $A$ conditional on the measured covariates. In other words, the causal effect can be identified without bias if we observe a minimum sufficient set that blocks the back-door path from $Y$ to $A$. This is therefore a weaker assumption than no unmeasured confounders. If any potential violation of the causal assumptions is suspected, external information is needed, and GPMatch can be extended to incorporate such uncertainty.
With this weaker version of the causal assumptions and by explicitly modeling the mean and covariance functions, GPMatch is better able to defend against potential model misspecification in challenging real-world settings.
The full Bayesian modeling approach is particularly useful in comparative effectiveness research. It offers a coherent and flexible framework for incorporating prior knowledge and synthesizing information from different sources. As a full Bayesian causal inference model, GPMatch offers a flexible and general approach to the more complex data types and structures natural to many causal inference settings. It can be directly extended to multilevel or clustered data structures, and to complex types of treatment such as multi-level, continuous, or composite treatments. The model can be extended to the time-varying treatment setting without much difficulty by following the g-computation formula framework. Post-treatment confounding can be addressed by incorporating the confounding variables into the mean function. We are already implementing these extensions in an ongoing case study. Although we focused on estimating the average treatment effect (ATE) in this study, the approach is directly applicable to estimating the average treatment effect in the treated (ATT) and in the controls (ATC). In addition, it can readily be used to model the treatment effect as a function of pre-specified treatment-modifying factors. [@sivaganesan2017subgroup] suggested a Bayesian decision-theoretic approach for identifying subgroup treatment effects in a randomized trial setting. With GPMatch, the same idea could be applied to identify subgroup treatment effects in real-world data. Studies are ongoing to evaluate its performance for estimating heterogeneous treatment effects. GP regression has been extended to general types of outcomes, including binary and count data ([@rasmussen2004gaussian]). Future studies may further investigate its performance under these general outcome types and data structures.
Our simulations focused on comparisons with commonly used causal inference methods. Future studies may compare our method with other advanced Bayesian methods, such as those proposed by [@roy2017bayesian] and [@Saarela2016], as well as with advanced non-Bayesian approaches such as targeted MLE ([@van2006targeted]).
GP regression is a very flexible modeling technique, but it is computationally expensive. The time cost of GP regression grows at an $n^3$ rate, so it can be challenging with large sample sizes. The Bayesian Gibbs sampling algorithm we have used makes it even more demanding in computational resources. Some literature has offered solutions for applying GPs to large data, such as [@banerjee2008]. Alternatively, one may consider Bayesian kernel regression as an approximation. Further studies are needed to improve computational efficiency and to consider variable selection. It is well known that the length scale parameter is hard to estimate. Researchers have derived various priors for GPs, for example the objective priors in [@Berger2001], [@Kazianka2012], and [@ren2013objective]. [@Gelfand2005] suggested a uniform prior for the inverse of the scale parameter in a spatial analysis, but we found that a prior favoring a smooth surface was more suitable for our purpose. Researchers can also blend their knowledge into the prior to obtain a more efficient estimate. Here we considered the squared exponential covariance function, but other covariance functions such as the Matérn could also be used. A simple block compound symmetry structure with a single correlation coefficient could serve as an alternative covariance matrix; such a blocked setup could be useful for large sample sizes where the data have a reasonable clustering structure, as in a multi-site study. Future studies will explore this direction.
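The $n^3$ scaling mentioned above comes from factorizing the $n \times n$ covariance matrix. A minimal sketch of the dominant step in GP regression (the Cholesky factorization), under a standard zero-mean GP with additive noise, is:

```python
import numpy as np

def gp_posterior_mean(K, y, noise_var=1e-2):
    """Posterior mean of a zero-mean GP evaluated at the training points.

    The Cholesky factorization below is the O(n^3) step that dominates
    the cost of GP regression as the sample size n grows; each triangular
    solve afterwards is only O(n^2).
    """
    n = len(y)
    L = np.linalg.cholesky(K + noise_var * np.eye(n))    # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # (K + s^2 I)^{-1} y
    return K @ alpha
```

Doubling $n$ thus roughly multiplies the factorization cost by eight, which is why Gibbs sampling (one factorization per draw) becomes burdensome for large studies.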
acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by an award from the Patient Centered Outcome Research Institute (PCORI ME-1408-19894; PI B. Huang) and a P&M pilot award from the Center for Clinical and Translational Science and Training, which is supported by the National Center for Advancing Translational Sciences of the National Institutes of Health (Award Number 5UL1TR001425-03).
conflict of interest {#conflict-of-interest .unnumbered}
====================
The authors declare no conflict of interest.
---
abstract: 'The metallic surface state of a topological insulator (TI) is not only topologically protected; it also acquires a distinctive geometric character on curved surfaces. For an electron in the surface state of a spherical or a cylindrical TI (a TI nanoparticle or nanowire), a pseudo-magnetic monopole or a fictitious solenoid is effectively induced, encoding the geometry of the system.'
author:
- 'Ken-Ichiro Imura$^1$'
- Yositake Takane$^1$
title:
---
Neither a metal nor an ordinary insulator, the topological insulator has now been recognized as a basic form of solid that exhibits both gapped bulk and gapless surface states [@FuKaneMele; @MooreBalents; @Roy]. Such a classification is well-defined in the continuum limit, while the situation is less trivial for the lattice models often employed as concrete implementations of topological insulators, leaving the question: "where actually is the surface?" A lattice model is sparse, and from a somewhat extreme point of view exists only on sites and links, so that its surface is not restricted to the macroscopic boundary of the system; it could also be chosen, [*e.g.*]{}, such that it is partly extended to a rectangular-prism-shaped region (RPSR) penetrating into the bulk, as depicted in Fig. \[JG\]. One can also think of an atomic-scale closed surface isolated in the bulk [@closed]. However, in reality the protected surface state appears only on the macroscopic surface, exhibiting no symptom of penetrating into the bulk even in sparse lattice systems.
Why does the surface state not penetrate into the bulk? What prevents it from penetrating into the sparsely filled interior of the lattice models? In this Communication we demonstrate that the existence of a Berry phase $\pi$, or a spin connection associated with what is often called spin-to-surface locking [@Ran_PRB; @Vishwanath; @Mirlin; @Bardarson; @disloc; @aniso], plays a central role in this issue. Though the existence of a protected surface state is a defining property of the topological insulator, topological protection does not exclude the possibility of finite-size gap opening. As we have demonstrated previously [@aniso; @spherical], Dirac electrons on the surface of a topological insulator indeed acquire such a gap on closed surfaces of finite size. On a cylindrical surface a fictitious solenoid threading the cylinder is effectively induced [@aniso], while in the case of a spherical system an effective magnetic monopole [@spherical; @Shen] is induced, determining the gapped electronic spectrum on the surface.
A Dirac electron on the surface of a topological insulator, and especially its spin state, is susceptible to two types of constraints, being "locked" both in momentum and in real space. Spin-to-momentum locking is a direct consequence of the strong spin-orbit coupling in this system. Here, we focus on another phenomenon that manifests itself on a curved surface, often represented by the term "spin-to-surface locking". Through the bulk-boundary correspondence, the entangled nature of the spin and configuration spaces encoded in the bulk Hamiltonian is transcribed to the surface Dirac equation. The helical surface state thus inherits a geometrical constraint imposed on its spin state, and an electron in this state is susceptible to a specific type of Berry phase, or spin connection, inducing an effective monopole or a flux tube. In the somewhat special case of cylindrical geometry, the constraint on spin manifests as spin-to-surface locking, [*i.e.*]{}, the spin of the surface state is constrained onto the tangential plane of the curved surface [@Ran_PRB; @Vishwanath; @Mirlin; @Bardarson; @disloc; @aniso]. The appearance of the spin connection in the surface Dirac equation is more universal, unrestricted to a specific geometry.
-------------------------------------------------------------------------------------------------
![(Color online) Which is the genuine surface?[]{data-label="JG"}](JG.eps "fig:"){width="45mm"}
-------------------------------------------------------------------------------------------------
There is an inverse effect to this noninvasiveness of the surface state. Along a flux tube of strength $\pi$ (half a unit flux quantum) piercing a TI sample, a pair of gapless helical modes bound to the tube is induced. These 1D helical channels are shown to be perfectly conducting [@pcc], and topologically protected as well [@Ran_nphys; @TeoKane; @disloc]. In the presence of a surface at which the flux tube terminates, how are these 1D channels connected to the 2D helical surface states? In Fig. \[invasive\] we demonstrate that the noninvasive surface state becomes gradually invasive into the bulk with the aid of the flux tube. When the total amount of flux is not precisely $\pi$, penetration of the surface state into the bulk is exponentially suppressed. When the flux is exactly $\pi$, the surface state penetrates into the bulk as deeply as the system's configuration allows. In a sense, the $\pi$-flux drags the surface state into the bulk, making it [*invasive*]{}.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
![(Color online) Penetration of the surface wave function along a flux tube of strength $\Phi$.[]{data-label="invasive"}](invasive_W2_v3.eps "fig:"){width="90mm"}
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
This Communication is intended to reveal the nature of this noninvasive metallic state that appears on topological insulator surfaces. We start by simulating the behavior of the surface wave function along a flux tube. Then, complementary to this, we analytically establish the correspondence between the bulk and surface descriptions. This is achieved in the second half of the paper by employing a configuration in which the surface state can partly penetrate into the bulk. To ease the analytic treatment, the surface is designed to take a smooth hyperbolic form, which may look like a "drain" \[see Fig. \[drain\], panel (a)\]. Mathematically, this is the locus of the hyperbola depicted in Fig. \[drain\] (b) when it revolves around the $z$-axis. In the limit of a sharply edged hole ($R\rightarrow 0$) this reproduces the situation described by the tight-binding model employed in the first part for numerical simulations.
Let us briefly describe the model employed in the tight-binding simulation. The model is based on the following 3D Wilson-Dirac type effective Hamiltonian in the bulk [@Liu_nphys; @Liu_PRB], $$H_{\rm bulk} = m(\bm p)\tau_z +
A(p_x \sigma_x + p_y \sigma_y+ p_z \sigma_z)\tau_x,
\label{H_bulk}$$ where $m(\bm p) = m_0 + m_2 \bm p^2$ combines the Einstein ($m_0$) and Newtonian ($m_2 \bm p^2$) mass terms, encoding a band inversion due to strong spin-orbit coupling. Note that the two types of Pauli matrices $\bm \sigma$ and $\bm \tau$ represent the physically real and orbital spins, respectively. The Hamiltonian is then implemented on a cubic lattice with nearest-neighbor hopping terms. Periodic boundary conditions are applied in the $x$- and $y$-directions (no surfaces on the corresponding sides), while the model is restricted in the $z$-direction to $0 \le z \le N_z-1$. We consider a system of size $N_x\times N_y\times N_z$ and introduce a pair of flux tubes piercing RPSRs in the $z$- and $-z$-directions at $(x,y)=\left(\frac{N_x}{2} - \frac{1}{2}, \frac{N_y}{4} - \frac{1}{2}\right)$ and $(x,y)=\left({N_x \over 2} - {1\over 2}, {3N_y\over 4} - {1\over 2}\right)$, respectively. The actual simulation is done for a system of size $(N_x, N_y, N_z) = (10, 20, 20)$, in which a moderate strength of potential disorder is also included [@DWTI]. Both the 2D surface and 1D helical modes are shown to be robust against disorder.
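A minimal numerical sketch of the continuum Hamiltonian Eq. (\[H_bulk\]) at a single momentum is given below. The parameter values ($m_0=-1$, $m_2=1$, $A=1$) are illustrative assumptions, not those of the paper; a band inversion requires $m_0 m_2 < 0$:

```python
import numpy as np

# Pauli matrices; sigma acts on the real spin, tau on the orbital degree of freedom
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bulk(p, m0=-1.0, m2=1.0, A=1.0):
    """H(p) = m(p) tau_z + A (p . sigma) tau_x, with m(p) = m0 + m2 |p|^2.

    Basis ordering is sigma (x) tau via np.kron. Since tau_z and tau_x
    anticommute, the spectrum is +-sqrt(m(p)^2 + A^2 |p|^2), doubly degenerate.
    """
    px, py, pz = p
    m = m0 + m2 * (px ** 2 + py ** 2 + pz ** 2)
    tau_z, tau_x = sz, sx  # same matrix form, different physical space
    return (m * np.kron(np.eye(2), tau_z)
            + A * np.kron(px * sx + py * sy + pz * sz, tau_x))
```

At $\bm p = 0$ the spectrum is $\pm m_0$, and on the inverted-band sphere $m(\bm p) = 0$ it is $\pm A|\bm p|$, the Dirac-like branch from which the surface state descends.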
Depicted in Fig. \[invasive\] is the evolution of the profile of the lowest-energy surface wave function as a magnetic flux of varying strength $\Phi$ is introduced. As the flux approaches $\pi$, the surface state tends to penetrate into the bulk along an RPSR (compare the different panels of Fig. \[invasive\], in which only half of the system is shown). When the flux is null, the RPSR is empty. One can still hypothesize an electronic motion bound to it, but its energy then levitates because of the spin Berry phase $\pi$; recall the half-odd-integral quantization of the orbital angular momentum. Here, since the circumference of the RPSR is atomically small ($=4a_0$, with $a_0$ being the lattice constant), the corresponding finite-size quantization energy is huge. Clearly, such a state is no longer compatible with the gapless (zero-energy) surface state. The gapless surface state, in turn, does not penetrate into the bulk along the RPSR. As flux is introduced, this Berry phase $\pi$ is either partly or completely cancelled, depending on the amount inserted. Then at least a small portion of the 1D state along the flux tube starts to merge with the gapless surface state. From the viewpoint of the surface state, a portion of the wave function is dragged into the RPSR. This effect should be compared with the asymptotic behavior of the analytic formula.
We have seen so far through numerical simulations how the surface state loses its noninvasive character when a flux tube is inserted, and that when the strength of the flux is precisely $\pi$ it becomes completely invasive. This implies, in turn, that the noninvasiveness of the surface state stems from the Berry phase $\pi$, which is in a sense omnipresent. Penetration of the surface state into any hypothetical interior surface of the lattice is banned by the existence of this Berry phase $\pi$.
To reinforce the above argument we formulate this analytically in the remainder of the paper by solving a corresponding electronic state on the hyperbolic surface as depicted in Fig. \[drain\]. To find the surface Dirac equation on this curved surface it is convenient to introduce a set of curvilinear coordinates $(\xi, \theta, \phi)$ [@kado], defined in terms of the hyperbolic surface: $\left(\sqrt{x_0^2+y_0^2}-a\right)z_0 =R^2$; its cross section in the $xz$-plane is shown in Fig. \[drain\]. The original cartesian coordinates are expressed as $x=r\cos\phi$, $y=r\sin\phi$, $z=\xi \cos\theta + R\sqrt{\tan\theta}$, where $$r=r(\xi,\theta)=\xi\sin\theta +a+R\sqrt{\cot\theta}$$ is an auxiliary parameter dependent on $\xi$ and $\theta$. The derivatives are represented by $$\nabla=\bm e_\xi \partial_\xi - {1\over \eta(\theta)-\xi}\bm e_\theta \partial_\theta
+{1\over r(\xi,\theta)}\bm e_\phi \partial_\phi,
\label{grad}$$ where the unit vectors $\bm e_\xi$, $\bm e_\theta$, $\bm e_\phi$ are those of the standard 3D polar (spherical) coordinates [@polar]. $\eta (\theta)$ represents geometrically the radius of curvature of the hyperbolic curve at $\bm r_0=(x_0, y_0, z_0)$: $$\begin{aligned}
\eta (\theta) =\sqrt{|\partial_\theta \bm r_0|^2}
= {R\over 2}{1\over\sqrt{\sin^3\theta \cos^3\theta}}.
\label{eta}\end{aligned}$$ The subsequent analyses are based on the complex amplitudes of the surface wave function at the point $(\xi, \theta, \phi)$, which are vanishingly small when $\xi$ significantly exceeds the penetration depth. If the penetration depth is much smaller than $R$, only the range $\xi \ll R$ is physically relevant. In this regime, on which we focus hereafter, the apparent singularities in the expressions of Eq. (\[grad\]) cause no mathematical difficulty.
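As a sanity check on these coordinates, the following sketch (with illustrative values of $R$ and $a$, which the text leaves general) verifies numerically that the $\xi=0$ locus satisfies the defining surface equation and that the finite-difference tangent length $|\partial_\theta \bm r_0|$ reproduces Eq. (\[eta\]):

```python
import numpy as np

R, a = 1.0, 2.0   # illustrative values; the text keeps R and a general

def surface_point(theta):
    """Point r0 on the hyperbolic surface (xi = 0) in the phi = 0 half-plane."""
    r = a + R * np.sqrt(1.0 / np.tan(theta))   # r(0, theta) = a + R*sqrt(cot(theta))
    z = R * np.sqrt(np.tan(theta))             # z(0, theta) = R*sqrt(tan(theta))
    return np.array([r, z])

def eta_analytic(theta):
    """Radius of curvature, Eq. (eta): (R/2) / sqrt(sin^3(theta) cos^3(theta))."""
    return 0.5 * R / np.sqrt(np.sin(theta) ** 3 * np.cos(theta) ** 3)

theta, h = 0.6, 1e-6
r0 = surface_point(theta)
# defining equation of the surface: (sqrt(x0^2 + y0^2) - a) z0 = R^2
print((r0[0] - a) * r0[1])          # ~ 1.0 = R^2
# |d r0 / d theta| from a central difference should match eta(theta)
eta_fd = np.linalg.norm((surface_point(theta + h) - surface_point(theta - h)) / (2 * h))
print(eta_fd, eta_analytic(theta))  # both ~ 1.5717
```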
![(Color online) (a) Image of the “drain”. (b) Cross section of the hyperbolic surface on the $xz$-plane (only the $x>0$ part is shown).[]{data-label="drain"}](drain.eps "fig:"){width="70mm"}
With the aid of these new coordinates we deduce the surface Dirac Hamiltonian on the hyperbolic surface from the bulk effective theory. In the standard procedure [@aniso; @spherical] this is done by restricting the space of state vectors $|\psi\rangle$ associated with the bulk Hamiltonian $H_{\rm bulk}$ to a set of surface states, [*i.e.*]{}, those states that are localized in the vicinity of the hyperbolic surface. Any surface solution $|\psi\rangle$ of $H_{\rm bulk}$ can be written as a linear combination of two basis solutions, $|\pm\rangle ={1\over\sqrt{c(\theta)}}
\left(e^{-\kappa_1\xi}-e^{-\kappa_2\xi}\right) |\pm\rangle\rangle$, [*i.e.*]{}, $|\psi\rangle=\psi_+ |+\rangle + \psi_- |-\rangle$, where $\psi_\pm$ are (scalar) functions of $\theta$ and $\phi$. With an appropriate choice of $\kappa_{1,2}$ and $|\pm\rangle\rangle$, $|\pm\rangle$ can indeed be made (two degenerate) zero-energy eigenstates of $H_{\rm bulk}$ at the “Dirac point”. The $\xi$-dependence of the wave function is determined such that it vanishes on the hyperbolic surface. The spinor part of the wave function $|\pm\rangle\rangle$ can be chosen as $$\begin{aligned}
|+\rangle\rangle &=& {1\over\sqrt{2}}
\left[\begin{array}{r}
\cos (\theta/2) \\
e^{i\phi} \sin (\theta/2)
\end{array}\right]
\otimes
\left[\begin{array}{c}
1\\ i
\end{array}\right],
\nonumber \\
|-\rangle\rangle &=& {1\over\sqrt{2}}
\left[\begin{array}{r}
\sin (\theta/2) \\
-e^{i\phi} \cos (\theta/2)
\end{array}\right]
\otimes
\left[\begin{array}{c}
1\\ -i
\end{array}\right].\end{aligned}$$ Notice that here we have chosen this spinor to be [*single*]{}-valued \[in contrast to the standard SU(2) spinor\] with respect to $\phi\rightarrow\phi +2\pi$. Though this is a potentially confusing point of the formulation, whether the basis is double or single valued is simply a matter of choice [@aniso]. The $\theta$-dependent normalization constant $c(\theta)$ in $|\pm\rangle$ is defined as $$\begin{aligned}
c(\theta) &=&
\int_0^\infty d\xi \
r(\xi, \theta) (\eta (\theta) -\xi) \left(e^{-\kappa_1\xi}-e^{-\kappa_2\xi}\right)^2,
\label{c_theta}\end{aligned}$$ in which $r(\eta -\xi)$ is a measure of the integral associated with the volume integral element $r(\eta-\xi)d\xi d\theta d\phi$. The same measure appears also in the evaluation of the matrix elements, $\langle \pm|H_{\rm bulk}|\pm\rangle$ (see below). The surface Dirac Hamiltonian $H_{\rm surf}$ is obtained in the spirit of $\bm k\cdot\bm p$-theory [@aniso; @spherical; @kado]. Or, in the language of degenerate perturbation theory this can be regarded as a secular equation for the coefficients, $\psi_\pm (\theta,\phi)$; they are solutions of the Dirac equation, $H_{\rm surf}\bm \psi =E\bm \psi$ with $\bm\psi = (\psi_+, \psi_-)^T$. We find the coefficient matrix $H_{\rm surf}$ by evaluating the matrix elements $\langle\pm|H_{\rm bulk}|\pm\rangle$ as $$H_{\rm surf} =
\left[
\begin{array}{cc}
\langle +|H_{\rm bulk}|+\rangle&
\langle -|H_{\rm bulk}|+\rangle\\
\langle +|H_{\rm bulk}|-\rangle&
\langle -|H_{\rm bulk}|-\rangle
\end{array}
\right]
= \left[
\begin{array}{cc}
0 & D_- \\
D_+ & 0
\end{array}
\right],
\label{H_surf1}$$ where $$D_\pm=
\pm A_\theta{\partial_\theta}
\pm{\partial_\theta A_\theta\over 2}
+ A_\phi \left(-i\partial_\phi +{1\over 2}\right),
\label{D_pm}$$ and [@m0] $$\begin{aligned}
A_\theta &=&
{\langle r \rangle \over \langle r (\eta-\xi) \rangle}
\left( A+ {\left\langle {r\over \eta-\xi} \right\rangle \over \langle r\rangle}m_2 \right)
\equiv {\langle r \rangle \over \langle r (\eta-\xi) \rangle} \tilde{A}_\theta,
\nonumber \\
A_\phi &=&
{\langle \eta-\xi \rangle \over \langle r (\eta-\xi) \rangle}
\left( A- {\left\langle {\eta-\xi \over r} \right\rangle \sin\theta \over \langle \eta-\xi \rangle} m_2 \right).
\label{A_theta}\end{aligned}$$ Notice that in Eq. (\[D\_pm\]) the $\phi$-derivative in $D_\pm$ is shifted by $1/2$, which is nothing but the “Berry phase” of amount $\pi$. Since we have [*chosen*]{} the spinor part of wave function [*single*]{}-valued, the orbital angular momentum $L_z$, defined as $\bm \psi (\theta,\phi) = e^{i L_z \phi} \bm Z(\theta)$, takes [*formally*]{} an integral value, $L_z = 0, \pm 1, \pm 2, \cdots$. But due to the Berry phase $\pi$ the [*physical*]{} angular momentum $\tilde{L}_z = L_z +1/2$ becomes [*half-odd*]{} integral [@aniso; @prism]. In Eqs. (\[A\_theta\]) the $\xi$-average $\langle f\rangle$ of a function $f(\xi)$ is defined in terms of a $\xi$-integral similar to Eqs. (\[c\_theta\]), [*i.e.*]{}, $$\langle f\rangle =
{\int_0^\infty d\xi\ f(\xi) \left(e^{-\kappa_1\xi}-e^{-\kappa_2\xi}\right)^2
\over
\int_0^\infty d\xi \left(e^{-\kappa_1\xi}-e^{-\kappa_2\xi}\right)^2}.$$
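These $\xi$-averages reduce to elementary exponential integrals. As an illustration (the decay constants $\kappa_{1,2}$ below are arbitrary choices of ours), $\langle \xi\rangle$ computed by quadrature agrees with the closed form:

```python
import numpy as np

kappa1, kappa2 = 1.0, 3.0   # illustrative decay constants (our choice)
xi = np.linspace(0.0, 40.0, 200001)                       # many decay lengths
w = (np.exp(-kappa1 * xi) - np.exp(-kappa2 * xi)) ** 2    # weight defining <f>

def trapezoid(y, x):
    """Plain trapezoidal rule (avoids version-dependent numpy helpers)."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def xi_average(f):
    """<f> = integral of f*w over xi, divided by the integral of w."""
    return trapezoid(f * w, xi) / trapezoid(w, xi)

# closed form for <xi>, using int xi e^{-k xi} dxi = 1/k^2, int e^{-k xi} dxi = 1/k
num = 0.25 / kappa1**2 - 2.0 / (kappa1 + kappa2)**2 + 0.25 / kappa2**2
den = 0.5 / kappa1 + 0.5 / kappa2 - 2.0 / (kappa1 + kappa2)
print(xi_average(xi), num / den)   # both ~ 0.91667 (= 11/12 for these kappas)
```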
The effective “Dirac theory” on the hyperbolic surface is prescribed by Eqs. (\[H\_surf1\]), (\[D\_pm\]) and (\[A\_theta\]). We now attempt to construct zero energy solutions of this effective model. To ease physical interpretation of the results it is useful to modify one of the coordinates by using, instead of the (dimensionless) angle $\theta$, a linear coordinate $\zeta (\theta)$ having the dimension of length, defined such that $$\zeta (\theta)=\int_{\pi/4}^\theta d\theta'
{\langle r(\xi,\theta') (\eta (\theta') -\xi) \rangle \over \langle r (\xi, \theta') \rangle}.$$ Notice that $(\eta (\theta) - \langle\xi\rangle)d\theta$ is a line integral element associated with the locus of the point $\bm r_0=(x_0, y_0, z_0)$ along a hyperbola at fixed $\phi$. Thus, at a large distance $r \gg R$ on the surface ($xy$-plane) from the $z$-axis ($\theta \ll \pi/4$), $-\zeta (\theta)$ can be identified as the radial component $r$ of the standard 2D polar coordinates $(r,\phi)$, while in the opposite limit of $\theta \gg \pi/4$, $\zeta (\theta)$ can be identified as $z$, the depth into the drain. Since $d\zeta /d\theta = \langle r (\eta-\xi) \rangle / \langle r\rangle$, the off-diagonal elements of $H_{\rm surf}$ \[see Eq. (\[H\_surf1\])\] become, in the $(\zeta,\phi)$-basis, $$D_\pm=\pm\tilde{A}_\theta \partial_\zeta
\pm{\partial_\zeta \tilde{A}_\theta\over 2}
+A_\phi \left(-i\partial_\phi +{1\over 2}\right).
\label{D_pm2}$$
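The spinor basis $|\pm\rangle\rangle$ introduced above can also be checked directly: it is orthonormal, and, unlike the standard SU(2) spinor, single-valued under $\phi\rightarrow\phi+2\pi$. A minimal numerical check:

```python
import numpy as np

def spinor_basis(theta, phi):
    """|+>> and |->> as 4-component vectors, built exactly as in the text."""
    up = np.kron([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)],
                 [1.0, 1.0j]) / np.sqrt(2.0)
    dn = np.kron([np.sin(theta / 2), -np.exp(1j * phi) * np.cos(theta / 2)],
                 [1.0, -1.0j]) / np.sqrt(2.0)
    return up, dn

up, dn = spinor_basis(0.7, 1.3)
print(abs(np.vdot(up, up)), abs(np.vdot(dn, dn)))   # both ~ 1 (normalized)
print(abs(np.vdot(up, dn)))                          # ~ 0 (orthogonal)

# single-valued: phi -> phi + 2*pi returns the same spinors (no sign flip)
up2, dn2 = spinor_basis(0.7, 1.3 + 2 * np.pi)
print(np.allclose(up, up2) and np.allclose(dn, dn2))  # True
```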
How does the wave function penetrate (or not penetrate) into the drain? What happens to the Berry phase $\pi$ on the surface sufficiently away from the drain? Answers to these questions are encoded in the explicit form of $H_{\rm surf}$. Let us focus on the zero energy solutions for comparison with the result of numerical simulations. There are two such solutions, either with spin up or down, $\bm\psi_{E=0}^{(\pm)} = e^{i L_\pm \phi} Z_\pm (\zeta)\bm e_\pm$, where $\bm e_+ = (1,0)^T$, $\bm e_- = (0,1)^T$, which satisfy, respectively, $D_\pm \bm\psi_{E=0}^{(\pm)}=0$. This can be readily solved as $$Z_\pm (\zeta) = {1\over\sqrt{\tilde{A}_\theta (\zeta)}}
\exp\left[\mp\tilde{L}_\pm \int_0^\zeta d\zeta'
{A_\phi (\zeta') \over \tilde{A}_\theta (\zeta')}\right],
\label{Z_pm}$$ where $\tilde{L}_\pm = L_\pm+1/2$ [@accumulation]. In the asymptotic limit $\zeta\rightarrow \infty$, $A_\phi / \tilde{A}_\theta$ in the exponent can be readily approximated as ${A_\phi\over \tilde{A}_\theta} \simeq {1\over \langle r\rangle}\left(1-\left\langle {1\over r}\right\rangle {m_2 \over A}\right)$. Deep inside the drain, $\zeta\simeq z$, and also $\langle r\rangle \simeq a + \langle \xi\rangle$ and $\left\langle {1\over r}\right\rangle \simeq \left\langle {1\over a+\xi}\right\rangle$ become constant; therefore Eqs. (\[Z\_pm\]) decay [*exponentially*]{} under the convergence conditions: $\tilde{L}_+ \geq 1/2$ for $Z_+ (\zeta)$ and $\tilde{L}_- \leq -1/2$ for $Z_- (\zeta)$ [@half-odd]. In this regime, the wave function decays exponentially as it penetrates deeper into the drain; in other words, it barely penetrates the bulk (noninvasiveness).
What about the opposite limit, [*i.e.*]{}, on the surface as $\zeta\rightarrow -\infty$? In this limit the profile of the wave functions can be directly compared with those of the 2D Dirac equation solved in terms of the Bessel functions $J_n (|E| r /A)$ in the polar coordinates $(r,\phi)$. Moreover, we expect the Berry phase $\pi$ to become [*ineffective*]{} on the surface, which seems [*a priori*]{} contradictory to Eqs. (\[D\_pm\]) and (\[D\_pm2\]). A clue to resolving this discrepancy lies in the normalization of the wave function. On the 2D surface, the wave function $\bm \psi_{2D} (r,\phi)$ should be normalized in terms of the surface integral element, $r dr d\phi$, whereas in the normalization of $\bm \psi (\zeta, \phi)$ this measure $r$ is not taken into account. Indeed, what should be interpreted as the 2D surface wave function is $\bm \psi_{2D} (\zeta,\phi) =
{\bm \psi (\zeta, \phi)\over \sqrt{\langle r(\zeta) \rangle}}$. Here, the corresponding effective “2D Hamiltonian” $H_{2D}$ for $\bm \psi_{2D}$ is deduced from $H_{\rm surf}$ by the replacement, $D_\pm \rightarrow
{\cal D}_\pm = D_\pm \pm {\tilde{A}_\theta\over 2} \partial_\zeta\log\langle r\rangle$, which can be rewritten as ${\cal D}_\pm=\pm\tilde{A}_\theta \partial_\zeta
\pm{\partial_\zeta \tilde{A}_\theta\over 2}
+A_\phi {\cal L}_\pm$, by noticing $\tilde{A}_\theta\simeq A$ and $A_\phi\simeq -A/\zeta$ in the limit of $\zeta \rightarrow -\infty$, where ${\cal L}_+=L_+$, ${\cal L}_-=L_- +1$. The ${1\over\sqrt{\langle r\rangle}}$ factor in the normalization of $\bm \psi_{2D}$ compensates the effects of the Berry phase $\pi$. Thus, as expected, the Berry phase $\pi$ is shown to be [*ineffective*]{} on the flat surface away from the drain. Since in the present limit, $A_\phi / \tilde{A}_\theta\simeq 1/ \langle r\rangle$ and $\langle r(\zeta)\rangle \simeq -\zeta$, the $\zeta'$-integral in the exponent of Eqs. (\[Z\_pm\]) diverges logarithmically, implying ${Z_\pm (\zeta)\over\sqrt{\langle r\rangle}}
\propto |\zeta|^{\pm {\cal L}_\pm}$. These solutions are bounded only when ${\cal L}_+ \leq 0$ for $Z_+ (\zeta)$, and ${\cal L}_- \geq 0$ for $Z_- (\zeta)$. This implies, combined with the convergence conditions for the opposite asymptotics, that the zero energy solution is possible only when $L_+ =0$ for $Z_+ (\zeta)$, and when $L_- =-1$ for $Z_- (\zeta)$. In these two cases $Z_\pm (\zeta)$ becomes constant, consistent with the fact that only the zeroth-order Bessel function $J_0 (|E| r/A)$ is compatible with the zero energy condition $E=0$.
Let us finally remark how the introduction of a flux tube piercing the drain modifies the above argument. In the extreme case of $\Phi=\pi$, the Aharonov-Bohm flux $\Phi$ and the Berry phase $\pi$ (the shift of $1/2$) in Eqs. (\[D\_pm\]) and (\[D\_pm2\]) cancel out each other. As a result, the [*bare*]{} angular momentum $L_\pm$ appears in the exponent of the zero energy solutions (\[Z\_pm\]). This modifies the asymptotic condition deep inside the drain (in the limit of $\zeta\rightarrow \infty$) to $L_+ \geq 0$ and $L_- \leq 0$. In the opposite limit (on the surface away from the drain), the two solutions behave asymptotically as ${Z_\pm (\zeta)\over\sqrt{\langle r\rangle}}
\propto |\zeta|^{\pm{\cal L}_\pm}$, [*i.e.*]{}, formally as before, but with ${\cal L}_\pm$ now replaced by ${\cal L}_\pm=L_\pm \mp1/2$. The two solutions are legitimate only when ${\cal L}_+ \leq 0$ and ${\cal L}_- \geq 0$. The only possible choice of $L_\pm$ compatible with these two asymptotic conditions is $L_+=L_-=0$. This signifies that the wave function deep inside the drain stays constant, in contrast to the previous case (exponential decay). The $\pi$-flux tube renders the surface state [*invasive*]{}: it penetrates the bulk to reach the opposing surface. The asymptotic behaviors on the surface are modified accordingly, reproducing those of the Bessel function $J_{-1/2} (|E| r/A)$ in the limit of $E\rightarrow 0$.
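The angular-momentum bookkeeping above is simple enough to enumerate mechanically. A small sketch (with the boundedness conditions transcribed from the text) recovers $L_+=0$, $L_-=-1$ without flux and $L_+=L_-=0$ at $\Phi=\pi$:

```python
# Which integer angular momenta L admit a normalizable zero-energy solution?
# Conditions transcribed from the text: boundedness deep inside the drain
# (zeta -> +infinity) and on the flat surface (zeta -> -infinity).
Ls = range(-5, 6)

# no flux: bulk needs Ltilde_+ = L + 1/2 >= 1/2 (resp. Ltilde_- <= -1/2);
# surface needs calL_+ = L <= 0 (resp. calL_- = L + 1 >= 0)
spin_up_no_flux = [L for L in Ls if L + 0.5 >= 0.5 and L <= 0]
spin_dn_no_flux = [L for L in Ls if L + 0.5 <= -0.5 and L + 1 >= 0]

# pi flux: Berry phase cancelled, so bulk needs L >= 0 (resp. L <= 0);
# surface needs calL_+ = L - 1/2 <= 0 (resp. calL_- = L + 1/2 >= 0)
spin_up_pi_flux = [L for L in Ls if L >= 0 and L - 0.5 <= 0]
spin_dn_pi_flux = [L for L in Ls if L <= 0 and L + 0.5 >= 0]

print(spin_up_no_flux, spin_dn_no_flux)  # [0] [-1]
print(spin_up_pi_flux, spin_dn_pi_flux)  # [0] [0]
```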
The surface state of a topological insulator is often cited as being robust. We have shown that its noninvasiveness is protected by the spin Berry phase $\pi$, and that inserting a $\pi$-flux tube cancels this protection and renders the surface state invasive.
KI acknowledges Tomi Ohtsuki for stimulating discussions. The authors are supported by KAKENHI; K.I. by the “Topological Quantum Phenomena” (No. 23103511), and Y.T. by a Grant-in-Aid for Scientific Research (C) (No. 24540375).
, , , ****, ().
, ****, ().
, ****, ().
Dirac electrons on such a closed surface of a topological insulator can be regarded as a condensed-matter realization of a [*magnetic monopole*]{} [@spherical; @Shen].
, , , ****, ().
, ****, ().
, , , ****, ().
, , , ****, ().
, , , ****, ().
, , , ****, ().
, , , , ****, ().
, (), .
, ****, ().
, , , ****, ().
, ****, ().
, , , , , , ****, ().
, , , , , , ****, ().
, , , (), .
, ****, ().
, , , , , ****, ().
, $\bm e_\xi = (\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta)$, $\bm e_\theta = (\cos\theta \cos\phi, \cos\theta\sin\phi, -\sin\theta)$, $\bm e_\phi = (-\sin\phi, \cos\phi, 0)$.
$m_0 >0$ and $m_2 <0$ are implicitly assumed.
The factor $1/\sqrt{\tilde{A}_\theta}$ arising from the second term of Eq. (\[D\_pm2\]) represents [*accumulation*]{} of the wave function around $\theta\sim\pi/4$ due to velocity renormalization [@aniso; @kado].
Recall that $\tilde{L}_\pm$ are quantized to be half-odd integral values.
---
abstract: |
We simulate the growth of large-scale structure, for 3 different cosmological models: an Einstein-de Sitter model (density parameter $\Omega_0=1$), an open model ($\Omega_0=0.2$), and a flat model with nonzero cosmological constant ($\Omega_0=0.2$, cosmological constant $\lambda_0=0.8$), using a cosmological N-body code ($\rm P^3M$) with $64^3$ dark matter particles in a cubic volume of present comoving size 128 Mpc. The calculations start at $z=24$ and end at $z=0$. We use the results of these simulations to generate distributions of galaxies at the present ($z=0$), as follows: Using a Monte-Carlo method based on the present distribution of dark matter, we locate $\sim40000$ galaxies in the computational volume. We then ascribe to each galaxy a morphological type based on the local number density of galaxies, in order to reproduce the observed morphology-density relation. The resulting galaxy distributions are similar to the observed ones, with most ellipticals concentrated in the densest regions, and most spirals concentrated in low-density regions. By “tying” each galaxy to its nearest dark matter particle, we can trace the trajectory of that galaxy back in time, by simply looking at the location of that dark matter particle at earlier time-slices provided by the N-body code. This enables us to reconstruct the distribution of galaxies at high redshift, and the trajectory of each galaxy from its formation epoch to the present.
We use these galaxy distributions to investigate the problem of morphological evolution. Our goal is to determine whether the morphological type of galaxies is primarily determined by the initial conditions in which these galaxies form, by evolutionary processes (such as mergers or tidal stripping) occurring after the galaxies have formed and eventually altering their morphology, or by a combination of both effects. Our main technique consists of comparing the environments in which galaxies are located at the epoch of galaxy formation (taken to be at redshift $z=3$) with the environments in which the same galaxies are located at the present. Making the null hypothesis that the morphological types of galaxies do not evolve, we compare the galaxies that form in low-density environments but later end up in high-density environments to the ones that form in low-density environments and remain there. The first group contains a larger proportion of elliptical and S0 galaxies than the second group. We assume that the initial galaxy formation process cannot distinguish a low-density environment that will always remain low-density from one that will eventually become high-density. These results therefore lead to a contradiction, and force us to discard the null hypothesis that morphological evolution does not occur. Our study suggests that $\sim75\%$ of the elliptical and S0 galaxies observed at present formed as such, while the remaining $\sim25\%$ formed as spiral galaxies and underwent morphological evolution, for all three cosmological models considered (the percentages might be smaller for elliptical than S0 galaxies). These numbers assume a morphological evolution process which converts one spiral galaxy into either an S0 or an elliptical galaxy. If the morphological evolution process involves mergers of spiral galaxies, these numbers would be closer to $85\%$ and $15\%$, respectively.
We conclude that most galaxies did not undergo morphological evolution, but a non-negligible fraction did.
author:
- 'Hugo Martel, Premana Premadi, and Richard Matzner'
title: MORPHOLOGICAL EVOLUTION OF GALAXIES
---
INTRODUCTION
============
Morphological Types
-------------------
Galaxies exist in several forms: ellipticals, lenticulars, spirals, and irregulars, usually referred to as [*morphological types*]{}. Elliptical galaxies are featureless, ellipsoidal stellar systems composed of old Population II stars, with no appreciable amount of cold gas or dust. In addition, many of them are also known to contain a disk. Elliptical galaxies are labeled as E0, E1, and so on, according to their ellipticity. Lenticular galaxies have a prominent, featureless disk that contains no appreciable amount of cold gas or dust, and no spiral arms. They are very similar to the most elongated, E7 elliptical galaxies. These galaxies are labeled as S0. Spiral galaxies are composed of a disk of Population I stars, cold gas, and dust, arranged in a pattern of spiral arms, and a central bulge of Population II stars which resembles a small elliptical galaxy. The spiral arms are the site of active star formation, and contain a large number of young O and B stars. Spiral galaxies have flat rotation curves that extend to radii well beyond the visible edge of the galaxy, implying that these galaxies are embedded in large dark matter halos. Spiral galaxies are labeled as Sa, Sb, Sc, and Sd galaxies according to their disk-to-bulge luminosity ratio (D/B), with the bulge dominating the luminosity for Sa galaxies, and the disk dominating for Sd galaxies. Galaxies that do not belong to any of these categories are classified as irregular galaxies. Some irregular galaxies result from collisions and mergers between galaxies, but the majority are small, gas-rich galaxies similar to the Magellanic Clouds. We label these galaxies as Im.
All the galaxy types described above can be combined into a single sequence, $\rm E0\rightarrow E1\rightarrow\ldots\rightarrow E7
\rightarrow S0\rightarrow Sa\rightarrow Sb
\rightarrow Sc\rightarrow Sd\rightarrow Im$, called the [*Hubble sequence*]{} [^1]. Near the start of the sequence, galaxies are mostly composed of old Population II stars, with no dust and no cold gas, and therefore no active star formation, and a small disk-to-bulge ratio. As we move along the sequence, the preponderance of Population II stars decreases in favor of younger, Population I stars. The amount of dust and cold gas increases, D/B increases, and star formation becomes important.
A successful theory of galaxy formation must be able to explain the existence of the Hubble sequence, the origin of each morphological type, their relative abundance, and their clustering properties. To achieve this goal, we must first identify and understand the physical processes that are involved in the galaxy formation process, as well as the processes that might subsequently alter the structure of galaxies after they are formed. The most important clue for understanding the galaxy formation process is the existence of a [*Morphology-Density Relationship*]{} relating the likelihood of any given galaxy to have a particular morphological type to the [*local*]{} density of the environment in which that galaxy is located.
The Morphology-Density Relation at Present
------------------------------------------
There is a significant difference between the galaxy populations of nearby low-density fields and those of the densest regions inside nearby clusters of galaxies. Though all morphological types are present both in clusters and in the field, field galaxies are predominantly spirals, while clusters of galaxies contain a much larger proportion of elliptical and S0 galaxies. Furthermore, population gradients are found inside clusters. Melnick & Sargent (1977) showed that the proportion of spiral galaxies increases as a function of the distance from the cluster center, with a corresponding decrease in the proportion of S0 and elliptical galaxies. Dressler (1980) argued that this morphology-radius relation is applicable only to regular, spherical clusters with a well-defined center. Most clusters are highly irregular, and often contain several high density concentrations, or lumps. The distribution of the various morphological types inside these lumps is similar to the one in the center of the regular, spherical clusters. Dressler (1980) concluded that the correct way to describe the distribution of morphological types is in terms of the local number density of galaxies, and not the distance from the cluster center. Using a sample of 55 rich clusters, he showed that the fraction of elliptical and S0 galaxies increases with increasing surface number density of galaxies, with a corresponding decrease in the fraction of spiral galaxies, over 3 orders of magnitude in surface number density. The lowest density regions in the sample are composed of 80% spirals, 10% S0’s, and 10% ellipticals, while the densest clumps are composed of 10% spirals, 40% S0’s, and 50% ellipticals. Subsequent studies (Bhavsar 1981; de Souza et al. 1982; Postman & Geller 1984) confirmed the relations derived by Dressler (1980), and extended them to the low-density field. All these results are summarized in Dressler (1984).
The morphology-density relation extends over 5 orders of magnitude in volume number density (Postman & Geller claim 6 orders of magnitude), and is a slowly varying, monotonic relation. The lowest-density regions are composed of 80–90% spirals, while the highest-density regions are composed of 80–90% ellipticals and S0’s. (Notice that a recent paper by Whitmore, Gilmore, & Jones \[1993\] challenges the existence of the morphology-density relation, and claims that the morphology-radius relation is actually the correct one.)
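A density-based type assignment like the one used later in this paper can be sketched as follows: interpolate Dressler-style type fractions in the log of the local density and draw a morphological type from them. The log-linear interpolation and the endpoint fractions (taken from the percentages quoted above) are illustrative assumptions, not the actual fit used in our simulations:

```python
import numpy as np

# endpoint fractions (E, S0, Sp) quoted above for the lowest- and highest-
# density regions in Dressler's sample; the log-linear interpolation over
# ~3 decades of density is an illustrative assumption
f_low = np.array([0.10, 0.10, 0.80])
f_high = np.array([0.50, 0.40, 0.10])
log_rho_low, log_rho_high = 0.0, 3.0
TYPES = ["E", "S0", "Sp"]

def type_fractions(log_rho):
    """Interpolated (E, S0, Sp) fractions at local density log10(rho)."""
    t = np.clip((log_rho - log_rho_low) / (log_rho_high - log_rho_low), 0.0, 1.0)
    return (1.0 - t) * f_low + t * f_high

def draw_type(log_rho, rng):
    """Monte-Carlo assignment of a morphological type at this density."""
    return rng.choice(TYPES, p=type_fractions(log_rho))

rng = np.random.default_rng(0)
print(type_fractions(0.0))   # [0.1 0.1 0.8] -> field: mostly spirals
print(type_fractions(3.0))   # [0.5 0.4 0.1] -> densest clumps
print(draw_type(1.5, rng))   # one of "E", "S0", "Sp"
```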
Notice that these various determinations of the morphology-density relation were all based on observations of relatively nearby galaxies. Therefore, this relation is valid only [*at present*]{}. More recent observations with the Hubble Space Telescope (HST) suggest that the morphology-density relation evolves with time, and this actually supports the results we present in this paper. A discussion of the HST results is presented in §9.
The Origin of the Morphological Types
-------------------------------------
Several galaxy formation models have been suggested to explain the origin of the Hubble sequence and the existence of the morphology-density relation. Dressler (1984) has grouped these various models into three classes, based on the relative importance of initial conditions and evolution processes in determining the final morphological type of galaxies. We shall follow the same classification here.
### Morphological Evolution
Models that belong to the first class all assume that galaxies form in similar environments, and therefore the existence of different morphological types does not result from different initial conditions, but instead from evolutionary processes happening after the galaxies have formed. Several models have been suggested to explain the abundance of S0 galaxies and deficiency of spiral galaxies in dense regions. These models all assume that S0 galaxies are spiral galaxies that have lost their gas and dust as a result of some evolutionary process taking place in the dense environments of cluster cores. The various possible physical mechanisms for gas stripping include direct collisions (Spitzer & Baade 1951), ram-pressure stripping (Gunn & Gott 1972), and gas evaporation by a hot intracluster gas (Cowie & Songaila 1977). Dressler (1980) pointed out a major problem with these models: the various physical mechanisms suggested are efficient only in the densest regions, inside cluster cores. Though the [*fraction*]{} of S0 galaxies is largest in these regions, the [*actual number*]{} of S0 galaxies in these regions is small. About 80% of S0 galaxies are located in intermediate-density regions. Spiral galaxies in intermediate-density regions are deficient in gas by a factor of 2-3 relative to field spirals, indicating that gas ablation is important in these regions as well (Giovanelli, Chincarini, & Haynes 1981; Bothun, Schommer, & Sullivan 1982; Kennicutt 1983). However, this effect is much too weak to explain the presence of S0 galaxies, which are gas deficient by a factor of 100 relative to field spirals.
### Initial Conditions Combined with Morphological Evolution
The second class of models comprises all models in which both initial conditions and morphological evolution play an important role in determining the morphological types of galaxies. Kent (1981) suggested that the morphology-density relation originates from the “fading” of disks in high density regions. In this model, initial conditions are assumed to be responsible for determining the initial morphological type of disk galaxies, such that disk galaxies with large D/B become predominantly spirals, while disk galaxies with small D/B become predominantly S0’s. The model then assumes that the disks of spiral and S0 galaxies are fainter in high density regions than in low density regions (this could result from the dissipation of the disk by tidal interaction, or, if the disks are still in the process of forming, a high density environment might disrupt this process). The fading of disks causes some spiral galaxies to become too faint to be observable, and others to be identified as S0 galaxies. Furthermore, the fading of the disk of S0 galaxies causes some of these galaxies to be identified as ellipticals. With an appropriate choice of parameters, this model can successfully reproduce all the relations given in Dressler (1980). Larson, Tinsley, & Caldwell (1980) have proposed a similar model, based on the time scale for gas exhaustion via stellar evolution in disks. In their model, the gas exhausted by star formation is constantly replaced by gas infalling from a gaseous envelope surrounding the galaxy. In high-density regions, tidal encounters would disrupt this envelope, resulting in a progressive fading of the disk as stellar evolution proceeds. The various gas-stripping processes mentioned in §1.3.1 could be responsible for transforming spiral galaxies into S0’s inside cluster cores (even though they cannot account for the existence of field S0 galaxies).
Byrd & Valtonen (1990) have argued that the interaction of spiral galaxies with the tidal field of the cluster is a more efficient process than ram pressure stripping in depleting these galaxies of their interstellar gas, and eventually turning them into S0 galaxies (but not ellipticals). Their model is supported by the abundance of barred spiral galaxies in the core of the Coma cluster, since the formation of a bar in a normal spiral galaxy can also result from strong tidal interaction.
If the galactic disks are “faded” in high density regions, as these models assume, then the luminosity function inside dense clusters should differ significantly from the one in low density clusters and in the field. However, observations show that the luminosity functions in low- and high-density regions are essentially identical (Dressler 1984, and references therein), though Biviano et al. (1995) recently suggested that this might not be the case for the Coma cluster. In order to maintain the luminosity function unchanged in high-density regions, any “fading” of the disk must then be accompanied by a corresponding brightening of the bulge. Mergers could be responsible for building up large galactic bulges in high-density regions. It has been suggested that elliptical galaxies result from the merging of spiral galaxies (Toomre & Toomre 1972; Toomre 1977; White 1978, 1979; Fall 1979). Ross (1981) has suggested that galaxies form mainly as stellar disks, and that galactic bulges are formed by merging, for all galaxy types. This could explain the fact that the angular momenta of disk and bulge in disk galaxies are almost perfectly aligned (Gerhard 1981). Numerical simulations of galaxy mergers by Mihos & Hernquist (1994a, 1994b) support this model by showing that mergers trigger infall of material toward the center of the system. This model, if correct, would explain the abundance of [*both*]{} S0 and elliptical galaxies in high-density regions. Numerical simulations (Efstathiou & Jones 1979; Aarseth & Fall 1980) have shown that mergers of galaxies on highly eccentric orbits result in slow-rotating systems, consistent with measurements of the spin parameter for elliptical galaxies.
Merging events, however, are not expected to occur inside rich clusters, where most ellipticals are found. The velocity dispersion in these regions is quite high, resulting in a significant reduction of the gravitational cross sections of galaxies. More likely, mergers occur inside small groups of galaxies where the velocity dispersion is smaller, and later these groups assemble into clusters (see, e.g., Geller & Beers 1982). Numerical simulations (Aarseth & Fall 1980; Negroponte & White 1983; Noguchi 1988; Barnes 1989; Barnes & Hernquist 1991, 1992) show that galaxy mergers occur naturally inside small groups, and that such mergers result in the formation of spheroidal galaxies with essentially no disk (Barnes & Hernquist 1992). Baugh, Cole, & Frenk (1996) have used a semi-analytical, Monte Carlo model to describe galaxy mergers in a standard Cold Dark Matter (CDM) universe. Their model produces a distribution of D/B which is consistent with observations, when the values of D/B are used to ascribe morphological types. Moore et al. (1996) have suggested that morphological evolution of spirals actually occurs inside dense clusters, in spite of the large velocity dispersion. In their model, called “galaxy harassment,” spiral galaxies are disrupted by the cumulative effect of several high-velocity close encounters with other galaxies.
The various studies of mergers described above consider the merging of two or more galaxies of comparable size. A completely separate problem is the merging of a disk galaxy with a satellite galaxy of much smaller mass. These merging events can modify the structure of the disk, but the effect is too small to result in actual morphological evolution (that is, spiral galaxies will remain spiral after “swallowing” a satellite). Numerical simulations (Quinn & Goodman 1986; Quinn, Hernquist, & Fullagar 1993; Tóth & Ostriker 1992) have shown that a merger between a disk galaxy and a satellite having a mass equal to 1/10 of the mass of the disk results in an important thickening of the disk, which is ruled out by observations. However, these simulations ignored the possibility that the satellite might dissolve significantly before the actual merging takes place. More recent simulations (Carlberg 1995; Huang 1995) have suggested that the main effect of these mergers is a tilt of the disk, accompanied by a transient warp, with no substantial thickening.
There are several problems with models involving mergers. Elliptical and spiral galaxies have different globular cluster luminosity functions (Harris 1981). Since merging events are unlikely to affect the structure of globular clusters, this result argues against elliptical galaxies being formed from the merging of spirals, [*if the number of globular clusters remains constant during the merging process*]{}. However, Ashman & Zepf (1992) have argued that the merging of two galaxies results in the formation of additional globular clusters. Also, dwarf ellipticals presumably [*do not*]{} result from mergers, so the continuity of properties from dwarf ellipticals to regular ellipticals (Sandage 1983) suggests that large ellipticals do not result from mergers either. Merging events would most likely ruin tight correlations existing among various parameters for elliptical galaxies, such as color and luminosity (Bower, Lucey, & Ellis 1992) and effective radius, central velocity dispersion, and mean surface brightness (the “fundamental plane,” Djorgovski & Davis 1987; Jørgensen, Franx, & Kjærgaard 1996). Another possible problem is that stars are much more strongly concentrated in elliptical galaxies than in spirals (Combes et al. 1995). However, recent SPH simulations of galaxy mergers (Steinmetz 1995; Barnes & Hernquist 1996 and references therein) show that the merger of two spiral galaxies often results in the formation of much denser systems, sometimes too dense to be elliptical galaxies.
### Initial Conditions
The third class of models comprises models in which the initial conditions are primarily responsible for determining the morphological type of galaxies, with subsequent morphological evolution playing little or no role. Numerous models have been proposed (see Dressler 1984, and references therein), in which the morphological type is determined either by the local density, or by the local amount of angular momentum. Such models could successfully explain the observed morphology-density relation only if galaxies have formed near their present location. This could be the case in cosmological models which have more power at large scales than at small scales. In such models, clusters would form first, and then fragment into individual galaxies, in which case galaxies could indeed be located at present near the location where they were formed. The alternative is that galaxies, at the epoch of their formation, somehow “know” the kind of environment in which they will be located at the present. This can be achieved if there is some kind of coupling between the perturbations responsible for forming the galaxies and the ones responsible for forming the clusters in which these galaxies end up.
The problem with these scenarios is that they all invoke cosmological models that are usually considered “marginal.” These models constitute interesting alternatives to the more standard CDM model with Gaussian initial conditions, but there is at present no strong, conclusive evidence favoring such models over the standard ones. To our knowledge, the most serious alternatives, at present, to the standard CDM models are the models with Cold + Hot Dark Matter (CHDM), models with a nonzero cosmological constant, and models with a tilted power spectrum. None of these models feature coupling between long- and short-wavelength modes in their initial conditions. Hence, following Dressler (1984), we will regard these types of galaxy formation models as a “last resort.”
Past History of Galaxies
------------------------
In order to identify the correct galaxy formation model, we must reconstruct the past history of the presently observed galaxies, and in particular we need to know the kind of environment in which galaxies were located at various epochs. We are assuming that galaxies at their formation epoch have no knowledge of the future environment in which they will end up. We are therefore rejecting all “class three” models, unless galaxies form near their present location. Hence, if we find that the elliptical and S0 galaxies located in the dense cluster cores were always located in high-density environments, at all epochs back to the galaxy formation epoch (redshifts $z$ of order 3–5), it would argue in favor of the initial conditions being responsible for determining the morphological type (class three models), and against morphological evolution. If, to the contrary, many of these elliptical and S0 galaxies are found at early times in low-density environments, it would argue in favor of morphological evolution (class one or two models). The goal of this paper is to settle this question.
THE MODELS
==========
We consider three different cosmological models: an Einstein-de Sitter model with $\Omega_0=1$, $\lambda_0=0$, an open model with $\Omega_0=0.2$, $\lambda_0=0$, and a low-density flat model with $\Omega_0=0.2$, $\lambda_0=0.8$, where $\Omega_0$ and $\lambda_0$ are the present values of the density parameter and cosmological constant, respectively. We set the present value $H_0$ of the Hubble constant equal to $50\,\rm km/s/Mpc$ to avoid conflict between the models and the measurements of globular cluster ages. With these parameters, the age of the universe $t_0$ is 13.0 Gyr, 16.6 Gyr, and 21.0 Gyr for the Einstein-de Sitter, open, and cosmological constant models, respectively.
We assume that the initial fluctuations originate from a Gaussian random process. The initial density contrast can then be expressed as a superposition of plane waves with random phases. Our simulations assume periodic boundary conditions. This restricts the range of possible values for the wavenumber ${\bf k}$ to multiples of the fundamental wavenumber $k_0\equiv2\pi/L_{\rm box}$, where $L_{\rm box}$ is the size of the computational volume. The density contrast can then be expressed as $$\delta({\bf x})=\sum_{\bf k}\delta_{\bf k}
e^{-i{\bf k\cdot x}}\,,$$
where $\delta_{\bf k}$ is the amplitude of the ${\bf k}$-mode, and the sum is over all values of ${\bf k}=(l,m,n)k_0$, with $l$, $m$, $n$ integers. The requirement that $\delta({\bf x})$ is real implies $\delta_{\bf k}=\delta_{-\bf k}^*$. The phases of the amplitudes are random, and the norms $|\delta_{\bf k}|$ are related to the power spectrum $P(k)$ by $$P(k)={V_{\rm box}\over(2\pi)^3}|\delta_{\bf k}|^2\,,$$
where $V_{\rm box}=L_{\rm box}^3$ is the computational volume. The power spectrum we use can be expressed as $$P(k)=AkT(k)^2\,,$$
where $A$ is the amplitude and has dimension of $\rm(length)^4$, and $T(k)$ is the transfer function. This equation describes an “untilted” power spectrum which reduces to the Harrison-Zel’dovich power spectrum $P(k)\propto k$ at large scale, as $T(k)\rightarrow1$ for $k\rightarrow0$. The value of the amplitude is fixed by the value of the cosmic microwave background temperature anisotropy, as measured by COBE (Smoot et al. 1992), $$A={1\over(2\pi)^3}{6\pi^2\over5}Q_2^2R_H^4\,,$$
where $Q_2$ is the temperature quadrupole anisotropy and $R_H$ is the radius of the horizon. For all simulations, we used the value $A=8.16\times10^5h^{-4}\rm Mpc^4=1.3056\times10^7\rm Mpc^4$ given by Bunn, Scott, & White (1995) for standard CDM models. We also use the transfer function given by Bardeen et al. (1986), $$T(k)={\cal L}(z){\ln(1+2.34q)\over2.34q}
\big[1+3.89q+(16.1q)^2+(5.46q)^3+(6.71q)^4\big]^{-1/4}\,,$$
where $$q={k\theta^{1/2}\over(\Omega_Xh^2{\rm Mpc^{-1}})}\,,$$
where $\Omega_X$ is the density parameter of the dark matter (non-baryonic) component, $\theta=1$ for models with 3 flavors of relativistic neutrinos, and ${\cal L}(z)$ is the linear growth factor between the initial state and the present, given by $${\cal L}(z)={\delta_+(0)\over\delta_+(z)}\,,$$
where $\delta_+$ is the linear growing mode of the perturbation. Notice that several different notations are commonly used in the literature. Several authors do not include the factor $(2\pi)^3$ in equations (2) and (4), and instead include a factor of $(2\pi)^{3/2}$ in equation (1). Other authors use a redshift-independent transfer function, without the ${\cal L}(z)$ factor, and include a factor of ${\cal L}^2(z)$ in equation (3).
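For concreteness, the spectrum of equations (3)–(6) is straightforward to evaluate numerically. The Python sketch below is purely illustrative (it is not taken from the simulation code); it uses the parameter values quoted above and evaluates the spectrum at the present epoch, where ${\cal L}=1$.

```python
import math

# Sketch of the power spectrum of eqs. (3)-(6), evaluated at z = 0
# (growth factor L = 1); parameter values as adopted in the text.
A = 1.3056e7          # amplitude, Mpc^4 (COBE normalization, h = 0.5)
OMEGA_X = 0.9375      # dark-matter density parameter
H = 0.5               # Hubble constant in units of 100 km/s/Mpc
THETA = 1.0           # 3 flavors of relativistic neutrinos

def transfer(k):
    """BBKS transfer function T(k) of eq. (5) with L = 1; k in Mpc^-1."""
    q = k * math.sqrt(THETA) / (OMEGA_X * H**2)   # eq. (6)
    return (math.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q)**2
               + (5.46 * q)**3 + (6.71 * q)**4) ** -0.25)

def power(k):
    """Untilted CDM spectrum P(k) = A k T(k)^2, eq. (3)."""
    return A * k * transfer(k)**2
```

Multiplying $T(k)$ by the growth factor ${\cal L}(z)$ of equation (7) recovers the redshift-dependent transfer function of equation (5).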
In all models, we assume that the baryon content of the universe has a density parameter $\Omega_B=0.0625$. For the Einstein-de Sitter model, this gives a density parameter $\Omega_X=\Omega_0-\Omega_B=0.9375$ for the dark matter. For the other two models considered, $\Omega_0=0.2$, and therefore $\Omega_X$ should be equal to 0.1375, resulting in a shift of the power spectrum through the relation between $q$ and $k$ given by equation (6). Instead, we decided to use the same relationship between $k$ and $q$ for all three models by setting $\Omega_X=0.9375$ in equation (6), thus introducing an inconsistency. Our motivation for doing this is the following: Our goal is not to find which model fits the observations of the present universe better. Instead, we want to select cosmological models that will bracket the behavior of the large-scale structure formation process. Using for our initial conditions a power spectrum that differs among the various models only through the model-dependent linear growth factor ${\cal L}$ allows us to investigate directly the effects of the growth rate and the age of the universe on the evolution of galaxy clustering. In the same spirit, we are considering open models and models with a cosmological constant that are somewhat too extreme to agree with observations, which suggest that $\Omega_0$ is more likely to be somewhere in the range 0.25–0.5 (Ostriker & Steinhardt 1995; Martel, Shapiro, & Weinberg 1998, and references therein). Models with a larger $\Omega_0$ and/or a smaller $\lambda_0$ would reproduce observations better, but would resemble the Einstein-de Sitter model more than the ones we are considering, thus providing less insight into the effect of the cosmological parameters on the formation of clusters. The reader should therefore keep in mind that the power spectrum we are using for the open and cosmological constant models is not consistent with a standard CDM model, and is chosen only for practical considerations.
The growing modes $\delta_+(z)$ appearing in equation (7) are obtained by solving the linear perturbation equation in the zero-pressure limit. For the Einstein-de Sitter model ($\Omega_0=1$, $\lambda_0=0$), the growing mode is $$\delta_+(z)=(1+z)^{-1}\,.$$
For open models ($\Omega_0<1$, $\lambda_0=0$), the growing mode is $$\delta_+(z)=1+{3\over x}+3\biggl({1+x\over x^3}\biggr)^{1/2}
\ln\big[(1+x)^{1/2}-x^{1/2}\big]$$
(Peebles 1980), where $$x=(\Omega_0^{-1}-1)(1+z)^{-1}\,.$$
Finally, for flat models with a cosmological constant ($\Omega_0+\lambda_0=1$), the growing mode is given by $$\delta_+(z)=\biggl({1\over y}+1\biggr)^{1/2}
\int_0^y{dw\over w^{1/6}(1+w)^{3/2}}$$
(Martel 1991b), where $$y={\lambda_0\over\Omega_0}(1+z)^{-3}\,.$$
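The three growing modes of equations (8)–(12) can be compared directly; the sketch below is illustrative (the midpoint quadrature and its resolution are our own choices, not the method used in the simulations) and adopts the parameter values of this paper.

```python
import math

def growth_eds(z):
    """Einstein-de Sitter growing mode, eq. (8): delta_+ = 1/(1+z)."""
    return 1.0 / (1.0 + z)

def growth_open(z, omega0=0.2):
    """Open-model growing mode, eqs. (9)-(10) (Peebles 1980)."""
    x = (1.0 / omega0 - 1.0) / (1.0 + z)
    return (1.0 + 3.0 / x
            + 3.0 * math.sqrt((1.0 + x) / x**3)
            * math.log(math.sqrt(1.0 + x) - math.sqrt(x)))

def growth_flat_lambda(z, omega0=0.2, lambda0=0.8, n=20000):
    """Flat Lambda-model growing mode, eqs. (11)-(12) (Martel 1991b).
    The midpoint rule handles the integrable w^(-1/6) singularity at 0."""
    y = (lambda0 / omega0) / (1.0 + z)**3
    dw = y / n
    integral = sum(dw / ((i + 0.5) * dw)**(1.0 / 6.0)
                   / (1.0 + (i + 0.5) * dw)**1.5 for i in range(n))
    return math.sqrt(1.0 / y + 1.0) * integral
```

The ratio $\delta_+(0)/\delta_+(z)$ then gives the linear growth factor ${\cal L}(z)$ of equation (7); between $z=3$ and $z=0$ it is largest for the Einstein-de Sitter model and smallest for the open model, with the cosmological constant model in between.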
THE CALCULATIONS
================
The $\bf P^3M$ Algorithm
------------------------
All N-body simulations presented in this paper are done using the [*Particle-Particle/Particle-Mesh*]{} (or P$^3$M) algorithm (Hockney & Eastwood 1981; Efstathiou & Eastwood 1981; Efstathiou et al. 1985, hereafter EDFW). The calculations evolve a system of gravitationally interacting particles in a cubic volume with triply periodic boundary conditions, comoving with the Hubble flow. The forces on particles are computed by solving the Poisson equation on a $128\times128\times128$ grid using a Fast Fourier Transform method. The resulting force field represents the Newtonian interaction between particles down to a separation of a few mesh spacings. At shorter distances the computed force is significantly smaller than the physical force. To increase the dynamical range of the code, the force at short distance is corrected by direct summation over pairs of particles separated by less than some cutoff distance $r_e$. With the addition of this so-called [*short-range correction*]{}, the code accurately reproduces the Newtonian interaction down to the softening length $\eta$. In all calculations, $\eta$ and $r_e$ were set equal to 0.3 and 2.7 mesh spacings, respectively. With these particular values, the code has a dynamical range of three orders of magnitude in length (EDFW). The particular version of P$^3$M used in this paper employs the so-called tilde coordinates (Shandarin 1980; Martel & Shapiro 1997). The system is evolved forward in time using a second-order Runge-Kutta time-integration scheme with a variable time step. We define a system of units by setting the mass $M_{\rm sys}$ of the system, the comoving side $L_{\rm box}$ of the computational volume, and the gravitational constant $G$ equal to unity.
In all cases, the comoving length of the computational volume is $L_{\rm box}=128{\rm Mpc}$ (present length units). The total mass of the system is $M_{\rm sys}=3H_0^2\Omega_0L_{\rm box}^3/8\pi G=1.455\times10^{17}
\Omega_0{\rm M}_\odot$. We use $64^3=262,144$ equal mass particles. The mass per particle is therefore $M_{\rm part}=M_{\rm sys}/64^3=5.551\times10^{11}{\rm M}_\odot$ for the Einstein-de Sitter model and $1.110\times10^{11}{\rm M}_\odot$ for the other two models.
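As a quick consistency check of these numbers, the mass bookkeeping can be reproduced in a few lines (a sketch, assuming the standard value $G=4.301\times10^{-9}\,{\rm Mpc}\,({\rm km/s})^2/{\rm M}_\odot$ in these mixed units):

```python
import math

# Check of the mass bookkeeping quoted in the text (pure stdlib).
G = 4.301e-9      # gravitational constant, Mpc (km/s)^2 / M_sun
H0 = 50.0         # Hubble constant, km/s/Mpc
L_BOX = 128.0     # comoving box size, Mpc
N_PART = 64**3    # number of equal-mass particles

def system_mass(omega0):
    """M_sys = 3 H0^2 Omega_0 L_box^3 / (8 pi G), in solar masses."""
    return 3.0 * H0**2 * omega0 * L_BOX**3 / (8.0 * math.pi * G)

def particle_mass(omega0):
    """Mass per particle for 64^3 equal-mass particles."""
    return system_mass(omega0) / N_PART
```

Evaluating at $\Omega_0=1$ and $\Omega_0=0.2$ recovers the values $5.551\times10^{11}\,{\rm M}_\odot$ and $1.110\times10^{11}\,{\rm M}_\odot$ quoted above.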
Initial Conditions
------------------
The method we use to set up initial conditions is fairly standard. We lay down $64^3=262,144$ particles on a uniform cubic lattice, and displace them from their initial position in order to represent the initial density fluctuations. We then compute the initial peculiar velocities using the linear perturbation solutions for a pure growing mode, which are given by equations (8)–(12).
The particle displacements are given by $$\Delta{\bf x}=-2\sum_{\bf k}
{\delta_{\bf k}{\bf k}\over2\pi k^2}\sin(2\pi{\bf k}\cdot{\bf x}
-\phi_{\bf k})\,,$$
where $\bf x$ is the unperturbed position, $\phi_{\bf k}$ is a random phase between 0 and $2\pi$, and the sum extends over one half of the $\bf k$-volume (the sine function and the factor of 2 come from grouping terms in eq. \[1\] by pairs with equal and opposite wavenumbers). In computational units, $k=1$ is the fundamental mode, whose wavelength is equal to the size $L_{\rm box}$ of the computational volume, and all modes up to the Nyquist frequency $k=32$ are included. [^2]
To compute the initial peculiar velocity field, we assume that the initial time of the calculation is early enough for the perturbation to be in the linear regime, but late enough so that the linear decaying mode can be neglected. The initial peculiar velocities of the particles are then related to their displacements by $${\bf v}_i={\dot\delta_+(z_i)\over\delta_+(z_i)}\Delta{\bf x}\,,$$
where $\Delta{\bf x}$ is computed using equation (13), $\delta_+$ is the linear growing mode of the perturbation, defined by equations (8)–(12), and $z_i$ is the initial redshift of the simulations.
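The structure of equations (13) and (14) can be illustrated with a one-dimensional analogue. In the sketch below the mode amplitudes $\delta_k\propto k^{-1}$ are hypothetical, chosen only for illustration, and are not drawn from the CDM spectrum of §2.

```python
import math, random

def displacements_1d(n_part=64, n_modes=32, seed=1):
    """1D analogue of eq. (13): displace lattice points with a sum of
    random-phase modes.  The amplitudes delta_k ~ k^{-1} are purely
    illustrative."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_modes)]
    amps = [1.0e-3 / k for k in range(1, n_modes + 1)]
    dx = []
    for i in range(n_part):
        x = i / n_part          # unperturbed lattice position, box length 1
        dx.append(-2.0 * sum(
            a * math.sin(2.0 * math.pi * k * x - p) / (2.0 * math.pi * k)
            for k, (a, p) in enumerate(zip(amps, phases), start=1)))
    return dx

def initial_velocities(dx, dln_growth_dt):
    """Eq. (14): v_i = (dot delta_+ / delta_+) * Delta x at z = z_i."""
    return [dln_growth_dt * d for d in dx]
```

Because each mode averages to zero over the lattice, the mean displacement vanishes, so the perturbation adds no net momentum to the box.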
The Simulations
---------------
We ran 5 simulations for each of the three cosmological models, for a total of 15 simulations. For each model, the 5 simulations differ only in the ensemble of random phases used in equation (13) to generate the initial particle displacements. To identify these various simulations, we shall use the following nomenclature: The simulations for the Einstein-de Sitter model ($\Omega_0=1$, $\lambda_0=0$), the open model ($\Omega_0=0.2$, $\lambda_0=0$), and cosmological constant model ($\Omega_0=0.2$, $\lambda_0=0.8$) will be called EdSX, OX, and LX, respectively, where ${\rm X}=1$, 2, 3, 4, 5 identifies the various runs for each model. All simulations start at an initial redshift $z_i=24$, and end at $z=0$.
THE PRESENT GALAXY DISTRIBUTIONS AND MORPHOLOGICAL TYPES
========================================================
The Galaxy Locations
--------------------
The P$^3$M algorithm simulates the growth of density fluctuations resulting in the formation of large-scale structure in an expanding universe. The only physical interaction present in these simulations is gravity. Hence, all the hydrodynamical and radiative processes which certainly play an important role in the galaxy formation process are ignored. Various authors have used P$^3$M codes to simulate galaxy formation, either by using a static (Davis et al. 1985) or dynamic (Martel 1991a) criterion for identifying “luminous” particles, by making particles “stick” to each other in order to simulate dissipation of kinetic energy by hydrodynamical processes (Carlberg 1988), or by combining the P$^3$M algorithm with a hydrodynamical algorithm such as Smoothed Particle Hydrodynamics (Evrard 1988). In our simulations, we use a much simpler approach. We consider the large-scale structure at present ($z=0$) resulting from the P$^3$M simulations, and design an empirical Monte-Carlo method for locating galaxies in the computational volume, based on the constraints that (1) galaxies should be predominantly located in the densest regions, and (2) the resulting distribution of galaxies should resemble the observed distribution on the sky.
One possibility consists of using a Monte-Carlo rejection method. We could generate locations at random inside the computational volume, and decide whether or not to put a galaxy at these locations, based on the local density of matter. The likelihood of locating a galaxy at a particular location should not be a linear function of the local density, however. Galaxy formation is believed to be biased toward forming galaxies in high density regions (Kaiser 1984). So in order to use this method, we would need to know the precise relationship between the matter density and the likelihood of forming a galaxy. The best currently available theories for biased galaxy formation could provide such a relationship, but using this relationship for locating galaxies would be overkill. Biased galaxy formation theories could only provide relationships that involve the [*actual*]{} matter distribution in the universe. We are dealing instead with a [*simulated*]{} matter distribution, which is only an approximation of the actual matter distribution. In particular, CDM models normalized to COBE are known to produce too much structure on small scales.
Considering these various difficulties, we chose a much simpler method for locating galaxies. We divide the present computational volume into $128^3$ cubic cells of size $1{\rm Mpc}^3$, and compute the matter density $\rho$ at the center of each cell, using the same mass assignment as in the P$^3$M code. We then choose a particular density threshold $\rho_{\rm t}$. We locate $N$ galaxies in each cell, where $N$ is given by $$N={\rm int}\biggl({\rho\over\rho_{\rm t}}\biggr)\,.$$
The actual location of each galaxy is chosen to be the center of the cell, plus a random offset of order the cell size. This reduces any spurious effect introduced by the use of a grid. We then experiment with various values of the density threshold $\rho_{\rm t}$ until the total number of galaxies comes out to be of order 40000. This gives a number density of $\sim0.02\,{\rm galaxies}/{\rm Mpc}^3$.
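A minimal sketch of this placement scheme follows (our own illustration, not the code actually used; a dictionary of cell densities stands in for the $128^3$ grid):

```python
import random

def place_galaxies(cell_density, rho_t, cell=1.0, seed=0):
    """Put N = int(rho / rho_t) galaxies in each cell (eq. 15), each at
    the cell centre plus a random sub-cell offset."""
    rng = random.Random(seed)
    galaxies = []
    for (ix, iy, iz), rho in cell_density.items():
        for _ in range(int(rho / rho_t)):
            galaxies.append(tuple(
                (c + 0.5) * cell + rng.uniform(-0.5, 0.5) * cell
                for c in (ix, iy, iz)))
    return galaxies
```

In practice one would loop over trial values of `rho_t` until `len(galaxies)` is of order 40000, as described above; cells below the threshold receive no galaxy at all, which is the source of the strong biasing discussed in the text.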
In Figure 1, we take one simulation for each of the three models, and plot the location at $z=0$ of the P$^3$M particles (left panels) and the galaxies (right panels) inside a slice of size $128\times128\times8\,{\rm Mpc}$. The Einstein-de Sitter model has too much power on small scales, resulting in the formation of very dense clumps. The cosmological constant model is slightly less evolved, and shows a large number of average-size clusters that have not yet merged into larger ones as in the Einstein-de Sitter model. In this model, the small value $\Omega_0=0.2$ of the density parameter results in a small growth rate of the density fluctuations, but this effect is partly compensated by the presence of the cosmological constant $\lambda_0$, which increases the age of the universe and thus allows the fluctuations to grow for a longer period of time. The open model O1 forms significantly less structure than the other two.
The galaxies are mostly concentrated in the highest density regions. The use of a density threshold in equation (15) approximates quite well the effect of biased galaxy formation by not locating galaxies in low density regions. The galaxy distribution for the open model resembles the observed galaxy distribution. The galaxies in the two other models are too strongly clustered. To quantify this point, we compute the 2-point correlation function $\xi(r)$ from the simulated galaxy distributions, for the Einstein-de Sitter and open models (we omitted the cosmological constant model for clarity). The results are shown in Figure 2. The correlation function for the open model (triangles) matches the observed power law $\xi(r)=(r/5.4h^{-1}{\rm Mpc})^{-1.77}$ (Peebles 1993) (dotted line), for separations $4\,{\rm Mpc}<r<40\,{\rm Mpc}$. The correlation function for the Einstein-de Sitter model (filled circles) is too large by a factor of 3 over the same range. This is consistent with results obtained by various authors who have used more sophisticated methods for generating galaxy distributions (see Ostriker 1993, and references therein). Hence, the overclustering of galaxies in our Einstein-de Sitter model should not be regarded as a flaw in our empirical method for locating galaxies, but rather as a weakness of the CDM model normalized to COBE. We attribute the excess correlation at separations $r<4\,\rm Mpc$ in the open model to this same tendency to produce too much structure on small scales.
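For illustration, a single-bin pair-count estimator of $\xi(r)$ might look as follows. This is a rough $O(N^2)$ sketch with periodic minimum-image distances; the natural estimator ${\rm DD}/{\rm RR}-1$ used here is one of several standard choices and is not necessarily the estimator used for Figure 2.

```python
import math, random

def xi_estimate(points, box, r_lo, r_hi, seed=2):
    """Natural estimator xi = DD/RR - 1 in one separation bin, using
    periodic minimum-image distances.  A rough O(N^2) sketch."""
    rng = random.Random(seed)
    rand = [tuple(rng.uniform(0.0, box) for _ in range(3))
            for _ in range(len(points))]

    def pair_count(pts):
        c = 0
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d2 = 0.0
                for a, b in zip(pts[i], pts[j]):
                    dx = abs(a - b)
                    dx = min(dx, box - dx)   # minimum image
                    d2 += dx * dx
                if r_lo <= math.sqrt(d2) < r_hi:
                    c += 1
        return c

    dd, rr = pair_count(points), pair_count(rand)
    return dd / rr - 1.0 if rr else float('nan')
```

A strongly clustered point set yields $\xi>0$ on small scales, while a Poisson distribution yields $\xi\approx0$ in every bin.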
Since there is too much galaxy clustering at present in our Einstein-de Sitter model, we can expect that earlier time slices will resemble observations better than the present ones. Using linear perturbation theory, we can approximate the evolution of the correlation function as $\xi[r/(1+z),z]\approx(1+z)^{-2}\xi(r,z=0)$. Since the correlation function for the Einstein-de Sitter model is too large by a factor of 3 at $z=0$, this relation predicts that the $z=\sqrt3-1\approx0.73$ time slice should match observations better than the present time slice, which is indeed the case, as shown by the open circles in Figure 2.
One drawback of our empirical scheme for biased galaxy formation is that it works “too well,” by totally preventing galaxy formation inside voids. In the real universe, even the deepest voids like Boötes contain some galaxies, and the existence of these galaxies is significant since it essentially rules out some cosmological models like Hot Dark Matter. This limitation of our biasing algorithm is of little consequence for the argument we present in §6, however, simply because the actual number of galaxies located in low density regions is quite small.
The Morphological Types
-----------------------
As we mentioned in §1.1, there is a tight relation between the distribution of morphological types and the number density of galaxies (Dressler 1984, and references therein). This morphology-density relation is reproduced by the solid curves in Figure 3. By combining this relation with a Monte-Carlo method, we can ascribe a morphological type to each galaxy, as follows. We first compute the volume number density of galaxies $\rho_{\rm gal}$ around each galaxy, using $$\rho_{\rm gal}={n+1\over4\pi d_n^3/3}\,,$$
where $n$ is a positive integer, and $d_n$ is the distance to the $n^{\rm th}$ nearest neighboring galaxy. In all cases, we choose $n=12$. In the case of a spatially uniform distribution of galaxies with a density $\rho_{\rm uniform}$, this formula gives the correct answer $\rho_{\rm gal}\approx\rho_{\rm uniform}$ for a galaxy located inside the distribution, and $\rho_{\rm gal}\approx\rho_{\rm uniform}/2$ for a galaxy located at the edge of the distribution, since that galaxy has neighbors on one side only. Notice that Dressler (1980) used essentially the same technique to compute the surface number density of galaxies around each galaxy in his sample.
Once the densities are computed, we compute the fractions $f_{\rm Sp}(\rho_{\rm gal})$, $f_{\rm S0}(\rho_{\rm gal})$, and $f_{\rm Ell}(\rho_{\rm gal})$ from the morphology-density relation. We then ascribe a morphological type to each galaxy by generating a random number $x$ between 0 and 1 (with uniform probability). The galaxy is a spiral if $x<f_{\rm Sp}$, an S0 if $f_{\rm Sp}<x<f_{\rm Sp}+f_{\rm S0}$, and an elliptical if $x>f_{\rm Sp}+f_{\rm S0}$. Table 1 shows the percentages of galaxies of each type for each run. Notice that the fluctuations among different runs within each model are very small. The fluctuations among different models are larger, and reflect the differences in the amount of clustering at $z=0$. As we see in Figure 1 (left panels), there is more clustering in the Einstein-de Sitter model than in the cosmological constant model, and significantly more in these two models than in the open model. This results in a slight excess of elliptical and S0 galaxies in the Einstein-de Sitter model compared to the cosmological constant model, and a larger excess in these two models compared to the open one.
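The two steps above can be sketched in a few lines (an illustrative brute-force implementation; the fractions $f_{\rm Sp}$ and $f_{\rm S0}$ are inputs read off the morphology-density relation):

```python
import math

def local_density(points, i, n=12):
    """Eq. (16): rho_gal = (n+1) / (4 pi d_n^3 / 3), where d_n is the
    distance to the n-th nearest neighbour of galaxy i."""
    dists = sorted(math.dist(points[i], p)
                   for j, p in enumerate(points) if j != i)
    d_n = dists[n - 1]
    return (n + 1) / (4.0 * math.pi * d_n**3 / 3.0)

def assign_type(f_sp, f_s0, rng):
    """Monte-Carlo draw against the morphology-density fractions."""
    x = rng.random()
    if x < f_sp:
        return 'Sp'
    if x < f_sp + f_s0:
        return 'S0'
    return 'E'
```

For a galaxy deep inside a uniform unit-spacing lattice, `local_density` returns a value close to the true unit number density, as the text describes.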
Once the morphological types have been assigned, we can compute the resulting morphology-density relation, and compare it to the one we were attempting to reproduce. Figure 3 shows the results for the EdS runs. The error bars indicate the range of values amongst the 5 different runs for that model, with the symbols indicating the results obtained by combining all runs together (this is not the same as the average among the runs, since the various runs contain different numbers of galaxies). The results reproduce the desired relations quite well, except at the largest density, where small number statistics lead to large fluctuations.
TRACING GALAXIES BACK IN TIME
=============================
The P$^3$M algorithm provides us with the distributions of particles at various intermediate redshifts between the initial redshift $z=24$ and final redshift $z=0$. By combining these particle distributions with our simulated galaxy distributions at present, we can trace galaxies back in time and reconstruct their trajectories. To do this, we simply find the nearest particle $p_i^{(1)}$ to each galaxy $g_i$ at present. Then we “tie” the galaxy $g_i$ to that nearest particle. The location of the galaxy $g_i$ at any redshift $z$ is then given by: $${\bf r}[g_i,z]={\bf r}\Big[p_i^{(1)},z\Big]+{\bf r}'\,,$$
where ${\bf r}'$ is a small random offset, which we introduce to avoid the unfortunate situation of having two galaxies located on top of each other because they happen to be tied to the same particle. This allows us to construct galaxy distributions at any redshift, and, more importantly, to follow the history of each galaxy as cluster formation and merging is taking place. Of course, if we trace galaxies back to redshifts larger than 3–5, we then end up, strictly speaking, with distributions of [*protogalaxies*]{}. In Figure 4, we plot the galaxies located inside a slice of comoving thickness $32{\rm Mpc}$ (that is, one quarter of the computational volume) at various redshifts, for the run EdS1.
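The tracing scheme of equation (17) amounts to a nearest-neighbor search at $z=0$ followed by a lookup in the earlier particle outputs. A brute-force sketch (the offset size `eps` is an arbitrary illustrative choice, not a value from the text):

```python
import math

def tie_to_nearest(galaxies, particles_z0):
    """Index of the nearest z = 0 particle for each galaxy."""
    return [min(range(len(particles_z0)),
                key=lambda i: math.dist(g, particles_z0[i]))
            for g in galaxies]

def galaxy_position(tie, particles_z, rng, eps=0.01):
    """Eq. (17): the tied particle's position at redshift z, plus a
    small random offset r' (its size eps is arbitrary here)."""
    return tuple(c + rng.uniform(-eps, eps) for c in particles_z[tie])
```

For the $\sim40000$ galaxies and $64^3$ particles of the actual runs, one would of course replace the brute-force search with a grid or tree search.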
MORPHOLOGICAL EVOLUTION
=======================
Elliptical Galaxies
-------------------
Knowing the location of each galaxy at various epochs, we can then study the local environment in which each galaxy is located, and how this environment evolves with time. The basic idea is the following: If elliptical galaxies located in the dense cores of clusters at $z=0$ were always located in high-density environments, it would argue against morphological evolution, and suggest that galaxies formed in such high-density environments form predominantly as ellipticals. If, on the contrary, many of these elliptical galaxies were located in low-density environments at, say, $z=3$, it would argue in favor of morphological evolution, with these galaxies forming as spirals and later becoming ellipticals as they find themselves in high-density environments.
To investigate this question, we compute the number density of galaxies around each galaxy for all 15 runs (3 models with 5 runs for each), at $z=0$ and $z=3$, using the method described in §4.2. We are making the null hypothesis that there is no morphological evolution, hence an elliptical at $z=0$ is also an elliptical at $z=3$. We then sort each list of $\sim40000$ galaxies in increasing order of the local number density of galaxies.
First, we divide galaxies into low-density environments (L) and high-density environments (H), both at $z=0$ and $z=3$, based on the median value of the density at that epoch. That is, each list contains $\sim20000$ galaxies in low-density environments and the same number in high-density environments. We then divide galaxies into 4 bins according to the type of environments (L or H) in which they are located at $z=3$ [*and*]{} $z=0$. The results are shown in Table 2, where the $\rm L\rightarrow L$ bin contains all elliptical galaxies located in low-density regions at $z=3$ and $z=0$, the $\rm L\rightarrow H$ bin contains the ones located in low-density regions at $z=3$ and high-density regions at $z=0$, and so on. By definition the $\rm L\rightarrow H$ and $\rm H\rightarrow L$ counts are equal if [*all*]{} galaxies are considered, but for now we are only considering elliptical galaxies.
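The median split and the four transition bins can be sketched as follows (our illustrative implementation; ties at the median are assigned to H here, an arbitrary convention):

```python
def transition_counts(dens_z3, dens_z0):
    """Split galaxies at the median density at each epoch (L below,
    H at or above) and count the four z=3 -> z=0 transition bins."""
    def split(dens):
        med = sorted(dens)[len(dens) // 2]
        return ['H' if d >= med else 'L' for d in dens]
    counts = {'LL': 0, 'LH': 0, 'HL': 0, 'HH': 0}
    for a, b in zip(split(dens_z3), split(dens_z0)):
        counts[a + b] += 1
    return counts
```

When the full galaxy list is used, the `LH` and `HL` counts balance by construction, as noted in the text; the imbalance appears only when the counting is restricted to one morphological type.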
These results show that galaxies are moving through environments of different number densities between $z=3$ and $z=0$. In the Einstein-de Sitter model (runs EdS1 – EdS5), for instance, only 70% of the ellipticals are either $\rm L\rightarrow L$ or $\rm H\rightarrow H$. The numbers are very similar among different runs within each model, showing that these results are statistically significant. In all cases, the $\rm L\rightarrow H$ count exceeds the $\rm H\rightarrow L$ count. In order to appreciate the significance and implications of this result, let us consider a simple, probabilistic model in which the probability that an elliptical galaxy is located in similar environments at $z=3$ and $z=0$ is $1/2+p$. We obtain the following relations: $$\begin{aligned}
{\rm H}_0&=&{\rm H}_3\biggl({1\over2}+p\biggr)
+{\rm L}_3\biggl({1\over2}-p\biggr)\,,\\
{\rm L}_0&=&{\rm L}_3\biggl({1\over2}+p\biggr)
+{\rm H}_3\biggl({1\over2}-p\biggr)\,,\end{aligned}$$
where ${\rm H}_z$ and ${\rm L}_z$, $z=0,3$, are the numbers of elliptical galaxies in high- and low-density environments, respectively, at redshift $z$. This model has two extreme and opposite limits, which we shall refer to as the “no mixing limit” and the “complete mixing limit.” In the no mixing limit, defined by $p=1/2$, each galaxy is located at present at or very near the location (in comoving coordinates) where it was initially formed. Galaxies are therefore in identical environments at $z=3$ and $z=0$, and furthermore, they have the same neighbors. In the complete mixing limit, defined by $p=0$, all memory of the location where galaxies were formed has been lost through chaotic mixing. Any given galaxy can end up at present either in a low- or high-density environment, with equal probability, no matter in which kind of environment it was formed. In this limit, ${\rm H}_0={\rm L}_0$; hence, if the numbers of galaxies in high- and low-density environments at present actually differ, complete mixing is excluded and there is a finite minimum probability $p_{\min}$. We refer to the case $p=p_{\min}$ as the “maximum mixing limit.” We can use this model to analytically compute the $\rm L\rightarrow H$ and $\rm H\rightarrow L$ counts and compare them to the ones given in Table 3, as follows: We assume that the distribution of galaxies is known at present, but instead of tracing these galaxies back in time, we invert equations (18) and (19) to compute the galaxy distributions at $z=3$: $$\begin{aligned}
{\rm H}_3&=&{{\rm H}_0(1/2+p)-{\rm L}_0(1/2-p)\over2p}\,,\\
{\rm L}_3&=&{{\rm L}_0(1/2+p)-{\rm H}_0(1/2-p)\over2p}\,.\end{aligned}$$
By imposing that ${\rm H}_3$ and ${\rm L}_3$ are nonnegative, we can solve for the minimum probability, $$p_{\min}={|{\rm H}_0-{\rm L}_0|\over2({\rm H}_0+{\rm L}_0)}\,.$$
Equations (20) and (21) can be solved for any value of $p$ between $p_{\min}$ (maximum mixing limit) and 1/2 (no mixing limit). The $\rm L\rightarrow H$ and $\rm H\rightarrow L$ counts are then given by ${\rm L}_3(1/2-p)$ and ${\rm H}_3(1/2-p)$, respectively. We plot the results as a function of $p$ in Figure 5, for all three cosmological models (the values of ${\rm H}_0$ and ${\rm L}_0$ used in eqs. \[20\] and \[21\] were obtained by averaging over all five runs within each model).
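Equations (20)–(22) and the predicted transition counts are straightforward to evaluate; a sketch (the population numbers fed to it below are hypothetical, not the counts of the tables):

```python
def p_min(h0, l0):
    """Eq. (22): smallest p for which H_3 and L_3 stay non-negative."""
    return abs(h0 - l0) / (2.0 * (h0 + l0))

def predicted_counts(h0, l0, p):
    """Eqs. (20)-(21): infer the z=3 populations from the z=0 ones,
    then return the model's (L->H, H->L) transition counts."""
    h3 = (h0 * (0.5 + p) - l0 * (0.5 - p)) / (2.0 * p)
    l3 = (l0 * (0.5 + p) - h0 * (0.5 - p)) / (2.0 * p)
    return l3 * (0.5 - p), h3 * (0.5 - p)
```

Whenever ${\rm H}_0>{\rm L}_0$, the model predicts an $\rm H\rightarrow L$ count exceeding the $\rm L\rightarrow H$ count for every allowed $p$, which is the behavior plotted in Figure 5.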
In all cases, the $\rm L\rightarrow H$ count is lower than the $\rm H\rightarrow L$ count, for all values of $p$. The cases $p=p_{\min}$ and $p=1/2$ constitute two extreme and opposite limits, maximum mixing and no mixing, respectively, and interestingly [*these two extreme limits do not bracket our results*]{}. The reason is that [*an excess of the $\rm L\rightarrow H$ count over the $\rm H\rightarrow L$ count should not occur “naturally” unless there are more galaxies in low-density environments than in high-density environments at $z=3$*]{}, which is clearly not the case for ellipticals in our simulations.
This seemingly absurd result rests on the assumption that there is no morphological evolution. If we relax this assumption, we can solve the “$\rm L\rightarrow H$ excess” problem. If some elliptical galaxies located in high-density environments at present were actually formed as spiral galaxies in low-density environments, and eventually became ellipticals as they found themselves in higher-density environments at later times, then we are overestimating the $\rm L\rightarrow H$ count by ignoring morphological evolution. In order to bring the $\rm L\rightarrow H$ count down to the value of the $\rm H\rightarrow L$ count or lower, we must assume that at least 1/4 of the elliptical galaxies in the $\rm L\rightarrow H$ bin were formed as spiral galaxies, and underwent morphological evolution between $z=3$ and $z=0$ that transformed them into elliptical galaxies.
The same probabilistic model can be applied to other morphological types. Since the [*total*]{} $\rm L\rightarrow H$ and $\rm H\rightarrow L$ counts must be equal by definition, at least one type of galaxy must have an excess of $\rm H\rightarrow L$ over $\rm L\rightarrow H$ to compensate for the ellipticals. This is indeed the case for the spirals. Applying the same probabilistic model to the spirals, we would find that, for all allowed values of $p$, the “natural” tendency for spirals is to move from low-density regions to high-density regions, simply because there are more spirals in low-density regions to start with. We can solve this “$\rm H\rightarrow L$ excess” among spirals by assuming that some spiral galaxies turned into ellipticals as they moved into high-density regions, leading to an underestimate of the $\rm L\rightarrow H$ count.
All Morphological Types
-----------------------
In this subsection, we consider galaxies of all types (not only ellipticals) that have formed in low-density environments. The results are shown in Table 3, where the numbers in parentheses are the percentages for each type. We still adopt the null hypothesis of no morphological evolution. The percentages are different for the $\rm L\rightarrow L$ and $\rm L\rightarrow H$ bins, which is of course absurd: it implies that, somehow, the galaxy formation process is able to “distinguish” a low-density environment at $z=3$ that will remain low-density at all times from one that will eventually become high-density. Since we assume there is no “fortune teller” at $z=3$ telling the galaxy formation process what will happen in the future (thus excluding class 3 models), we must conclude that morphological evolution is present. We can reconcile the numbers presented in Table 3 by assuming that spiral galaxies evolve either into S0 or elliptical galaxies. For instance, we can reconcile the percentages for the EdS1 run by “transferring” 534 galaxies from the $\rm L\rightarrow H$ S0 bin to the $\rm L\rightarrow H$ Spiral bin, and 159 galaxies from the $\rm L\rightarrow H$ Elliptical bin to the $\rm L\rightarrow H$ Spiral bin. The percentages would then be the same as for the $\rm L\rightarrow L$ bins. This would imply that 21% of these S0 galaxies (534 out of 2565) and 17% of these elliptical galaxies (159 out of 924) were formed as spiral galaxies and underwent morphological evolution at a later time.
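The transfer counts quoted above follow from elementary bookkeeping: we relabel just enough S0 and elliptical galaxies as spirals for the $\rm L\rightarrow H$ bin to reproduce the $\rm L\rightarrow L$ percentages, keeping the bin total fixed. A minimal Python sketch (the helper is ours, not part of the original analysis; the counts are the EdS1 entries of Table 3, and small differences from the quoted 534 and 159 reflect rounding of the percentages):

```python
def spirals_in_disguise(reference, observed):
    """reference, observed: (spiral, S0, elliptical) galaxy counts.

    Return how many S0 and elliptical galaxies must be relabeled as
    spirals for `observed` to reproduce the type fractions of
    `reference`; the total of `observed` is left unchanged."""
    n_ref, n_obs = sum(reference), sum(observed)
    t_s0 = observed[1] - reference[1] / n_ref * n_obs
    t_ell = observed[2] - reference[2] / n_ref * n_obs
    return t_s0, t_ell

# EdS1, Table 3: L->L bin (reference) and L->H bin (observed)
t_s0, t_ell = spirals_in_disguise((7813, 4863, 1823), (2576, 2565, 924))
```

With the exact counts this gives transfers close to the 534 S0 and 159 elliptical galaxies quoted above.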
One possible problem with this interpretation of the results is our definition of low-density and high-density environments. At $z=3$, for the Einstein-de Sitter model, the number density of galaxies around each galaxy varies from $1.3\times10^3$ to $2.6\times10^7$ per unit computational volume, with the median being $2\times10^5$ (the comoving number density is obtained by dividing these numbers by $[128\,{\rm Mpc}]^3$; the physical number density is obtained by dividing them by $[128(1+z)^{-1}\,{\rm Mpc}]^3$). Hence, the number density in “low-density environments,” defined as the bottom half of the distribution, varies over 2 orders of magnitude. One could then argue that the galaxies ending up in low- and high-density environments at $z=0$ come from different “parts” of the low-density environments at $z=3$.
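For concreteness, the unit conversions above can be written as a short Python helper (ours, not part of the simulation code; the 128 Mpc comoving box side is the one stated in the text):

```python
def number_densities(count_per_cell, z, box_mpc=128.0):
    """Convert a galaxy count per unit computational volume into
    comoving and proper (physical) number densities, in Mpc^-3.
    The comoving box side is `box_mpc`; the proper side is
    box_mpc / (1 + z)."""
    comoving = count_per_cell / box_mpc**3
    physical = comoving * (1.0 + z)**3
    return comoving, physical
```

At $z=3$ the proper density exceeds the comoving one by $(1+z)^3=64$.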
To solve this problem, we define a “very low density” (VL) environment, comprising all galaxies located in the bottom 1/20 of the number density distribution, that is, the $\sim2000$ of these $\sim40000$ galaxies that are located in the least dense environments. The number density at $z=3$ around these galaxies varies from $1.3\times10^3$ to $2.8\times10^4$, but if we ignore a small number of galaxies in [*extremely*]{} low density environments (about 50), the range becomes $5.0\times10^3-2.8\times10^4$. The physical conditions for galaxy formation in these regions should be quite uniform, hence the percentages of spirals, S0’s, and ellipticals should be essentially the same everywhere within these regions. We then look at the location of these galaxies at $z=0$. The results are shown in Table 4.
Again, these percentages are different depending on whether the galaxies end up in low- or high-density environments at $z=0$ (the results for the open models are statistically insignificant, because only a few galaxies ended up in high-density environments). Since the galaxy formation process cannot predict which galaxies will end up in high or low-density environments at $z=0$, we are forced to reject the null hypothesis of no morphological evolution.
Again, we can reconcile the numbers presented in Table 4 by “transferring” galaxies from the $\rm VL\rightarrow H$ S0 and Elliptical bins to the $\rm VL\rightarrow H$ Spiral bin. For the run EdS1, transferring 23 galaxies from the $\rm VL\rightarrow H$ S0 bin to the $\rm VL\rightarrow H$ Spiral bin, and 5 galaxies from the $\rm VL\rightarrow H$ Elliptical bin to the $\rm VL\rightarrow H$ Spiral bin, would make the percentages the same as for the $\rm VL\rightarrow L$ bins. Hence, 28% (23 out of 82) of these S0 galaxies and 17% (5 out of 30) of these elliptical galaxies formed as spiral galaxies. Notice the similarity of these percentages to the ones computed from Table 3.
These numbers are smaller if we assume that morphological evolution involves galaxy collision and merging. In the simplest case, morphological evolution transforms 2 interacting spiral galaxies into one S0 or elliptical galaxy. Hence, for each S0 or elliptical galaxy we “remove” from the $\rm VL\rightarrow H$ bin, we need to add 2 spiral galaxies, instead of only one, to the Spiral bin. In this case, to reconcile the percentages for the EdS1 run, we need to remove 18 S0 galaxies and 2 elliptical galaxies, thus adding 40 spiral galaxies ($2\times[18+2]$). The fractions of S0 and elliptical galaxies that were formed by mergers then become 22% (18 out of 82) and 7% (2 out of 30).
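The merger bookkeeping amounts to a small linear system: removing $r_{\rm S0}$ S0's and $r_{\rm E}$ ellipticals adds $2(r_{\rm S0}+r_{\rm E})$ spirals, so the bin total grows by one per removal, and we ask that the resulting S0 and elliptical fractions match the reference bin. A hedged Python sketch (our own solver, illustrated with made-up counts rather than the Table 4 entries):

```python
def merger_removals(reference, observed):
    """reference, observed: (spiral, S0, elliptical) galaxy counts.

    Each removed S0/elliptical is replaced by TWO spirals (a merger
    remnant is undone into its progenitor pair), so the bin total grows
    by one per removal.  Solve for the removals that make the observed
    bin reproduce the reference S0 and elliptical fractions."""
    n_ref, n_obs = sum(reference), sum(observed)
    f_s0, f_ell = reference[1] / n_ref, reference[2] / n_ref
    # Constraints: (S0 - r_s0) = f_s0 * (n_obs + r) and
    # (E - r_ell) = f_ell * (n_obs + r), with r = r_s0 + r_ell;
    # summing the two equations yields r directly.
    r = (observed[1] + observed[2] - (f_s0 + f_ell) * n_obs) / (1.0 + f_s0 + f_ell)
    r_s0 = observed[1] - f_s0 * (n_obs + r)
    r_ell = observed[2] - f_ell * (n_obs + r)
    return r_s0, r_ell
```

The removals returned by the solver satisfy both fraction constraints exactly; applied to the actual Table 4 bins, this is the computation behind the 18 S0 and 2 elliptical removals quoted above.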
THE EVOLUTION OF CLUSTERING
===========================
The results of the previous section suggest that some elliptical and S0 galaxies were formed as spiral galaxies and underwent morphological evolution at some epoch between redshifts of $z=3$ and $z=0$. Assuming that the morphological evolution process is triggered by an increase in the galaxy number density resulting from the formation and merging of clusters, we can attempt to estimate the epoch of galaxy evolution by monitoring the evolution of the number density of galaxies around the galaxies that might have undergone such morphological evolution.
In Figure 6, we plot, as a function of redshift, the number density $n$ of galaxies (in $\rm galaxies/Mpc^3$) around each elliptical galaxy located in the $\rm VL\rightarrow H$ bin of Table 4 (runs O2, O4, and O5 do not contain any such galaxy, and are therefore omitted). The number of curves in each panel can be read from the last column of Table 4. All panels show a similar pattern. Initially, the number density decreases with time, indicating that these regions are still expanding (though more slowly than the Hubble flow). The number densities reach a minimum at epochs between $z=0.4$ and $z=0.8$, indicating that the regions surrounding these galaxies have finally turned around and started to recollapse. The number densities then increase by 2 to 3 orders of magnitude between the turnaround epoch and the present, except for the open model (runs O1 and O3), for which the density increase is smaller than one order of magnitude. Some galaxies in the EdS runs (and also one in the L3 run) follow a different history, with the number density starting to rise at $z\sim1$, then dropping and rising again. The initial increase is caused by the formation of a dense cluster at $z\sim1$, resulting from the collapse of a particularly large density fluctuation. The subsequent drop in number density is caused by the tidal disruption of that cluster by more massive clusters formed at later epochs. These cases constitute a minority.
These plots indicate that the morphological evolution process, if real, most probably takes place at redshifts smaller than $z=0.6$, after the number density of galaxies has started to increase. Furthermore, the number densities reach the same value they had at $z=3$ at a redshift of order $z\sim0.2$. It is tempting to argue that, for morphological evolution to occur, the number density has to get larger than it was at the galaxy formation epoch, and therefore it must occur between $z=0.2$ and $z=0$. This argument is not valid because it assumes that the morphological evolution process depends [*directly*]{} on the number density of galaxies, which is presumably not the case. If morphological evolution results from galactic collisions or tidal stripping, then the likelihood for this process actually happening will depend upon the probability of having close encounters between galaxies, which is larger in regions of high number density. However, for the same number density, the likelihood of having close encounters between galaxies is much larger at $z=0.2$ than at $z=3$. Not only are galaxies more clustered at $z=0.2$ (see Fig. 4 for a good illustration of that), but in addition the galaxies, overall, are moving apart from one another at $z=3$, whereas they are approaching each other at $z=0.2$. Hence, we cannot rule out the possibility that morphological evolution takes place between redshifts $z=0.8$ and $z=0.2$ on the basis that the number densities at these epochs are smaller than they are at $z=3$.
Several authors have claimed that merging events were more frequent in the past, based either on observations (see, e.g., Carlberg 1995 and references therein) or analytical arguments (Toomre 1977; Aarseth & Fall 1980). These results do not contradict our claim that morphological evolution does not occur at redshifts $z>0.8$, simply because we are focusing our attention on [*very low density regions*]{}. In particular, the aforementioned analytical arguments assume that merging involves pairs of galaxies which are already on bound orbits, which is clearly not the case in the regions we are considering, since these are still dominated by an overall expansion at $z=3$. Also, galaxy merging is only one of many physical processes that could possibly result in morphological evolution. In this section, we make no assumption about the nature of the actual physical process involved. We are merely arguing that morphological evolution in very low density regions does not occur until $z\sim0.8$, simply because at earlier times all galaxies in these regions are moving away from one another. Notice that this result is based on galaxies located in VL regions at $z=3$. Higher-density regions would turn around at larger redshift.
DISCUSSION OF THE METHOD
========================
In this section, we review and discuss the strengths and weaknesses of our numerical simulations, and the interpretation of the results.
Weak Points
-----------
The weakest point of this entire work is certainly the cosmological simulations themselves. The Einstein-de Sitter model with CDM spectrum normalized to COBE is known to produce too much structure on small scales. This is reflected in the two-point correlation function, which is too large by a factor of 3 in the range 1–10 Mpc. Only in the open model does the distribution of galaxies actually resemble the present universe. Since the two-point correlation function evolves roughly as $a(t)^2\propto(1+z)^{-2}$ in the linear regime, we can estimate that the $z\sim0.7$ (that is, $z=3^{1/2}-1$) time-slices of these models would be a better representation of the actual present universe. We looked at these time-slices, and they indeed resemble the present universe more closely than the $z=0$ time-slice.
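The $z\sim0.7$ figure follows directly from this scaling; as a one-line check (our own illustration, not part of the original analysis):

```python
def matching_redshift(excess_factor):
    """If the z = 0 two-point correlation amplitude is too large by
    `excess_factor`, and xi grows as a(t)^2, i.e. as (1+z)^-2, in the
    linear regime, then the time-slice matching the present universe
    sits at (1 + z)^2 = excess_factor."""
    return excess_factor**0.5 - 1.0
```

For an excess factor of 3 this gives $z=3^{1/2}-1\approx0.73$.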
It is difficult to estimate the consequences of this excess of small-scale structure. We argue that the effect is not significant, and does not affect our conclusion. The main point is that we get consistent results among all three models, including the open model, which [*does*]{} reproduce the present universe fairly well. Also, it is hard to see how the excess of structure formation in the Einstein-de Sitter model and the cosmological constant model could possibly affect the conclusion. Cluster merging happens continuously in CDM models, all the way to the present. The excess of power simply increases the amount of merging taking place between $z=0$ and $z=3$. We divide the regions in which galaxies end up at $z=0$ into low-density and high-density environments, without taking into account how high the number density gets inside these regions of high density. Hence, late cluster mergers are unlikely to have a strong effect on the results shown in Tables 2–4. As long as we are not interested in galaxies located in “very high density environments” at $z=0$, the excess of structure on small scales is probably unimportant.
We traced the motion of galaxies back in time by following the motion of the nearby dark matter particles. This assumes that the velocity field of galaxies and dark matter are the same. This assumption is certainly valid at early times. However, numerical simulations (Carlberg 1994) have shown the existence of a velocity bias between galaxies and dark matter inside clusters of galaxies. This is the result of an evolutionary process taking place inside the clusters. Hence, our method for tracing galaxies back in time might be partly flawed if this velocity bias is real.
Finally, our biasing scheme for galaxy formation is quite crude. This is certainly an aspect of the algorithm that could use some improvement. Unfortunately, not much can be done until the cosmological models themselves are improved. No biasing scheme will ever be satisfactory as long as the cosmological simulations produce too much structure at small scale.
Strong Points
-------------
The strongest point of this entire work is that the conclusions do not depend on the details of the initial galaxy formation and morphological evolution processes. The only assumptions we make concerning the initial galaxy formation process are that (1) it takes place before $z=3$, and (2) it has no “knowledge” of the future. As for the morphological evolution process, the only assumptions we make are that it converts spiral galaxies into S0 and elliptical galaxies, but not the other way around, and that it takes place in high density environments. The detailed physical processes involved in the initial galaxy formation and morphological evolution processes are irrelevant to this work, and this only makes our results more robust.
The second strongest point is the consistency of our results, first among different simulations of the same cosmological model, and then among the various models. The percentages shown in Tables 3 and 4 (with the exception of the open model in Table 4) have error bars much smaller than the differences among these various percentages, which is what our argument is based on. Also, the fact that all three cosmological models show a trend toward morphological evolution strongly suggests that this effect is real.
There is a potential problem with the technique we use for tracing galaxies back in time. If dense clusters form by assembling matter taken from distant regions of the universe (which might be the case when cluster mergers are involved), then our approach of tying each galaxy to the nearest dark matter particle becomes ambiguous. A given galaxy might have formed in any of these distant regions, and by following the trajectory of the nearest dark matter particle, we are “forcing” that galaxy to have formed in one particular region, when it could actually have formed in another one.
To estimate the importance of this effect, we go back to the run EdS1, [^3] and recompute the trajectories of the galaxies, except that we replace $p_i^{(1)}$, the nearest dark matter particle to each galaxy, by $p_i^{(2)}$, the second-nearest particle, in equation (17). We label this new calculation EdS1$^*$. In Figure 7a, we plot the $x$-coordinate, in computational units, of the galaxies at $z=3$ for the EdS1$^*$ run, versus the same coordinate for the EdS1 run. Even though there is some scatter, most galaxies are located near the diagonal, indicating that the differences between the two runs are small for most galaxies (the concentrations of galaxies in the upper left and lower right corners of the figure are an artifact of the periodic boundary conditions). Plots of the $y$- and $z$-coordinates are similar; for brevity, we omit them.
Figure 7b shows a histogram of the 3-dimensional separation, in computational units, between each galaxy at $z=3$ in the EdS1 run and its counterpart in the EdS1$^*$ run. More than 1/3 of the galaxies are located in the first bin, having separations less than 1/40 \[corresponding to a physical separation of $128\,{\rm Mpc}\,(1+z)^{-1}/40=800\,\rm kpc$\], and the first seven bins contain 92% of the galaxies. Hence, only a few galaxies end up in significantly different regions when we track the second-nearest particle instead of the nearest one.
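The bin-width conversion quoted in brackets can be reproduced with a one-line helper (ours; it assumes, as stated in the text, a 128 Mpc comoving box side with the box length as the computational unit):

```python
def proper_separation_kpc(sep_box_units, z, box_mpc=128.0):
    """Convert a separation given in computational units (box side = 1)
    to a proper separation in kpc at redshift z."""
    return sep_box_units * box_mpc / (1.0 + z) * 1000.0
```

A separation of 1/40 at $z=3$ corresponds to 800 kpc, as stated above.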
Using the galaxy locations for the run EdS1$^*$, we perform the same analysis as for the other runs. The results are given in the last line of Table 4. The numbers for the run EdS1$^*$ are remarkably similar to the ones for the EdS1 run. The most important difference is in the fraction of elliptical galaxies in the $\rm VL\rightarrow H$ bin, which is 30% in one case and 42% in the other. But actually, the EdS1$^*$ run is closer to the average amongst EdS runs than the EdS1 run is. Therefore, following the trajectory of the second-nearest particle instead of the nearest one does not affect our final conclusion in any way.
Finally, our method is based on comparing the number of galaxies in various bins (for instance, the number of elliptical galaxies in the $\rm VL\rightarrow L$ and $\rm VL\rightarrow H$ bins). The fact that our conclusions are based, not on the galaxy counts themselves, but on comparisons between counts, offsets some drawbacks of the cosmological models. The CDM model normalized to COBE produces an excess of dense regions, and as a result our simulations contain more elliptical and S0 galaxies than the real universe. If we had fewer elliptical galaxies in our models, the counts in the $\rm VL\rightarrow L$ and $\rm VL\rightarrow H$ bins would most likely be reduced [*by the same factor*]{}, and our conclusion would be the same. Our method uses elliptical galaxies as mass tracers, and having an excess of such tracers simply improves statistics.
CONCLUSION AND PROSPECTS
========================
We conclude that a small but non-negligible fraction (of order 10%–20%) of the S0 and elliptical galaxies we observe today in the dense parts of clusters were not formed as S0’s and ellipticals, but rather as spiral galaxies, and underwent morphological evolution between $z=3$ and $z=0$, presumably during cluster formation and merging. Since the fraction of galaxies involved in morphological evolution is neither 0% nor 100%, initial conditions and morphological evolution processes must [*both*]{} play an important role in determining the morphological type of galaxies.
Our simulations predict that the proportion of spiral galaxies should increase from the present observed value of $\sim50\%$ to larger values as one looks back in time, that is, at larger redshifts. However, they cannot predict at what redshift this effect would manifest itself, and consequently we cannot predict the shape of the morphology-density relation at high redshift. To make a theoretical prediction, we first need to understand the details of the morphological evolution process. Also, the epoch of galaxy formation most certainly depends upon the cosmological model, so before we can make quantitative predictions, we first need to settle the question of which cosmological model properly describes the formation of large-scale structures in the universe.
However, a large amount of observational evidence supporting the existence of morphological evolution in dense environments at redshifts $z<0.5$ has accumulated in recent years. Butcher & Oemler (1978, 1984) discovered a large excess of blue objects in clusters located at redshift $z\gtrsim0.4$. Subsequent ground-based observations (Dressler & Gunn 1982, 1983; Couch et al. 1983; Couch & Newell 1984; Dressler, Gunn, & Schneider 1985; Ellis et al. 1985; Lavery & Henry 1986; Henry & Lavery 1987; Couch & Sharples 1987; MacLaren, Ellis, & Couch 1988; Soucail et al. 1988; Aragón-Salamanca, Ellis, & Sharples 1991; Aragón-Salamanca et al. 1993) have shown that this “Butcher-Oemler effect” results from short-lived bursts of star formation affecting a subset of the cluster members. These starbursts could be triggered by the ram pressure of the intracluster gas when a galaxy first enters the cluster, by violent interactions between galaxies, or by mergers (see Bothun & Dressler 1986, and references therein; Oemler 1992, and references therein; Mihos & Hernquist 1994a, 1994b). Recent Hubble Space Telescope observations of high-redshift clusters ($z\sim0.3-0.5$) revealed that the blue starburst objects are low-luminosity spiral galaxies, with as many as $\sim50\%$ of them being disturbed by what appears to be either tidal disruption or merging (Dressler et al. 1994a, 1994b; Couch et al. 1994; Barger et al. 1995). The galaxy populations of these clusters differ significantly from the ones of nearby clusters, and resemble the ones seen in nearby small groups and in the field.
The difference between the galaxy populations of high-redshift and low-redshift clusters and the importance of dynamical interaction in high-redshift clusters compared to low-redshift ones provide strong evidence that morphological evolution has occurred inside rich clusters. Studies of galaxy populations in the field (Colless et al. 1990; Griffiths et al. 1994; Mobasher et al. 1996) and in small groups (Allington-Smith et al. 1993), reveal that no significant morphological evolution has occurred in these environments between redshift $z=0.5$ and the present, at least among luminous galaxies. (Driver et al. \[1995\], however, found an excess of [*faint*]{} late type galaxies in the field.) These results rule out any model in which the morphological evolution of a galaxy is driven by an internal physical process. The morphological evolution process depends upon the richness of the environment, and thus results in a steepening of the morphology-density relation with time.
The most recent studies (Dressler & Smail 1996; Smail et al. 1997; Dressler et al. 1997, and references therein) of high-redshift clusters, which include 10 rich clusters ($0.36<z<0.57$) comprising 1857 galaxies, show that the excess of spiral galaxies in high-redshift clusters is compensated by an underabundance of S0 galaxies, but not of ellipticals. This implies that if morphological evolution is responsible for forming both some S0 and some elliptical galaxies, as our numerical simulations suggest, then the process converting spiral galaxies into ellipticals must have occurred [*before*]{} $z=0.57$. This hypothesis constitutes an observational challenge, since testing it requires observations of even more distant clusters, in the range $z\sim0.5-0.8$, and a theoretical challenge as well: finding a model that explains why morphological evolution produces S0 and elliptical galaxies at different epochs.
All these observational results are consistent with our conclusion that a fraction of the elliptical and S0 galaxies result from morphological evolution processes taking place between redshifts of order unity and the present. The observations and our numerical simulations both indicate that the correct galaxy formation model ought to be a “class two” model, in which both initial conditions and morphological evolution play an important role. Finding the correct galaxy formation model will most likely require a better understanding of the physical processes involved and the cosmological context in which they are taking place, as well as observations and determination of morphological types in clusters beyond redshift $z\sim0.5$.
This work benefited from stimulating discussions with Alan Dressler, Inger Jørgensen, George Lake, and Paul Shapiro. We are pleased to acknowledge the support of NASA Grant NAG5-2785, NSF Grants PHY93 10083 and ASC 9504046, the University of Texas High Performance Computing Facility through the office of the vice president for research, and Cray Research.
Aarseth, S. J., & Fall, S. M. 1980, , 236, 43
Allington-Smith, J. R., Ellis, R. S., Zirbel, E. L., & Oemler, A. 1993, , 404, 521
Aragón-Salamanca, A., Ellis, R. S., Couch, W. J., & Carter, D. 1993, M.N.R.A.S., 262, 764
Aragón-Salamanca, A., Ellis, R. S., & Sharples, R. M. 1991, M.N.R.A.S., 248, 128
Ashman, K. M., & Zepf, S. E. 1992, , 384, 50
Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S. 1986, , 304, 15
Barger, A. J., Aragón-Salamanca, A., Ellis, R. S., Couch, W. J., Smail, I., & Sharples, R. M. 1995, M.N.R.A.S., 279, 1
Barnes J. E. 1989, Nature, 338, 123
Barnes, J. E., & Hernquist, L. 1991, , 370, L65
Barnes, J. E., & Hernquist, L. 1992, Nature, 360, 715
Barnes, J. E., & Hernquist, L. 1996, , 471, 115
Baugh, C. M., Cole, S., & Frenk, C. S. 1996, M.N.R.A.S., 282, L27
Bhavsar, S. P. 1981, , 246, L5
Biviano, A., Durret, F., Gerbal, D., Le Fèvre, O., Lobo, C., Mazure, A., & Slezak, E. 1995, , 297, 610
Bothun, G. D., & Dressler, A. 1986, , 301, 57
Bothun, G. D., Schommer, R. A., & Sullivan, W. T. 1982, M.N.R.A.S., 87, 731
Bower, R. G., Lucey, J. R., & Ellis, R. S. 1992, M.N.R.A.S., 254, 601
Bunn, E. F., Scott, D., & White, M. 1995, , 441, L9
Butcher, H., & Oemler, A. 1978, , 219, 18
Butcher, H., & Oemler, A. 1984, , 285, 426
Byrd, G., & Valtonen, M. 1990, , 350, 89
Carlberg, R. G. 1988, , 332, 26
Carlberg, R. G. 1994, , 433, 468
Carlberg, R. G. 1995, in Galaxies in the Young Universe, eds H. Hippelein, et al. (New York:Springer), p. 207
Colless, M., Ellis, R. S., Taylor, K., & Hook, R. N. 1990, M.N.R.A.S., 244, 408
Combes, F., Boissé, P., Mazure, A., & Blanchard, A. 1995, in Galaxies and Cosmology (New York:Springer) pp. 202–204
Couch, W. J., Ellis, R. S., Godwin, J., & Carter, D. 1983, M.N.R.A.S., 262, 764
Couch, W. J., Ellis, R. S., Sharples, R. M., & Smail, I. 1994, , 430, 121
Couch, W. J., & Newell, E. B. 1984, , 56, 143
Couch, W. J., & Sharples, R. M. 1987, M.N.R.A.S., 229, 42
Cowie, L. L., & Songaila, A. 1977, Nature, 266, 501
Davis, M., Efstathiou, G., Frenk, C. S., & White, S. D. M. 1985, , 292, 371
de Souza, R. E., Capelato, H. V., Arakaki, L., & Logullo, C. 1982, , 263, 557
Djorgovski, S., & Davis, M. 1987, , 313, 59
Dressler, A. 1980, , 236, 351
Dressler, A. 1984, Ann.Rev.A.A. 22, 185
Dressler, A., & Gunn, J. E. 1982, , 263, 533
Dressler, A., & Gunn, J. E. 1983, , 270, 7
Dressler, A., Gunn, J. E, & Schneider, D. P. 1985, , 294, 70
Dressler, A., Oemler. A., Butcher, H. R., & Gunn, J. E. 1994a, , 430, 107
Dressler. A., Oemler, A., Sparks, W. B., & Lucas, R. A. 1994b, , 435, L23
Dressler, A., et al. 1997, , in press (preprint astro-ph/9707232)
Dressler, A., & Smail, I. 1996, to appear in Proceedings of the $37^{\rm th}$ Herstmonceux Conference “HST and the High Redshift Universe”
Driver, S. P., Windhorst, R. A., Ostrander, E. J., Keel, W. C., Griffiths, R. E., & Ratnatunga, K. U. 1995, , 449, L29
Efstathiou, G., Davis, M., Frenk, C. S., & White, S. D. M. 1985, , 57, 241
Efstathiou, G., & Eastwood, J. W. 1981, M.N.R.A.S., 194, 503
Efstathiou, G., & Jones, B. J. T. 1979, M.N.R.A.S., 186, 133
Ellis, R. S., Couch, W. J., MacLaren, I., & Koo, D. C. 1985, M.N.R.A.S., 217, 239
Evrard, A. E. 1988, M.N.R.A.S., 235, 911
Fall, S. M. 1979, Nature, 281, 200
Geller, M. J., & Beers, T. C. 1982, , 94, 421.
Gerhard, O. E. 1981, M.N.R.A.S., 197, 179
Giovanelli, R., Chincarini, G. L., & Haynes, M. P. 1981, , 247, 383
Griffiths, R. E. et al. 1994, , 435, L19
Gunn, J. E., & Gott, J. R. 1972, , 176, 1
Harris, W. E. 1981, , 251, 497
Henry, J. P., & Lavery, R. J. 1987, , 323, 473
Hockney, R. W., & Eastwood, J. W. 1981, Computer Simulation Using Particles (New York: McGraw-Hill)
Huang, S. 1995, Ph.D thesis, University of Toronto.
Jørgensen, I., Franx, M., & Kjaergaard, P. 1996, M.N.R.A.S., 280, 167
Kaiser, N. 1984, , 284, L9
Kennicutt, R. C. 1983, , 88, 483
Kent, S. M. 1981, , 245, 805
Larson, R. B, Tinsley, B. M., & Caldwell, C. N. 1980, , 237, 692
Lavery, R. K., & Henry, J. P. 1986, , 304, L5
MacLaren, I., Ellis, R. S., & Couch, W. J. 1988, M.N.R.A.S., 230, 249
Martel, H. 1991a, , 366, 353
Martel, H. 1991b, , 377, 7
Martel, H., & Shapiro, P. R. 1997, in preparation.
Martel, H., Shapiro, P. R., & Weinberg, S. 1998, , 492, 000
Melnick, J. & Sargent, W. L. W. 1977, , 215, 401
Mihos, J. C., & Hernquist, L. 1994a, , 425, L13
Mihos, J. C., & Hernquist, L. 1994b, , 431, L9
Mobasher, B., Rowan-Robinson, M., Georgakakis, A., & Eaton, N. 1996, M.N.R.A.S., 282, L7
Moore, B., Katz, N., Lake, G., Dressler, A., & Oemler, A. 1996, Nature, 379, 613
Negroponte, J., & White, S. D. M. 1983, M.N.R.A.S., 205, 1009
Noguchi, M. 1988, , 203, 259
Oemler, A. 1992, in Clusters and Superclusters of Galaxies, ed. A. C. Fabian (Dordrecht:Kluwer), p. 29
Ostriker, J. P. 1993, Ann.Rev.A.A., 31, 689
Ostriker, J. P., & Steinhardt, P. J. 1995, Nature, 377, 600
Peebles, P. J. E. 1980, The Large Scale Structure of The Universe (Princeton:Princeton University Press)
Peebles, P. J. E. 1993, Principles of Physical Cosmology (Princeton:Princeton University Press)
Postman, M., & Geller, M. J. 1984, , 281, 95
Quinn, P. J., & Goodman, J. 1986, , 309, 472
Quinn, P. J., Hernquist, L., & Fullagar, D. P. 1993, , 403, 74
Ross, N. 1981, , 95, 349
Sandage A. 1983, in Internal Kinematics and Dynamics of Galaxies, ed. E. Athanassoula (Dordrecht:Reidel), p. 55
Shandarin, S. F. 1980, Astrofizika, 16, 769
Smail, I., Dressler, A., Couch, W. J., Ellis, R. S., Oemler, A., Butcher, H., & Sharples, R. M. 1997, , in press
Smoot, G. F. et al. 1992, , 396, L1
Soucail, G., Mellier, Y., Fort, B., & Cailloux, M. 1988, , 73, 471
Spitzer, L., & Baade, W. 1951, , 113, 413
Steinmetz, M. 1995, in New Light on Galaxy Evolution, IAU Symp. 171, p. 259
Toomre, A., & Toomre, J. 1972, , 178, 623
Toomre, A. 1977, in Evolution of Galaxies and Stellar Populations, eds. B. M. Tinsley and R. B. Larson (New Haven:Yale University Observatory), p. 401
Tóth, G., & Ostriker, J. P. 1992, , 389, 5
White, S. D. M. 1978, M.N.R.A.S., 184, 185
White, S. D. M. 1979, M.N.R.A.S., 189, 831
Whitmore, B. C., Gilmore, D. M., & Jones, C. 1993, , 407, 489
[cccc]{}
EdS1 & 14.5 & 38.5 & 47.0\
EdS2 & 14.7 & 38.7 & 46.6\
EdS3 & 14.6 & 38.6 & 46.8\
EdS4 & 14.8 & 38.6 & 46.6\
EdS5 & 14.8 & 38.7 & 46.5\
O1 & 13.3 & 35.9 & 50.9\
O2 & 13.4 & 35.6 & 51.0\
O3 & 13.3 & 35.0 & 51.7\
O4 & 13.4 & 35.7 & 50.9\
O5 & 13.1 & 35.1 & 51.8\
L1 & 14.4 & 37.8 & 47.8\
L2 & 14.7 & 38.0 & 47.3\
L3 & 14.3 & 37.7 & 48.0\
L4 & 14.3 & 37.8 & 47.9\
L5 & 14.6 & 38.1 & 47.3
[ccccc]{}
EdS1 & 1823 & 924 & 785 & 2440\
EdS2 & 1758 & 976 & 820 & 2434\
EdS3 & 1704 & 1048 & 840 & 2355\
EdS4 & 1780 & 1079 & 845 & 2467\
EdS5 & 1769 & 1076 & 817 & 2517\
O1 & 1686 & 754 & 552 & 2412\
O2 & 1689 & 745 & 600 & 2391\
O3 & 1634 & 718 & 623 & 2423\
O4 & 1675 & 759 & 630 & 2446\
O5 & 1618 & 764 & 640 & 2217\
L1 & 1834 & 945 & 762 & 2441\
L2 & 1826 & 995 & 752 & 2534\
L3 & 1772 & 929 & 781 & 2514\
L4 & 1790 & 947 & 759 & 2426\
L5 & 1774 & 1028 & 774 & 2502
[ccccccc]{}
EdS1 & 7813 (53.9) & 4863 (33.5) & 1823 (12.6) & 2576 (42.5) & 2565 (42.3) & 924 (15.2)\
EdS2 & 7596 (52.9) & 4994 (34.8) & 1758 (12.3) & 2518 (41.4) & 2584 (42.5) & 976 (16.1)\
EdS3 & 7487 (53.2) & 4879 (34.7) & 1704 (12.1) & 2692 (42.6) & 2586 (40.9) & 1048 (16.6)\
EdS4 & 7671 (53.0) & 5010 (34.6) & 1760 (12.3) & 1686 (42.2) & 2603 (40.9) & 1079 (16.9)\
EdS5 & 7699 (53.0) & 5067 (34.9) & 1769 (12.2) & 2699 (41.4) & 2743 (42.1) & 1076 (16.5)\
O1 & 8963 (58.6) & 4654 (30.4) & 1686 (11.0) & 2322 (45.7) & 2000 (39.4) & 754 (14.9)\
O2 & 9018 (59.4) & 4480 (29.5) & 1689 (11.1) & 2437 (46.4) & 2072 (39.4) & 746 (14.2)\
O3 & 9101 (60.0) & 4429 (29.2) & 1634 (10.8) & 2433 (47.8) & 1948 (38.1) & 718 (14.1)\
O4 & 9057 (58.9) & 4655 (30.3) & 1675 (10.9) & 2423 (46.4) & 2035 (39.0) & 760 (14.6)\
O5 & 8803 (59.1) & 4473 (30.0) & 1618 (10.9) & 2421 (46.8) & 1987 (38.4) & 764 (14.8)\
L1 & 8055 (54.7) & 4842 (32.9) & 1834 (12.4) & 2614 (42.9) & 2528 (41.5) & 945 (15.5)\
L2 & 7956 (54.2) & 4901 (33.4) & 1826 (12.4) & 2586 (42.6) & 2493 (41.0) & 995 (16.4)\
L3 & 8139 (55.1) & 4906 (33.4) & 1723 (11.7) & 2589 (43.3) & 2462 (41.2) & 929 (15.5)\
L4 & 8083 (54.8) & 4872 (33.0) & 1790 (12.1) & 2584 (43.4) & 2424 (40.7) & 947 (15.9)\
L5 & 8136 (55.2) & 4830 (32.8) & 1774 (12.0) & 2595 (42.5) & 2487 (40.7) & 1028 (16.8)
[ccccccc]{}
EdS1 & 1124 (61.1) & 499 (27.1) & 217 (11.8) & 104 (48.1) & 82 (38.0) & 30 (13.9)\
EdS2 & 1076 (60.2) & 489 (27.4) & 221 (12.4) & 113 (44.1) & 107 (41.8) & 36 (14.1)\
EdS3 & 1018 (58.6) & 514 (29.6) & 205 (11.8) & 129 (42.7) & 126 (41.7) & 47 (15.6)\
EdS4 & 1066 (58.3) & 562 (30.7) & 202 (11.0) & 99 (39.3) & 101 (40.1) & 52 (20.6)\
EdS5 & 1133 (61.7) & 529 (28.8) & 174 ( 9.5) & 106 (39.4) & 122 (45.4) & 41 (15.2)\
O1 & 1443 (71.2) & 391 (19.3) & 193 ( 9.5) & 2 (20.0) & 4 (40.0) & 4 (40.0)\
O2 & 1491 (73.1) & 367 (18.0) & 183 ( 9.0) & 2 (66.7) & 1 (33.3) & 0 ( 0.0)\
O3 & 1429 (70.7) & 367 (18.2) & 225 (11.1) & 1 (25.0) & 2 (50.0) & 1 (25.0)\
O4 & 1474 (71.8) & 374 (18.2) & 206 (10.0) & 5 (83.3) & 1 (16.7) & 0 ( 0.0)\
O5 & 1435 (72.0) & 373 (18.7) & 184 ( 9.2) & 9 (64.3) & 5 (35.7) & 0 ( 0.0)\
L1 & 1214 (64.3) & 459 (24.3) & 215 (11.4) & 91 (47.2) & 68 (35.2) & 34 (17.6)\
L2 & 1246 (64.5) & 478 (24.8) & 207 (10.7) & 71 (49.3) & 49 (34.0) & 24 (16.7)\
L3 & 1238 (65.3) & 477 (25.2) & 180 ( 9.5) & 72 (40.2) & 85 (47.5) & 22 (12.3)\
L4 & 1251 (65.2) & 470 (24.5) & 198 (10.3) & 71 (47.0) & 50 (33.1) & 30 (19.9)\
L5 & 1272 (65.5) & 455 (23.4) & 215 (10.3) & 62 (43.4) & 63 (44.1) & 18 (12.6)\
EdS1$^*$ & 1119 (62.1) & 489 (27.2) & 193 (10.7) & 105 (41.2) & 108 (42.4) & 42 (16.5)
[^1]: The Hubble sequence is actually a “tuning fork” with two branches, one for [*unbarred*]{} spirals and one for [*barred*]{} spirals. In this paper, we ignore the difference between barred and unbarred spirals, thus collapsing the tuning fork into a rod. Hence, “Sa” designates both Sa and SBa galaxies, and so on.
[^2]: The resulting initial conditions are not truly Gaussian, because of the discreteness of the sum in equation (13). This can be corrected by choosing the amplitudes $\delta_{\bf k}$ randomly (EDFW), using a Rayleigh distribution. We decided to ignore this refinement, since, for the particular combination of particle number and box size we are using, these discreteness effects are negligible at scales that are nonlinear at $z=0$.
[^3]: We choose a run from the EdS model because the structures are more evolved in this model than in the other ones, with more cluster merging happening at late time. Hence, the effect we are trying to measure is likely to be more important in this model.
|
---
abstract: 'We study anisotropic antiferromagnetic one-layer films with dipolar and nearest-neighbor exchange interactions. We obtain a unified phase diagram as a function of effective uniaxial $D_e$ and quadrupolar $C$ anisotropy constants. We study in some detail how spins reorient continuously below a temperature $T_s$ as $T$ and $D_e$ vary.'
author:
- 'Juan J. Alonso'
- 'Julio F. Fernández'
title: Continuous spin reorientation in antiferromagnetic films
---
Considerable attention has been devoted to the magnetic properties of ultrathin magnetic films in recent years. An interesting feature of magnetic films is discontinuous spin reorientation (DSR), i.e., thermally driven switching between perpendicular and in-plane spin alignment at a temperature $T_r$ below the ordering temperature $T_0$. DSR has been studied both experimentally and theoretically. It has been established that DSR depends on the competition between dipolar interactions and the uniaxial anisotropy often found in films. Continuous spin reorientation (CSR) is also very interesting. A thermally driven CSR transition was first observed experimentally in bulk systems at a temperature $T_s$ well below $T_0$[@gyor]. Below $T_s$, all spins in these systems rotate continuously as a whole as $T$ is varied. Following these experiments, Horner and Varma[@var] proposed an early phenomenological model in which higher-order anisotropies, competing with the uniaxial anisotropy, were required to obtain CSR. More recently, CSR has been observed in ferromagnetic thin films [@san]. Some nonhomogeneous multilayer models have been proposed to explain CSR[@usadel].
![Magnetic phase diagram for dipolar systems in their ground states for $J\leq 0$. $\mathfrak s$ and $\mathfrak c$ states correspond to exchange dominated systems with $J<-1.61\varepsilon_d$ and dipolar dominated systems with $-1.61\varepsilon_d<J<0$, respectively. Full and dashed thick lines stand for first– and second–order transitions, respectively. SR stands for the spin reorientation phase, in which $0<\theta<\pi/2$. []{data-label="fig1"}](f1.eps){width="80mm"}
The aim of this paper is to report numerical results on CSR for one-layer antiferromagnetic films. It is important to note that CSR is always associated with an SR phase defined by its own broken symmetries[@landau; @ours]. For instance, the order parameter $\bf m$ may be perpendicular ($\theta=0$) to the film plane in the $T_s<T<T_0$ range, and tilt away ($0<\theta<\pi/2$) from the easy magnetization axis below $T_s$, thus breaking additional symmetries[@ours]. To our knowledge, such an SR magnetic phase has not been observed in numerical simulations of antiferromagnetic films.
![Tilt angle $\theta$ versus $T/T_0$ for a pure dipolar system $(J=0)$ of $32 \times 32 \times 1$ spins with $C=-\varepsilon_d$. The system is ordered for $T<T_0$. Curves correspond, from top to bottom, to $D_e/C=0.9, 0.8, 0.7, 0.6, 0.5$, and $0.4$. Similar plots have been obtained for $J<0$. In the inset, $\theta$ versus $-D_e/C$ for $T=0$ is shown. The solid line corresponds to $\tan\theta=\sqrt{D_e/(C-D_e)}$. $\times$ ($\circ$) stand for systems of $32 \times 32$ spins with $J=0$ and variable $D$ ($D=0$ and variable $J$), respectively.[]{data-label="fig2"}](f2.eps){width="80mm"}
We consider a system of classical unit spins ${\{\bf S}_i\}$ on a square lattice with Hamiltonian $H=H_J+H_d+H_a$, where $H_J$, $H_d$, and $H_a$ stand for short-range exchange, long-range dipolar, and anisotropy interactions, respectively. We use periodic boundary conditions. The exchange and dipolar energies between two antiparallel out-of-plane nearest-neighbor spins are $J<0$ and $-\varepsilon_d$, respectively. Furthermore, there are on-site uniaxial $-D(S_i^z)^2$ plus fourfold $-C[(S_i^x)^4+(S_i^y)^4]$ anisotropy energies. Similar models have been studied for $C=0$, and some DSR between degenerate in-plane and out-of-plane states has been found as $T$, $J$, and $D$ vary[@isaac]. For ferromagnetic films, on the other hand, Jensen et al. have considered $C\ne0$ but obtained [@benne] phase diagrams only for $T=0$.
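For concreteness, the energy defined by these conventions can be evaluated by brute force for a small lattice. The sketch below is illustrative only: it uses open boundaries and a directly summed dipolar term (normalized so that an antiparallel out-of-plane nearest-neighbor pair contributes $-\varepsilon_d$), rather than the periodic boundary conditions and Ewald-type sums a production simulation would use; the function name is an assumption.

```python
import numpy as np
from itertools import product

def total_energy(S, J, eps_d, D, C):
    """Energy of an Lx x Ly lattice of classical unit spins S[x, y] = (Sx, Sy, Sz):
    nearest-neighbor exchange -J S_i.S_j, a brute-force dipolar sum normalized so
    that two antiparallel out-of-plane nearest neighbors contribute -eps_d, and
    on-site uniaxial -D (S^z)^2 plus fourfold -C [(S^x)^4 + (S^y)^4] anisotropies."""
    Lx, Ly, _ = S.shape
    sites = list(product(range(Lx), range(Ly)))
    E = 0.0
    for a, (x1, y1) in enumerate(sites):
        for (x2, y2) in sites[a + 1:]:
            r = np.array([x2 - x1, y2 - y1, 0.0])
            d = np.linalg.norm(r)
            s1, s2 = S[x1, y1], S[x2, y2]
            if d == 1.0:  # nearest neighbors: exchange term
                E -= J * np.dot(s1, s2)
            # dipolar pair energy, in units where the lattice constant is 1
            rhat = r / d
            E += eps_d * (np.dot(s1, s2)
                          - 3.0 * np.dot(s1, rhat) * np.dot(s2, rhat)) / d**3
    # on-site anisotropy terms
    E += np.sum(-D * S[:, :, 2]**2 - C * (S[:, :, 0]**4 + S[:, :, 1]**4))
    return E
```

For a single antiparallel out-of-plane pair with $D=C=0$ this returns $J-\varepsilon_d$, matching the conventions stated above.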
Ground state configurations obtained from Monte Carlo simulations are exhibited in Fig. 1. States designated with a $\mathfrak c$ can be defined [@ours] by $S_i^z=\tau_i^{z}\cos \theta,\; S_i^y=\tau_i^{y}\sin \theta \sin\phi,\;
S_i^x=\tau_i^{x}\sin \theta \cos \phi$ where $\tau_i^x=(-1)^{y(i)}$, $\tau_i^y=(-1)^{x(i)}$, and $\tau_i^z=(-1)^{x(i)+y(i)}$. On the other hand, states designated with a $\mathfrak s$ are defined by $\tau_i^{x}=\tau_i^y=\tau_i^z=(-1)^{x(i)+y(i)}$. A suitable order parameter for both $\mathfrak c$ and $\mathfrak s$ is $
m^\alpha =N^{-1}\sum_i S_i^\alpha \tau^{\alpha}_i$.
We calculate energies for these configurations as in Ref. \[6\], and find a surface anisotropy energy $\Delta$ that behaves as an easy-axis anisotropy. We find that $\Delta=2\varepsilon_d$ for $\mathfrak s$ states and $\Delta=-1.23\varepsilon_d+2J$ for $\mathfrak c$ states. This suggests that we can define an effective anisotropy $D_e=D+\Delta$ and obtain a unified phase diagram for both $\mathfrak s$ and $\mathfrak c$ states at $T=0$, as shown in Fig. 1. $\mathfrak s$ configurations give a lower energy for $J<-1.61\varepsilon_d$, while $\mathfrak c$ states are more favorable for $-1.61\varepsilon_d<J<0$. Interesting experimental realizations of the latter condition can be found in Ref. \[9\].
In both cases we find a z-collinear phase ($\theta=0$) if both $D_e>0$ and $D_e>C$ are fulfilled. In-plane configurations ($\theta=\pi/2$) are preferred for $D_e<C$. More interestingly, we obtain a spin reorientation phase for $C<D_e<0$ in which $\theta$ covers the $0<\theta<\pi/2$ range. Minimization of the total energy gives $\tan\theta=\sqrt{D_e/(C-D_e)}$, and therefore spins rotate continuously from $\theta=0$ to $\pi/2$ as $D_e$ varies from $0$ to $C$, as shown in the inset of Fig. 2. Symbols in the same inset correspond to numerical data obtained by cooling from the paramagnetic phase down to $T\ll\varepsilon_d, |J|$ for different values of $D_e/C$.
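This continuous rotation can be checked with a one-line numerical minimization. The sketch below assumes a single-spin anisotropy energy $E(\theta)=-D_e\cos^2\theta-(C/2)\sin^4\theta$, with the in-plane angle fixed at $\phi=\pi/4$ so that the fourfold term carries the factor $\cos^4\phi+\sin^4\phi=1/2$; the function names are illustrative.

```python
import numpy as np

def anisotropy_energy(theta, D_e, C):
    # effective single-spin energy with phi = pi/4, where
    # cos^4(pi/4) + sin^4(pi/4) = 1/2 multiplies the fourfold term
    return -D_e * np.cos(theta)**2 - 0.5 * C * np.sin(theta)**4

def tilt_angle(D_e, C, n=200001):
    """Numerically minimize E(theta) over 0 <= theta <= pi/2."""
    theta = np.linspace(0.0, np.pi / 2, n)
    return theta[np.argmin(anisotropy_energy(theta, D_e, C))]

# inside the SR phase: C < D_e < 0
D_e, C = -0.4, -1.0
theta_num = tilt_angle(D_e, C)
theta_exact = np.arctan(np.sqrt(D_e / (C - D_e)))  # equivalently sin^2(theta) = D_e/C
```

For $0<D_e/C<1$ the numerical minimizer agrees with the closed form, and it pins to $\theta=0$ or $\pi/2$ outside the SR window.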
![Phases of pure dipolar ($J=0$) films for $C=-\varepsilon_d$ obtained from MC simulations. $\circ$ ($\times$) stand for systems of $16\times 16$ ($32\times 32$) spins respectively. Transition temperatures have been obtained from peaks observed in the specific heat. Systems have been cooled in $\Delta T =-0.01 \varepsilon_d/k_B$ steps of $4 \times 10^5$ MC sweeps each. Similar phase diagrams have been obtained for $J<0$.[]{data-label="fig3"}](f3.eps){width="80mm"}
We have explored temperature-driven CSR in the SR phase by MC simulations for both exchange- and dipolar-dominated systems. For that purpose we calculate the order parameter $(m^x, m^y, m^z)$ and the energy as a function of $T$. We find two different regions (see Figs. 2 and 3). Upon cooling below $T_0$, in-plane configurations appear for $D_e/C>0.65$, and spins rotate towards the $z$ axis below a second-order transition at $T_s<T_0$. On the other hand, for $D_e/C<0.65$ spins point out of the plane below $T_0$, and rotate towards the $xy$ plane as $T$ decreases below $T_s$. Finally, we find that the ratio $T_s/T_0$, as in $d=3$ systems[@ours], seems to depend mainly on $D_e/C$ and not on $J$ or $\varepsilon_d$.
E. M. Gyorgy, J. P. Remeika and F. B. Hagedorn, J. Appl. Phys. [**39**]{}, 1369 (1968).

H. Horner and C. M. Varma, Phys. Rev. Lett. [**20**]{}, 845 (1968).

M. Farle et al., Phys. Rev. B [**55**]{}, 3708 (1997); G. Garreau, E. Beaurepaire, K. Oujnadela and M. Farle, Phys. Rev. B [**53**]{}, 1083 (1996); R. Sellmann et al., Phys. Rev. Lett. [**64**]{}, 054418 (2001).

A. Moshel and K. D. Usadel, Phys. Rev. B [**51**]{}, 16111 (1995); L. Udvardi et al., Philos. Mag. B [**81**]{}, 613 (2001).

L. D. Landau and E. M. Lifshitz, [*Electrodynamics of Continuous Media*]{}, 2nd ed. (Pergamon, Oxford, 2004), pp. 159-162.

J. F. Fernández and J. J. Alonso, Phys. Rev. B [**73**]{}, 024412 (2006).

K. De’Bell, A. B. MacIsaac, J. P. Whitehead, Rev. Mod. Phys. [**72**]{}, 225 (2000), and references therein.

P. J. Jensen and K. H. Bennemann, Phys. Rev. [**42**]{}, 849 (1990).

G. Ahlers, A. Kornblit, and H. J. Guggenheim, Phys. Rev. Lett. [**34**]{}, 1227 (1975); G. Mennenga, L. J. de Jongh, and W. J. Huiskamp, J. Magn. Magn. Mater. [**44**]{}, 59 (1984); M. R. Roser and L. R. Corruccini, Phys. Rev. Lett. [**65**]{}, 1064 (1990); D. Bitko, T. F. Rosenbaum, and G. Aeppli, Phys. Rev. Lett. [**77**]{}, 940 (1996).
|
---
author:
- 'D.Chakraborty [^1]'
title: Persistence in Random Walk in Composite Media
---
The phenomenon of persistence in various stochastic processes has been well documented over the past decade[@1]-[@13]. Even the simplest of all stochastic processes, a random walker in a homogeneous and infinite medium, exhibits persistence, and the nontrivial persistence exponent $\theta$ has the value $1/2$ [@9]. In an experimental setup finite boundaries become important, and we have recently investigated the effect of finite boundaries on the survival probability of a random walker in a homogeneous system[@16]. It is then natural to ask how the survival probability behaves for a random walker in a heterogeneous system. Although random walks in spatially disordered media have already been studied [@17]-[@19], and in a few cases exact results are known for a similar quantity, the first-passage time[@20]-[@23], little is known about the persistence probability. We shall consider one such class of heterogeneous systems, known as composite media, which are often encountered in experimental science. A composite system essentially comprises segments of different homogeneous media which differ in their macroscopic properties, such as their diffusion coefficients. Redner has already investigated the first-passage properties of a diffusive process in such a composite system[@24]. He considers a linear chain of $N$ blocks, each of length $L_i$ and diffusivity $D_i$. The mean first-passage time in such a system is calculated to be $$\label{1}
\langle t \rangle =\frac{1}{2} \sum_{i=1}^{N} \frac{L_i^2}{D_i} + \sum_{i<j}^{N} \frac{L_i L_j}{D_j}.$$
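Eq. (1) is straightforward to transcribe; the sketch below is a direct implementation (the function name is assumed for illustration).

```python
def mean_first_passage_time(L, D):
    """Mean first-passage time, Eq. (1), for a linear chain of N blocks
    with lengths L[i] and diffusivities D[i], traversed from left to right."""
    n = len(L)
    direct = 0.5 * sum(L[i] ** 2 / D[i] for i in range(n))
    cross = sum(L[i] * L[j] / D[j] for i in range(n) for j in range(i + 1, n))
    return direct + cross
```

For a single homogeneous block this reduces to $L^2/2D$, and splitting a homogeneous block in two leaves the result unchanged.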
While the first-passage probability is simply the probability that the particle escapes from one of the boundaries, the persistence probability is different: it is defined as the probability that the random walker has not crossed the origin up to time $t$. If the system were homogeneous, the persistence probability of a random walker would simply be $p(t) \sim t^{-\theta}$ with $\theta=1/2$. A composite system is slightly different in the sense that, near the boundaries, the difference in diffusivities tends to bias the random walker.
The simplest of composite media that can be constructed is the one with two homogeneous segments with different diffusivities. We shall first derive the result for such a system and later generalize this result for different types of composite media.
Consider two homogeneous media of diffusivities $D_1$, henceforth called medium 1 and $D_2$, medium 2. A slab of medium 1 is placed between $-L$ and $+L$ and the rest of the space is filled with medium 2 as shown in Fig. 1. Since the diffusion coefficients are different, it follows that the stochastic noise correlator will be different and in particular they are $$\label{2}
\langle \eta_i(t_1) \eta_j(t_2) \rangle =2D_i \delta_{ij} \delta(t_1-t_2),$$ where $i,j$ are the medium indices $1$ and $2$ and $t_1>t_2$. For a random walker, the probability that the walker is at $(x,t)$, having started from $(x_0,0)$, obeys the diffusion equation in the two segments:
$$\begin{aligned}
\label{3a}
\frac{\partial P}{\partial t}&=&D_1 \frac{\partial^2 P}{\partial x^2} \quad \textrm{for $-L\leq x \leq L$},\\
\label{3b}
\frac{\partial P}{\partial t}&=&D_2 \frac{\partial^2 P}{\partial x^2} \quad \textrm{for $|x|>L$}.\end{aligned}$$
The exact dynamics of the problem can be solved by considering the Laplace transform of Eq.(\[3a\]) and Eq.(\[3b\]), in which case the solutions become
$$\begin{aligned}
\label{4}
P(x,s)&=&A_1 \exp\left(-\sqrt{s/D_1}\,|x|\right)+A_2 \exp\left(\sqrt{s/D_1}\,|x|\right) \quad \textrm{for $-L\leq x \leq L$},\\
\nonumber
P(x,s)&=&A_3 \exp\left(-\sqrt{s/D_2}\,|x|\right) \quad \textrm{for $|x|>L$}.\end{aligned}$$
The coefficients $A_1, A_2, A_3$ are found from the boundary conditions that the probability $P(x,t)$ and the current $-D\frac{\partial P}{\partial x}$ are continuous across the boundary; the third unknown coefficient is found from the normalization of the probability. The resulting expression, however, is complicated, and it is difficult to extract any information from it.
We, instead, take a different approach to derive our result. The equation of motion for the random walker is not changed in spite of the heterogeneity of the system and is simply
$$\begin{aligned}
\label{5}
\frac{\mathrm{d}x}{\mathrm{d}t}&=&\eta_1(t) \quad \textrm{for $-L\leq x \leq L$},\\
\nonumber
\frac{\mathrm{d}x}{\mathrm{d}t}&=&\eta_2(t) \quad \textrm{for $|x|>L$}.\end{aligned}$$
Let the time required for a random walker to reach the boundary $x=\pm L$ be $\tau$. We can then write down the solution of the equation of motion in the two different regions. For $-L<x<L$ the solution is simply $$\label{6}
x(t)=\int_{0}^{t} \mathrm{d} t' \eta_1(t'),$$ whereas for $|x|>L$, when the particle is in medium 2, the solution for $x(t)$ becomes $$\label{7}
x(t)= \int_{0}^{\tau} \mathrm{d} t' \eta_1(t') +\int_{\tau}^{t}
\mathrm{d} t' \eta_2(t').$$ Eq.(\[7\]) simply states the fact that the random walker has spent a time $\tau$ in medium 1 and the rest of the time in medium 2. Both Eq.(\[6\]) and Eq.(\[7\]) are valid when the walker is deep inside medium 1 or medium 2, since neither equation accounts for hopping across the boundary. Deep inside either medium, multiple boundary crossings are rare events, whereas near the boundary multiple crossings are frequent; it is due to these multiple-crossing events that the crossover is not sharp, and there will be two crossover time scales in the problem.
The correlation $\langle x(t_1) x(t_2) \rangle$ can now be worked out carefully. First consider the case $-L<x(t_1)<L$, $-L<x(t_2)<L$ and $t_1>t_2$. In this case the correlator becomes $$\label{8}
\langle x(t_1) x(t_2) \rangle=\int_{0}^{t_1}\mathrm{d}t'_1\int_{0}^{t_2}
\mathrm{d}t'_2 \quad \langle \eta_1(t'_1) \eta_1(t'_2) \rangle=2D_1 t_2,$$ where we have used Eq.(\[2\]) for the noise correlator. If, however, $|x(t_1)|>L$, $-L<x(t_2)<L$ and $t_1>t_2$, then we have $$\begin{aligned}
\label{9}
\nonumber
\langle x(t_1) x(t_2) \rangle = \langle\left[\int_{0}^{t_2} \mathrm{d} t'_2 \eta_1(t'_2) \right]\times \\
\left[ \int_{0}^{\tau} \mathrm{d} t'_1 \eta_1(t'_1) +\int_{\tau}^{t_1} \mathrm{d} t'_1 \eta_2(t'_1) \right]\rangle.\end{aligned}$$ Since $\langle \eta_1(t)\eta_2(t') \rangle=0$, the above expression simplifies to $$\label{10}
\langle x(t_1) x(t_2) \rangle =\int_{0}^{\tau}\mathrm{d} t'_1 \int_{0}^{t_2} \mathrm{d} t'_2 \langle \eta_1(t'_2) \eta_1(t'_1) \rangle.$$ As $\tau >t_2$, the correlation $\langle x(t_1) x(t_2) \rangle$ becomes $$\label{11}
\langle x(t_1) x(t_2) \rangle = 2D_1 t_2.$$ Finally, for $|x(t_1)|>L$, $|x(t_2)|>L$ and $t_1>t_2$, the correlator becomes $$\begin{aligned}
\label{12}
\nonumber
\langle x(t_1) x(t_2) \rangle =\langle\left[\int_{0}^{\tau} \mathrm{d} t'_1 \eta_1(t'_1) +\int_{\tau}^{t_1} \mathrm{d} t'_1 \eta_2(t'_1) \right]\times\\
\left[ \int_{0}^{\tau} \mathrm{d} t'_2 \eta_1(t'_2) +\int_{\tau}^{t_2} \mathrm{d} t'_2 \eta_2(t'_2)\right]\rangle\end{aligned}$$ Since the cross-correlation of the noise is zero, we arrive at $$\begin{aligned}
\label{13}
\nonumber
\langle x(t_1) x(t_2) \rangle = \int_{0}^{\tau} \mathrm{d}t'_1 \int_{0}^{\tau} \mathrm{d}t'_2 \langle \eta_1(t'_1) \eta_1(t'_2) \rangle\\
+ \int_{\tau}^{t_1} \mathrm{d} t'_1 \int_{\tau}^{t_2} \mathrm{d} t'_2 \langle\eta_2(t'_1) \eta_2(t'_2)\rangle\end{aligned}$$ The first term in Eq.(\[13\]) is easy to calculate and gives us $$\label{14}
\int_{0}^{\tau} \mathrm{d}t'_1 \int_{0}^{\tau} \mathrm{d}t'_2 \langle \eta_1(t'_1) \eta_1(t'_2)\rangle=2D_1\tau.$$ To evaluate the second term we make the transformation of variable $t''=t'-\tau$, and we have $$\label{15}
\int_{0}^{t_1-\tau} \mathrm{d} t''_1 \int_{0}^{t_2-\tau} \mathrm{d} t''_2 2D_2 \delta(t''_1-t''_2) = 2D_2 (t_2-\tau)$$ Hence, the complete correlator is $$\label{16}
\langle x(t_1)x(t_2)\rangle = 2(D_1-D_2) \tau+2D_2t_2$$
Of all the quantities in Eq.(\[16\]) the only unknown is $\tau$. Since Eq.(\[16\]) gives us noise averaged quantities we might as well replace $\tau$ by an average value, which is simply $\tau=\frac{L^2}{2D_1}$, the average time for a random walker to reach $x=\pm L$.
It is clear from Eq.(\[11\]) and Eq.(\[16\]) that there are two relevant time scales in the problem. The first one is $\tau_1=\left(\frac{D_1-D_2}{D_1}\right)\frac{L^2}{2D_1}$ whereas the second one is $\tau_2=\left(\frac{D_1-D_2}{D_1}\right)\frac{L^2}{2D_2}$. It is between these two time scales when the random walker undertakes multiple hoppings across the boundary, as a result of which the temporal regime $\tau_1<t<\tau_2$ gives the crossover region in the system.
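These two time scales can be transcribed directly; the sketch below (function name assumed for illustration) reproduces the theoretical entries $\tau_1^{th}$ and $\tau_2^{th}$ of Table A below.

```python
def crossover_times(L, D1, D2):
    """Crossover time scales tau_1 and tau_2 for the two-media system:
    tau_i = ((D1 - D2)/D1) * L^2 / (2 D_i)."""
    beta = (D1 - D2) / D1
    return beta * L**2 / (2 * D1), beta * L**2 / (2 * D2)

# parameters of the first row of Table A: L = 20, 2*D1 = 20, 2*D2 = 2.0
tau1, tau2 = crossover_times(20.0, 10.0, 1.0)
```

This gives $\tau_1=18$ and $\tau_2=180$, matching the theoretical values quoted in Table A.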
The complete correlator now takes the form $$\begin{aligned}
\label{17}
\nonumber
\langle x(t_1)x(t_2)\rangle &=& 2 D_1 t_2 \quad \textrm{for}\quad t_2<\tau_1 ,\\
\nonumber
&=& \left(\frac{D_1-D_2}{D_1}\right)L^2+2D_2 t_2 \quad \textrm{for}\quad
t_2>\tau_2,\\\end{aligned}$$
In the limit $D_1=D_2=D$ we get the correct result for a homogeneous medium.
We have performed a numerical simulation of the system for two different values of $L$ and two different sets of $D_1$ and $D_2$. Simulation results for the mean-square displacement $\langle x^2(t)\rangle$ are shown in Fig. 2 and Fig. 3. Averages over $10^4$ configurations have been taken for both systems.
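A minimal version of such a simulation can be sketched as follows. It uses a naive Euler scheme in which the step variance is set by the local diffusivity; near the interface this ignores the subtleties of discretizing a discontinuous diffusivity, which is adequate here because the crossover regime is broad. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_msd(L, D1, D2, n_steps, n_walkers, dt=1.0):
    """Ensemble-averaged <x^2(t)> for walkers started at x = 0, with
    diffusivity D1 inside the slab |x| < L and D2 outside."""
    x = np.zeros(n_walkers)
    msd = np.empty(n_steps)
    for t in range(n_steps):
        D = np.where(np.abs(x) < L, D1, D2)            # local diffusivity
        x += rng.normal(0.0, 1.0, n_walkers) * np.sqrt(2.0 * D * dt)
        msd[t] = np.mean(x**2)
    return msd
```

In the homogeneous limit $D_1=D_2=D$ this reproduces $\langle x^2\rangle = 2Dt$, and for $D_1>D_2$ the slope drops from $2D_1$ to $2D_2$ across the crossover window, as in Eq.(\[17\]).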
To calculate the survival probability in the two regimes we use Eq.(\[17\]) and, with suitable transformations both in $x$ and $t$, convert the process to a Gaussian stationary process. Define a normalized variable $\bar{X}(t)=x(t)/\sqrt{\langle x^2(t) \rangle}$. The correlator in this normalized variable, $\langle \bar{X}(t_1)\bar{X}(t_2)\rangle$, is then given by $$\begin{aligned}
\label{18}
\nonumber
\langle \bar{X}(t_1)\bar{X}(t_2)\rangle &=& \sqrt{\frac{t_2}{t_1}} \quad \textrm{for}\quad t_2<\tau_1 \\
\nonumber
&=&\sqrt{\frac{\beta L^2+2D_2 t_2}{\beta L^2+2D_2 t_1}}\quad \textrm{for}\quad t_2>\tau_2 ,\\\end{aligned}$$ with $\beta=\frac{D_1-D_2}{D_1}$. For $t<\tau_1$ we define the usual transformation in time, $T=\ln t$ and the two time correlation function in the new time variable becomes $$\label{19}
\langle \bar{X}(T_1) \bar{X}(T_2)\rangle = e^{-1/2(T_1-T_2)},$$ and the survival probability $p(t)$ for this temporal regime, in real time, is then $$\label{20}
p(t)\sim t^{-1/2}.$$
For $t>\tau_2$ we define a new time variable $T$ as $$\label{21}
e^{T}=\beta L^2+2D_2 t.$$ The correlation function $\langle \bar{X}(T_1) \bar{X}(T_2) \rangle$ takes the form of Eq(\[19\]), except that the time transformations are different. Since the process is a Gaussian stationary process and the correlator is exponentially decaying, the survival probability in the new time variable, $P(T)$, is $$\label{22}
P(T)=e^{-T/2}.$$ In real time the survival probability $p(t)$ takes the form $$\label{23}
p(t)\sim \frac{1}{\sqrt{\beta L^2+2D_2 t}}.$$
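The predicted $t^{-1/2}$ decay and its crossover can be checked directly by measuring the fraction of walkers whose displacement has not changed sign. The sketch below (names illustrative) uses the same naive discretization of the Langevin dynamics as before; the first step fixes the reference sign.

```python
import numpy as np

rng = np.random.default_rng(1)

def survival_probability(L, D1, D2, n_steps, n_walkers, dt=1.0):
    """Fraction of walkers that have not crossed the origin after each step;
    diffusivity D1 inside |x| < L, D2 outside."""
    x = rng.normal(0.0, np.sqrt(2.0 * D1 * dt), n_walkers)  # first step
    sign0 = np.sign(x)
    alive = np.ones(n_walkers, dtype=bool)
    p = np.empty(n_steps)
    for t in range(n_steps):
        D = np.where(np.abs(x) < L, D1, D2)
        x += rng.normal(0.0, 1.0, n_walkers) * np.sqrt(2.0 * D * dt)
        alive &= np.sign(x) == sign0   # once crossed, a walker stays "dead"
        p[t] = alive.mean()
    return p
```

In the homogeneous limit the measured $p(t)$ falls on the $t^{-1/2}$ line, Eq.(\[20\]); with $D_1\ne D_2$ it bends over between $\tau_1$ and $\tau_2$ towards the form of Eq.(\[23\]).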
A plot of the survival probability, Eq.(\[20\]) and Eq.(\[23\]), for the two time regimes is shown in Fig. 4 and Fig. 5. The crossover time scales $\tau_1$, $\tau_2$ and the crossover regimes are also indicated in the figures. Averages over $10^6$ configurations have been taken to obtain the numerical results of Fig. 4 and Fig. 5. Theoretical and numerical values of $\tau_1$ and $\tau_2$ are presented in Table A for two different sets of values of $L$, $D_1$ and $D_2$.
**Table A**

  Parameter Values                  $\tau_1^{th}$   $\tau_1^{nu}$   $\tau_2^{th}$   $\tau_2^{nu}$
  --------------------------------- --------------- --------------- --------------- ---------------
  $L=20$, $2D_1=20$, $2D_2=2.0$     18              17.161          180             179.988
  $L=30$, $2D_1=30$, $2D_2=3.0$     27              27.228          270             271.899
In this section we consider a composite system that is made of three homogeneous media with diffusivities $D_1$, $D_2$ and $D_3$. The medium with diffusivity $D_1$ is placed between $\pm L_1$ while the second medium with diffusivity $D_2$ is placed symmetrically between $-(L_2+L_1)<x<-L_1$ and $L_1<x<(L_1+L_2)$ as shown in the figure.
For a random walker in region I, the average time to reach the boundary $\pm L_1$ is $\tau=\frac{L_1^2}{2D_1}$. When the random walker is in region II, the average time to cross a region of length $L_2$ is $\tau'=\frac{L_2^2}{2D_2}$. Thus, for $t<\tau$ the particle spends its time in region I, for $\tau<t<\tau+\tau'$ the walker is in region II, while for $t>\tau+\tau'$ the walker escapes to region III. The equations of motion in all three regions are $$\label{25}
\frac{\mathrm{d}x}{\mathrm{d}t}=\eta_i(t) \quad \textrm{with $i=1,2,3$ for the three media,}$$ with the noise correlator $$\label{26}
\langle \eta_i(t) \eta_j(t') \rangle =2 D_i \delta_{ij} \delta(t-t'),$$ where $i,j$ are the medium indices running from $1$ to $3$.
The solutions to Eq(\[25\]) for the three regions are respectively $$\label{27}
x(t)=\int_0^t \mathrm{d}t'\eta_1(t')\quad \textrm{for $-L_1<x(t)<L_1$}\\$$
$$\label{28}
x(t)=\int_0^{\tau} \mathrm{d}t'\eta_1(t')
+\int_{\tau}^t \mathrm{d}t'\eta_2(t')$$
for $L_1<x(t)<L_2$ and $-L_2<x(t)<-L_1$,
$$\begin{aligned}
\label{29}
\nonumber
x(t)=\int_0^{\tau} \mathrm{d}t'\eta_1(t')
+\int_{\tau}^{\tau+\tau'}\mathrm{d}t'\eta_2(t')+\int_{\tau+\tau'}^t \mathrm{d} t'
\eta_3(t')\\
\nonumber
\textrm{for $x(t)>L_2$ and $x(t)<-L_2$}.\\\end{aligned}$$
The two-time correlation function $\langle x(t_1) x(t_2) \rangle$ can be worked out carefully; for both $x(t_1)$ and $x(t_2)$ lying in region I, with $t_1>t_2$, it is $$\label{30}
\langle x(t_1) x(t_2) \rangle =2 D_1 t_2$$
For $x(t_1)$ and $x(t_2)$ lying in region II, $\langle x(t_1) x(t_2) \rangle$ takes the form $$\label{31}
\langle x(t_1) x(t_2) \rangle =\beta L_1^2 +2D_2 t_2$$
while for $x(t_1)$ and $x(t_2)$ lying in region III, using the fact that the cross-correlations of the noise are zero, the correlator becomes $$\begin{aligned}
\label{32}
\nonumber
\langle x(t_1) x(t_2) \rangle=\int_{0}^{\tau} \mathrm{d}t'_1 \int_{0}^{\tau} \mathrm{d}t'_2 \langle \eta_1(t'_1) \eta_1(t'_2) \rangle\\
\nonumber
+ \int_{\tau}^{\tau+\tau'} \mathrm{d} t'_1 \int_{\tau}^{\tau+\tau'} \mathrm{d} t'_2 \langle\eta_2(t'_1) \eta_2(t'_2)\rangle \\
+\int_{\tau+\tau'}^{t_1} \mathrm{d} t'_1 \int_{\tau+\tau'}^{t_2} \mathrm{d} t'_2 \langle\eta_3(t'_1) \eta_3(t'_2)\rangle.\end{aligned}$$
The first integral is simply $2D_1\tau$. The second integral is performed by making use of the transformation $t''=t'-\tau$ and the integral reduces to $$\nonumber
\int_{0}^{\tau'}\mathrm{d} t'_1 \int_{0}^{\tau'} \mathrm{d} t'_2 \langle\eta_2(t'_1) \eta_2(t'_2)\rangle =2D_2\tau'$$
while for the third integral we use the transformation $t''=t'-(\tau+\tau')$ and the integral is evaluated to be $2D_3 (t_2-\tau-\tau')$. Hence the correlator becomes $$\begin{aligned}
\label{33}
\nonumber
\langle x(t_1) x(t_2) \rangle&=&2D_1 \tau +2D_2 \tau' +2D_3 (t_2-\tau-\tau')\\
\nonumber
&=& \beta_1 L_1^2+\beta_2 L_2^2 +2D_3 t_2 ,\\\end{aligned}$$ with $\beta_1=(D_1-D_3)/D_1$ and $\beta_2=(D_2-D_3)/D_2$. It is clear from Eq.(\[30\]), Eq.(\[31\]) and Eq.(\[33\]) that there are four relevant time scales in the problem. The first one is obviously $\tau_1=\beta \frac{L_1^2}{2D_1}$. The second one is $\tau_2=\beta \frac{L_1^2}{2D_2}$. The temporal regime $\tau_1<t<\tau_2$ represents the crossover from region I to region II, when the walker feels the effect of the inhomogeneity. Similarly, the third time scale is $\tau_3=\frac{1}{2D_2}(\beta_1 L_1^2+\beta_2 L_2^2)$ and the fourth is $\tau_4=\frac{1}{2D_3}(\beta_1 L_1^2+\beta_2 L_2^2)$. $\tau_3<t<\tau_4$ is the crossover regime from region II to region III, and it is during this time that the walker spends most of its time near the boundary between region II and region III. Thus the proper time scales for which Eq.(\[30\]), Eq.(\[31\]) and Eq.(\[33\]) are valid are, respectively, $0<t<\tau_1$, $\tau_2<t<\tau_3$ and $t>\tau_4$, while the time intervals $\tau_1<t<\tau_2$ and $\tau_3<t<\tau_4$ represent the two crossover regimes.
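A direct transcription of these four time scales (the function name is illustrative, and $L_2$ is taken as the width of region II as in the definitions above):

```python
def three_media_timescales(L1, L2, D1, D2, D3):
    """tau_1 ... tau_4 for the three-media composite, with
    beta = (D1-D2)/D1, beta1 = (D1-D3)/D1, beta2 = (D2-D3)/D2."""
    beta = (D1 - D2) / D1
    beta1 = (D1 - D3) / D1
    beta2 = (D2 - D3) / D2
    tau1 = beta * L1**2 / (2 * D1)
    tau2 = beta * L1**2 / (2 * D2)
    tau3 = (beta1 * L1**2 + beta2 * L2**2) / (2 * D2)
    tau4 = (beta1 * L1**2 + beta2 * L2**2) / (2 * D3)
    return tau1, tau2, tau3, tau4
```

With $L_1=30$, $2D_1=100$, $2D_2=10$ this gives $\tau_1=8.1$ and $\tau_2=81$, the theoretical values quoted in Table D; for $D_1>D_2>D_3$ the four scales are ordered $\tau_1<\tau_2<\tau_3<\tau_4$.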
The mean square displacement $\langle x^2(t) \rangle$ is then $$\begin{aligned}
\label{34}
\nonumber
\langle x^2(t) \rangle&=&2D_1t \quad\textrm{for $0<t<\tau_1$}\\
\nonumber
&=& \beta L_1^2 + 2D_2 t \quad \textrm{for $\tau_2<t<\tau_3$}\\
\nonumber
&=& \beta_1 L_1^2+\beta_2 L_2^2+2D_3 t \\
\quad \textrm{for $t>\tau_4$}.\end{aligned}$$
Note that for $D_2=D_3$ we recover the first case, that is a composite media with two homogeneous components while for $D_1=D_2=D_3=D$ we recover the case for a homogeneous system.
To obtain the survival probability from Eq.(\[30\]), Eq.(\[31\]) and Eq.(\[33\]) we follow the usual procedure of defining suitable transformations in space and time as in the earlier section. Thus, we define a normalized variable $\bar{X}(t)$ as $\bar{X}(t)=\frac{x(t)}{\sqrt{\langle x^2(t) \rangle}}$ and the correlator in the normalized variable becomes $$\begin{aligned}
\label{35}
\nonumber
\langle \bar{X}(t_1)\bar{X}(t_2)\rangle &=&\sqrt{\frac{t_2}{t_1}} \quad \textrm{for $0<t<\tau_1$}\\
\nonumber
&=& \sqrt{\frac{\beta L_1^2+2D_2 t_2}{\beta L_1^2+2D_2 t_1}}\quad \textrm{for $\tau_2<t<\tau_3$}\\
\nonumber
&=& \sqrt{\frac{\beta_1 L_1^2+\beta_2 L_2^2+2D_3 t_2}{\beta_1 L_1^2+\beta_2 L_2^2+2D_3 t_1}} \quad \textrm{for $t>\tau_4$} \\\end{aligned}$$
The time transformation $t\rightarrow T$ for the three different regimes are defined in the following way $$\begin{aligned}
\label{36}
\nonumber
e^{T} &=& t \quad \textrm{for $0<t<\tau_1$} \\
\nonumber
&=& \beta L_1^2 + 2D_2 t \quad \textrm{for $\tau_2<t<\tau_3$}\\
\nonumber
&=& \beta_1 L_1^2+\beta_2 L_2^2 + 2D_3 t \quad \textrm{for $t>\tau_4$}\\\end{aligned}$$
and the correlator $\langle \bar{X}(T_1) \bar{X}(T_2) \rangle$ becomes $$\begin{aligned}
\label{37}
\langle \bar{X}(T_1) \bar{X}(T_2)\rangle = e^{-\frac{1}{2}(T_1-T_2)}\end{aligned}$$ for all three temporal regimes, the difference being that the time transformations differ from regime to regime. The process is now a Gaussian stationary process.
Since the correlator is exponentially decaying, the survival probability in the transformed time variable is simply $$\label{38}
P(T)=e^{-T/2}.$$ In real time, using Eq.(\[38\]), the survival probability becomes $$\begin{aligned}
\label{39}
\nonumber
p(t)&\sim&t^{-1/2} \quad \textrm{for $0<t<\tau_1$}\\
\nonumber
&\sim& \frac{1}{\sqrt{\beta L_1^2+2D_2t}} \quad \textrm{for $\tau_2<t<\tau_3$}\\
&\sim& \frac{1}{\sqrt{\beta_1 L_1^2+\beta_2 L_2^2+2D_3t}} \\
\nonumber
\quad \textrm{for $t>\tau_4$}.\end{aligned}$$
![Plot of survival probability $p(t)$ vs time in log-log scale for $L_1=30$, $L_2=210$, $2D_1=100$, $2D_2=10$ and $2D_3=5$. The crossover time scales and the crossover regimes are indicated in the figure. The circular points are actual data from numerical simulation and the solid lines are fits of Eq.(\[39\]).](fig8.EPS){height="7cm" width="8cm"}
A plot of the survival probability for two different sets of values of $L_1$, $L_2$, $D_1$, $D_2$ and $D_3$ is shown in Fig. 5 and Fig. 6. Averages over $10^6$ configurations have been taken to obtain the numerical results of Fig. 5 and Fig. 6. Theoretical and numerical values of the time scales are presented in Table D.
**Table D**
[|c|c|]{} Parameter Values & Time Scales\
$L_1=30$ & $\tau_1^{th}=8.1$, $\tau_1^{nu}=8.68$\
$L_2=210$& $\tau_2^{th}=81$, $\tau_2^{nu}=80.4393$\
$2D_1=100$& $\tau_3^{th}=1705.5$, $\tau_3^{nu}=1702.58$\
$2D_2=10$, $2D_3=5$ & $\tau_4^{th}=3411$, $\tau_4^{nu}=3421.27$\
$L_1=60$ & $\tau_1^{th}=16.2$, $\tau_1^{nu}=16.01$\
$L_2=360$& $\tau_2^{th}=162$, $\tau_2^{nu}=162.096$\
$2D_1=200$& $\tau_3^{th}=4228.2$, $\tau_3^{nu}=4245$\
$2D_2=20$, $2D_3=2$ & $\tau_4^{th}=42282$, $\tau_4^{nu}=42329.8$\
To conclude, we have investigated the phenomenon of persistence for a random walker in a composite medium with two or three homogeneous components. We have presented a simplified theory to explain the survival probability of a random walker in such inhomogeneous systems. For the two-component system, analytical results show that there are two relevant time scales in the problem, and the interval between them is the crossover regime. Similarly, for the three-component system there are four relevant time scales and two crossover regimes. The crossover regimes are not sharp because of the multiple hops that a random walker undergoes near the boundaries.
[^1]: email: tpdc2@mahendra.iacs.res.in
---
abstract: 'This paper considers a semiparametric generalized autoregressive conditional heteroscedastic (S-GARCH) model. For this model, we first estimate the time-varying long run component by a kernel estimator, and then estimate the non-time-varying parameters in the short run component by the quasi maximum likelihood estimator (QMLE). We show that the QMLE is asymptotically normal with the parametric convergence rate. Next, we provide a consistent Bayesian information criterion for order selection. Furthermore, we construct a Lagrange multiplier test for linear parameter constraints and a portmanteau test for model checking, and obtain their asymptotic null distributions. Our entire statistical inference procedure works for non-stationary data and has two important features: first, our QMLE and two tests are adaptive to the unknown form of the long run component; second, our QMLE and two tests share the same efficiency and testing power as those of the variance target method when the S-GARCH model is stationary.'
author:
-
-
-
title: Adaptive inference for a semiparametric generalized autoregressive conditional heteroscedastic model
---
Introduction
============
Since the seminal work of @Engle:1982 and @Bollerslev:1986, the generalized autoregressive conditional heteroscedastic (GARCH) model is perhaps the most influential one to capture and forecast the volatility of economic and financial return data. However, the GARCH model is often used under the stationarity assumption. Due to business cycle, technological progress, preference change and policy switch, the underlying structure of data may change over time (see @Hansen:2001). Hence, a non-stationary GARCH model with time-varying parameters seems more appropriate to fit the return data in applications; see, for example, @MS:2004, @SG:2005, @ER:2008, @FSR:2008, @PR:2014, @Truquet:2017 and the references therein.
In this paper, we consider a semiparametric GARCH (S-GARCH) model of order $(p, q)$:
$$\begin{aligned}
\label{semi_model}
y_t=&\sqrt{\tau_t}\,u_t,\quad \tau_t=\tau(t/T),\\
\label{garch_model}
u_t=&\sqrt{g_t}\,\eta_t,\quad g_t=\omega_0+\sum_{i=1}^{q}\alpha_{i0} u_{t-i}^2+\sum_{j=1}^{p}\beta_{j0} g_{t-j},\end{aligned}$$
for $t=1,..., T$, where $\tau(x)$ is a positive smooth deterministic function with unknown form on the interval $[0,1]$, $u_t$ is a covariance stationary GARCH$(p, q)$ process with $\omega_0>0$, $\alpha_{i0}\geq0$ and $\beta_{j0}\geq 0$, and $\{\eta_t\}$ is a sequence of independent and identically distributed (i.i.d.) random variables with $E\eta_t^2=1$. The specification that $\tau_t$ is a function of the ratio $t/T$ rather than of time $t$ was initiated by @Robinson:1989, and since then, it has become a common scaling scheme in the time series literature; see, for example, @DR:2006, @CT:2007, @XP:2008, @ZW:2009, @ZhangW:2012, @ZS:2013, and @Zhu:2019 to name just a few. In (\[semi\_model\])–(\[garch\_model\]), the smooth long run component $\tau_t$ depicts the time-varying parameters, and the GARCH-type short run component $u_t$ captures the temporal dependence.
By using different specified forms of $\tau(x)$, the S-GARCH model nests many often used models, including, for example, the standard GARCH model in @Bollerslev:1986, the spline-GARCH model in @ER:2008, and the smooth-transition GARCH model in @AT:2013. The statistical inference for these models has been well studied. However, when the form of $\tau(x)$ is unspecified, statistical inference for the S-GARCH model has received much less attention. For $p=q=1$, @HL:2010 considered estimation of the S-GARCH model. For $p=0$ (that is, $\beta_{j0}\equiv0$), @PR:2014 constructed a score test to check the nullity of all $\alpha_{i0}$, and @Truquet:2017 later proposed a projection-based estimation and a related Wald test to detect the nullity of some of the $\alpha_{i0}$. For the general S-GARCH model, statistical inference methodologies, including estimation, testing and model checking, are not available in the literature.
In this paper, we provide an entire inference procedure for the S-GARCH model to fill this gap. First, we give a two-step estimation for the model: the function $\tau(x)$ is estimated by the kernel estimator at step one, and the unknown parameter vector in the parametric process $u_t$ is estimated by the quasi maximum likelihood estimator (QMLE) at step two. Although the non-parametric estimator at step one has a slower convergence rate, we show that the QMLE at step two is asymptotically normal with a parametric convergence rate. Moreover, we consider the Bayesian information criterion (BIC) for order selection, construct a new Lagrange multiplier (LM) test for detecting the linear parameter constraint, and propose a new portmanteau test for model checking. The consistency of the BIC and the asymptotic null distributions of the LM and portmanteau tests are established. Since our entire inference methodology allows for an unspecified form of $\tau(x)$ and a higher order $(p, q)$, it avoids potential model-misspecification, leading to a broad application scope for handling non-stationary data.
Our two-step estimation was previously adopted by @HL:2010 to study the S-GARCH($1, 1$) model. Unlike @HL:2010, we find a much simpler expression for the asymptotic variance of the QMLE, making the related inference methodologies easy to implement. Meanwhile, we find that the asymptotic variance of the QMLE is adaptive to the unknown form of $\tau(x)$. Consequently, the efficiency of the QMLE and the power of its related LM and portmanteau tests are invariant to the form of $\tau(x)$. Our two-step estimation also shares a similar idea with the variance target (VT) estimation in @FHZ:2011, which is only applicable for the stationary S-GARCH model (that is, $\tau(x)\equiv \tau_0$). The difference is that our first step estimator of $\tau(x)$ is non-parametric, while the first step estimator of $\tau_0$ in the VT method is parametric. It turns out that our method requires more involved proof techniques. Interestingly, when the S-GARCH model is stationary, our QMLE is asymptotically as efficient as the QMLE in the second step of the VT method, although the first step estimator of our method has a slower convergence rate than that of the VT method. On the contrary, when the S-GARCH model is non-stationary, our QMLE is still valid with the same efficiency as in the stationary case due to its adaptation feature, while the QMLE in the VT method is no longer applicable.
The remainder of the paper is organized as follows. Section 2 presents the two-step estimation procedure, establishes its related asymptotics, and studies the order selection. Section 3 gives an LM test for the linear parameter constraint. Section 4 introduces a portmanteau test and obtains its limiting null distribution. Section 5 makes a comparison with other estimation methods. Simulation results are reported in Section 6, and applications are given in Section 7. Concluding remarks are offered in Section 8. Proofs of all theorems are relegated to the Appendix.
Two-step estimation
===================
Let $\theta=(\alpha_{1},...,\alpha_{q},\beta_{1},...,\beta_{p})'\in\Theta$ be the parameter vector in model (\[garch\_model\]), and $\theta_0=(\alpha_{10},...,\alpha_{q0},\beta_{10},...,\beta_{p0})'\in\Theta$ be its true value, where $\Theta\subset \mathbb{R}_+^{p+q}$ is the parameter space, and $\mathbb{R}_+=(0,\infty)$. This section gives a two-step estimation procedure for the S-GARCH model in (\[semi\_model\])–(\[garch\_model\]). Our procedure first estimates the nonparametric function $\tau(x)$ in (\[semi\_model\]), and then estimates the parameter vector $\theta_0$ in (\[garch\_model\]).
Estimation of $\tau(x)$
-----------------------
This subsection provides a (Nadaraya-Watson) kernel estimator of $\tau(x)$. To accomplish it, we first need the following assumption for the identification of $\tau_t$:
\[ident\_tau\] $(\mathrm{i})$ $\sum_{i=1}^{q}\alpha_i+\sum_{j=1}^{p}\beta_j<1$; $(\mathrm{ii})$ $\omega=1-\sum_{i=1}^{q}\alpha_i-\sum_{j=1}^{p}\beta_j$.
Assumption \[ident\_tau\](i) is equivalent to the covariance stationarity of model (\[garch\_model\]), and Assumption \[ident\_tau\](ii) is to ensure $Eu_t^2=1$. Under Assumption \[ident\_tau\], we have that $Ey_t^2=\tau_t(Eu_{t}^2)=\tau_t$, from which it is reasonable to estimate $\tau(x)$ by
$$\widetilde{\tau}(x)=\frac{\frac{1}{T}\sum_{t=1}^{T}K_h(x-t/T)\,y_t^2}{\frac{1}{T}\sum_{t=1}^{T}K_h(x-t/T)},$$
where $K_h(\cdot)=K(\cdot/h)/h$ with $K(\cdot)$ being a kernel function and $h$ being a bandwidth. Furthermore, since $(1/T)\sum_{s=1}^{T}K_h(x-s/T)=1+O(1/(Th))$ under mild conditions, it is more convenient to estimate $\tau(x)$ by
$$\label{est_tau}
\widehat{\tau}(x)=\frac{1}{T}\sum_{t=1}^{T}K_h(x-t/T)\,y_t^2.$$
To obtain the asymptotic distribution of $\widehat{\tau}(x)$, the following three assumptions are imposed:
\[ass\_tau\] $(\mathrm{i})$ $\tau:[0,1]\to \mathbb{R}_{+}$ is twice continuously differentiable; $(\mathrm{ii})$ $0<\underline{\tau}\leq \inf_{x\in[0,1]}\tau(x)\leq \sup_{x\in[0,1]}\tau(x)\leq \overline{\tau}$, where $\underline{\tau}$ and $\overline{\tau}$ are two positive constants.
\[ass\_kernel\] $(\mathrm{i})$ $K:[-1,1]\to \mathbb{R}_+$ is symmetric about zero, bounded and Lipschitz continuous with $\int_{-1}^{1}K(x)dx=1$ and $C_r=\int_{-1}^{1}x^rK(x)dx$; $(\mathrm{ii})$ $h\to0$ and $Th\to\infty$ as $T\to \infty$.
\[ass\_ut\] $Eu_{t}^4<\infty$.
Assumption \[ass\_tau\](i) imposes a smoothness condition on $\tau(x)$, and similar conditions have been used in @DR:2006, @HL:2010, and @CH:2016. Assumption \[ass\_tau\](ii) is in line with the condition that the intercept term in the GARCH model has positive lower and upper bounds. Assumption \[ass\_kernel\](i) holds for many often used kernels, and the bounded support condition on $K(x)$ is just to simplify analysis. Assumption \[ass\_kernel\](ii) requires that $h$ converges to zero at a slower rate than $T^{-1}$, and later a more restrictive $h$ is needed for the asymptotics of the estimator of $\theta_0$. Assumption \[ass\_ut\] is stronger than Assumption \[ident\_tau\](i), and it is used to ensure the asymptotic variance of $\widehat{\tau}(x)$ is well defined.
Let $z_t=u_t^2-1$. The asymptotic normality of $\widehat{\tau}(x)$ is given below:
\[thm\_kernel\] Suppose Assumptions \[ident\_tau\]–\[ass\_ut\] hold. Then, for any $x \in (0,1)$, $$\sqrt{Th}\big(\widehat{\tau}(x)-\tau(x)-h^2b(x)\big)\to_{\mathcal{L}}N(0,V(x))\mbox{ as }T\to\infty,$$ where $$b(x)=\frac{C_2}{2}\frac{\partial^2 \tau(x)}{\partial x^2}\mbox{ and }V(x)=\tau^2(x)\Big\{\int_{-1}^{1} K^2(u)du\Big\}\sum_{j=-\infty}^{\infty}E(z_tz_{t-j}).$$
Based on $\widehat{\tau}(x)$ in (\[est\_tau\]), we estimate $\tau_t$ by $\widehat{\tau}_t=\widehat{\tau}(t/T)$. In practice, $\widehat{\tau}_t$ may suffer from the boundary problem. To overcome it, we follow @CH:2016 in adopting the reflection method proposed by @HW:1991. That is, we generate pseudo data $y_t=y_{-t}$ for $-[Th]\leq t\leq -1$ and $y_t=y_{2T-t}$ for $T+1\leq t\leq T+[Th]$, and then modify $\widehat{\tau}_t$ as
$$\label{boundary_tau}
\widehat{\tau}_t=\frac{1}{T}\sum_{s=t-[Th]}^{t+[Th]}K_h\Big(\frac{t-s}{T}\Big)y_s^2.$$
Intuitively, the reflection method makes the boundary points behave similarly to the interior ones. As in @CH:2016, it can be seen that the reflection method gives a bias term of order $O(h^2)$, and hence it does not affect the asymptotics of the estimator of $\theta_0$. Although $\widehat{\tau}_t$ in (\[boundary\_tau\]) is used for numerical calculations, our proofs will be based on $\widehat{\tau}_t=\widehat{\tau}(t/T)$ in the sequel to ease the presentation.
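As a concrete illustration, the boundary-corrected kernel estimator can be sketched as follows. The Epanechnikov kernel and the normalized (ratio) form are our choices here; the ratio form differs from the unnormalized estimator only by a factor of $1+O(1/(Th))$.

```python
import numpy as np

def epanechnikov(u):
    # a common kernel satisfying the symmetry/support conditions of Assumption 3
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def kernel_tau(y, h):
    """Kernel estimate of tau_t = tau(t/T) from y_t^2, with the reflection
    method at both boundaries (pseudo data mirrored over each end).  A sketch,
    using the normalized (ratio) form of the estimator."""
    T = len(y)
    m = int(T * h)                                     # [Th]
    y2 = np.concatenate([y[m:0:-1], y, y[-2:-m - 2:-1]]) ** 2
    tau = np.empty(T)
    for t in range(T):
        s = np.arange(t - m, t + m + 1)                # window t-[Th], ..., t+[Th]
        w = epanechnikov((t - s) / (T * h))            # K((t-s)/(Th))
        tau[t] = np.sum(w * y2[s + m]) / np.sum(w)     # ratio (Nadaraya-Watson) form
    return tau
```

For constant $\tau$, the estimate fluctuates around the true variance level across the whole sample, including the boundary region.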
Estimation of $\theta_0$
------------------------
This subsection considers the quasi maximum likelihood estimator (QMLE) of $\theta_0$. Based on Assumption \[ident\_tau\](ii), we write the parametric $g_t$ in (\[garch\_model\]) as $$\label{ideal_g}
g_t(\theta)=\Big(1-\sum_{i=1}^{q}\alpha_i-\sum_{j=1}^{p}\beta_j\Big)+\sum_{i=1}^{q}\alpha_iu_{t-i}^2+\sum_{j=1}^{p}\beta_jg_{t-j}(\theta).$$ By assuming that $\eta_{t}\sim N(0, 1)$, the log-likelihood function (multiplied by negative two and ignoring constants) of $\{y_t\}$ is $$\label{ideal_llf}
L_T(\theta)=\sum_{t=1}^{T}l_t(\theta)\quad \text{with}\quad l_t(\theta)=\frac{u_t^2}{g_t(\theta)}+\log g_t(\theta).$$ However, $L_T(\theta)$ is infeasible for computation, since $\{u_t\}$ are unobservable. Therefore, we have to replace $\{u_t\}$ by $\{\widehat{u}_{t}\}$, and consider the following feasible log-likelihood function: $$\label{f_mle}
\widehat{L}_T(\theta)=\sum_{t=1}^{T}\widehat{l}_t(\theta)\quad \text{with}\quad \widehat{l}_t(\theta)=\frac{\widehat{u}_t^2}{\widehat{g}_t(\theta)}+\log\widehat{g}_t(\theta),$$ where $\widehat{u}_{t}=y_{t}/\sqrt{\widehat{\tau}_{t}}$, and $\widehat{g}_t(\theta)$ is computed recursively by $$\label{hatg}
\widehat{g}_t(\theta)=\Big(1-\sum_{i=1}^{q}\alpha_i-\sum_{j=1}^{p}\beta_j\Big)
+\sum_{i=1}^{q}\alpha_i\widehat{u}_{t-i}^2+\sum_{j=1}^{p}\beta_j\widehat{g}_{t-j}(\theta)$$ with given constant initial values $$\widehat{u}_0=u_0,..., \widehat{u}_{1-q}=u_{1-q}, \widehat{g}_0(\theta)=g_0,..., \widehat{g}_{1-p}(\theta)=g_{1-p}.$$
Based on $\widehat{L}_T(\theta)$ in (\[f\_mle\]), our QMLE of $\theta_0$ is defined as $$\widehat{\theta}_T=\arg\min_{\theta\in\Theta}\widehat{L}_T(\theta).$$ To establish the asymptotics of $\widehat{\theta}_{T}$, the following additional assumptions are imposed:
\[ass\_garch\] $(\mathrm{i})$ $\Theta$ is compact; $(\mathrm{ii})$ $\theta_0$ is an interior point of $\Theta$; $(\mathrm{iii})$ if $p>0$, the polynomials $\sum_{i=1}^{q}\alpha_{i0}z^i$ and $1-\sum_{j=1}^{p}\beta_{j0}z^j$ have no common root.
\[ass\_u\] $E|u_t|^{4(1+\delta_0)}<\infty$ for some $\delta_0>0$.
\[ass\_eta\] $(\mathrm{i})$ $\eta_t$ has a continuous and almost surely positive density on $\mathbb{R}$ with $E\eta_t^2=1$; $(\mathrm{ii})$ $E|\eta_t|^{4+4/\delta_0+\delta_1}<\infty$ for some $\delta_1>0$, where $\delta_0>0$ is defined as in Assumption \[ass\_u\].
\[ass\_bandwidth\] $h=c_hT^{-\lambda_h}$ for some $1/4<\lambda_h<1/2$ and $0<c_h<\infty$.
We offer some remarks on the aforementioned assumptions. Assumption \[ass\_garch\] is standard, and it has been used by @HK:2003 and @FZ:2004 to study the QMLE for the stationary GARCH model. Assumption \[ass\_u\] is stronger than Assumption \[ass\_ut\]; it is needed for the variance target estimator in @FHZ:2011 but not for the QMLE in @FZ:2004. Assumption \[ass\_eta\](i) gives the identification condition for $\theta_0$ based on the QMLE, and ensures that the GARCH process $u_t$ is $\beta$-mixing (see @CC:2002). Assumption \[ass\_eta\](ii) is stronger than the condition that $E\eta_t^4<\infty$, which is necessary to derive the asymptotic normality of the QMLE for the stationary GARCH model (see @HY:2003). We resort to the stronger conditions on $u_t$ and $\eta_t$ in Assumptions \[ass\_u\] and \[ass\_eta\](ii) due to the existence of $\tau(x)$ in the S-GARCH model. Note that if $\eta_t$ has a light tail (for example, $\eta_t\sim N(0, 1)$), Assumption \[ass\_eta\](ii) holds for a small value of $\delta_0$, and $u_t$ (or the data $y_t$) in Assumption \[ass\_u\] is thus allowed to be heavy-tailed. Assumption \[ass\_bandwidth\] requires a more restrictive condition on the bandwidth $h$ than Assumption \[ass\_kernel\](ii), and similar conditions have been adopted by @HL:2010, @PR:2014, and @Truquet:2017. The reason is that an undersmoothing $h$ is needed to make the estimation bias from $\widehat{\tau}_t$ negligible so that the $\sqrt{T}$-convergence of $\widehat{\theta}_{T}$ holds.
Denote $\kappa=E\eta_{t}^{4}$, $g_t=g_{t}(\theta_0)$, $\psi_t=\psi_t(\theta_0)$ with $\psi_t(\theta)=\{\partial g_t(\theta)/\partial \theta\}/g_t(\theta)$, and
$$\label{J}
J_{1}=E(\psi_t\psi_t'),\quad J_{2}=E(g_t^2)\,E\Big(\frac{\psi_t}{g_t}\Big)E\Big(\frac{\psi_t'}{g_t}\Big).$$
Now, we are ready to give the asymptotics of $\widehat{\theta}_T$ in the following theorem:
\[thm\_garch\] Suppose Assumptions \[ident\_tau\]–\[ass\_kernel\] and \[ass\_garch\]–\[ass\_eta\] hold. Then,
$\mathrm{(i)}$ $\widehat{\theta}_{T}\to_{p} \theta_0$ as $T\to\infty$;
$\mathrm{(ii)}$ furthermore, if Assumption \[ass\_kernel\] is replaced by Assumption \[ass\_bandwidth\], $$\sqrt{T}(\widehat{\theta}_T-\theta_0)\to_{\mathcal{L}} N(0,\Sigma)\mbox{ as }T\to\infty,$$ where $\Sigma=(\kappa-1)J_1^{-1}(J_1+J_2)J_{1}^{-1}$, and $J_1$ and $J_2$ are defined in (\[J\]).
\[rem\_2\] We can simply estimate $\Sigma$ by its sample version $\widehat{\Sigma}_{T}$, where
$$\label{est_Sigma}
\widehat{\Sigma}_{T}=(\widehat{\kappa}_{T}-1)\widehat{J}_{1T}^{-1}(\widehat{J}_{1T}+\widehat{J}_{2T})\widehat{J}_{1T}^{-1}$$
with
$$\label{residual_eta}
\widehat{\kappa}_{T}=\frac{1}{T}\sum_{t=1}^{T}\widehat{\eta}_t^4,\quad
\widehat{J}_{1T}=\frac{1}{T}\sum_{t=1}^{T}\widehat{\psi}_t\widehat{\psi}_t',\quad
\widehat{J}_{2T}=\Big(\frac{1}{T}\sum_{t=1}^{T}\widehat{g}_t^2\Big)\Big(\frac{1}{T}\sum_{t=1}^{T}\frac{\widehat{\psi}_t}{\widehat{g}_t}\Big)\Big(\frac{1}{T}\sum_{t=1}^{T}\frac{\widehat{\psi}_t'}{\widehat{g}_t}\Big).$$
Here, $\widehat{\eta}_{t}=\widehat{\eta}_{t}(\widehat{\theta}_{T})$ with $\widehat{\eta}_{t}(\theta)=\widehat{u}_{t}/\sqrt{\widehat{g}_{t}(\theta)}$, $\widehat{\psi}_{t}=\widehat{\psi}_{t}(\widehat{\theta}_{T})$ with $\widehat{\psi}_{t}(\theta)=\{\partial \widehat{g}_t(\theta)/\partial \theta\}/\widehat{g}_t(\theta)$, and $\widehat{g}_{t}=\widehat{g}_{t}(\widehat{\theta}_{T})$. Under the conditions of Theorem \[thm\_garch\], we have that $\widehat{\Sigma}_{T}\to_{p}\Sigma$ as $T\to\infty$.
Interestingly, the preceding theorem shows that the asymptotic variance of $\widehat{\theta}_T$ is independent of $\tau(x)$. Following the viewpoint of @Robinson:1987, this means that $\widehat{\theta}_T$ is adaptive to the unknown form of $\tau(x)$. This adaptation feature ensures that the efficiency of $\widehat{\theta}_T$ and the power of its related tests are unchanged regardless of the form of $\tau(x)$.
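For concreteness, the second estimation step can be sketched for the case $p=q=1$, with the intercept tied to $\omega=1-\alpha-\beta$ as in Assumption 1(ii). The initial value $g_0=1$, the simulation design, and the crude grid search (standing in for a proper numerical optimizer) are our illustrative choices.

```python
import numpy as np

def neg2_quasi_loglik(alpha, beta, u):
    """-2 x Gaussian quasi-log-likelihood (constants dropped) of the
    unit-variance GARCH(1,1) short run component, with the intercept tied
    to omega = 1 - alpha - beta."""
    if alpha <= 0.0 or beta < 0.0 or alpha + beta >= 1.0:
        return np.inf                            # outside the stationarity region
    T = len(u)
    g = np.empty(T)
    g[0] = 1.0                                   # constant initial value
    omega = 1.0 - alpha - beta
    for t in range(1, T):
        g[t] = omega + alpha * u[t - 1] ** 2 + beta * g[t - 1]
    return float(np.sum(u ** 2 / g + np.log(g)))

def qmle_grid(u, step=0.05):
    """Crude grid search standing in for the numerical minimizer of L_hat."""
    grid = np.arange(step, 1.0, step)
    pairs = [(a, b) for a in grid for b in grid]
    vals = [neg2_quasi_loglik(a, b, u) for a, b in pairs]
    return pairs[int(np.argmin(vals))]
```

In practice `u` would be the rescaled series $\widehat{u}_t=y_t/\sqrt{\widehat{\tau}_t}$ from step one.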
Order selection
---------------
To use the S-GARCH model in practice, we need to determine suitable orders $p$ and $q$. This subsection studies the Bayesian information criterion (BIC) for this purpose. Based on $\{\widehat{u}_{t}\}$, we compute $\widehat{\theta}_{T,(p, q)}$ (that is, the QMLE for a given $(p, q)$), and then define the BIC as follows: $$BIC(p,q,\widehat{\theta}_{T,(p,q)})=\widehat{L}_T(\widehat{\theta}_{T,(p,q)})+(p+q)\log(T),$$ where $\widehat{L}_T(\theta)$ is defined in (\[f\_mle\]). Denote the true values of $p$ and $q$ as $p_0$ and $q_0$, respectively. Based on the BIC, our selected order $(\widehat{p}, \widehat{q})$ is defined as $$\label{BIC}
(\widehat{p}, \widehat{q})=\arg\min_{p,q}BIC(p,q,\widehat{\theta}_{T,(p,q)}).$$ The consistency of $(\widehat{p},\widehat{q})$ is given in the following theorem.
\[thm\_BIC\] Suppose the conditions in Theorem \[thm\_garch\] hold. Then, $$P(\widehat{p}=p_0,\,\,\widehat{q}=q_0)\to 1\mbox{ as }T\to\infty.$$
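The selection rule itself is a one-liner once the minimized likelihoods are available. In the sketch below, the fitted values in the usage example are hypothetical numbers, not output of a real fit.

```python
import numpy as np

def bic_select(neg2_loglik_by_order, T):
    """Pick (p, q) minimizing BIC = L_hat + (p+q) log T, where
    `neg2_loglik_by_order` maps (p, q) to the minimized feasible
    -2 x quasi-log-likelihood for that order."""
    bic = {pq: L + sum(pq) * np.log(T) for pq, L in neg2_loglik_by_order.items()}
    return min(bic, key=bic.get)

# hypothetical minimized likelihood values for three candidate orders
fits = {(0, 1): 2100.0, (1, 1): 2050.0, (2, 2): 2045.0}
```

Here the $(2,2)$ fit has the smallest likelihood value, but its penalty $4\log T$ outweighs the improvement, so the rule picks $(1,1)$.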
The LM test
===========
Since @Engle:1982 and @Bollerslev:1986, testing for the nullity of the parameters in the GARCH model is important in applications. This problem can be further generalized to consider the following linear constraint hypothesis: $$\label{null}
\mathbb{H}_0: R\theta_0=r,$$ where $R$ is a given $d\times (p+q)$ matrix of rank $d$, and $r$ is a given $d\times 1$ constant vector. In this section, we construct a Lagrange multiplier (LM) test statistic $LM_T$ for $\mathbb{H}_0$, where $$LM_T=\frac{1}{T}\frac{\partial\widehat{L}_T(\widehat{\theta}_{T|0})}{\partial\theta'}
\widehat{J}_{1T|0}^{-1}R'\big(R\widehat{\Sigma}_{T|0}R'\big)^{-1}R\widehat{J}_{1T|0}^{-1}
\frac{\partial\widehat{L}_T(\widehat{\theta}_{T|0})}{\partial\theta}.$$ Here, $\widehat{\theta}_{T|0}$ is the constrained QMLE of $\theta_0$ under $\mathbb{H}_0$, and $\widehat{J}_{1T|0}$ and $\widehat{\Sigma}_{T|0}$ are defined in the same way as $\widehat{J}_{1T}$ and $\widehat{\Sigma}_{T}$, respectively, with $\widehat{\theta}_{T}$ replaced by $\widehat{\theta}_{T|0}$. The following theorem gives the limiting null distribution of $LM_T$:
\[LMtest\] Suppose the conditions in Theorem \[thm\_garch\] hold. Then, under $\mathbb{H}_0$, $$LM_T\to_{\mathcal{L}}\chi^2_d\mbox{ as }T\to\infty,$$ where $\chi^2_{s}$ is the chi-squared distribution with the degrees of freedom $s$.
Based on Theorem \[LMtest\], we can set the rejection region of $LM_{T}$ at level $\alpha$ as $\{LM_T>\chi_d^2(\alpha)\},$ where $\chi_d^2(\alpha)$ is the upper-$\alpha$ percentile of $\chi^2_d$.
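A sketch of the resulting decision rule, using `scipy.stats.chi2` to compute the upper-$\alpha$ percentile:

```python
from scipy.stats import chi2

def lm_reject(lm_stat, d, alpha=0.05):
    """Reject H0 : R theta_0 = r at level alpha iff LM_T exceeds the
    upper-alpha percentile of the chi-squared distribution with d degrees
    of freedom."""
    return bool(lm_stat > chi2.ppf(1.0 - alpha, df=d))
```

The statistic `lm_stat` would be the value of $LM_T$ computed from the constrained QMLE.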
Like $\widehat{\theta}_{T}$, our $LM_T$ has the adaptation feature, and it has a much broader application scope than the existing LM tests. Specifically, the LM test in @Bollerslev:1986 is only applicable for the stationary GARCH model, whereas our $LM_T$ can also tackle the non-stationary S-GARCH model. For the case of $p=0$, the score test in @PR:2014 can detect the null hypothesis that all $\alpha_i$ are zero, and the Wald test in @Truquet:2017 can check the null hypothesis that some of the $\alpha_i$ are zero. However, these two tests are not applicable in the general case, and their extensions to include the GARCH parameters $\beta_j$ seem non-trivial. Besides $LM_T$, Wald and likelihood ratio tests could also be constructed for $\mathbb{H}_0$. When some of the $\alpha_i$ or $\beta_j$ are allowed to be zero as in our setting, the Wald and likelihood ratio tests render non-standard limiting null distributions (see @FZ:2007 for general discussions), which have to be simulated by the bootstrap method. For practical convenience, we thus focus only on the LM test in this paper, and leave the consideration of Wald and likelihood ratio tests for future study.
Portmanteau test
================
Since @LB:1978, the portmanteau test and its variants have been a common tool for checking model adequacy in time series analysis. For the stationary GARCH model, @LM:1994 proposed a portmanteau test for model checking. However, their test is invalid for the non-stationary S-GARCH model. In this section, we follow the idea of @LM:1994 to construct a new portmanteau test to check the adequacy of the S-GARCH model; our test appears to be the first formal attempt in the context of semi-parametric time series analysis.
Let $\widehat{\eta}_{t}$ be the model residual defined as in (\[residual\_eta\]). The idea of our portmanteau test is based on the fact that $\{\eta_{t}^2\}$ is a sequence of uncorrelated random variables under (\[semi\_model\])–(\[garch\_model\]). Hence, if the S-GARCH model is correctly specified, it is expected that the sample autocorrelation function of $\{\widehat{\eta}_{t}^2\}$ at lag $k$, denoted by $\widehat{\rho}_{T,k}$, is close to zero, where $$\widehat{\rho}_{T,k}=\frac{\sum_{t=k+1}^{T}\big(\widehat{\eta}^2_{t}
-\overline{\widehat{\eta}^2}\big)\big(\widehat{\eta}^2_{t-k}-\overline{\widehat{\eta}^2}\big)}{\sum_{t=1}^{T}\big(\widehat{\eta}^2_{t}
-\overline{\widehat{\eta}^2}\big)^2}$$ with $\overline{\widehat{\eta}^2}$ being the sample mean of $\{\widehat{\eta}_{t}^2\}$. Let $\widehat{\rho}_T=(\widehat{\rho}_{T,1},...,\widehat{\rho}_{T,\ell})'$ for some integer $\ell\geq1$, and
$$\begin{aligned}
\Sigma_{P1}&=(I_{\ell},\, -H,\, -DJ_1^{-1})\in\mathbb{R}^{\ell\times(\ell+1+p+q)},\label{4_1}\\
\Sigma_{P2}&=\begin{pmatrix}
(\kappa-1)I_{\ell} & N & D-NE\Big(\frac{\psi_t'}{g_t}\Big)\\
* & Eg_t^2 & -Eg_t^2E\Big(\frac{\psi_t'}{g_t}\Big)\\
* & * & J_1+J_2
\end{pmatrix}\in\mathbb{R}^{(\ell+1+p+q)\times(\ell+1+p+q)}\label{4_2}\end{aligned}$$
be a symmetric matrix, where $D=(D_1',..., D_\ell')'$ with $D_k=E\{(\eta_{t-k}^2-1)\psi_t'\}$, $H=(H_1,..., H_\ell)'$ with $H_k=E\{g_t^{-1}(\eta_{t-k}^2-1)\}$, and $N=(N_1,...,N_{\ell})'$ with $N_k=E\{g_t(\eta_{t-k}^2-1)\}$. To facilitate our portmanteau test, we need the limiting distribution of $\widehat{\rho}_T$ in the following theorem:
\[thm\_port\] Suppose the conditions in Theorem \[thm\_garch\] hold. Then, if the S-GARCH model in (\[semi\_model\])–(\[garch\_model\]) is correctly specified, $$\sqrt{T}\widehat{\rho}_T\to_{\mathcal{L}} N(0,\Sigma_{P})\mbox{ as }T\to\infty,$$ where $\Sigma_{P}=(\kappa-1)^{-1}\Sigma_{P1}\Sigma_{P2}\Sigma_{P1}'$, and $\Sigma_{P1}$ and $\Sigma_{P2}$ are defined in (\[4\_1\])–(\[4\_2\]).
As in Remark \[rem\_2\], $\Sigma_{P}$ can be consistently estimated by its sample version $\widehat{\Sigma}_{P}$. Based on $\widehat{\Sigma}_{P}$, our portmanteau test is defined as $$Q_{T}(\ell)=T\widehat{\rho}_T'\widehat{\Sigma}_P^{-1}\widehat{\rho}_T.$$ If the S-GARCH model is correctly specified, we have $Q_{T}(\ell)\to_{\mathcal{L}}\chi^2_{\ell}$ as $T\to\infty$ by Theorem \[thm\_port\]. Therefore, if the value of $Q_{T}(\ell)$ is larger than $\chi_\ell^2(\alpha)$, the fitted S-GARCH model is inadequate at level $\alpha$; otherwise, it is adequate at level $\alpha$. We highlight that $Q_{T}(\ell)$ also has the adaptation feature of $LM_T$, and that it is designed to detect the adequacy of the short run GARCH component $u_t$ but not of the long run component $\tau_t$, since the form of $\tau_t$ is unspecified in the S-GARCH model. In practice, the choice of lag $\ell$ depends on the frequency of the series, and one can often choose $\ell$ to be $O(\log(T))$, which delivers 6, 9 or 12 for a moderate $T$.
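A sketch of the building blocks: the sample autocorrelations of the squared residuals, together with a naive Ljung-Box-style statistic $T\sum_k \widehat{\rho}_{T,k}^2$. The paper's $Q_T(\ell)$ instead standardizes $\widehat{\rho}_T$ with the estimated $\Sigma_P$; the unstandardized version here is only for illustration.

```python
import numpy as np

def rho_hat(eta, ell):
    """Sample autocorrelations of the squared residuals at lags 1..ell;
    numerator over t = k+1..T, denominator over all t, as in the text."""
    d = eta ** 2 - np.mean(eta ** 2)
    denom = np.sum(d ** 2)
    return np.array([np.sum(d[k:] * d[:-k]) / denom for k in range(1, ell + 1)])

def naive_portmanteau(eta, ell):
    """Unstandardized Ljung-Box-style statistic T * sum_k rho_k^2."""
    r = rho_hat(eta, ell)
    return len(eta) * float(np.sum(r ** 2))
```

For an adequate model the residuals are close to i.i.d., so all $\widehat{\rho}_{T,k}$ should be within a few multiples of $T^{-1/2}$ of zero.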
Comparisons with other estimation methods {#VT}
=========================================
This section compares our two-step estimation method with the three-step estimation method in @HL:2010 and the variance target (VT) estimation method in @FHZ:2011.
Comparison with three-step estimation method
--------------------------------------------
Our two-step estimation method is the same as the first two estimation steps in @HL:2010, where they gave the following asymptotic normality result for the S-GARCH($1, 1$) model: $$\sqrt{T}(\widehat{\theta}_T-\theta_0)\to_{\mathcal{L}} N(0,\Sigma_{\dag})\mbox{ as }T\to\infty,$$ where $\Sigma_{\dag}=J_1^{-1}[(\kappa-1)J_1+J_3+J_4+J_4']J_1^{-1}$ with $J_3=(M-E\psi_t)(M-E\psi_t)'Z_1$, $J_4=Z_2(M-E\psi_t)'$,
$$M=\sum_{j=0}^{\infty}\alpha_{10}\beta_{10}^{j}E\Big(\frac{u_{t-1-j}^2}{g_t}\psi_t\Big),\quad Z_{1}=\sum_{j=-\infty}^{\infty}E(z_tz_{t-j}),\quad Z_{2}=\sum_{j=0}^{\infty}E\{z_{t}(\eta_{t-j}^2-1)\psi_{t-j}\}.$$
Indeed, we can show that $\Sigma_{\dag}$ and $\Sigma$ are equivalent. Since $\Sigma_{\dag}$ involves the three infinite summations $M$, $Z_1$ and $Z_2$, it is not easy to estimate. On the contrary, our $\Sigma$ has a much simpler expression, and it can be directly estimated as shown in Remark \[rem\_2\].
@HL:2010 further proposed an updated estimator at step three, and claimed that this updated estimator can achieve the semiparametric efficiency bound when $\eta_t\sim N(0, 1)$. Following their idea, we can also update our estimator $\widehat{\theta}_{T}$ to $\widetilde{\theta}_{T}$ at step three, where $$\label{improve}
\widetilde{\theta}_T=\widehat{\theta}_T-\Big\{\frac{\partial^2\widehat{L}_T^*(\widehat{\theta}_T)}{\partial\theta\partial\theta'}\Big\}^{-1}
\frac{\partial\widehat{L}_T^*(\widehat{\theta}_T)}{\partial\theta}$$ with
$$\begin{aligned}
\frac{\partial\widehat{L}_T^*(\theta)}{\partial\theta}&=\sum_{t=1}^{T}\big\{\widehat{\psi}_t(\theta)-\mathcal{G}_{T}(\theta)\big\}\big\{1-\widehat{\eta}_t^2(\theta)\big\},\\
\frac{\partial^2\widehat{L}_T^*(\theta)}{\partial\theta\partial\theta'}&=\sum_{t=1}^{T}\big\{\widehat{\psi}_t(\theta)\widehat{\psi}_t'(\theta)-\mathcal{G}_{T}(\theta)\mathcal{G}_{T}'(\theta)\big\},\end{aligned}$$
where $\mathcal{G}_{T}(\theta)=\frac{1}{T}\sum_{s=1}^{T}\frac{1}{\widehat{g}_s(\theta)}\frac{\partial\widehat{g}_s(\theta)}
{\partial\theta}$, and $\widehat{\eta}_t(\theta)$ and $\widehat{g}_{t}(\theta)$ are defined as in (\[residual\_eta\]). Below, we give the limiting distribution of $\widetilde{\theta}_T$.
\[thm\_improve\] Suppose the conditions in Theorem \[thm\_garch\] hold. Then, $$\sqrt{T}(\widetilde{\theta}_T-\theta_0)\to_{\mathcal{L}} N(0,\Sigma^*)\mbox{ as }T\to\infty,$$ where $\Sigma^{*}=(\kappa-1)J_1^{*-1}(J_1^*+J_2^*+J_3^*+J_3^{*'})J_{1}^{*-1}$ with $J_1^*=E\{(\psi_t-E\psi_t)(\psi_t-E\psi_t)'\}$ and
$$\begin{aligned}
J_2^*&=Eg_t^2\Big\{E\Big(\frac{\psi_t}{g_t}\Big)-E\Big(\frac{1}{g_t}\Big)E\psi_t\Big\}\Big\{E\Big(\frac{\psi_t}{g_t}\Big)-E\Big(\frac{1}{g_t}\Big)E\psi_t\Big\}',\\
J_3^*&=E\psi_t\Big\{E\Big(\frac{\psi_t}{g_t}\Big)-E\Big(\frac{1}{g_t}\Big)E\psi_t\Big\}'.\end{aligned}$$
The preceding theorem shows that $\widetilde{\theta}_T$ cannot achieve the semiparametric efficiency bound, since $J_2^*+J_3^*+J_3^{*'}\not\equiv 0$. Hence, it seems unnecessary to consider the third estimation step in @HL:2010. Note that the updating procedure in (\[improve\]) was also given by @BKRW:1993, where it was shown that the updated estimator can achieve the semiparametric efficiency bound when the data are independent. However, when the data are dependent, this conclusion may not be true, as demonstrated by Theorem \[thm\_improve\]. The failure of $\widetilde{\theta}_T$ in our case possibly results from the violation of the following condition:
$$\label{bkrw_cond}
\frac{1}{\sqrt{T}}\Big\{\frac{\partial\widehat{L}_T^*(\theta_0)}{\partial\theta}-\frac{\partial L_T^*(\theta_0)}{\partial\theta}\Big\}=o_p(1),$$
where $\frac{\partial L_T^*(\theta)}{\partial\theta}$ is defined in the same way as $\frac{\partial\widehat{L}_T^*(\theta)}{\partial\theta}$ in (\[improve\]) with $\widehat{u}_{t}$ replaced by $u_t$. In @BKRW:1993, a similar condition as (\[bkrw\_cond\]) was proved for the independent data. However, their technical treatment does not work in our time series data setting, since our kernel estimator $\widehat{\tau}_t$ using the data $\{y_i^2\}_{i=t-[Th]}^{t+[Th]}$ is correlated with $y_t^2$, while this is not the case if $\{y_t^2\}$ are independent.
Comparison with VT estimation method
------------------------------------
Our two-step estimation method also has a linkage to the VT estimation method in @FHZ:2011, and this aspect has not been explored before. The VT method is designed for the following covariance stationary GARCH($p, q$) model:
$$\begin{aligned}
\label{vt_model}
&y_t=\sqrt{h_t}\,\eta_t,\\
&h_t=\tau_0\Big(1-\sum_{i=1}^{q}\alpha_{i0}-\sum_{j=1}^{p}\beta_{j0}\Big)+\sum_{i=1}^{q}\alpha_{i0} y_{t-i}^2+\sum_{j=1}^{p}\beta_{j0} h_{t-j},\end{aligned}$$
where $\tau_0$ is a positive parameter, and $\alpha_{i0}$, $\beta_{j0}$ and $\eta_t$ are defined as before. Indeed, model (\[vt\_model\]) is just our stationary S-GARCH model, and it is also an alternative reparametrization version of the conventional covariance stationary GARCH model. Since $Ey_t^2=\tau_0$ under model (\[vt\_model\]), the VT method first estimates $\tau_0$ by $\overline{\tau}_{T}$, and then estimates $\theta_0$ by the QMLE $\overline{\theta}_{T}$, where
$$\label{vt_qmle}
\overline{\tau}_{T}=\frac{1}{T}\sum_{t=1}^{T} y_{t}^2,\quad \overline{\theta}_T=\arg\min_{\theta\in\Theta}\overline{L}_T(\theta)\quad\text{with}\quad \overline{L}_T(\theta)=\sum_{t=1}^{T}\frac{\overline{u}_t^2}{\overline{g}_t(\theta)}+\log\overline{g}_t(\theta).$$
Here, $\overline{u}_{t}=y_{t}/\sqrt{\overline{\tau}_T}$, and $\overline{g}_{t}(\theta)$ is defined in the same way as $\widehat{g}_t(\theta)$ in (\[hatg\]) with $\widehat{u}_{t}$ replaced by $\overline{u}_{t}$. Clearly, the difference between the two methods is that our method estimates the unknown function $\tau(x)$ nonparametrically, while the VT method estimates the unknown constant parameter $\tau_0$ by the sample mean of $y_{t}^2$. It turns out that the two methods require different technical treatments and have different application scopes. From a statistical point of view, the proof techniques for the VT method rely on the facts that the objective function $\overline{L}_T(\theta)$ is differentiable around $\tau_0$ and that the first step estimator $\overline{\tau}_T$ is $\sqrt{T}$-consistent. However, neither of these facts holds for our method, and we thus need to develop new proof techniques based on more restrictive conditions on $u_t$ and $\eta_t$. From a practical point of view, our method works for the S-GARCH model whether it is stationary or not, while the VT method works only for the stationary S-GARCH model. Hence, our method has a much broader application scope than the VT method.
By re-visiting Theorem 2.1 in @FHZ:2011, we further find that the asymptotic variance of $\overline{\theta}_T$ is the same as that of $\widehat{\theta}_{T}$ in Theorem \[thm\_garch\]. That is, our QMLE $\widehat{\theta}_{T}$ and the QMLE $\overline{\theta}_T$ in the VT method have the same asymptotic efficiency, although our first step estimator has a slower convergence rate $\sqrt{Th}$ than the parametric rate $\sqrt{T}$. This novel feature has not been discovered in the literature, and it makes our two-step method more attractive than the VT method: our QMLE suffers no efficiency loss for the stationary S-GARCH model and, at the same time, still works with the same efficiency (due to the adaption feature) for the non-stationary S-GARCH model. As expected, similar features also hold for our tests $LM_{T}$ and $Q_{T}(\ell)$, and these findings will be further illustrated by simulations in the next section.
Simulations
===========
This section gives the simulation studies for the QMLE $\widehat{\theta}_{T}$ and the tests $LM_{T}$ and $Q_{T}(\ell)$. To facilitate these studies, we first show how to choose the bandwidth $h$.
Choice of bandwidth {#bandwidth}
-------------------
Implementing our methodology requires choosing the bandwidth $h$. Methods based on the mean squared error criterion (see, for example, @HL:2010) usually yield a bandwidth of order $T^{-1/5}$, which does not satisfy Assumption \[ass\_bandwidth\]. Below, we give a two-step cross-validation (CV) procedure to choose $h$ such that Assumption \[ass\_bandwidth\] is satisfied.
\[alg1\] [(CV bandwidth selection procedure)]{}
1. Set a pilot bandwidth $h_{0}=T^{-\lambda_0}$ with $\lambda_0\in(1/4,1/2)$, and then obtain the pilot estimates $\widehat{\tau}_{t,0}$ and $\widehat{u}_{t,0}$. Choose a pilot GARCH (or ARCH) model for the process $u_t$, and based on $\{\widehat{u}_{t,0}\}_{t=1}^{T}$, estimate this pilot model by the QMLE to get the pilot estimates $\{\widehat{g}_{t,0}\}_{t=1}^{T}$.
2. With $\{\widehat{g}_{t,0}\}_{t=1}^{T}$, define a CV criterion as $$CV(h)=\sum_{t=1}^{T}\Big\{{y_t^2}-{\widehat{\tau}_{-t}(h)\widehat{g}_{t,0}}\Big\}^2,$$ where $\widehat{\tau}_{-t}(h)$ is the leave-one-out estimate of $\tau_t$ with respect to the bandwidth $h$, based on all observations except $y_t$. Select our bandwidth as $h_{cv}=\arg\min_{h\in\mathcal{H}}CV(h)$, where $\mathcal{H}=[c_{\min}T^{-\lambda_0},c_{\max}T^{-\lambda_0}]$ with two positive constants $c_{\min}$ and $c_{\max}$.
Let $\widehat{\mathrm{Var}}(y_t)$ be the sample variance of $\{y_{t}\}_{t=1}^{T}$. To compute $h_{cv}$ in Algorithm \[alg1\], we suggest choosing $\lambda_0=2/7$, $c_{\min}=0.1\widehat{\mathrm{Var}}(y_t)^{\lambda_0}$ and $c_{\max}=3\widehat{\mathrm{Var}}(y_t)^{\lambda_0}$; this choice is used in the simulation studies below, where it performs well. The pilot model in Algorithm \[alg1\] can be chosen based on either prior information or the BIC.
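A minimal sketch of step 2 of Algorithm \[alg1\] in Python (assuming the pilot estimates $\widehat{g}_{t,0}$ from step 1 are already available; the Epanechnikov kernel and a 30-point grid over $\mathcal{H}$ are illustrative choices):

```python
import numpy as np

def cv_bandwidth(y, g0, lambda0=2/7, n_grid=30):
    """Leave-one-out CV bandwidth, a sketch of step 2 of the CV procedure.

    y is the observed series and g0 holds the pilot estimates of g_t from
    step 1; the grid uses the suggested c in [0.1, 3] * Var(y)^lambda0.
    """
    T = len(y)
    x = np.arange(1, T + 1) / T
    s = np.var(y) ** lambda0
    best_h, best_cv = None, np.inf
    for c in np.linspace(0.1 * s, 3.0 * s, n_grid):
        h = c * T ** (-lambda0)
        cv = 0.0
        for t in range(T):
            u = (x[t] - x) / h
            w = 0.75 * (1 - u ** 2) * (np.abs(u) <= 1)  # Epanechnikov weights
            w[t] = 0.0                                  # leave y_t out
            tau_minus_t = np.sum(w * y ** 2) / np.sum(w)
            cv += (y[t] ** 2 - tau_minus_t * g0[t]) ** 2
        if cv < best_cv:
            best_h, best_cv = h, cv
    return best_h
```

For instance, `cv_bandwidth(y, g0)` with `g0` set to the pilot (G)ARCH estimates returns $h_{cv}$ from the grid $\mathcal{H}$.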
Simulations for the estimation
------------------------------
In this subsection, we examine the finite-sample performance of the QMLE $\widehat{\theta}_{T}$. We generate 1000 replications of sample size $T=1000$ and $2000$ from the following two data generating processes (DGPs):
$$\begin{aligned}
&\mbox{DGP 1: an S-ARCH}(2)\mbox{ model with }\alpha_{10}=\alpha_{20}=0.3;\\
&\mbox{DGP 2: an S-GARCH}(1,1)\mbox{ model with }\alpha_{10}=\beta_{10}=0.3,\end{aligned}$$
where the function $\tau(x)$ is designed as follows:
$$\begin{aligned}
\label{nochange} &\tau(x)=1;\\
\label{linearchange} &\tau(x)=1+x;\\
\label{cyclicalchange} &\tau(x)=1+\sin(2\pi x)/2,\end{aligned}$$
and the error $\eta_t$ follows $N(0,1)$, $\mathrm{st}_{10}$ and $\mathrm{st}_{5}$. Here, $\mathrm{st}_{\nu}$ is the standardized Student-$t$ distribution with $\nu$ degrees of freedom and unit variance.
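For concreteness, the S-GARCH($1, 1$) design above can be simulated as follows (illustrative Python; `tau_fn` and `seed` are free choices, and the standardized Student-$t$ draw rescales `standard_t` to unit variance):

```python
import numpy as np

def simulate_sgarch11(T, alpha=0.3, beta=0.3, tau_fn=lambda x: 1.0 + x,
                      nu=None, seed=0):
    """Simulate y_t = sqrt(tau(t/T)) u_t, where u_t follows a GARCH(1,1)
    with E u_t^2 = 1 (a sketch of the simulation design above)."""
    rng = np.random.default_rng(seed)
    if nu is None:
        eta = rng.standard_normal(T)
    else:
        # standardized Student-t: rescale to unit variance (requires nu > 2)
        eta = rng.standard_t(nu, size=T) * np.sqrt((nu - 2) / nu)
    omega = 1.0 - alpha - beta          # intercept ensuring E u_t^2 = 1
    g = np.empty(T)
    u = np.empty(T)
    g[0] = 1.0
    u[0] = np.sqrt(g[0]) * eta[0]
    for t in range(1, T):
        g[t] = omega + alpha * u[t - 1] ** 2 + beta * g[t - 1]
        u[t] = np.sqrt(g[t]) * eta[t]
    x = np.arange(1, T + 1) / T
    return np.sqrt(tau_fn(x)) * u
```

A call such as `simulate_sgarch11(1000, nu=5)` produces one replication with $\eta_t\sim\mathrm{st}_5$.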
For each replication, we compute $\widehat{\theta}_{T}$ by using the Epanechnikov kernel $K(x)=\frac{3}{4}(1-x^2)\mathbf{1}(|x|\leq 1)$ and choosing the bandwidth $h=h_{cv}$ according to Algorithm \[alg1\] with the (G)ARCH model in the DGP as the pilot model. Table \[estimation\] reports the sample bias, sample empirical standard deviation (ESD) and average asymptotic standard deviation (ASD) of $\widehat{\theta}_T$ based on 1000 replications for each DGP, where the ASD is calculated as in Remark \[rem\_2\]. From Table \[estimation\], we find that (i) the biases of $\widehat{\theta}_{T}$ are small in each case; (ii) regardless of the specification of $\tau(x)$ and the distribution of $\eta_t$, the values of ESD and ASD are close to each other, especially for large $T$; (iii) when the value of $T$ increases, the value of ESD decreases; (iv) $\widehat{\theta}_{T}$ becomes less efficient, with a larger value of ESD, as the tails of $\eta_t$ become heavier; (v) the value of ESD is almost invariant to the specification of $\tau(x)$, meaning that $\widehat{\theta}_{T}$ is adaptive as expected. Overall, our QMLE $\widehat{\theta}_{T}$ performs satisfactorily in all considered cases.
  $\tau(x)$              $T$          $N(0,1)$         $\mathrm{st}_{10}$   $\mathrm{st}_{5}$    $N(0,1)$         $\mathrm{st}_{10}$   $\mathrm{st}_{5}$
  ---------------------- ------ ----- ---------------- -------------------- -------------------- ---------------- -------------------- --------------------
  (\ref{nochange})       1000   Bias  -1.35, -1.26     -1.76, -2.40         -3.80, -3.85         -1.55, -2.88     -1.73, -0.30         -2.36, -3.37
                                ESD   5.33, 5.38       6.54, 6.17           8.11, 8.04           5.38, 10.72      6.32, 11.57          8.12, 13.89
                                ASD   5.44, 5.44       6.68, 6.59           8.58, 8.54           5.25, 10.66      6.45, 12.20          9.25, 15.29
                         2000   Bias  -0.74, -0.62     -1.19, -1.22         -2.42, -2.84         -1.03, -1.65     -1.00, -1.09         -1.50, -1.61
                                ESD   3.96, 3.91       4.57, 4.91           6.55, 6.59           3.82, 7.19       4.68, 8.27           6.47, 10.71
                                ASD   3.98, 3.98       4.92, 4.93           6.96, 6.90           3.78, 7.37       4.71, 8.35           7.55, 11.23
  (\ref{linearchange})   1000   Bias  -1.07, -1.23     -1.51, -1.80         -3.51, -3.66         -1.76, -2.11     -1.99, -2.87         -2.20, -3.03
                                ESD   5.36, 5.28       6.33, 6.36           8.73, 8.40           5.47, 10.35      6.39, 12.08          8.33, 14.53
                                ASD   5.49, 5.46       6.68, 6.64           9.00, 8.94           5.27, 10.74      6.39, 12.19          9.37, 15.37
                         2000   Bias  -0.61, -0.67     -1.08, -1.19         -2.54, -2.74         -1.14, -1.38     -1.19, -0.83         -1.63, -0.79
                                ESD   3.93, 3.87       4.92, 4.79           6.76, 6.63           3.71, 7.08       4.61, 8.37           6.39, 11.01
                                ASD   3.97, 3.96       4.96, 4.94           7.04, 6.99           3.77, 7.38       4.65, 8.38           7.31, 11.02
  (\ref{cyclicalchange}) 1000   Bias  -0.65, -0.93     -1.49, -1.75         -3.61, -3.83         -1.15, -2.97     -1.54, -1.50         -1.91, -2.17
                                ESD   5.76, 5.52       6.68, 6.38           8.48, 8.75           5.31, 10.72      6.36, 12.61          8.30, 13.68
                                ASD   5.50, 5.47       6.68, 6.65           8.51, 8.50           5.27, 10.54      6.39, 11.83          9.37, 15.28
                         2000   Bias  -0.28, -0.26     -0.55, -0.61         -2.23, -2.22         -0.82, -0.91     -0.56, -0.56         -1.33, -0.02
                                ESD   3.85, 3.74       4.75, 4.77           6.56, 6.50           3.99, 7.48       4.84, 8.37           6.14, 10.54
                                ASD   3.96, 3.96       5.00, 5.00           6.88, 6.91           3.77, 7.27       4.71, 8.24           6.90, 10.71

  : The Bias, ESD and ASD of $\widehat{\theta}_{T}$ for DGPs 1–2. The first three distribution columns correspond to DGP 1 (each cell gives the values for $(\widehat{\alpha}_{1T},\widehat{\alpha}_{2T})$) and the last three to DGP 2 (values for $(\widehat{\alpha}_{1T},\widehat{\beta}_{1T})$).[]{data-label="estimation"}
Simulations for the testing {#subtest}
---------------------------
In this subsection, we examine the finite-sample performance of $LM_{T}$ and $Q_{T}(\ell)$. We generate 1000 replications of sample size $T=1000$ and $2000$ from the following four DGPs:
$$\begin{aligned}
&\mbox{DGP 3: }\alpha_{10}=\alpha_{20}=0.3\mbox{ and }\alpha_{30}=0.03k;\\
&\mbox{DGP 4: }\alpha_{10}=\alpha_{20}=0.3\mbox{ and }\beta_{10}=0.03k;\\
&\mbox{DGP 5: }\alpha_{10}=\beta_{10}=0.3\mbox{ and }\alpha_{20}=0.03k;\\
&\mbox{DGP 6: }\alpha_{10}=\beta_{10}=0.3\mbox{ and }\beta_{20}=0.03k,\end{aligned}$$
where $k=0,1,\ldots,10$, $\tau(x)$ is designed as in DGPs 1–2, and $\eta_t\sim N(0, 1)$. For each DGP, the model with $k=0$ is taken as the null model. That is, the S-ARCH($2$) model is the null model for DGPs 3–4, and the S-GARCH($1, 1$) model is the null model for DGPs 5–6.
Next, we fit each replication by its related null model, and then apply $LM_{T}$ to test the null hypothesis of $k=0$ as well as $Q_{T}(\ell)$ to check whether this fitted null model is adequate. Based on 1000 replications, the empirical power of $LM_{T}$ and $Q_{T}(\ell)$ is plotted in Figs \[ARCHtest\] and \[GARCHtest\] for DGPs 3–4 and DGPs 5–6, respectively, where we take the level $\alpha=5\%$ and the lag $\ell=6, 9$ and $12$; the sizes of both tests correspond to the results for $k=0$.
![Power across $k$ for $LM_{T}$ (diamond “$\diamond$” marker) and $Q_{T}(\ell)$ with $\ell=6$ (star “$\ast$” marker), $\ell=9$ (cross “$\times$” marker), and $\ell=12$ (plus “$+$” marker), when $T=1000$ (dashed “---” line) and $T=2000$ (solid “—” line ). The horizontal dash-dotted line corresponds to the level $5\%$. Upper Panel: DGP 3; lower Panel: DGP 4.[]{data-label="ARCHtest"}](ARCHtest.eps){width="15cm" height="9.5cm"}
![ The descriptions are as for Fig \[ARCHtest\]. Upper Panel: DGP 5; lower Panel: DGP 6.[]{data-label="GARCHtest"}](GARCHtest.eps){width="15cm" height="9.5cm"}
From Figs \[ARCHtest\]–\[GARCHtest\], we find that (i) all tests have accurate sizes; (ii) the power of all tests increases as the value of $T$ or $k$ increases; (iii) $LM_{T}$ is more powerful than all $Q_{T}(\ell)$, and $Q_{T}(6)$ is generally more powerful than $Q_{T}(9)$ and $Q_{T}(12)$; (iv) all tests are more powerful in detecting the mis-specification of the ARCH part in DGPs 3 and 5 than the mis-specification of the GARCH part in DGPs 4 and 6; (v) all tests are adaptive, since their power is unaffected by the form of $\tau(x)$. In summary, all tests perform well, especially for large $T$.
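The exact forms of $LM_T$ and $Q_T(\ell)$ are defined earlier in the paper; as a rough illustration of the general recipe behind a portmanteau test (autocorrelations of squared standardized residuals compared against a $\chi^2_{\ell}$ reference), a generic Ljung-Box-type statistic can be computed as:

```python
import numpy as np

def portmanteau(resid, ell):
    """Generic Ljung-Box-type statistic on squared residuals; the paper's
    Q_T(ell) is built from the fitted S-GARCH residuals and compared with
    the chi-squared quantile with ell degrees of freedom."""
    T = len(resid)
    z = resid ** 2 - np.mean(resid ** 2)
    denom = np.sum(z ** 2)
    stat = 0.0
    for k in range(1, ell + 1):
        r_k = np.sum(z[k:] * z[:-k]) / denom   # lag-k autocorrelation of resid^2
        stat += r_k ** 2 / (T - k)
    return T * (T + 2) * stat
```

This generic version is only a stand-in: the statistics studied here use the estimated S-GARCH residuals and the asymptotic theory developed above.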
Comparison with the VT method
-----------------------------
In this subsection, we compare the finite-sample performance of $\widehat{\theta}_{T}$, $LM_T$ and $Q_T(\ell)$ with those of $\overline{\theta}_{T}$, $LM_{T}^{vt}$ and $Q_T^{vt}(\ell)$, respectively, where $\overline{\theta}_{T}$ defined in (\[vt\_qmle\]) is the QMLE from the VT method, and $LM_{T}^{vt}$ and $Q_T^{vt}(\ell)$ are defined in the same way as $LM_T$ and $Q_T(\ell)$ with $\widehat{\theta}_{T}$ replaced by $\overline{\theta}_{T}$. Note that when the S-GARCH($p, q$) model is stationary, $\overline{\theta}_{T}$ is asymptotically normal, and $LM_{T}^{vt}$ and $Q_T^{vt}(\ell)$ have the same limiting null distributions as those of $LM_T$ and $Q_T(\ell)$.
First, we compare the efficiency of $\widehat{\theta}_{T}$ and $\overline{\theta}_{T}$ by looking at the following ratio: $$R_{qmle}(\gamma)=\frac{\mbox{the ESD of }\widehat{\gamma}_{T}}{\mbox{the ESD of }\overline{\gamma}_{T}},$$ where $\gamma$ is any entry of $\theta_0$, and the ESD of each estimator is computed based on 1000 replications. Table \[estimationcompare\] reports the values of $R_{qmle}(\gamma)$ when the DGP is a stationary S-ARCH(2) (or S-GARCH($1, 1$)) model with $\tau(x)\sim (\ref{nochange})$, $\eta_t\sim N(0, 1)$, and three different choices of $\theta_0$. From this table, we find that as expected, all the values of $R_{qmle}(\gamma)$ are close to 1, indicating that $\widehat{\theta}_{T}$ and $\overline{\theta}_{T}$ have the same asymptotic efficiency when the S-GARCH model is stationary.
                  $T$    $\theta_0$ (i)     $\theta_0$ (ii)    $\theta_0$ (iii)
  --------------- ------ ------------------ ------------------ ------------------
  S-ARCH(2)       1000   1.0966, 0.9828     0.9403, 1.0239     0.9981, 0.9573
                  2000   1.0598, 0.9405     1.0340, 0.9864     1.0233, 0.9827
  S-GARCH(1, 1)   1000   0.9540, 1.0746     0.9716, 1.0483     1.0056, 0.9926
                  2000   1.0588, 1.0800     1.0223, 1.0595     0.9795, 0.9972

  : The values of $R_{qmle}(\gamma)$ for the stationary S-ARCH(2) and S-GARCH($1, 1$) models under three choices (i)–(iii) of $\theta_0$; within each cell, the two entries correspond to $(\alpha_{10},\alpha_{20})$ for the S-ARCH(2) model and to $(\alpha_{10},\beta_{10})$ for the S-GARCH($1, 1$) model.[]{data-label="estimationcompare"}
                $T$    $k=0$    $k=1$    $k=2$    $k=3$    $k=4$    $k=5$    $k=6$    $k=7$    $k=8$    $k=9$    $k=10$
------------- ------ -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- --------
$R_{lm}$ 1000 1.4222 1.0294 1.0597 0.9652 1.1169 1.0269 1.0065 0.9894 1.0048 0.9911 0.9738
2000 1.0189 1.1622 1.0819 1.0292 1.0182 0.9973 0.9966 0.9897 0.9879 0.9809 0.9910
$R_{q}(6)$ 1000 0.8667 0.6212 0.8602 0.9444 0.8400 0.9550 0.9071 0.9333 0.9275 0.9261 0.9676
2000 1.2326 1.0196 0.7614 0.8526 0.9167 0.9566 1.0735 0.9879 0.9875 0.9597 0.9883
$R_{q}(9)$ 1000 0.9231 0.7838 0.8434 0.8700 0.8333 0.9337 0.8755 0.9469 0.9102 0.9096 0.9431
2000 1.2826 0.8438 0.7976 0.9435 0.9900 0.9617 1.0346 0.9795 0.9931 0.9513 0.9833
$R_{q}(12)$ 1000 0.7288 0.8421 0.8632 0.8716 0.8879 0.9000 0.9439 0.8900 0.9379 0.8921 0.9121
2000 1.1600 0.6988 0.8333 0.9166 1.0539 0.8707 1.0210 0.9905 0.9796 0.9634 0.9827
  : The values of $R_{lm}$ and $R_{q}(\ell)$ based on the stationary S-GARCH($1, 2$) model in DGP 5[]{data-label="VTpower"}
Second, we compare the power of $LM_{T}$ and $LM_{T}^{vt}$ and that of $Q_{T}(\ell)$ and $Q_{T}^{vt}(\ell)$ by looking at the following two ratios: $$R_{lm}=\frac{\mbox{the power of }LM_{T}}{\mbox{the power of }LM_{T}^{vt}}\quad\mbox{ and }\quad
R_{q}(\ell)=\frac{\mbox{the power of }Q_{T}(\ell)}{\mbox{the power of }Q_{T}^{vt}(\ell)},$$ where the power of each test is computed based on 1000 replications. Table \[VTpower\] reports the values of $R_{lm}$ and $R_{q}(\ell)$ (for $\ell=6, 9$ and $12$), when the data are generated from a stationary S-GARCH($1, 2$) model in DGP 5 with $\tau(x)\sim (\ref{nochange})$. The results for DGPs 3–4 and 6 are quite similar and hence omitted to save space. From Table \[VTpower\], we can see that (i) the values of $R_{lm}$ are close to 1 in all examined cases; (ii) when the value of $T$ or $k$ is small, the values of $R_{q}(\ell)$ are slightly less than one, meaning that $Q_T^{vt}(\ell)$ could be more powerful than $Q_T(\ell)$; (iii) when the value of $T$ or $k$ becomes large, the power advantage of $Q_T^{vt}(\ell)$ disappears, as the values of $R_{q}(\ell)$ are close to 1. These findings demonstrate that when the S-GARCH model is stationary, our two tests have the same power performance as their counterparts from the VT method, especially for large $T$. We also highlight that when the S-GARCH model is non-stationary, our unreported results show that $LM^{vt}_{T}$ and $Q_T^{vt}(\ell)$ suffer from a severe over-size problem, and hence they cannot be used in this case.
Applications
============
In this section, we re-study the US dollar to Indian rupee (USD/INR) exchange rate series and FTSE-index series in @Truquet:2017, with respect to in-sample fitting and out-of-sample prediction.
USD/INR exchange rates
----------------------
This subsection considers the USD/INR exchange rate series from December 19th, 2005 to February 18th, 2015. The log returns (in percentage) of this series, with $T=2301$ observations in total, are denoted by $\{y_t\}$ and plotted in the upper panel of Fig \[rupee\]. In @Truquet:2017, this return series is fitted by a semiparametric ARCH(1) model with a time-varying intercept and a constant lag-1 ARCH parameter. Motivated by this, we use an ARCH(1) model as the pilot model in Algorithm \[alg1\] to choose the bandwidth $h=0.0358$, and then calculate the series $\{\widehat{u}_{t}\}$. Based on $\{\widehat{u}_{t}\}$, our BIC selects $p=q=1$ for the S-GARCH model, and hence we fit this return series by the S-GARCH($1, 1$) model with $\widehat{\alpha}_{1T}=0.0762_{(0.0231)}$, $\widehat{\beta}_{1T}=0.8443_{(0.0475)}$, and $\widehat{\tau}_t$ plotted in the bottom panel of Fig \[rupee\], where the values in parentheses are the related asymptotic standard errors, and the bandwidth $h=0.0833$ is re-chosen by using a GARCH($1, 1$) model as the pilot model in Algorithm \[alg1\]. For this fitted S-GARCH($1, 1$) model, the p-values of the portmanteau tests $Q_{T}(6)$, $Q_{T}(9)$ and $Q_{T}(12)$ are 0.6472, 0.7530 and 0.8268, respectively, implying that our short run GARCH($1, 1$) component $u_t$ is adequate for this return series. From the plot of $\{\widehat{\tau}_t\}$ in Fig \[rupee\], we find that the long run component $\tau_t$ has relatively larger values around 2009 and 2014. This finding is reasonable, since the examined return series is more volatile during 2008–2009 and 2013–2014.
![The plot of log returns $\{y_t\}$ (upper panel) and estimates $\{\widehat{\tau}_t\}$ (bottom panel) for USD/INR series. Here, $\widehat{\tau}_t$ is computed by using the Epanechnikov kernel with $h=0.0833$.[]{data-label="rupee"}](rupee.eps){width="12cm" height="8.7cm"}
FTSE-index
----------
This subsection considers the FTSE-index series from January 4th, 2005 to March 4th, 2015. As in the previous example, we study the log returns of this index series, with $T=2568$ observations in total, denoted by $\{y_t\}$ and plotted in the upper panel of Fig \[FTSE\]. Since @Truquet:2017 suggested a semiparametric ARCH(5) model with a time-varying intercept and constant ARCH parameters to fit this return series, we take an ARCH(5) model as the pilot model in Algorithm \[alg1\], and then select the bandwidth $h=0.0358$ as a result. Based on this choice of $h$, we compute $\{\widehat{u}_{t}\}$ and select $p=q=1$ according to the BIC. Hence, we fit this return series by the S-GARCH($1, 1$) model with $\widehat{\alpha}_{1T}=0.1098_{(0.0165)}$, $\widehat{\beta}_{1T}=0.8433_{(0.0233)}$, and $\widehat{\tau}_t$ plotted in the bottom panel of Fig \[FTSE\], where the bandwidth $h=0.0941$ is re-chosen by using a GARCH($1, 1$) model as the pilot model in Algorithm \[alg1\]. Further, the portmanteau tests $Q_{T}(6)$, $Q_{T}(9)$ and $Q_{T}(12)$ (with p-values equal to $0.5326$, $0.5335$ and $0.2800$, respectively) suggest that this fitted S-GARCH($1, 1$) model is adequate. From the bottom panel of Fig \[FTSE\], we find that the long run component $\tau_t$ for the FTSE return series only has a clear peak around 2009. This may imply that the stock market index series has a different long run structure from the exchange rate series.
![The plot of log returns $\{y_t\}$ (upper panel) and estimates $\{\widehat{\tau}_t\}$ (bottom panel) for FTSE series. Here, $\widehat{\tau}_t$ is computed by using the Epanechnikov kernel with $h=0.0941$.[]{data-label="FTSE"}](FTSE.eps){width="12cm" height="8.7cm"}
Forecasting comparisons
-----------------------
This subsection compares the forecasting performance of the S-GARCH($1, 1$) model, the S-ARCH($q$) model, the GARCH($1, 1$) model in @Bollerslev:1986, and the LS-ARCH($q$) model (i.e., the locally stationary ARCH($q$) model) in @FSR:2008 for the USD/INR and FTSE return series. Note that the S-ARCH($q$) model can locally approximate the semiparametric ARCH($q$) model in @Truquet:2017, where $q=1$ (or 5) is suggested for the USD/INR (or FTSE) return series. Hence, we follow @Truquet:2017 to select $q$ for the S-ARCH($q$) and LS-ARCH($q$) models.
Next, we compare all four models in terms of the mean squared prediction error (MSPE). Specifically, we use the in-sample data set $\{y_t\}_{t=1}^{T_0}$ to make a $t_0$-step ahead forecast $\widehat{y}_{{T_0}+t_0|T_0}^2$ for the out-of-sample data point $y_{T_0+t_0}^2$, and then compute the MSPE by $$\mbox{MSPE}(t_0)=\sum_{T_0=1500}^{T-t_0}\big(\widehat{y}_{{T_0}+t_0|T_0}^2-y_{T_0+t_0}^2\big)^2.$$ The model with the smallest value of $\mbox{MSPE}(t_0)$ has the best $t_0$-step ahead forecasting performance.
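The rolling-origin MSPE above can be sketched generically (Python; `forecaster` is a hypothetical stand-in for any of the four models' prediction rules):

```python
import numpy as np

def mspe(y, forecaster, t0, T_start=1500):
    """Rolling-origin MSPE(t0): for each origin T0, forecaster(y[:T0], t0)
    returns the t0-step-ahead forecast of y_{T0+t0}^2 (1-based time index,
    so y_{T0+t0} is y[T0 + t0 - 1] in 0-based indexing)."""
    T = len(y)
    err = 0.0
    for T0 in range(T_start, T - t0 + 1):
        y2_hat = forecaster(y[:T0], t0)
        err += (y2_hat - y[T0 + t0 - 1] ** 2) ** 2
    return err
```

With the paper's setting one would call `mspe(y, f, t0)` with `T_start=1500`; a naive benchmark forecaster is `lambda past, t0: np.mean(past ** 2)`.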
Moreover, we introduce how each model computes $\widehat{y}_{{T_0}+t_0|T_0}^2$. For the S-GARCH($1, 1$) model, we fit the model via the two-step estimation based on the in-sample data set $\{y_t\}_{t=1}^{T_0}$, where the bandwidth $h$ is chosen by Algorithm \[alg1\] with a pilot GARCH(1, 1) model. With the kernel estimate $\widehat{\tau}_{T_0}$ and QMLE $\widehat{\theta}_{T_0}$, we then obtain $\widehat{y}_{T_0+t_0|T_0}^2=\widehat{\tau}_{T_0}g_{T_0+t_0|T_0}(\widehat{\theta}_{T_0})$, where $g_{T_0+t_0|T_0}(\widehat{\theta}_{T_0})$, computed as for volatility prediction in the GARCH($1, 1$) model, is the $t_0$-step-ahead prediction of $g_{T_0+t_0}$. The S-ARCH($q$) model computes $\widehat{y}_{{T_0}+t_0|T_0}^2$ in a similar way. For the GARCH($1, 1$) model, we fit the model via the VT estimation based on the in-sample data set $\{y_t\}_{t=1}^{T_0}$, and then compute $\widehat{y}_{{T_0}+t_0|T_0}^2$ in the conventional way. For the LS-ARCH($q$) model, we follow the method in @FSR:2008 to compute $\widehat{y}_{{T_0}+t_0|T_0}^2$. That is, we treat the last $\widetilde{T}$ in-sample data points as if they came from a stationary ARCH($q$) process, and then estimate the parameters based on these $\widetilde{T}$ data points and compute $\widehat{y}_{{T_0}+t_0|T_0}^2$ as for the stationary ARCH($q$) model. Here, the tuning parameter $\widetilde{T}$ is chosen by minimizing the following MSPE: $$\widetilde{T}=\arg\min_{T\in\mathcal{T}}\sum_{t\in[{T_0}-50,{T_0}-1]}\big(\widehat{y}_{t+1|t}^2(T)-y_{t+1}^2\big)^2,$$ where $\mathcal{T}=\{50, 100, \cdots,500\}$, and $\widehat{y}_{t+1|t}^2(T)$, computed as for the stationary ARCH($q$) model, is the prediction of $y_{t+1}^2$ based on the data set $\{y_{i}\}_{i=t-T+1}^{t}$.
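The tuning rule for $\widetilde{T}$ can be sketched as follows (Python; `forecaster` is again a hypothetical stand-in for the stationary ARCH($q$) one-step forecast):

```python
import numpy as np

def select_window(y, T0, forecaster, grid=range(50, 501, 50)):
    """Choose the local window size by minimizing the one-step MSPE over the
    last 50 in-sample origins; forecaster(window, 1) stands in for the
    stationary ARCH(q) one-step forecast of the next squared observation."""
    best_T, best_err = None, np.inf
    for Tw in grid:
        err = 0.0
        for t in range(T0 - 50, T0):           # origins T0-50, ..., T0-1
            window = y[max(0, t - Tw):t]       # the last Tw observations
            err += (forecaster(window, 1) - y[t] ** 2) ** 2
        if err < best_err:
            best_T, best_err = Tw, err
    return best_T
```

The grid `range(50, 501, 50)` matches $\mathcal{T}=\{50, 100, \cdots, 500\}$; in practice the forecaster would be the fitted stationary ARCH($q$) predictor rather than a generic callable.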
Table \[tableforecast\] reports the values of MSPE($t_0$) for all four models, where the prediction horizon $t_0$ is taken as $1, 5, 10$ and $22$, corresponding to daily, weekly, biweekly and monthly predictions, respectively. From this table, we find that for the USD/INR return series, (i) the S-GARCH model has the best daily and weekly forecasting performance and the second best biweekly and monthly forecasting performance; (ii) the LS-ARCH model performs the best for the biweekly and monthly forecast, the second best for the weekly forecast, but the worst for the daily forecast; (iii) the GARCH model has a better forecasting performance than the S-ARCH model, which delivers the worst weekly, biweekly and monthly forecasts. For the FTSE return series, the S-GARCH model always performs best except for $t_0=1$, in which the GARCH model performs slightly better. Meanwhile, the GARCH model exhibits better forecasting performance than the LS-ARCH model, while the S-ARCH model still remains the worst one in general. Overall, our S-GARCH model has the best forecasting performance especially when the prediction horizon is long.
             $t_0=1$        $t_0=5$        $t_0=10$       $t_0=22$        $t_0=1$        $t_0=5$       $t_0=10$      $t_0=22$
  ---------- -------------- -------------- -------------- --------------- -------------- ------------- ------------- -------------
  S-GARCH    **0.0434**     **0.0476**     0.0487         0.0500          3.1588         **3.2889**    **3.3447**    **3.5227**
  S-ARCH     0.0454         0.0496         0.0500         0.0514          3.2381         3.4576        3.5692        3.7502
  GARCH      0.0444         0.0484         0.0499         0.0509          **3.1587**     3.2917        3.3749        3.5833
  LS-ARCH    0.0456         0.0482         **0.0479**     **0.0480**      3.3553         3.3828        3.4317        3.5389

  : The values of MSPE($t_0$) for the USD/INR (first four columns) and FTSE (last four columns) return series[]{data-label="tableforecast"}

[Note: For each $t_0$, the smallest value of MSPE($t_0$) among all four models is in boldface.]{}
Concluding remarks
==================
This paper provides a complete statistical inference procedure for the S-GARCH model. Our methodologies, including the estimation and testing, center around the QMLE of the non-time-varying parameters in the GARCH-type short run component. Since this QMLE is based on the estimate of the long run component, we develop new proof techniques to derive its asymptotic normality, and find that its asymptotic variance is adaptive to the long run component of unknown form. By comparing our results with those in @HL:2010, we discover a much simpler asymptotic variance expression for the QMLE, which is convenient for practitioners. By comparing with the QMLE from the VT method in @FHZ:2011, we find that our QMLE not only enjoys a broader application scope, covering the non-stationary S-GARCH model, but also avoids any efficiency loss when the S-GARCH model is stationary. None of these interesting features has been unveiled before in the literature, and they make our QMLE and its related LM and portmanteau tests more appealing in practice. Finally, we suggest some directions for future research. First, it is interesting to extend our study to the robust estimation context. This could give us more efficient estimators and more powerful tests for dealing with heavy-tailed data. Second, a semiparametric framework similar to (\[semi\_model\]) can be imposed on many variants of the standard GARCH model (for example, the asymmetric power-GARCH model in @PWT:2008 and the asymmetric log-GARCH model in @FWZ:2013), and our methodologies could be applied to the resulting semiparametric models. Third, another possible direction is to relax the smoothness condition on the long run component to allow for abrupt changes. This seems challenging and may require non-trivial technical treatment.
Appendix: Proofs {#appendix-proofs .unnumbered}
================
To facilitate the proofs, we first introduce some notations. As for $g_{t}(\theta)$, $\widehat{g}_{t}(\theta)$, $L_{T}(\theta)$ and $\widehat{L}_{T}(\theta)$ in (\[ideal\_llf\])–(\[hatg\]), we similarly define $$\label{mle}
\widetilde{L}_T(\theta)=\sum_{t=1}^{T}\widetilde{l}_t(\theta)\quad \text{with}\quad \widetilde{l}_t(\theta)=\frac{u_t^2}{\widetilde{g}_t(\theta)}+\log\widetilde{g}_t(\theta),$$ where $\widetilde{g}_t(\theta)$ is defined in the same way as $\widehat{g}_t(\theta)$ in (\[hatg\]) with $\widehat{u}_{t}$ replaced by $u_{t}$. Meanwhile, we let $\kappa_T=\sqrt{\frac{\log T}{Th}}+h^2$, $\Delta_t=\widehat{u}^2_t-u^2_t$, $\widetilde{S}_t(\theta)={\widehat{g}_t(\theta)}^{-1}-{\widetilde{g}_t(\theta)}^{-1}$, $B^{(j)}$ be a $p\times p$ matrix with $(1,j)$th element 1 and other elements 0, and $$B=\left(\begin{matrix}
\beta_1&\beta_2&\cdots&\beta_p\\
1&0&\cdots&0\\
\vdots&&&\vdots\\
0&\cdots&1&0
\end{matrix}\right)$$ be a $p\times p$ matrix. Also, we let $C$ be a generic constant which may differ at each appearance.
Next, we give five technical lemmas, whose proofs are given in the supplementary material (@JLZ:2019). Lemma \[lem\_linton\] captures the error from the nonparametric estimation. Lemma \[lem\_diff\] gives some useful results on $\Delta_t$ and $\widetilde{S}_t(\theta)$. Lemma \[lem\_hat\] ensures that replacing $u_t$ by $\widehat{u}_t$ has a negligible impact on our asymptotic results. Lemma \[lem\_initial\] guarantees that the effect from initial values to our asymptotics is negligible. Lemma \[lem\_mixing\] provides a useful $\beta$-mixing result.
\[lem\_linton\] Suppose Assumptions \[ident\_tau\]–\[ass\_ut\] hold. Then, almost surely $(a.s.)$,
$\mathrm{(i)}$ ${\displaystyle \sup_{x\in(0,1)}\Big|\widehat{\tau}(x)-\tau(x)-\frac{1}{T}\sum_{t=1}^{T}K_h\Big(x-\frac{t}{T}\Big)\tau(x)(u^2_t-1)-h^2b(x)\Big|=O\Big(\frac{\log T}{Th}\Big)+o(h^2)}$;
$\mathrm{(ii)}$ ${\displaystyle \sup_{x\in(0,1)}\Big|\frac{1}{T}\sum_{t=1}^{T}K_h\Big(x-\frac{t}{T}\Big)\tau(x)(u^2_t-1)\Big|=O\Big(\sqrt{\frac{\log T}{Th}}\Big)}$.
\[lem\_diff\]
Suppose the conditions in Theorem \[thm\_garch\] hold. Then,
$(\mathrm{i})$ $\Delta_t={\tau_t}^{-1}{(\tau_t-\widehat{\tau}_t)u^2_t}+O(\kappa_T^2)u^2_t$, where $O(1)$ holds uniformly in $t$;
$(\mathrm{ii})$ $\sup_{\theta\in\Theta}|\widetilde{S}_t(\theta)|\leq C\kappa_T$.
\[lem\_hat\] Suppose the conditions in Theorem \[thm\_garch\] hold. Then, for any $\iota\leq 4(1+2\delta)$,
$(\mathrm{i})$ ${\displaystyle \sup_{\theta\in\Theta}\Big\|\widehat{g}_t(\theta)-\widetilde{g}_t(\theta)\Big\|_{\iota}\leq C\kappa_T}$;
$(\mathrm{ii})$ ${\displaystyle \sup_{\theta\in\Theta}\Big\|\frac{\partial \widehat{g}_t(\theta)}{\partial\theta}-\frac{\partial\widetilde{g}_t(\theta)}{\partial\theta}\Big\|_{\iota}\leq C\kappa_T}$;
$(\mathrm{iii})$ ${\displaystyle \sup_{\theta\in\Theta}\Big\|\frac{\partial^2 \widehat{g}_t(\theta)}{\partial\theta\partial\theta'}-\frac{\partial^2 \widetilde{g}_t(\theta)}{\partial\theta\partial\theta'}\Big\|_{\iota}\leq C\kappa_T}$.
\[lem\_initial\] Suppose Assumptions \[ident\_tau\] and \[ass\_garch\]–\[ass\_eta\] hold. Then, there exists a $\rho\in(0,1)$ such that for any $\iota\leq 4(1+2\delta)$,
$(\mathrm{i})$ ${\displaystyle \sup_{\theta\in\Theta}\Big\|g_t(\theta)-\widetilde{g}_t(\theta)\Big\|_{\iota}\leq C\rho^t}$;
$(\mathrm{ii})$ ${\displaystyle \sup_{\theta\in\Theta}\Big\|\frac{\partial g_t(\theta)}{\partial\theta}-\frac{\partial\widetilde{g}_t(\theta)}{\partial\theta}\Big\|_{\iota}\leq C\rho^t}$;
$(\mathrm{iii})$ ${\displaystyle \sup_{\theta\in\Theta}\Big\|\frac{\partial^2 g_t(\theta)}{\partial\theta\partial\theta'}-\frac{\partial^2 \widetilde{g}_t(\theta)}{\partial\theta\partial\theta'}\Big\|_{\iota}\leq C\rho^t}$.
\[lem\_mixing\] Suppose Assumptions \[ass\_ut\]–\[ass\_garch\] and \[ass\_eta\](i) hold. Then, $\big\{(u_t,g_t,\frac{\partial g_t(\theta)}{\partial\theta'})\big\}$ is strictly stationary and $\beta$-mixing with exponential decay.
<span style="font-variant:small-caps;">Proof of Theorem \[thm\_kernel\].</span> See the supplementary material in @JLZ:2019.
<span style="font-variant:small-caps;">Proof of Theorem \[thm\_garch\]($\mathrm{i}$).</span> See the supplementary material in @JLZ:2019.
In order to prove Theorem \[thm\_garch\]($\mathrm{ii}$), we need a crucial proposition, which is interesting in its own right.
\[keypro\] Let $\{c_t\}_{t\in\mathbb{Z}}$ be a strictly stationary process and $\mathcal{F}_{t}^{s}=\sigma(c_i,t\leq i\leq s)$ be the sigma-field generated by $\{c_i,t\leq i\leq s\}$. Define $$S_{T}=\frac{1}{\sqrt{T}}\sum_{t=1}^{T}b_t\Big\{\frac{1}{Th}\sum_{s=1}^{T}K\Big(\frac{s-t}{Th}\Big)a_s\Big\},$$ where $a_t=f(c_t)$, $b_t=g(c_t,c_{t-k})$ for some $k\leq n_T$, and $f(\cdot)$ and $g(\cdot,\cdot)$ are two real-valued functions. Suppose the following conditions hold:
$(1)$ $Ea_t=0$, $Eb_t=0$, $E|a_t|^{\iota_1(1+2\delta)}<\infty$ and $E|b_t|^{\iota_2(1+2\delta)}<\infty$, where $\iota_1,\iota_2>0$ satisfy $\iota_1^{-1}+\iota_{2}^{-1}=1/2$ and $\delta>0$;
$(2)$ $c_t$ is $\beta$-mixing with mixing coefficients $\beta(j)$ satisfying $\sum_{j=1}^{\infty}\beta(j)^{\delta/(1+\delta)}<\infty$;
$(3)$ $K(\cdot)$ satisfies Assumption \[ass\_kernel\] and $h$ satisfies Assumption \[ass\_bandwidth\];
$(4)$ $n_T$ is either a constant or $n_T\to \infty$ and $n_T=o(\sqrt{Th^2})$ as $T\to \infty$.
Then,
$$\mathrm{(i)}\ |ES_T|\leq \frac{Cn_T}{\sqrt{T}h}\quad\mbox{and}\quad \mathrm{(ii)}\ ES_T^2\leq C\max\Big\{\frac{n_T}{\sqrt{Th}},\frac{n_T^2}{Th^2}\Big\}.$$
We decompose $S_T=S_{T,1}+S_{T,2}+S_{T,3}+S_{T,4}$, where
$$\begin{aligned}
S_{T,1}=&\frac{1}{T^{3/2}h}\sum_{t=1}^{T-1}b_t\sum_{s=t+1}^{T}K\Big(\frac{t-s}{Th}\Big)a_s,\quad S_{T,2}=\frac{1}{T^{3/2}h}\sum_{t=n_T+1}^{T}b_t\sum_{s=1}^{t-n_T}K\Big(\frac{t-s}{Th}\Big)a_s,\\
S_{T,3}=&\frac{1}{T^{3/2}h}\sum_{t=n_T+1}^{T}b_t\sum_{s=t-n_T+1}^{t}K\Big(\frac{t-s}{Th}\Big)a_s,\quad S_{T,4}=\frac{1}{T^{3/2}h}\sum_{t=1}^{n_T}b_t\sum_{s=1}^{t}K\Big(\frac{t-s}{Th}\Big)a_s.\end{aligned}$$
$(\mathrm{i})$ Under Condition (1), we must have $\iota_1>2$ and $\iota_2>2$, which implies that $\|a_t\|_{2(1+2\delta)}<\infty$ and $\|b_t\|_{2(1+2\delta)}<\infty$.
Since $b_t\in\mathcal{F}_{t-n_T}^{t}$, by Conditions (1)–(3) and Davydov’s inequality (see @Davydov:1968), we have
$$\begin{aligned}
|ES_{T,1}|&\leq \frac{1}{T^{3/2}h}\sum_{t=1}^{T-1}\sum_{s=t+1}^{T}|E(b_ta_s)|\\
&\leq \frac{C}{T^{3/2}h}\sum_{t=1}^{T-1}\sum_{s=t+1}^{T}\beta(s-t)^{\delta/(1+\delta)}\|b_t\|_{2(1+\delta)}\|a_s\|_{2(1+\delta)}=\frac{C}{T^{1/2}h}.\end{aligned}$$
Similarly, we can show that $|ES_{T,2}|\leq\frac{C}{T^{1/2}h}$. The result then follows by noticing that
$$\begin{aligned}
&E|S_{T,3}|\leq\frac{1}{T^{3/2}h}\sum_{t=n_T+1}^{T}\sum_{s=t-n_T+1}^{t}E|b_ta_s|\leq \frac{Cn_T}{T^{1/2}h},\\
&E|S_{T,4}|\leq\frac{1}{T^{3/2}h}\sum_{t=1}^{n_T}\sum_{s=1}^{t}E|b_ta_s|\leq \frac{Cn_T^2}{T^{3/2}h}.\end{aligned}$$
$(\mathrm{ii})$ It is not hard to obtain that $ES_{T,3}^2\leq \frac{Cn_T^2}{Th^2}$ and $ES_{T,4}^2\leq \frac{Cn_T^4}{T^3h^2}$. Below, we only prove that $ES_{T,1}^2\leq\frac{Cn_T}{\sqrt{Th}}+\frac{C}{Th^2}$, since we can similarly show that $ES_{T,2}^2\leq\frac{C}{\sqrt{Th}}+\frac{C}{Th^2}$.
Let $\varpi_t=\sum_{s=t+1}^{T}K(\frac{t-s}{Th})a_s$. Then, $ES_{T,1}^2=\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=1}^{T}
Eb_tb_{t'}\varpi_t\varpi_{t'}:=V_{T,1}+V_{T,2}+V_{T,3}$, where
$$\begin{aligned}
V_{T,1}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{T} Eb_tb_{t'}\varpi_t\varpi_{t'},\quad V_{T,2}=\frac{1}{T^3h^2}\sum_{t=1}^{T} E(b_t\varpi_t)^2,\\
V_{T,3}=&\frac{1}{T^3h^2}\sum_{t'=1}^{T}\sum_{t=t'+1}^{T} Eb_tb_{t'}\varpi_t\varpi_{t'}.\end{aligned}$$
For simplicity, we only show that $V_{T,1}\leq \frac{Cn_T}{\sqrt{Th}}+\frac{C}{Th^2}$. Let
$$\begin{aligned}
\varpi_{1t}=&\sum_{s=t+1}^{t'-1}K\Big(\frac{t-s}{Th}\Big)a_s\in\mathcal{F}_{t+1}^{\min\{t'-1,t+[Th]\}},\quad \varpi_{2t}=\sum_{s=t'}^{T}K\Big(\frac{t-s}{Th}\Big)a_s\in\mathcal{F}_{t'}^{\infty},\\
\varpi_{1t'}=&\sum_{s=t'+1}^{t'+[Th]-1}K\Big(\frac{t'-s}{Th}\Big)a_s\in\mathcal{F}_{t'+1}^{t'+[Th]-1},\quad \varpi_{2t'}=\sum_{s=t'+[Th]}^{T}K\Big(\frac{t'-s}{Th}\Big)a_s\in\mathcal{F}_{t'+[Th]}^{\infty}.\end{aligned}$$
Here, we have used the fact that $K(\frac{t-s}{Th})=0$ if $|t-s|>[Th]$ by Condition (3). Moreover, decompose $V_{T,1}=V_{T,11}+V_{T,12}+V_{T,13}$, where
$$\begin{aligned}
V_{T,11}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{T}Eb_t\varpi_{1t}b_{t'}\varpi_{1t'},\quad V_{T,12}=\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{T}Eb_t\varpi_{1t}b_{t'}\varpi_{2t'},\\
V_{T,13}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{T}Eb_t\varpi_{2t}b_{t'}\varpi_{t'}.\end{aligned}$$
Using Lemmas \[lem\_key1\]–\[lem\_key3\] below, it follows that $V_{T,1}\leq C\max\big\{\frac{n_T}{\sqrt{Th}},\frac{1}{Th^2}\big\}$.
\[lem\_key1\] Under the conditions in Proposition \[keypro\], $V_{T,11}\leq C\max\big\{\frac{1}{\sqrt{Th}},\frac{1}{Th^2}\big\}$.
First, we decompose $V_{T,11}=\sum_{i=1}^{3}V_{T,11i}$, where
$$\begin{aligned}
V_{T,111}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+2[Th]}\mathrm{Cov}(b_t\varpi_{1t}b_{t'},\varpi_{1t'}), \quad V_{T,112}=\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+2[Th]+1}^{T}\mathrm{Cov}(b_t\varpi_{1t},b_{t'}\varpi_{1t'}),\\
V_{T,113}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+2[Th]+1}^{T}E(b_t\varpi_{1t})E(b_{t'}\varpi_{1t'}).\end{aligned}$$
Next, by Theorem 4.1 in @SY:1996, we have
$$\label{varpi2}
\|\varpi_{1t}\|_{\iota}\leq C\sqrt{Th}\|a_s\|_{\iota+\xi_0}$$
for some $0\leq \iota\leq \iota_1(1+2\delta)-\xi_0 $ and $\xi_0>0$. Since $b_t\varpi_{1t}b_{t'}\in\mathcal{F}_{-\infty}^{t'}$, by Davydov’s inequality and Hölder’s inequality, we can obtain
$$\begin{aligned}
|V_{T,111}|\leq& \frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+2[Th]}\sum_{s=t'}^{t'+[Th]-1}C\beta(s-t')^{\delta/(1+\delta)} \|b_t\varpi_{1t}b_{t'}\|_{(1+\delta)\iota_1/(\iota_1-1)}\|a_s\|_{\iota_1(1+\delta)}\\
\leq&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+2[Th]}\sum_{s=t'}^{t'+[Th]-1}C\beta(s-t')^{\delta/(1+\delta)} \|b_t\|^2_{\iota_2(1+\delta)}\|\varpi_{1t}\|_{\iota_1(1+\delta)}\|a_s\|_{\iota_1(1+\delta)}\\
=&\frac{C}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+2[Th]}\|\varpi_{1t}\|_{\iota_1(1+\delta)}.\end{aligned}$$
Using (\[varpi2\]) with $\iota=\iota_1(1+\delta)$ and $\xi_0=\iota_1\delta$, it follows that $|V_{T,111}|\leq\frac{CT^2h\sqrt{Th}}{T^3h^2}=\frac{C}{\sqrt{Th}}$.
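For reference, the covariance inequality applied repeatedly in this appendix is Davydov's inequality: if $X\in\mathcal{F}_{-\infty}^{t}$ and $Y\in\mathcal{F}_{t+k}^{\infty}$, then for any $p,q>1$ with $1/p+1/q<1$, $$|\mathrm{Cov}(X,Y)|\leq C\,\beta(k)^{1-1/p-1/q}\|X\|_{p}\|Y\|_{q},$$ where we have stated the classical strong-mixing bound with the $\beta$-mixing coefficient used in this paper (legitimate since $\beta$-mixing implies strong mixing). Taking $p=q=2(1+\delta)$ gives the exponent $1-\frac{1}{1+\delta}=\frac{\delta}{1+\delta}$ appearing in the displayed bounds.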
Third, we note that $b_t\varpi_{1t}\in\mathcal{F}_{-\infty}^{t+[Th]}$ as $t'>t+[Th]$, and $b_{t'}K(\frac{t'-s}{Th})a_s\in\mathcal{F}_{t'-n_T}^{\infty}\subset\mathcal{F}_{t'-[Th]+1}^{\infty}$ as $n_T\ll[Th]$. Then, by Davydov’s inequality and Hölder’s inequality, we have
$$\begin{aligned}
|V_{T,112}|\leq& \frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+2[Th]+1}^{T}\sum_{s=t'+1}^{t'+[Th]-1} |\mathrm{Cov}(b_{t}\varpi_{1t},b_{t'}K((t'-s)/Th)a_s)|\\
\leq& \frac{C}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+2[Th]+1}^{T}\sum_{s=t'+1}^{t'+[Th]-1}\beta(t'-t-2[Th])^{\delta/(1+\delta)}\|b_t\varpi_{1t}\|_{2(1+\delta)}\|b_{t'}a_s\|_{2(1+\delta)}\\
\leq&\frac{C}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+2[Th]+1}^{T}\sum_{s=t'+1}^{t'+[Th]-1}\beta(t'-t-2[Th])^{\delta/(1+\delta)}\|b_t\|^2_{\iota_2(1+\delta)}\|\varpi_{1t}\|_{\iota_1(1+\delta)}\|a_s\|_{\iota_1(1+\delta)}\\
\leq& \frac{C}{T^2h}\sum_{t=1}^{T}\|\varpi_{1t}\|_{\iota_1(1+\delta)}.\end{aligned}$$
Using (\[varpi2\]) with $\iota=\iota_1(1+\delta)$ and $\xi_0=\iota_1\delta$, it follows that $|V_{T,112}|\leq \frac{C}{\sqrt{Th}}$. Finally, since it is straightforward to show that $|V_{T,113}|\leq \frac{C}{Th^2}$, the result follows.
\[lem\_key2\] Under the conditions in Proposition \[keypro\], $V_{T,12} \leq\frac{C}{\sqrt{Th}}$.
Note that $b_t\varpi_{1t}b_{t'}\in\mathcal{F}_{-\infty}^{t'}$. By Davydov’s inequality, Hölder’s inequality and (\[varpi2\]), we have
$$\begin{aligned}
|\mathrm{Cov}(b_t\varpi_{1t}b_{t'},\varpi_{2t'})| \leq&\sum_{s=t'+[Th]}^{T}K\Big(\frac{t'-s}{Th}\Big)|\mathrm{Cov}(b_t\varpi_{1t}b_{t'},a_s)|\\
\leq& C\sum_{j=[Th]}^{T}\beta(j)^{\delta/(1+\delta)}\|b_t\|^2_{\iota_2(1+\delta)}\|\varpi_{1t}\|_{\iota_1(1+\delta)}\|a_s\|_{\iota_1(1+\delta)}\\
\leq&C\sqrt{Th}\sum_{j=[Th]}^{T}\beta(j)^{\delta/(1+\delta)}.\end{aligned}$$
By Condition (2) and the fact $Th\to\infty$ as $T\to\infty$, we have that $Th\sum_{j=[Th]}^{T}\beta(j)^{\delta/(1+\delta)}\leq C$, which entails that $|V_{T,12}|\leq\frac{C}{\sqrt{T^3h^5}}\leq \frac{C}{\sqrt{Th}}.$
\[lem\_key3\] Under assumptions of Proposition \[keypro\], $V_{T,13} \leq\frac{Cn_T}{\sqrt{Th}}$.
Rewrite $\varpi_{t'}=\varpi_{3t'}+\varpi_{4t'}$, where $$\varpi_{3t'}=\sum_{r=t'+1}^{s}K\Big(\frac{t'-r}{Th}\Big)a_r
\mbox{ and } \varpi_{4t'}=\sum_{r=s+1}^{t'+[Th]}K\Big(\frac{t'-r}{Th}\Big)a_r.$$ Then, we can decompose $V_{T,13}=\sum_{i=1}^{4}V_{T,13i}$, where
$$\begin{aligned}
V_{T,131}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+[Th]}\sum_{s=t'+1}^{t+[Th]}K\Big(\frac{t-s}{Th}\Big)\mathrm{Cov}(b_tb_{t'},a_s\varpi_{3t'}),\\
V_{T,132}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+[Th]}\sum_{s=t'+1}^{t+[Th]}K\Big(\frac{t-s}{Th}\Big)\mathrm{Cov}(b_tb_{t'},a_s\varpi_{4t'}),\\
V_{T,133}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+[Th]}\sum_{s=t'+1}^{t+[Th]}K\Big(\frac{t-s}{Th}\Big)E(b_tb_{t'})E(a_s\varpi_{t'}),\\
V_{T,134}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+[Th]+1}^{T}\sum_{s=t'+1}^{T}K\Big(\frac{t-s}{Th}\Big)E(b_tb_{t'}a_s\varpi_{t'}).\end{aligned}$$
First, by interchanging summations of $s$ and $r$, we have
$$V_{T,131}=\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+[Th]}\sum_{r=t'+1}^{t+[Th]}\sum_{s=r}^{t+[Th]} K\Big(\frac{t-s}{Th}\Big)K\Big(\frac{t'-r}{Th}\Big)\mathrm{Cov}(b_tb_{t'},a_sa_r).$$
Since $b_tb_{t'}\in\mathcal{F}_{-\infty}^{t'}$, by Davydov’s inequality and Hölder’s inequality, we can show
$$\begin{aligned}
|V_{T,131}|\leq& \frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+[Th]}\sum_{r=t'+1}^{t+[Th]} \Big|\mathrm{Cov}\Big(b_tb_{t'},a_r\sum_{s=r}^{t+[Th]}K\Big(\frac{t-s}{Th}\Big)a_s\Big)\Big|\\
\leq&\frac{C}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+[Th]}\sum_{r=t'+1}^{t+[Th]}\beta(r-t')^{\delta/(1+\delta)}\|b_tb_{t'}\|_{\iota_2(1+\delta)/2}
\Big\|a_r\sum_{s=r}^{t+[Th]}K\Big(\frac{t-s}{Th}\Big)a_s\Big\|_{\iota_1(1+\delta)}\\
\leq&\frac{C}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+[Th]}\sum_{r=t'+1}^{t+[Th]}\beta(r-t')^{\delta/(1+\delta)}\|b_t\|^2_{\iota_2(1+\delta)}
\|a_r\|_{\iota_1(1+\delta)}\Big\|\sum_{s=r}^{t+[Th]}K\Big(\frac{t-s}{Th}\Big)a_s\Big\|_{\iota_1(1+\delta)}.\end{aligned}$$
By similar arguments as for (\[varpi2\]), we have $$\Big\|\sum_{s=r}^{t+[Th]}K\Big(\frac{t-s}{Th}\Big)a_s\Big\|_{\iota_1(1+\delta)}\leq C\sqrt{t+[Th]-r}\|a_s\|_{\iota_1(1+2\delta)}\leq C\sqrt{Th}\|a_s\|_{\iota_1(1+2\delta)},$$ and hence it follows that $|V_{T,131}|\leq \frac{CT^2h\sqrt{Th}}{T^3h^2}=\frac{C}{\sqrt{Th}}$. Similarly, $|V_{T,132}|\leq \frac{C}{\sqrt{Th}}$.
Next, we decompose $V_{T,133}=V_{T,1331}+V_{T,1332}$, where
$$\begin{aligned}
V_{T,1331}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+n_T}\sum_{s=t'}^{t+[Th]}K\Big(\frac{t-s}{Th}\Big)E(b_tb_{t'})E(a_s\varpi_{t'}),\\
V_{T,1332}=&\frac{1}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+n_T+1}^{t+[Th]}\sum_{s=t'}^{t+[Th]}K\Big(\frac{t-s}{Th}\Big)E(b_tb_{t'})E(a_s\varpi_{t'}).\end{aligned}$$
It is easy to see that
$$|V_{T,1331}|\leq \frac{C}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+1}^{t+n_T}\sum_{s=t'}^{t+[Th]}K\Big(\frac{t-s}{Th}\Big)\|b_t\|^2_{2}\|a_s\|_2\|\varpi_{t'}\|_2 \leq\frac{C}{Th}\sum_{t'=t+1}^{t+n_T}\|\varpi_{t'}\|_2$$
and $\|\varpi_{t'}\|_2\leq C\sqrt{Th}\|a_s\|_{2(1+\delta)}$ by (\[varpi2\]). So, we have that $|V_{T,1331}|\leq \frac{Cn_T}{\sqrt{Th}}$. Moreover, by Davydov’s inequality and Hölder’s inequality, we can show
$$\begin{aligned}
|V_{T,1332}|\leq&\frac{C}{T^3h^2}\sum_{t=1}^{T}\sum_{t'=t+n_T+1}^{t+[Th]}\sum_{s=t'}^{t+[Th]}\beta(t'-n_T-t)^{\delta/(1+\delta)}\|b_t\|^2_{\iota_2(1+\delta)/2}\|a_s\|_2\|\varpi_{t'}\|_2\\
\leq& \frac{C}{\sqrt{Th}},\end{aligned}$$
which implies that $|V_{T,133}|\leq \frac{Cn_T}{\sqrt{Th}}$.
Finally, since $K\big(\frac{s-t}{Th}\big)=0$ when $s\geq t'>t+[Th]$, it follows that $V_{T,134}=0$, and hence the result follows.
<span style="font-variant:small-caps;">Proof of Theorem \[thm\_garch\]($\mathrm{ii}$).</span> By Taylor’s expansion, we have
$$\label{taylor}
\sqrt{T}(\widehat{\theta}_T-\theta_0)=-\Big\{\frac{1}{T}\frac{\partial^2 \widehat{L}_T(\theta^*)}{\partial\theta\partial\theta'}\Big\}^{-1} \frac{1}{\sqrt{T}}\frac{\partial \widehat{L}_T(\theta_0)}{\partial\theta},$$
where $\theta^*$ lies between $\hat{\theta}_T$ and $\theta_0$.
Let $\widehat{g}_t=\widehat{g}_{t}(\theta_0)$, $\widetilde{g}_t=\widetilde{g}_{t}(\theta_0)$, $\frac{\partial \widehat{g}_t}{\partial\theta_m}=\frac{\partial \widehat{g}_t(\theta_0)}{\partial\theta_m}$, $\frac{\partial \widetilde{g}_t}{\partial\theta_m}=\frac{\partial \widetilde{g}_t(\theta_0)}{\partial\theta_m}$, and $\widetilde{S}_t=\widetilde{S}_t(\theta_0)$. By noting that $\widehat{g}_t^{-1}=\widetilde{S}_t+\widetilde{g}_t^{-1}$, $\widehat{g}_t^{-1}\widehat{u}^2_t=\widetilde{g}_t^{-1}u^2_t+u^2_t\widetilde{S}_t+\widetilde{g}_t^{-1}\Delta_t$ and $\frac{\partial \widehat{g}_t}{\partial\theta_m}=\frac{\partial \widehat{g}_t}{\partial\theta_m}-\frac{\partial \widetilde{g}_t}{\partial\theta_m}+\frac{\partial \widetilde{g}_t}{\partial\theta_m}$, we have
$$\label{expansionlikelihood}
\frac{1}{\sqrt{T}}\frac{\partial \widehat{L}_T(\theta_0)}{\partial\theta_m}=\sum_{i=1}^{12}U_i,$$
where
$$\begin{aligned}
U_1=&\frac{1}{\sqrt{T}}\sum_{t=1}^{T}(1-\widetilde{g}_t^{-1}u^2_t)\widetilde{g}_t^{-1}\frac{\partial \widetilde{g}_t}{\partial\theta_m}, &U_2=&\frac{1}{\sqrt{T}}\sum_{t=1}^{T}(1-\widetilde{g}_t^{-1}u^2_t)\widetilde{g}_t^{-1}\Big(\frac{\partial \widehat{g}_t}{\partial\theta_m}-\frac{\partial \widetilde{g}_t}{\partial\theta_m}\Big),\\
U_3=&\frac{1}{\sqrt{T}}\sum_{t=1}^{T}(1-\widetilde{g}_t^{-1}u^2_t)\widetilde{S}_t\frac{\partial \widetilde{g}_t}{\partial\theta_m}, &U_4=&\frac{1}{\sqrt{T}}\sum_{t=1}^{T}(1-\widetilde{g}_t^{-1}u^2_t)\widetilde{S}_t\Big(\frac{\partial \widehat{g}_t}{\partial\theta_m}-\frac{\partial \widetilde{g}_t}{\partial\theta_m}\Big),\\
U_5=&-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\widetilde{g}_t^{-1}u^2_t\widetilde{S}_t\frac{\partial \widetilde{g}_t}{\partial\theta_m}, &U_6=&-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\widetilde{g}_t^{-1}u^2_t\widetilde{S}_t\Big(\frac{\partial \widehat{g}_t}{\partial\theta_m}-\frac{\partial \widetilde{g}_t}{\partial\theta_m}\Big),\\
U_7=&-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}u^2_t\widetilde{S}_t^2\frac{\partial \widetilde{g}_t}{\partial\theta_m}, &U_8=&-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}u^2_t\widetilde{S}_t^2\Big(\frac{\partial \widehat{g}_t}{\partial\theta_m}-\frac{\partial \widetilde{g}_t}{\partial\theta_m}\Big),\\
U_9=&-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\Delta_t\widetilde{g}_t^{-1}\widetilde{g}_t^{-1}\frac{\partial \widetilde{g}_t}{\partial\theta_m}, &U_{10}=&-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\Delta_t\widetilde{g}_t^{-1}\widetilde{g}_t^{-1}\Big(\frac{\partial \widehat{g}_t}{\partial\theta_m}-\frac{\partial \widetilde{g}_t}{\partial\theta_m}\Big),\\
U_{11}=&-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\Delta_t\widetilde{g}_t^{-1}\widetilde{S}_t\frac{\partial \widetilde{g}_t}{\partial\theta_m}, &U_{12}=&-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\Delta_t\widetilde{g}_t^{-1}\widetilde{S}_t\Big(\frac{\partial \widehat{g}_t}{\partial\theta_m}-\frac{\partial \widetilde{g}_t}{\partial\theta_m}\Big).\end{aligned}$$
Using a similar proof to that of Theorem 2.2 in @FZ:2004, we can show $$U_1=\frac{1}{\sqrt{T}}\frac{\partial{L}_T({\theta}_0)}{\partial\theta_m}+o_p(1)=-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}(\eta_t^2-1)\psi_t+o_{p}(1).$$ By Hölder’s inequality and Lemmas \[lem\_diff\]–\[lem\_hat\], it is not hard to prove $$U_{i}=o_p(1) \mbox{ for }i=4,6,7,8,10,11,12.$$ Combining with the results in Lemmas \[pro\_u3\]–\[pro\_u9\] below, by (\[expansionlikelihood\]) it follows that
$$\label{first_de}
\frac{1}{\sqrt{T}}\frac{\partial \widehat{L}_T(\theta_0)}{\partial\theta_m}=-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\Big\{(\eta_t^2-1)\psi_t- \frac{\omega_0}{\gamma_0}E\Big(\frac{1}{g_t^{2}}\frac{\partial g_t}{\partial\theta_m}\Big)z_t\Big\}+o_p(1),$$
where $\omega_0=1-\sum_{i=1}^{q}\alpha_{i0}-\sum_{j=1}^{p}\beta_{j0}$ and $\gamma_0=1-\sum_{j=1}^{p}\beta_{j0}$.
Using Lemmas \[lem\_diff\]–\[lem\_initial\] and the consistency of $\widehat{\theta}_T$, it follows directly that
$$\label{second_de}
\frac{1}{T}\frac{\partial^2 \widehat{L}_T(\theta^*)}{\partial\theta\partial\theta'}\to_p E\{\cdot\}=J_1.$$
Hence, by (\[taylor\]) and (\[first\_de\])–(\[second\_de\]) we have
$$\label{expression_1}
\sqrt{T}(\widehat{\theta}_T-\theta_0)=J_1^{-1}\frac{1}{\sqrt{T}}\sum_{t=1}^{T} \Big\{(\eta_t^2-1)\psi_t-\frac{\omega_0}{\gamma_0}E\Big(\frac{1}{g_t^{2}}\frac{\partial g_t}{\partial\theta}\Big)z_t\Big\}+o_p(1).$$
Following @HKZ:2006, $u_t^2$ has an ARMA representation:
$$u_t^2=\omega_0+\sum_{i=1}^{\max\{p,q\}}(\alpha_{i0}+\beta_{i0})u_{t-i}^2-\sum_{j=1}^{p}\beta_{j0}g_{t-j}(\eta_{t-j}^2-1)+g_t(\eta_t^2-1),$$
with the convention $\beta_{i0}=0$ if $i>p$ and $\alpha_{i0}=0$ if $i>q$. Hence, it follows that $z_t=\sum_{i=1}^{\max\{p,q\}}(\alpha_{i0}+\beta_{i0})z_{t-i}-\sum_{j=1}^{p}\beta_{j0}g_{t-j}(\eta_{t-j}^2-1)+g_t(\eta_t^2-1)$, which entails $$\label{z_t}
\frac{1}{\sqrt{T}}\sum_{t=1}^{T}z_{t}=\frac{\gamma_0}{\omega_0}\frac{1}{\sqrt{T}}\sum_{t=1}^{T}g_t(\eta_t^2-1)+o_p(1).$$ By (\[expression\_1\])–(\[z\_t\]), we have
$$\label{bahadur}
\sqrt{T}(\widehat{\theta}_T-\theta_0)=J_1^{-1}\frac{1}{\sqrt{T}}\sum_{t=1}^{T}(\eta_t^2-1)\Big\{\psi_t-E\Big(\frac{1}{g_t^{2}}\frac{\partial g_t}{\partial\theta}\Big)g_t\Big\}+o_p(1).$$
Now, the result holds by (\[bahadur\]), the central limit theorem for martingale difference sequence, and the fact that $E(g_t\psi_{t}')=E\big(\frac{\partial g_{t}}{\partial\theta_0}\big)=0$.
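The ARMA representation of $u_t^2$ invoked above is an exact algebraic identity, not merely an approximation. A quick numerical check for a GARCH(1,1) specification is sketched below; the parameter values, sample size, and seed are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

# Numerical check of the ARMA representation of u_t^2 for a GARCH(1,1):
#   u_t^2 = omega + (alpha + beta) u_{t-1}^2
#           - beta * g_{t-1} (eta_{t-1}^2 - 1) + g_t (eta_t^2 - 1).
rng = np.random.default_rng(0)
omega, alpha, beta = 0.1, 0.1, 0.8
T = 1000
eta = rng.standard_normal(T)
g = np.empty(T)
u = np.empty(T)
g[0] = omega / (1 - alpha - beta)      # start at the unconditional variance
u[0] = np.sqrt(g[0]) * eta[0]
for t in range(1, T):
    g[t] = omega + alpha * u[t - 1] ** 2 + beta * g[t - 1]
    u[t] = np.sqrt(g[t]) * eta[t]

# The representation holds path by path, so the residual is (numerically) zero.
lhs = u[1:] ** 2
rhs = (omega + (alpha + beta) * u[:-1] ** 2
       - beta * g[:-1] * (eta[:-1] ** 2 - 1)
       + g[1:] * (eta[1:] ** 2 - 1))
assert np.allclose(lhs, rhs)
```

The identity follows by substituting $u_{t-1}^2 = g_{t-1}\eta_{t-1}^2$ into the volatility recursion, which is exactly the algebra behind the display above.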
In order to prove Lemmas \[pro\_u3\]–\[pro\_u9\], we need Lemma \[lem\_difftau\] below. The proofs of Lemmas \[lem\_difftau\]–\[pro\_u9\] are provided in the supplementary material (@JLZ:2019).
\[lem\_difftau\] Let $m_T$ satisfy $$\label{m_T}
m_T=O(T^{\lambda_m}) ~\text{for some } \lambda_m>0 ~ \text{and } \lambda_m+\lambda_h<1/2.$$ Then, under the conditions in Theorem \[thm\_garch\], $$\max_{1\leq i\leq m_T}\max_{i+1\leq t\leq T}\Big|\tau_t^{-1}(\widehat{\tau}_t-\tau_t)-\tau_{t-i}^{-1}(\widehat{\tau}_{t-i}-\tau_{t-i})\Big|=o\Big(\frac{1}{\sqrt{T}}\Big) \text{ a.s.}$$
\[pro\_u3\] Under the conditions in Theorem \[thm\_garch\], $U_2=o_p(1)$ and $U_3=o_p(1)$.
\[pro\_u5\] Under the conditions in Theorem \[thm\_garch\], $U_5=-\frac{1}{\sqrt{T}}\sum_{t=1}^{T}M_mz_t+o_p(1)$, where $M_m=E\big(\frac{1}{g_t}\frac{\partial g_t}{\partial\theta_m}\big)-\frac{\omega_0}{\gamma_0}E\big(\frac{1}{g_t^{2}}\frac{\partial g_t}{\partial\theta_m}\big).$
\[pro\_u9\] Under the conditions in Theorem \[thm\_garch\], $U_9=\frac{1}{\sqrt{T}}\sum_{t=1}^{T}E\Big(\frac{1}{g_t}\frac{\partial g_t}{\partial\theta_m}\Big)z_t+o_p(1)$.
<span style="font-variant:small-caps;">Proof of Theorem \[thm\_BIC\].</span> See the supplementary material in @JLZ:2019.
<span style="font-variant:small-caps;">Proof of Theorem \[LMtest\].</span> See the supplementary material in @JLZ:2019.
<span style="font-variant:small-caps;">Proof of Theorem \[thm\_port\].</span> Since $\overline{\widehat{\eta}^2}\to_p 1$ and $\frac{1}{T}\sum_{t=1}^{T}(\widehat{\eta}^2_t(\widehat{\theta}_T)-1)^2\to_p \kappa-1$, it suffices to consider $P_k$, where
$$\begin{aligned}
P_k=&\frac{1}{\sqrt{T}}\sum_{t=k+1}^{T}\{\widehat{\eta}^2_t(\widehat{\theta}_T)-1\}\{\widehat{\eta}^2_{t-k}(\widehat{\theta}_T)-1\}\\
=&\frac{1}{\sqrt{T}}\sum_{t=k+1}^{T}\{\widehat{\eta}^2_t(\widehat{\theta}_T)-\eta_t^2+\eta_t^2-1\} \{\widehat{\eta}^2_{t-k}(\widehat{\theta}_T)-\eta_{t-k}^2+\eta_{t-k}^2-1\}\\
:=&R_1+R_2+R_3+R_4,\end{aligned}$$
where
$$\begin{aligned}
R_1=&\frac{1}{\sqrt{T}}\sum_{t=k+1}^{T}\{\eta_t^2-1\}\{\eta_{t-k}^2-1\},\\
R_2=&\frac{1}{\sqrt{T}}\sum_{t=k+1}^{T}\{\widehat{\eta}^2_t(\widehat{\theta}_T)-\eta_t^2\} \{\widehat{\eta}^2_{t-k}(\widehat{\theta}_T)-\eta_{t-k}^2\},\\
R_3=&\frac{1}{\sqrt{T}}\sum_{t=k+1}^{T}\{\eta_t^2-1\} \{\widehat{\eta}^2_{t-k}(\widehat{\theta}_T)-\eta_{t-k}^2\},\\
R_4=&\frac{1}{\sqrt{T}}\sum_{t=k+1}^{T}\{\widehat{\eta}^2_t(\widehat{\theta}_T)-\eta_t^2\} \{\eta_{t-k}^2-1\}.\end{aligned}$$
By Lemmas \[r2\]–\[r4\] below, we have $$P_k=\frac{1}{\sqrt{T}}\sum_{t=k+1}^{T}(\eta_t^2-1)(\eta_{t-k}^2-1)
-D_k\sqrt{T}(\widehat{\theta}_T-\theta_0)-\frac{\omega_0}{\gamma_0}H_k\Big(\frac{1}{\sqrt{T}}\sum_{t=1}^{T}z_t\Big).$$
Together with (\[z\_t\])–(\[bahadur\]), it follows that
$$\begin{aligned}
P_k=&\frac{1}{\sqrt{T}}\sum_{t=k+1}^{T}(\eta_t^2-1)(\eta_{t-k}^2-1) -D_kJ_1^{-1}\frac{1}{\sqrt{T}}\sum_{t=1}^{T}(\eta_t^2-1)\Big\{\psi_t-E\Big(\frac{1}{g_t^{2}}\frac{\partial g_t}{\partial\theta}\Big)g_t\Big\}\\
&-H_k\Big\{\frac{1}{\sqrt{T}}\sum_{t=1}^{T}g_t(\eta_t^2-1)\Big\}+o_p(1).\end{aligned}$$
The result follows by the central limit theorem for martingale difference sequences.
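To build intuition for the leading term of $P_k$, the following Monte Carlo sketch checks that $\frac{1}{\sqrt{T}}\sum_{t=k+1}^{T}(\eta_t^2-1)(\eta_{t-k}^2-1)$ is approximately mean zero with variance $(\kappa-1)^2$, where $\kappa=E\eta_t^4$. The iid standard normal innovations (so $\kappa=3$), sample size, lag, and seed are our own illustrative choices, standing in for the estimated residuals.

```python
import numpy as np

# Monte Carlo behavior of the leading term of P_k under iid innovations:
# mean ~ 0, variance ~ (kappa - 1)^2 = 4 for standard normal eta_t.
rng = np.random.default_rng(7)
T, k, reps = 4000, 2, 500
stats = np.empty(reps)
for r in range(reps):
    eta2 = rng.standard_normal(T) ** 2   # eta_t^2 draws
    c = eta2 - 1.0                       # centered squared innovations
    stats[r] = c[k:] @ c[:-k] / np.sqrt(T)

assert abs(stats.mean()) < 0.4           # mean close to zero
assert 2.5 < stats.var() < 6.0           # variance close to 4
```

The summands $(\eta_t^2-1)(\eta_{t-k}^2-1)$ form a martingale difference sequence, which is what makes the central limit theorem cited above applicable.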
\[r2\] Under the conditions in Theorem \[thm\_port\], $R_2=o_p(1).$
\[r3\] Under the conditions in Theorem \[thm\_port\], $R_3=o_p(1)$.
\[r4\] Under the conditions in Theorem \[thm\_port\], $R_{4}=-D_k\sqrt{T}(\widehat{\theta}_T-\theta_0)-\frac{\omega_0}{\gamma_0}H_k$ $\big(\frac{1}{\sqrt{T}}\sum_{t=1}^{T}z_t\big)+o_p(1)$.
The proofs of Lemmas \[r2\]–\[r4\] are given in the supplementary material (@JLZ:2019).
<span style="font-variant:small-caps;">Proof of Theorem \[thm\_improve\].</span> See the supplementary material in @JLZ:2019.
Amado, C., Teräsvirta, T., 2013. Modelling volatility by variance decomposition. *Journal of Econometrics* [**175**]{}, 142–153.
Bickel, P.J., Klaassen, C.A., Ritov, Y., Wellner, J.A., 1993. . Baltimore: Johns Hopkins University Press.
Bollerslev, T., 1986. Generalized autoregressive conditional heteroskedasticity. *Journal of Econometrics* [**31**]{}, 307–327.
Carrasco, M., Chen, X., 2002. Mixing and moment properties of various GARCH and stochastic volatility models. *Econometric Theory* [**18**]{}, 17–39.
Cavaliere, G., Taylor, A.M.R., 2007. Testing for unit roots in time series models with non-stationary volatility. , 919–947.
Chen, B., Hong, Y., 2016. Detecting for smooth structural changes in GARCH models. *Econometric Theory* [**32**]{}, 740–791.
Dahlhaus, R., Subba Rao, S., 2006. Statistical inference for time-varying ARCH processes. *Annals of Statistics* [**34**]{}, 1075–1114.
Davydov, Y.A., 1968. Convergence of distributions generated by stationary stochastic processes. , 691–696.
Engle, R.F., 1982. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. *Econometrica* [**50**]{}, 987–1007.
Engle, R., Rangel, J., 2008. The spline-GARCH model for low-frequency volatility and its global macroeconomic causes. *Review of Financial Studies* [**21**]{}, 1187–1222.
Francq, C., Zakoïan, J.M., 2004. Maximum likelihood estimation of pure GARCH and ARMA-GARCH processes. *Bernoulli* [**10**]{}, 605–637.
Francq, C., Zakoïan, J.M., 2007. Quasi-maximum likelihood estimation in GARCH processes when some coefficients are equal to zero. , 1265–1284.
Francq, C., Horváth, L., Zakoïan, J.M., 2011. Merits and drawbacks of variance targeting in GARCH models. *Journal of Financial Econometrics* [**9**]{}, 619–656.
Francq, C., Wintenberger, O., Zakoïan, J.M., 2013. GARCH models without positivity constraints: Exponential or Log GARCH?. , 34–46.
Fryzlewicz, P., Sapatinas, T., Subba Rao, S., 2008. Normalized least-squares estimation in time-varying ARCH models. *Annals of Statistics* [**36**]{}, 742–786.
Hafner, C.M., Linton, O., 2010. Efficient estimation of a multivariate multiplicative volatility model. *Journal of Econometrics* [**159**]{}, 55–73.
Hall, P., Wehrly, T.E., 1991. A geometrical method for removing edge effects from kernel-type nonparametric regression estimators. *Journal of the American Statistical Association* [**86**]{}, 665–672.
Hall, P., Yao, Q., 2003. Inference in ARCH and GARCH models with heavy-tailed errors. *Econometrica* [**71**]{}, 285–317.
Hansen, B.E., 2001. The new econometrics of structural change: dating breaks in US labour productivity. *Journal of Economic Perspectives* [**15**]{}, 117–128.
Horváth, L., Kokoszka, P., 2003. GARCH processes: structure and estimation. *Bernoulli* [**9**]{}, 201–227.
Horváth, L., Kokoszka, P., Zitikis, R., 2006. Sample and implied volatility in GARCH models. , 617–635.
Jiang, F., Li, D., Zhu, K. (2019) Supplement to “Adaptive inference for a semiparametric generalized autoregressive conditional heteroscedastic model”.
Li, W.K., Mak, T.K., 1994. On the squared residual autocorrelations in non-linear time series with conditional heteroskedasticity. *Journal of Time Series Analysis* [**15**]{}, 627–636.
Ljung, G.M., Box, G.E., 1978. On a measure of lack of fit in time series models. *Biometrika* [**65**]{}, 297–303.
Mikosch, T., Stărică, C., 2004. Nonstationarities in financial time series, the long-range dependence, and the IGARCH effects. , 378–390.
Pan, J., Wang, H., Tong, H., 2008. Estimation and tests for power-transformed and threshold GARCH models. , 352–378.
Patilea, V., Raïssi, H., 2014. Testing second-order dynamics for autoregressive processes in presence of time-varying variance. *Journal of the American Statistical Association* [**109**]{}, 1099–1111.
Robinson, P.M., 1987. Asymptotically efficient estimation in the presence of heteroskedasticity of unknown form. *Econometrica* [**55**]{}, 875–891.
Robinson, P.M., 1989. Nonparametric estimation of time-varying parameters. *Statistical Analysis and Forecasting of Economic Structural Change. Hackl, P. (Ed.)*. , 253–264.
Shao, Q.M., Yu, H., 1996. Weak convergence for weighted empirical processes of dependent sequences. , 2098–2127.
Stărică, C., Granger, C., 2005. Nonstationarities in stock returns. , 503–522.
Truquet, L., 2017. Parameter stability and semiparametric inference in time varying auto-regressive conditional heteroscedasticity models. *Journal of the Royal Statistical Society: Series B* [**79**]{}, 1391–1414.
Xu, K.L., Phillips, P.C.B., 2008. Adaptive estimation of autoregressive models with time-varying variances. , 265–280.
Zhang, T., Wu, W.B., 2012. Inference of time-varying regression models. *Annals of Statistics* [**40**]{}, 1376–1402.
Zhou, Z., Shao, X., 2013. Inference for linear models with dependent errors. *Journal of the Royal Statistical Society: Series B* [**75**]{}, 323–343.
Zhou, Z., Wu, W.B., 2009. Local linear quantile estimation for nonstationary time series. *Annals of Statistics* [**37**]{}, 2696–2729.
Zhu, K., 2019. Statistical inference for autoregressive models under heteroscedasticity of unknown form. Forthcoming in *Annals of Statistics*.
---
abstract: 'In this paper, we develop an adaptive multiresolution discontinuous Galerkin (DG) scheme for time-dependent transport equations in multi-dimensions. The method is constructed using multiwavelets on tensorized nested grids. Adaptivity is realized by error thresholding based on the hierarchical surplus, and the Runge-Kutta DG (RKDG) scheme is employed as the reference time evolution algorithm. We show that the scheme performs similarly to a sparse grid DG method when the solution is smooth, reducing computational cost in multi-dimensions. When the solution is no longer smooth, the adaptive algorithm can automatically capture fine local structures. The method is therefore very suitable for deterministic kinetic simulations. Numerical results, including several benchmark tests as well as the Vlasov-Poisson (VP) and oscillatory VP systems, are provided.'
author:
- 'Wei Guo [^1]'
- 'Yingda Cheng [^2]'
bibliography:
- 'ref\_cheng.bib'
- 'ref\_cheng\_2.bib'
- 'adaptive.bib'
title: 'An Adaptive Multiresolution Discontinuous Galerkin Method for Time-Dependent Transport Equations in Multi-dimensions'
---
discontinuous Galerkin methods; adaptive multiresolution analysis; sparse grids; transport equations; Vlasov-Poisson system.
Introduction
============
In this paper, we propose an adaptive multiresolution DG scheme for time-dependent transport equations in multi-dimensions. This is a continuation of our previous research on sparse grid DG schemes [@sparsedgelliptic; @guo_sparsedg]. In particular, here we consider linear variable-coefficient equations, with the eventual goal of developing efficient solvers for kinetic transport problems. It is well known that the main bottleneck in solving kinetic equations is their high dimensionality: the equations are posed in phase space, which is six-dimensional in a realistic setting. A popular framework for high dimensional computations is the sparse grid approach [@zenger1991sparse; @bungartz2004sparse; @garcke2013sparse]. The idea is to use a properly truncated subset of the tensor product approximation space to break the curse of dimensionality. In our previous work [@guo_sparsedg], a sparse grid DG method was formulated and applied to kinetic simulations. The construction is based on Alpert’s multiwavelets [@alpert1993class; @alpert_adaptive_2002], and the method is demonstrated to save significant computational and storage cost because of the reduced degrees of freedom of the approximation space. By using the DG framework, many attractive features such as stability and conservation can be proven. However, the scheme’s success and the underlying convergence theory still rely heavily on the smoothness of the exact solution. In fact, it is generally understood that any *a priori* choice of the sparse grid approximation space depends on a smoothness assumption on the exact solution, which is often not satisfied in practice. For example, for the VP system and many other kinetic models, small scale structures will often develop over time. Therefore, using the standard sparse grid methods or any uniform grid based methods may not be optimal. The situation is even worse if the solution contains discontinuities.
In the literature, adaptive sparse grid methods have been developed [@zenger1991sparse; @griebel1998adaptive; @bungartz2004sparse; @bokanowski_adaptive_2012] to address this issue. Such schemes use the hierarchical coefficients, or the so-called hierarchical surplus, as a natural indicator for refinement or coarsening. This approach is closely connected with the celebrated adaptive wavelet methods [@dahmen_wavelet_1997; @cohen2000wavelet]. Multiresolution schemes of this type have been used to accelerate computations for conservation laws under finite difference or finite volume frameworks [@harten_multiresolution_1995; @bihari_multiresolution_1997; @dahmen_multiresolution_2001; @alves_adaptive_2002; @cohen_fully_2003; @chiavassa2005multiresolution]. In recent years, there have been developments of adaptive multiresolution DG schemes [@calle_wavelets_2005; @archibald_adaptive_2011; @hovhannisyan_adaptive_2014; @gerhard_adaptive_2014; @gerhard_high-order_2014; @cite-key] which use Alpert’s multiwavelets for computing conservation laws and compressible flows. In the context of adaptive computation for Vlasov equations, closely related work includes semi-Lagrangian type wavelet methods [@besse2003adaptive; @gutnic_vlasov_2004; @besse_wavelet-mra-based_2008] and the $h$-adaptive RKDG method [@zhuqiu].
The objective of the present paper is to develop an adaptive multiresolution DG method that also fits under the sparse grid framework in multi-dimensions. When compared with other adaptive multiresolution DG methods in the literature, the main difference lies in the multi-dimensional case. Our scheme naturally reduces to a sparse grid DG method, saving computational cost, when the solution possesses sufficient smoothness. This is realized by using fully tensorized basis functions instead of employing multiwavelets only in local elements. When the solution is no longer smooth, the adaptive algorithm, which uses the hierarchical surplus as the refinement or coarsening indicator, can automatically capture the local structures, thus removing the smoothness requirement of an *a priori* chosen sparse grid approximation space. We use a hash table as the underlying data structure and can deal with equations in arbitrary dimensions. By using the DG formulation, many nice properties are retained for the transport equations. The numerical scheme is validated by benchmark tests with smooth and nonsmooth solutions, the standard VP system, and the oscillatory VP system.
The rest of this paper is organized as follows: in Section \[sec:method\], we construct the adaptive multiresolution DG scheme. The numerical performance is validated in Section \[sec:numerical\] by three benchmark tests. Section \[sec:kinetic\] discusses the application to Vlasov equations, and we conclude the paper in Section \[sec:conclusion\].
Numerical method {#sec:method}
================
In this section, we formulate an adaptive multiresolution DG method for solving time-dependent linear transport equations. First, we review the multiresolution analysis and multiwavelets which serve as foundations of the underlying scheme. Then, we discuss an adaptive multiresolution projection method that supplies the numerical initial conditions. The adaptive time evolution algorithm is introduced at the end of this section after a review of the reference DG method.
Multiresolution analysis and multiwavelets
------------------------------------------
In this subsection, we review multiresolution analysis associated with piecewise polynomials. We focus on box-shaped domains in this paper. Without loss of generality, all the discussions in this section are for a unit-sized box $\Omega=[0,1]^d$, where $d$ is the dimension of the problem.
First, we review the case when $d=1$. We define a set of nested grids, where the $n$-th level grid $\Omega_n$ consists of $2^n$ uniform cells $I_{n}^j=(2^{-n}j, 2^{-n}(j+1)]$, $j=0, \ldots, 2^n-1,$ for any $n \ge 0.$ For notational convenience, we also denote $I_{-1}=[0,1].$
The nested grids result in nested piecewise polynomial spaces. In particular, let $$V_n^k:=\{v: v \in P^k(I_{n}^j),\, \forall \,j=0, \ldots, 2^n-1\}$$ be the usual piecewise polynomials of degree at most $k$ on the $n$-th level grid $\Omega_n$. Then, we have $$V_0^k \subset V_1^k \subset V_2^k \subset V_3^k \subset \cdots$$ We can now define the multiwavelet subspace $W_n^k$, $n=1, 2, \ldots $ as the orthogonal complement of $V_{n-1}^k$ in $V_{n}^k$ with respect to the $L^2$ inner product on $[0,1]$, i.e., $$V_{n-1}^k \oplus W_n^k=V_{n}^k, \quad W_n^k \perp V_{n-1}^k.$$ For notational convenience, we let $W_0^k:=V_0^k$, which is the standard piecewise polynomial space of degree $k$ on $[0,1]$. Therefore, we have $V_n^k=\bigoplus_{0 \leq l \leq n} W_l^k$.
Now we need to supply a set of orthonormal basis associated with the space $W_l^k$. The case of mesh level $l=0$ is trivial. We use the scaled Legendre polynomials and denote the basis by $v^0_{i,0}(x),\quad i=1,\ldots,k+1.$ When $l>0$, the orthonormal bases in $W_l^k$ are presented in [@alpert1993class] and denoted by $$v^j_{i,l}(x),\quad i=1,\ldots,k+1,\quad j=0,\ldots,2^{l-1}-1.$$ The construction follows a repeated Gram-Schmidt process and the explicit expression of the multiwavelet basis functions are provided in [@alpert1993class]. Note that such multiwavelet bases retain the orthonormal property of wavelet bases for different mesh levels, i.e., $$\label{ortho1d}
\int_0^1 v^j_{i,l}(x)v^{j'}_{i',l'}(x)\,dx=\delta_{ii'}\delta_{ll'}\delta_{jj'}.$$
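For the piecewise-constant case $k=0$, Alpert's multiwavelets reduce to the classical Haar system, which makes the orthonormality across levels easy to verify numerically. The sketch below is our own illustration (the level count $N$ is an arbitrary choice): each basis function is sampled as a piecewise-constant vector on $2^N$ fine cells, so $L^2$ inner products on $[0,1]$ are exact averages of pointwise products.

```python
import numpy as np

# k = 0 multiwavelets on [0,1]: the Haar system.  Level 0 is the constant;
# level l >= 1 contributes 2^(l-1) wavelets of amplitude 2^((l-1)/2).
N = 5
n_fine = 2 ** N
basis = [np.ones(n_fine)]                     # W_0^0: the constant function
for l in range(1, N + 1):
    width = n_fine // 2 ** (l - 1)            # fine cells per wavelet support
    for j in range(2 ** (l - 1)):
        psi = np.zeros(n_fine)
        psi[j * width : j * width + width // 2] = 2 ** ((l - 1) / 2)
        psi[j * width + width // 2 : (j + 1) * width] = -(2 ** ((l - 1) / 2))
        basis.append(psi)

B = np.array(basis)
gram = B @ B.T / n_fine                       # exact L^2 Gram matrix
assert np.allclose(gram, np.eye(len(basis)))  # orthonormal across all levels
print(len(basis))                             # dim V_N^0 = 2^N -> 32
```

For $k>0$ the same check applies to Alpert's higher-order multiwavelets, with $k+1$ functions per support instead of one.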
Now we are ready to review the case when $d>1$. First we recall some basic notations about multi-indices. For a multi-index $\mathbf{\alpha}=(\alpha_1,\cdots,\alpha_d)\in\mathbb{N}_0^d$, where $\mathbb{N}_0$ denotes the set of nonnegative integers, the $l^1$ and $l^\infty$ norms are defined as $$|{{\bm{\alpha}}}|_1:=\sum_{m=1}^d \alpha_m, \qquad |{{\bm{\alpha}}}|_\infty:=\max_{1\leq m \leq d} \alpha_m.$$ The component-wise arithmetic operations and relational operations are defined as $${{\bm{\alpha}}}\cdot {{\bm{\beta}}}:=(\alpha_1 \beta_1, \ldots, \alpha_d \beta_d), \qquad c \cdot {{\bm{\alpha}}}:=(c \alpha_1, \ldots, c \alpha_d), \qquad 2^{{\bm{\alpha}}}:=(2^{\alpha_1}, \ldots, 2^{\alpha_d}),$$ $${{\bm{\alpha}}}\leq {{\bm{\beta}}}\Leftrightarrow \alpha_m \leq \beta_m, \, \forall m,\quad
{{\bm{\alpha}}}<{{\bm{\beta}}}\Leftrightarrow {{\bm{\alpha}}}\leq {{\bm{\beta}}}\textrm{ and } {{\bm{\alpha}}}\neq {{\bm{\beta}}}.$$
By making use of the multi-index notation, we denote by ${\mathbf{l}}=(l_1,\cdots,l_d)\in\mathbb{N}_0^d$ the mesh level in a multivariate sense. We define the tensor-product mesh grid $\Omega_{\mathbf{l}}=\Omega_{l_1}\otimes\cdots\otimes\Omega_{l_d}$ and the corresponding mesh size $h_{\mathbf{l}}=(h_{l_1},\cdots,h_{l_d}).$ Based on the grid $\Omega_{\mathbf{l}}$, we denote by $I_{\mathbf{l}}^{\mathbf{j}}=\{{\mathbf{x}}:x_m\in(h_mj_m,h_m(j_{m}+1)),m=1,\cdots,d\}$ an elementary cell, and $${{\bf V}}_{\mathbf{l}}^k:=\{{{\bf v}}: {{\bf v}}({\mathbf{x}}) \in Q^k(I^{{\mathbf{j}}}_{{\mathbf{l}}}), \,\, {\mathbf{0}}\leq {\mathbf{j}}\leq 2^{{\mathbf{l}}}-{\mathbf{1}}\}= V_{l_1,x_1}^k\times\cdots\times V_{l_d,x_d}^k$$ the tensor-product piecewise polynomial space, where $Q^k(I^{{\mathbf{j}}}_{{\mathbf{l}}})$ denotes the collection of polynomials of degree up to $k$ in each dimension on cell $I^{{\mathbf{j}}}_{{\mathbf{l}}}$. If we use equal mesh refinement of size $h_N=2^{-N}$ in each coordinate direction, the grid and space will be denoted by $\Omega_N$ and ${{\bf V}}_N^k$, respectively.
Based on a tensor-product construction, the multi-dimensional increment space can be defined as $${\mathbf{W}}_{\mathbf{l}}^k=W_{l_1,x_1}^k\times\cdots\times W_{l_d,x_d}^k.$$ Therefore, the standard tensor-product piecewise polynomial space on $\Omega_N$ can be written as $$\label{eq:hiere_tp}
{{\bf V}}_N^k=\bigoplus_{\substack{ |{\mathbf{l}}|_\infty \leq N\\{\mathbf{l}}\in \mathbb{N}_0^d}} {\mathbf{W}}_{\mathbf{l}}^k,$$ while the sparse grid approximation space we used in [@sparsedgelliptic; @guo_sparsedg] is $$\label{eq:hiere_sg}
\hat{{{\bf V}}}_N^k=\bigoplus_{\substack{ |{\mathbf{l}}|_1 \leq N\\{\mathbf{l}}\in \mathbb{N}_0^d}}{\mathbf{W}}_{\mathbf{l}}^k \subset {{\bf V}}_N^k.$$ The dimension of $\hat{{{\bf V}}}_N^k$ scales as $O((k+1)^d2^NN^{d-1})$ [@sparsedgelliptic], which is significantly less than that of ${{\bf V}}_N^k$ with exponential dependence on $Nd$. The approximation results for $\hat{{{\bf V}}}_N^k$ are discussed in [@sparsedgelliptic; @guo_sparsedg], which has a stronger smoothness requirement than the traditional ${{\bf V}}_N^k$ space. In this paper, we will not require the numerical solution to be in $\hat{{{\bf V}}}_N^k$, but rather in ${{\bf V}}_N^k$ and to be chosen adaptively.
Finally, we define the basis functions in multi-dimensions as $$\label{basis}
v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\doteq\prod_{m=1}^d v^{j_m}_{i_m,l_m}(x_m), $$ for ${\mathbf{l}}\in \mathbb{N}_0^d, {\mathbf{j}}\in B_{\mathbf{l}}\doteq \{{\mathbf{j}}\in\mathbb{N}_0^d: \,\mathbf{0}\leq{\mathbf{j}}\leq\max(2^{{\mathbf{l}}-\mathbf{1}}-\mathbf{1},\mathbf{0}) \}$ and $\mathbf{1}\leq{\mathbf{i}}\leq {\mathbf{k}}+\mathbf{1}.$ The orthonormality of the bases can be established by (\[ortho1d\]). Furthermore, we note that the support of $v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}$ is $I_{{\mathbf{l}}-\mathbf{1}}^{\mathbf{j}}.$
Adaptive multiresolution projection method
------------------------------------------
In this subsection, we formulate an adaptive multiresolution projection algorithm which supplies the numerical initial condition for DG schemes. Given a maximum mesh level $N$ and an accuracy threshold $\varepsilon>0$, we find a projected solution $u_h({\mathbf{x}}) \in {{\bf V}}^k_N$ of a given function $u({\mathbf{x}})$ defined on $\Omega$ using an adaptive procedure.
The backbone of the algorithm is the fact that each hierarchical basis of space ${{\bf V}}^k_N$ represents the fine level detail on a specific mesh scale, which naturally provides an error indicator for the design of adaptive algorithms. We first review the mixed derivative norm for a function $u(x).$ For any set $L=\{i_1, \ldots i_r \} \subset \{1, \ldots d\}$, we define $L^c$ to be the complement set of $L$ in $\{1, \ldots d\}.$ For a non-negative integer $\alpha$ and set $L$, we define the semi-norm on any domain denoted by $\Omega$ $
|u|_{H^{\alpha,L}(\Omega)} := \left \| \left ( \frac{\partial^{\alpha}}{\partial x_{i_1}^{\alpha}} \cdots \frac{\partial^{\alpha}}{\partial x_{i_r}^{\alpha}} \right ) u \right \|_{L^2(\Omega)}
$ and $
|u|_{\mathcal{H}^{q+1}(\Omega)} :=\max_{1 \leq r \leq d} \left ( \max_{\substack{L\subset\{1,2,\cdots,d\} \\|L|=r}} |u|_{H^{q+1, L}(\Omega)} \right ),$ which is the norm for the mixed derivatives of $u$ of degree at most $q+1$ in each direction. For a function $u({\mathbf{x}}) \in \mathcal{H}^{p+1}(\Omega),$ we showed in [@guo_sparsedg] that $u({\mathbf{x}})=\sum_{{\mathbf{l}}\in \mathbb{N}_0^d} \sum_{{\mathbf{j}}\in B_{\mathbf{l}}, \mathbf{1}\leq{\mathbf{i}}\leq {\mathbf{k}}+\mathbf{1}} u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}} v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}}),$ and $$\left (\sum_{{\mathbf{j}}\in B_{\mathbf{l}}, \mathbf{1}\leq{\mathbf{i}}\leq {\mathbf{k}}+\mathbf{1}} |u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}|^2 \right )^{1/2} \leq C2^{-(q+1)|{\mathbf{l}}|_1}|u|_{\mathcal{H}^{q+1}(\Omega)},$$ where $u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}= \int_{\Omega}u({\mathbf{x}})v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})d{\mathbf{x}},$ $q=\min\{p,k\},$ and $C$ is a constant independent of the mesh level ${\mathbf{l}}.$ Hence, the hierarchical coefficient $u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}$ (also called the hierarchical surplus) serves as a natural indicator for the local smoothness of $u({\mathbf{x}})$. The main idea of the adaptive algorithm is to retain only the coefficients above a prescribed threshold value $\varepsilon$.
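The thresholding idea can be illustrated in one dimension with the $k=0$ (Haar) hierarchy. The sketch below is our own minimal example, not the paper's algorithm: the test function, resolution, and threshold are arbitrary choices. Coefficients below $\varepsilon$ are dropped, so fine levels survive only near the kink of the test function, and the reconstruction error stays controlled by $\varepsilon$.

```python
import numpy as np

# Hierarchical (Haar) surplus thresholding in 1-D for f(x) = |x - 1/3|.
N = 10
n = 2 ** N
x = (np.arange(n) + 0.5) / n
f = np.abs(x - 1 / 3)                  # kink at x = 1/3

# Orthonormal Haar analysis: repeatedly split cell averages into coarse
# averages and detail (hierarchical-surplus) coefficients.
coeffs, avg = [], f / np.sqrt(n)       # scale so the transform is orthonormal
while len(avg) > 1:
    a, b = avg[0::2], avg[1::2]
    coeffs.append((a - b) / np.sqrt(2))
    avg = (a + b) / np.sqrt(2)
coeffs.append(avg)                     # coarsest average last

eps = 1e-3
kept = sum(int(np.sum(np.abs(c) > eps)) for c in coeffs)
total = sum(c.size for c in coeffs)

# Inverse transform using only the coefficients above the threshold.
avg = np.where(np.abs(coeffs[-1]) > eps, coeffs[-1], 0.0)
for d in reversed(coeffs[:-1]):
    d = np.where(np.abs(d) > eps, d, 0.0)
    out = np.empty(2 * len(avg))
    out[0::2] = (avg + d) / np.sqrt(2)
    out[1::2] = (avg - d) / np.sqrt(2)
    avg = out
f_eps = avg * np.sqrt(n)
print(kept, total)                     # far fewer active coefficients
```

Because the transform is orthonormal, the $L^2$ reconstruction error is bounded by $\sqrt{\#\{\text{dropped}\}}\,\varepsilon$, which is the mechanism behind the error indicators discussed next.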
In this paper, we experiment with error indicators $\left\|\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\right\|_{L^s(\Omega)}$ using different norms, where $\| \cdot \|_{L^s(\Omega)}$ denotes the broken Sobolev $L^s(\Omega)$ norm for a function in ${{\bf V}}_N^k$, with $s=1, 2, \infty.$ When $s=2,$ due to the orthonormality of the basis, $\left\|\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\right\|_{L^2(\Omega)}$ is equivalent to $\left(\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}|u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}|^2\right)^{\frac12}.$ In the other cases, for simplicity, we use instead $\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}|u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}|\|v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\|_{L^1(\Omega)}$ for $s=1$ and $\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}|u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}|\|v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\|_{L^\infty(\Omega)}$ for $s=\infty$. The values of $\|v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\|_{L^1(\Omega)}$ and $\|v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\|_{L^\infty(\Omega)}$ can be precomputed and stored. Overall, $\|v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\|_{L^1(\Omega)}$ scales as $2^{-|{\mathbf{l}}|_1/2}$, and $\|v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\|_{L^\infty(\Omega)}$ scales as $2^{|{\mathbf{l}}|_1/2}.$
In summary, we flag an element $V^{\mathbf{j}}_{{\mathbf{l}}}:=\{v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}, \mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}\}$ if $$\begin{aligned}
&\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}|u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}|\|v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\|_{L^1(\Omega)}>\varepsilon,\quad\text{if}\quad s=1\label{eq:l1}\\
&\left(\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}|u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}|^2\right)^{\frac12}>\varepsilon,\quad\text{if}\quad s=2\label{eq:l2}\\
&\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}|u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}|\|v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\|_{L^\infty(\Omega)}>\varepsilon,\quad\text{if}\quad s=\infty\label{eq:l8},
\end{aligned}$$ where $\varepsilon$ is a prescribed error threshold. Similar to [@griebel1998adaptive], we use a top-down approach, starting recursively from the coarsest level. Once an element is flagged, we consider adding its child elements to improve accuracy. In particular, if an element $V^{{\mathbf{j}}'}_{{\mathbf{l}}'}$ satisfies the following conditions:
- There exists an integer $m$ such that $1\le m\le d$ and ${\mathbf{l}}'={\mathbf{l}}+ \mathbf{e}_m$, where $\mathbf{e}_m$ denotes the unit vector in the $x_m$ direction, and the support of $V^{{\mathbf{j}}'}_{{\mathbf{l}}'}$ is within the support of $V^{{\mathbf{j}}}_{{\mathbf{l}}}.$
- $|{\mathbf{l}}'|_\infty\leq N$,
then it is called a child element of $V_{\mathbf{l}}^{\mathbf{j}}$. Accordingly, element $V_{\mathbf{l}}^{\mathbf{j}}$ is called a parent element of $V_{{\mathbf{l}}'}^{{\mathbf{j}}'}$. In this notation, we see that an element can have multiple children and multiple parents.
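Under the standard dyadic indexing of the hierarchical elements (one element on level $0$ and $2^{l_m-1}$ elements on level $l_m\ge 1$ per direction; this indexing convention is our assumption for illustration), the child elements can be enumerated as follows:

```python
def children(l, j, N):
    """Enumerate child elements (l', j') of element (l, j).

    Assumes the standard dyadic hierarchical indexing: level 0 has one
    element (j = 0) and level lm >= 1 has 2**(lm - 1) elements per direction,
    so refining direction m doubles the index range; l and j are tuples.
    The support-nesting condition then holds by construction."""
    d = len(l)
    out = []
    for m in range(d):
        if l[m] + 1 > N:          # respect the maximum level |l'|_inf <= N
            continue
        lp = l[:m] + (l[m] + 1,) + l[m + 1:]
        if l[m] == 0:             # level 0 -> 1: still a single element
            js = [0]
        else:                     # level lm -> lm + 1: two children
            js = [2 * j[m], 2 * j[m] + 1]
        for jm in js:
            out.append((lp, j[:m] + (jm,) + j[m + 1:]))
    return out
```

For example, the root element has exactly $d$ children, one per direction, consistent with the multiple-children, multiple-parents structure noted above.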
The last component of the algorithm is an efficient data structure. As suggested in [@griebel1998adaptive], we use the hash table approach, which is easy to implement, requires little storage overhead, and allows one to conveniently handle the hierarchical index $({\mathbf{l}},{\mathbf{j}})$ in the implementation. Specifically, through a prescribed hash function, a hierarchical index $({\mathbf{l}},{\mathbf{j}})$ is mapped to a hash key (an integer), which serves as an address in the hash table. Then, given a hierarchical index, the associated data can be easily stored and retrieved by computing the hash key. For more details about the hash table, including how to choose a proper hash function and other implementation details, readers are referred to [@griebel1998adaptive].
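As a minimal illustration (in Python, where tuples are hashable out of the box, so the language supplies the hash function), the hierarchical index $({\mathbf{l}},{\mathbf{j}})$ can serve directly as the key of a dictionary-based hash table; the helper names are ours:

```python
# A minimal sketch of the hash-table storage: Python dictionaries hash tuple
# keys natively, so the hierarchical index (l, j) is used directly as the key.
H = {}

def store(H, l, j, coeffs):
    """Store the detail coefficients of element (l, j)."""
    H[(l, j)] = coeffs

def lookup(H, l, j):
    """Retrieve the detail coefficients of element (l, j), or None."""
    return H.get((l, j))

store(H, (1, 0), (0, 0), [0.25, -0.1])
```

In a lower-level language one would supply an explicit hash function on the packed index, as discussed in [@griebel1998adaptive].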
Finally, we summarize the adaptive projection algorithm as follows.
------------------------------------------------------------------------
\
[**Algorithm 1: Adaptive projection**]{}\
------------------------------------------------------------------------
[**Input:**]{} Function $u({\mathbf{x}})$.
[**Parameters:**]{} Maximum level $N,$ polynomial degree $k,$ error threshold $\varepsilon.$
[**Output:**]{} Hash table H, leaf table L and projected solution $u_h({\mathbf{x}}) \in {{\bf V}}_{N,H}^k.$
1. Project $u({\mathbf{x}})$ onto the coarsest level of mesh, e.g., level 0. Add all elements to the hash table $H$ (active list). Define an element without children as a leaf element, and add all the leaf elements to the leaf table $L$ (a smaller hash table).
2. For each leaf element $V_{\mathbf{l}}^{\mathbf{j}}$ in the leaf table, if , or holds, then we consider its child elements: for a child element $V_{{\mathbf{l}}'}^{{\mathbf{j}}'}$, if it has not been added to the table $H$, then compute the detail coefficients $ \{u^{{\mathbf{j}}'}_{{\mathbf{i}},{\mathbf{l}}'}, \mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}\}$ and add $V_{{\mathbf{l}}'}^{{\mathbf{j}}'}$ to both table $H$ and table $L$. For its parent elements in $H$, we increase the number of children by one.
3. Remove the parent elements from table $L$ for all the newly added elements.
4. Repeat step 2 - step 3, until no element can be further added.
------------------------------------------------------------------------
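The top-down loop of Algorithm 1 can be sketched as follows; this is a schematic skeleton, with the indicator `coeff_norm` and the child enumeration `children` supplied by the caller (both names are ours):

```python
def adaptive_project(coeff_norm, children, eps, root):
    """Top-down adaptive refinement skeleton (Algorithm 1).

    coeff_norm(elem): refinement indicator of an element (one of the three
    criteria above); children(elem): list of child elements. Returns the
    active table H mapping each stored element to its indicator."""
    H = {root: coeff_norm(root)}
    leaves = [root]
    while leaves:                      # repeat until no element can be added
        new_leaves = []
        for elem in leaves:
            if H[elem] > eps:          # flagged: consider its children
                for c in children(elem):
                    if c not in H:
                        H[c] = coeff_norm(c)
                        new_leaves.append(c)
        leaves = new_leaves
    return H
```

On a toy 1D hierarchy where the indicator of level $l$ is $2^{-l}$, the loop refines exactly until $2^{-l}\le\varepsilon$, mirroring the decay estimate above.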
Once the adaptive projection algorithm completes, it generates a final hash table H and a numerical approximation $u_h({\mathbf{x}})=\sum_{v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}} \in H}u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}} v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})$. We denote the approximation space ${{\bf V}}^k_{N,H}=\textrm{span}\{v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}} \in H\},$ which is a subspace of ${{\bf V}}^k_N.$ As noticed in [@griebel1998adaptive], this top-down approach may terminate too early and fail to resolve large coefficients on the very fine mesh levels. An alternative is to compute the $L^2$ projection of $u({\mathbf{x}})$ on the finest level ${{\bf V}}^k_N$ and then truncate the elements with small coefficients, as done in [@hovhannisyan_adaptive_2014]. However, this would considerably increase the computational cost, and we do not pursue it in this work.
The reference DG scheme
-----------------------
In this subsection, we review the standard RKDG method defined on space ${{{\bf V}}}_N^k$ for the following $d$-dimensional linear transport equation with variable coefficients $$\label{eq:model}
\left\{\begin{array}{l}
u_t + \nabla\cdot({{\bm{a}}}(t,{\mathbf{x}}) \,u) =0,\quad {\mathbf{x}}\in\Omega,\\[2mm]
u(0,{\mathbf{x}}) = u_0({\mathbf{x}}),
\end{array}\right.$$ subject to periodic boundary conditions. Other types of boundary conditions can be accommodated in a similar way.
First, we review some basic notations for jumps and averages of piecewise functions defined on the grid $\Omega_N$. Let $T_h$ be the collection of all elementary cells $I^{{\mathbf{j}}}_{N}$, $0 \leq j_m \leq 2^{N}-1$, for all $m=1, \ldots, d$. Let $\Gamma:=\bigcup_{T \in \Omega_N} \partial T$ be the union of the interfaces of all the elements in $\Omega_N$ (here we have taken the periodic boundary condition into account when defining $\Gamma$) and $S(\Gamma):=\Pi_{T\in \Omega_N} L^2(\partial T)$ be the set of $L^2$ functions defined on $\Gamma$. For any $q \in S(\Gamma)$ and ${\mathbf{q}}\in [S(\Gamma)]^d$, we define their averages $\{q\}, \{{\mathbf{q}}\}$ and jumps $[q], [{\mathbf{q}}]$ on the interior edges as follows. Suppose $e$ is an interior edge shared by elements $T_+$ and $T_-$, and let $\bm{n}^+$ and $\bm{n}^-$ denote the unit normal vectors on $e$ pointing toward the exterior of $T_+$ and $T_-$, respectively; then
$$\begin{aligned}
\left[ q \right] = q^- \bm{n}^- + q^+ \bm{n}^+, \qquad & \left\{ q \right\} = \frac12\left( q^- + q^+\right),\\
\left[ {\mathbf{q}} \right] = {\mathbf{q}}^- \cdot \bm{n}^- + {\mathbf{q}}^+ \cdot \bm{n}^+, \qquad & \left\{ {\mathbf{q}} \right\} = \frac12\left( {\mathbf{q}}^- + {\mathbf{q}}^+\right).\end{aligned}$$
The semi-discrete DG formulation for is defined as follows: find $u_h\in {{{\bf V}}}_N^k$, such that $$\begin{aligned}
\label{eq:DGformulation}
\int_{\Omega}(u_h)_t\,v_h\,d{\mathbf{x}}=& \int_{\Omega} u_h{{\bm{a}}}\cdot\nabla v_h\,d{\mathbf{x}}- \sum_{\substack{e \in \Gamma}}\int_{e} \widehat{{{\bm{a}}}u_h} \cdot [v_h]\,ds,\quad \\
:= & A(u_h,v_h) \notag\end{aligned}$$ for $\forall \,v_h \in {{{\bf V}}}_N^k,$ where $\widehat{{{\bm{a}}}u_h}$, defined on the element interfaces, denotes a monotone numerical flux that ensures the $L^2$ stability of the scheme. In this paper, we use the upwind flux $$\widehat{{{\bm{a}}}u_h} = {{\bm{a}}}\{u_h\} + \frac{|{{\bm{a}}}\cdot {{\bf n}}|}{2}[u_h],$$ with ${{\bf n}}= {{\bf n}}^+$ or ${{\bf n}}^-$ for the constant coefficient case. More generally, for variable-coefficient problems, we adopt the global Lax-Friedrichs flux $$\widehat{{{\bm{a}}}u_h} = \{{{\bm{a}}}u_h\} + \frac{\alpha}{2}[u_h],$$ where $\alpha=\max_{{\mathbf{x}}}{|{{\bm{a}}}({\mathbf{x}},t)\cdot {{\bf n}}|}$, the maximum being taken over all ${\mathbf{x}}$ in the computational domain at time $t$.
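In one dimension these two fluxes take the following elementary form (with the convention $\bm{n}^-=+1$, $\bm{n}^+=-1$ at an interface, so $[u]=u^- - u^+$); this is an illustrative sketch rather than the multidimensional implementation:

```python
def upwind_flux(a, u_minus, u_plus):
    """Monotone upwind flux at an interface for the 1D advection term a*u.

    Equivalent to a*{u} + |a|/2*[u]: it picks the trace from the upwind side."""
    return a * u_minus if a >= 0 else a * u_plus

def lax_friedrichs_flux(a_minus, a_plus, u_minus, u_plus, alpha):
    """Global Lax-Friedrichs flux {a u} + alpha/2 [u] for variable speed a,
    with alpha an upper bound on |a| over the computational domain."""
    return 0.5 * (a_minus * u_minus + a_plus * u_plus) \
        + 0.5 * alpha * (u_minus - u_plus)
```

For constant $a$ and $\alpha=|a|$, the Lax-Friedrichs flux reduces to the upwind flux.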
We use the total variation diminishing (TVD) Runge-Kutta methods [@Shu_1988_JCP_NonOscill] to solve the ordinary differential equations resulting from the semidiscrete formulation , $(u_h)_t = R(u_h).$ A commonly used third-order TVD Runge-Kutta method is given by $$\begin{aligned}
u_h^{(1)} &= u_h^{n} + \Delta t R(u^n_h), \notag \\
u_h^{(2)} &= \frac{3}{4}u_h^{n} + \frac14 u_h^{(1)} +\frac14 \Delta t R(u_h^{(1)}),\label{eq:tvd}\\
u_h^{n+1} &= \frac{1}{3}u_h^{n} + \frac23 u_h^{(2)} +\frac23 \Delta t R(u_h^{(2)}), \notag\end{aligned}$$ where $u_h^{n}$ denotes the numerical solution at time level $t=t^n$.
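A single step of this scheme can be written compactly as follows (a generic sketch; `R` is the semidiscrete right-hand side):

```python
def rk3_tvd(u, R, dt):
    """One step of the third-order TVD Runge-Kutta method for u_t = R(u).

    Works for scalars or arrays; each stage is a convex combination of
    forward Euler steps, which is the source of the TVD property."""
    u1 = u + dt * R(u)
    u2 = 0.75 * u + 0.25 * u1 + 0.25 * dt * R(u1)
    return u / 3.0 + (2.0 / 3.0) * u2 + (2.0 / 3.0) * dt * R(u2)
```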
Adaptive multiresolution DG evolution algorithm
-----------------------------------------------
Based on the previous subsections, we are now ready to formulate the adaptive multiresolution DG evolution algorithm which consists of several key steps.
The first step is the prediction step: given the hash table $H$ that stores the numerical solution $u_h$ at time step $t^n$ and the associated leaf table $L$, we predict the locations where the details become significant at the next time step $t^{n+1}$ and then add more elements in order to capture the fine structures. The time step size $\Delta t$ is chosen as follows. We denote by $l_m^n$ the largest mesh level in the $x_m$ direction in the current hash table $H$, and set $l_m^{n,p} = \min(l_m^n+1, N)$ to account for possible refinement after prediction. Accordingly, we denote $h_m^{n,p}=2^{-l_m^{n,p}}$. The time step $\Delta t$ at time $t^n$ is given by $$\begin{aligned}
\displaystyle\Delta t &= \frac{\text{CFL}}{\displaystyle\sum_{m=1}^d \frac{c_m}{h_m^{n,p}}},
\end{aligned}$$ where $c_m$ is the maximum wave propagation speed in the $x_m$-direction, and we use $\text{CFL}=0.1$ in our simulations. We then solve for $u_h\in {{\bf V}}_{N,H}^k$ from $t^n$ to $t^{n+1}$, such that $
\int_{\Omega}(u_h)_t\,v_h\,d{\mathbf{x}}=A(u_h,v_h)
$ for $\forall \,v_h \in {{\bf V}}_{N,H}^k,$ where $A(u_h,v_h)$ has been defined in . The forward Euler discretization is used as the time integrator in this step and we denote the predicted solution at $t^{n+1}$ by $u_h^{(p)}.$ We remark that the standard global time stepping method is employed in the current adaptive framework for simplicity. It is nontrivial to develop a local time stepping method for the proposed adaptive multiresolution DG method due to the distinct hierarchical basis functions, and this subject is left for future study.
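The time step selection described above can be sketched as follows (the helper name is ours):

```python
def cfl_time_step(levels_max, speeds, N, cfl=0.1):
    """Time step from the CFL condition, anticipating one refinement level.

    levels_max[m] is the largest mesh level l_m^n currently present in
    direction m, speeds[m] the maximum wave speed c_m; the predicted level
    l_m^{n,p} = min(l_m^n + 1, N) is capped at the maximum level N."""
    total = 0.0
    for lm, cm in zip(levels_max, speeds):
        h = 2.0 ** (-min(lm + 1, N))   # h_m^{n,p} = 2^{-l_m^{n,p}}
        total += cm / h
    return cfl / total
```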
The second step is the refinement step according to $u_h^{(p)}$. We traverse the hash table $H$, and if an element $V_{\mathbf{l}}^{\mathbf{j}}$ satisfies the refinement criteria , or , indicating that such an element becomes significant at the next time step, then we need to refine the mesh by adding its child elements to $H$. The detailed procedure is described as follows. For a child element $V_{{\mathbf{l}}'}^{{\mathbf{j}}'}$ of $V_{\mathbf{l}}^{\mathbf{j}}$, if it has already been added to $H$, i.e., $V_{{\mathbf{l}}'}^{{\mathbf{j}}'}\in H$, we do nothing; if not, we add the element $V_{{\mathbf{l}}'}^{{\mathbf{j}}'}$ to $H$ and set the associated detail coefficients $u^{{\mathbf{j}}'}_{{\mathbf{i}},{\mathbf{l}}'}=0,\,\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}$. Moreover, we need to increase the number of children by one for all elements that have $V_{{\mathbf{l}}'}^{{\mathbf{j}}'}$ as a child element, and remove the parent elements of $V_{{\mathbf{l}}'}^{{\mathbf{j}}'}$ from the leaf table if they have been added there. Finally, we obtain a larger hash table $H^{(p)}$, the associated approximation space ${{\bf V}}_{N,H^{(p)}}^k$, and the updated leaf table $L^{(p)}$.
Then, based on the updated hash table $H^{(p)}$, we evolve the numerical solution by the DG formulation with space ${{\bf V}}_{N,H^{(p)}}^k$. Namely, we solve for $u_h \in {{\bf V}}_{N,H^{(p)}}^k$ from $t^n$ to $t^{n+1}$, such that $
\int_{\Omega}(u_h)_t\,v_h\,d{\mathbf{x}}=A(u_h,v_h)
$ for $\forall \,v_h \in {{\bf V}}_{N,H^{(p)}}^k,$ where $A(u_h,v_h)$ has been defined in . The semidiscrete equation is solved by the TVD-RK scheme to generate the pre-coarsened numerical solution $\tilde{u}_h^{n+1}$. We notice that the first inner stage of the Runge-Kutta method is exactly the forward Euler prediction step. Moreover, recall that the detail coefficients of the newly added elements are set to zero. Therefore, after the time evolution of the first inner stage, the coefficients of the original elements in $u_h^{(1)}$ coincide with those of the predicted solution $u_h^{(p)}$, which can be reused to save computational cost. We only need to calculate the coefficients of the newly added elements for $u_h^{(1)}$.
The last step is to coarsen the mesh by removing elements that become insignificant at time level $t^{n+1}.$ The hash table $H^{(p)}$ that stores the numerical solution $\tilde{u}_h^{n+1}$ is recursively coarsened by the following procedure. The leaf table $L^{(p)}$ is traversed, and if an element $V_{\mathbf{l}}^{\mathbf{j}}\in L^{(p)}$ satisfies the coarsening criterion $$\begin{aligned}
&\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}|u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}|\|v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\|_{L^1(\Omega)}<\eta,\quad\text{if}\quad s=1\label{eq:l1_c}\\
&\left(\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}|u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}|^2\right)^{\frac12}<\eta,\quad\text{if}\quad s=2\label{eq:l2_c}\\
&\sum_{\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}}|u^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}|\|v^{\mathbf{j}}_{{\mathbf{i}},{\mathbf{l}}}({\mathbf{x}})\|_{L^\infty(\Omega)}<\eta,\quad\text{if}\quad s=\infty\label{eq:l8_c},\end{aligned}$$ where $\eta$ is a prescribed error constant, then we remove the element from both tables $L^{(p)}$ and $H^{(p)}$ and set the associated coefficients $u^{{\mathbf{j}}'}_{{\mathbf{i}},{\mathbf{l}}'}=0,\,\mathbf{1}\leq{\mathbf{i}}\leq{\mathbf{k}}+\mathbf{1}$. For each of its parent elements in table $H^{(p)}$, we decrease the number of children by one. If the number becomes zero, i.e., the element no longer has a child, then it is added to the leaf table $L^{(p)}$ accordingly. The coarsening procedure is repeated until no element can be removed from the table $L^{(p)}$. By removing only leaf elements at each step, we avoid generating “holes" in the hash table. The output of this coarsening procedure consists of the updated hash table and leaf table, denoted by $H$ and $L$ respectively, and the compressed numerical solution $u_h^{n+1} \in {{\bf V}}_{N,H}^k$. In practice, $\eta$ is chosen to be smaller than $\varepsilon$ for safety. In the simulations presented in this paper, we use $\eta = \varepsilon/10$.
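The recursive coarsening loop can be sketched as follows; the containers and names are ours, and `indicator` stands for one of the coarsening criteria above:

```python
def coarsen(H, leaves, parents, n_children, indicator, eta):
    """Recursive coarsening sketch: repeatedly remove insignificant leaves.

    H: active table (a set of elements); leaves: current leaf set;
    parents(e): list of parent elements of e; n_children: dict mapping each
    element to its child count; indicator(e): coarsening indicator of e.
    Only leaves are removed, so no 'holes' appear in the table."""
    changed = True
    while changed:
        changed = False
        for e in list(leaves):
            if indicator(e) < eta:
                leaves.discard(e)
                H.discard(e)
                changed = True
                for p in parents(e):
                    if p in H:
                        n_children[p] -= 1
                        if n_children[p] == 0:   # p became a leaf
                            leaves.add(p)
    return H, leaves
```

Removing a leaf can turn its parent into a leaf, so the sweep repeats until the leaf table stabilizes.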
In summary, the following algorithm advances the numerical solution for one time step.
------------------------------------------------------------------------
\
[**Algorithm 2: Adaptive evolution from $t^n$ to $t^{n+1}$**]{}\
------------------------------------------------------------------------
\
[**Input:**]{} Hash table H and leaf table L at $t^n$, numerical solution $u_h^{n} \in {{\bf V}}_{N,H}^k.$
[**Parameters:**]{} Maximum level $N,$ polynomial degree $k,$ error constants $\varepsilon, \eta,$ CFL constant.
[**Output:**]{} Hash table H and leaf table L at $t^{n+1}$, numerical solution $u_h^{n+1} \in {{\bf V}}_{N,H}^k.$
1. [**Prediction.**]{} Given a hash table $H$ that stores the numerical solution $u_h$ at time step $t^n$, calculate $\Delta t$. Predict the solution by the DG scheme using space ${{\bf V}}_{N,H}^k$ and the forward Euler time stepping method. Generate the predicted solution $u_h^{(p)}$.
2. [**Refinement.**]{} Based on the predicted solution $u_h^{(p)}$, screen all elements in the hash table $H$. If, for an element $V_{\mathbf{l}}^{\mathbf{j}}$, the refinement criteria , , or hold, then add its child elements to $H$ and $L$ provided they have not been added yet, and set the associated detail coefficients to zero. We also need to make sure that all the parent elements of each newly added element are in $H$ (i.e., no “hole" is allowed in the hash table) and increase the number of children of all its parent elements by one. This step generates the updated hash table $H^{(p)}$ and leaf table $L^{(p)}$.
3. [**Evolution.**]{} Given the predicted table $H^{(p)}$ and the leaf table $L^{(p)}$, we evolve the solution from $t^n$ to $t^{n+1}$ by the DG scheme using space ${{\bf V}}_{N,H^{(p)}}^k$ and the third order Runge-Kutta time stepping method . This step generates the pre-coarsened numerical solution $\tilde{u}_h^{n+1}.$
4. [**Coarsening.**]{} For each element in the leaf table, if the coarsening criteria , or hold, then remove the element from tables $H^{(p)}$ and $L^{(p)}$. For each of its parent elements in $H^{(p)}$, we decrease the number of children by one. If the number becomes zero, i.e., the element has no child, then it is added to the leaf table $L^{(p)}$. Repeat the coarsening procedure until no element can be removed from the leaf table. Denote the resulting hash table and leaf table by $H$ and $L$ respectively, and the compressed numerical solution $u_h^{n+1} \in {{\bf V}}_{N,H}^k$.
------------------------------------------------------------------------
\
The optimal choice of the maximum mesh level $N$ and the error parameters $\varepsilon$ and $\eta$ is problem dependent, and the performance of the adaptive scheme is closely tied to this choice. For example, an excessively small $\varepsilon$ may result in unnecessary refinement and hence larger computational cost, but little gain in accuracy. On the other hand, if an excessively large $N$ is chosen, then a very small time step may be required for stability, which degrades the efficiency of the proposed scheme.
Numerical results {#sec:numerical}
=================
In this section, we present benchmark numerical results to demonstrate the performance of the proposed scheme for solving linear transport equations. For all test examples, we consider both smooth and non-smooth initial profiles.
\[ex:linear\] We consider $$\label{eq:linear_adv}
u_t + \sum_{m=1}^d u_{x_m} = 0,\quad {\mathbf{x}}\in[0,1]^d
\displaystyle$$ with periodic boundary conditions.
We first consider a smooth initial condition $$\label{eq:linear_init_smooth}
u(0,{\mathbf{x}}) = \prod_{m=1}^{d} \sin^4\left(\pi x_m\right),$$ with $d=2, 3, 4$ and investigate the accuracy of the scheme using $L^2$ norm based refinement and coarsening criteria and . We run the simulations with a fixed maximum mesh level $N=7,$ different $\varepsilon$ values, and report the $L^2$ errors and the number of active degrees of freedom at final time $T=1$ in Table \[table:linear\]. The following rates of convergence are calculated, $$\begin{aligned}
\mbox{convergence rate with respect to the error threshold} \quad &R_{\varepsilon_l}=\frac{\log(e_{l-1}/e_l)}{\log(\varepsilon_{l-1}/{\varepsilon_l})}\\
\mbox{convergence rate with respect to DOF } \quad & R_{\text{DOF}_l}=\frac{\log(e_{l-1}/e_l)}{\log(\text{DOF}_l/\text{DOF}_{l-1})}, \end{aligned}$$ where $e_l$ is the standard $L^2$ error with refinement parameter $\varepsilon_l$, and $\text{DOF}_l$ is the associated number of active degrees of freedom at the final time. For comparison purposes, recall that the standard DG scheme on the tensor-product grid yields $R_{\epsilon}\approx 1$ and $R_{\text{DOF}}\approx \frac{k+1}{d}$. From Table \[table:linear\], we observe that for the proposed scheme, $R_{\epsilon}$ is slightly smaller than 1, and $R_{\text{DOF}}$ is much larger than $\frac{k+1}{d}$ but still smaller than $k+1$. This demonstrates the effectiveness of the adaptive algorithm, as well as the computational savings of the multiresolution scheme in this case. We also experiment with varying both $N$ and $\varepsilon$ at the same time. To save space, the results are not reported, but we remark that if an excessively small $\varepsilon$ is taken with a small mesh level $N$, the performance of the scheme is very similar to that of the tensor-product DG method and the efficiency of the scheme is adversely affected. We also test the code with the $L^1$ and $L^\infty$ based criteria , and , ; little difference is observed in the convergence order. To save space, these results are omitted in the paper. For the rest of the paper, unless otherwise noted, the refinement and coarsening criteria based on $L^2$ norms , will be used.
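The observed rates can be reproduced directly from the tabulated data; for instance, applying the helper below (ours) to the first two rows of the first column group of Table \[table:linear\] recovers $R_{\text{DOF}}\approx 1.93$ and $R_\varepsilon\approx 0.72$:

```python
import math

def rates(eps, dof, err):
    """Observed convergence rates R_eps and R_DOF between consecutive runs."""
    r_eps = [math.log(err[i - 1] / err[i]) / math.log(eps[i - 1] / eps[i])
             for i in range(1, len(err))]
    r_dof = [math.log(err[i - 1] / err[i]) / math.log(dof[i] / dof[i - 1])
             for i in range(1, len(err))]
    return r_eps, r_dof

# first two rows of the first column group of Table [table:linear]
r_eps, r_dof = rates([1e-3, 5e-4], [312, 404], [1.47e-2, 8.90e-3])
```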
$\varepsilon$ DOF $L^2$ error $R_{\text{DOF}}$ $R_\varepsilon$ DOF $L^2$ error $R_{\text{DOF}}$ $R_\varepsilon$ DOF $L^2$ error $R_{\text{DOF}}$ $R_\varepsilon$
--------------- ------ ------------- ------------------ ----------------- ------- ------------- ------------------ ----------------- -------- ------------- ------------------ -----------------
1E-03 312 1.47E-02 1168 2.62E-02 2592 2.87E-02
5E-04 404 8.90E-03 1.93 0.72 1840 1.87E-02 0.75 0.49 4512 2.32E-02 0.39 0.31
1E-04 1148 1.70E-03 1.59 1.03 3920 7.26E-03 1.25 0.59 14976 9.49E-03 0.75 0.56
5E-05 1688 1.04E-03 1.28 0.71 6440 4.16E-03 1.12 0.80 23776 6.60E-03 0.79 0.53
1E-05 3588 2.42E-04 1.93 0.90 18624 8.83E-04 1.46 0.96 62368 2.13E-03 1.17 0.70
5E-06 4636 1.37E-04 2.23 0.82 25496 5.10E-04 1.75 0.79 111424 1.18E-03 1.02 0.86
5E-05 774 3.61E-04 4428 1.30E-03 26244 1.48E-03
1E-05 1584 8.78E-05 1.97 0.88 9585 2.58E-04 2.10 1.01 51840 5.30E-04 1.51 0.64
5E-06 1998 4.58E-05 2.80 0.94 13716 1.74E-04 1.09 0.57 69012 2.60E-04 2.49 1.03
1E-06 4023 1.43E-05 1.67 0.73 27081 4.15E-05 2.11 0.89 168723 9.46E-05 1.13 0.63
5E-07 5157 7.20E-06 2.76 0.99 40446 2.45E-05 1.32 0.76 226719 4.89E-05 2.23 0.95
1E-07 9072 1.80E-06 2.46 0.86 77463 7.06E-06 1.91 0.77 531684 1.24E-05 1.61 0.85
1E-05 1120 3.71E-05 10496 5.72E-05 58368 1.26E-04
5E-06 1184 2.92E-05 4.32 0.35 12032 4.91E-05 1.12 0.22 97280 7.53E-05 1.01 0.74
1E-06 2208 9.87E-06 1.74 0.67 18688 1.31E-05 3.00 0.82 129024 3.73E-05 2.49 0.44
5E-07 2864 4.85E-06 2.73 1.03 25984 1.09E-05 0.56 0.27 204800 1.34E-05 2.21 1.47
1E-07 3968 1.31E-06 4.02 0.82 43840 2.71E-06 2.66 0.86 409600 6.14E-06 1.13 0.49
5E-08 5760 7.88E-07 1.36 0.73 57472 1.50E-06 2.20 0.86 521216 2.79E-06 3.27 1.14
: Example \[ex:linear\] with initial condition . Numerical errors and convergence rates; the three column groups correspond to $d=2$, $3$, and $4$ from left to right. $N=7$. $T=1$.
\[table:linear\]
Next, we consider a discontinuous initial condition $$\label{eq:discontinuous}
u(0,{\mathbf{x}})=\left\{\begin{array}{ll}1& (x_1,x_2)\in[\frac12-\frac{\sqrt{6}}{2},\frac12+\frac{\sqrt{6}}{2}]^2.\\[2mm]
0& \text{otherwise},\end{array}\right.$$ when $d=2.$ It is well known that the standard sparse grid method without adaptivity cannot resolve such discontinuous solution profiles. In our simulations, we fix $N=7, \varepsilon=10^{-5}$ and compare the performance of the scheme with $L^1$, $L^2$ and $L^\infty$ based refinement/coarsening criteria up to final time $T=1$. The numerical solutions and the associated active elements are reported in Figure \[fig:linear\_dis\]. We only plot the center of the support of each active basis function, noting that the basis functions have supports of different sizes in the scheme. The method with all three types of refinement/coarsening criteria provides well resolved solution profiles. The active elements all cluster towards the discontinuities. However, the $L^\infty$ norm based criterion uses the most degrees of freedom, increasing the computational cost without improving the numerical performance. Similar observations are made in [@griebel1998adaptive]. The $L^1$ norm based criterion is the sparsest, but the solution is slightly more oscillatory. This is natural since no limiting procedure has been employed in this paper.
\
\
We then perform a detailed comparison for smooth and non-smooth solutions to demonstrate an important property of the proposed scheme. We fix $d=2$, $N=7$, $k=3$ and consider initial conditions and . We take $\varepsilon=10^{-7}$ and $\varepsilon=10^{-5}$ for the smooth and discontinuous problems, respectively. In Figure \[fig:linear\_percentage\], we plot the percentage of active elements for each incremental space ${\mathbf{W}}_{\mathbf{l}}$, ${\mathbf{l}}=(l_1,l_2)$ at final time $T=1$ with all three norms as adaptive indicators. If the percentage is $1,$ it means all the elements on that level are activated. If the percentage is $0,$ it means no element on that level is activated. A full grid approximation corresponds to the percentage being $1$ on all levels, while a sparse grid approximation [@guo_sparsedg] corresponds to the percentage being $1$ when $|{\mathbf{l}}|_1 \le N,$ and $0$ otherwise. For the adaptive scheme, there is no longer a clean cutoff, and we visualize the variation of percentages among all levels when the $L^1$, $L^2$ and $L^\infty$ norm based criteria are used. We observe from Figure \[fig:linear\_percentage\] that, when the solution is smooth, only the incremental spaces in the upper left corner are active, similar to the sparse grid DG method with approximation space $\hat{{{\bf V}}}_N^k$. This is true for all refinement/coarsening criteria. If the solution is discontinuous, more elements are incorporated to fully resolve the discontinuities. The $L^1$ norm based criterion is the sparsest among the three, as expected. From this plot, we can conclude that if the solution is globally smooth, then the scheme reduces to the sparse grid DG method proposed in [@guo_sparsedg], leading to great savings in computational cost; otherwise, the adaptive algorithm will automatically use more elements in the refined levels to capture local fine structures.
\
\
An additional point we are concerned with is the long-time performance of the scheme. For the smooth initial condition , we set $T=60, d=4, N=7, k=3$ and track the time evolution of the $L^2$ errors and the numbers of active degrees of freedom with $\varepsilon=10^{-4},\,10^{-5},\,10^{-6}$, as shown in Figure \[fig:linear\_d4\_n7\]. It is observed that, for this linear transport problem, the active degrees of freedom decrease at the very beginning of the simulation and then remain nearly constant as time evolves, for all $\varepsilon$. This is because the profile of the solution does not change over time; it is only advected along the characteristic direction. The $L^2$ error demonstrates sub-linear growth beyond the initial stage. The maximum $L^2$ errors over time are reported in the figure. For the discontinuous initial condition , we set $T=10, d=2, N=7, k=3, \varepsilon=10^{-5}$ and report the time evolution of the numbers of active degrees of freedom and the $L^1$ errors in Figure \[fig:linear\_dis\_time\] for all three norm based criteria. The scheme with the $L^\infty$ norm based criterion yields the smallest error but also involves the largest number of degrees of freedom. The error performance of the $L^2$ norm based criterion is qualitatively the same as that of the $L^\infty$ norm based criterion, while far fewer degrees of freedom are used. The $L^1$ norm based criterion leads to the largest error while using the fewest elements among the three. The maximum $L^1$ errors over time are reported in the figure.
\[ex:rotation\] We consider solid body rotation, which is in the form of with $${{\bm{a}}}= \left(-x_2+\frac12, x_1-\frac12\right),\quad\text{when}\quad d=2,$$ $${{\bm{a}}}= \left(-\frac{\sqrt{2}}{2}\left(x_2-\frac12\right), \frac{\sqrt{2}}{2}\left(x_1-\frac12\right)+ \frac{\sqrt{2}}{2}\left(x_3-\frac12\right),-\frac{\sqrt{2}}{2}\left(x_2-\frac12\right)\right),\quad\text{when}\quad d=3,$$ subject to periodic boundary conditions.
This benchmark test is used to assess the performance of the sparse grid DG transport schemes [@guo_sparsedg]. The initial condition is set to be the following smooth cosine bell (with $C^5$ smoothness), $$\label{eq:cosine} u(0,{\mathbf{x}})=\left\{\begin{array}{ll}b^{d-1}\cos^6\left(\frac{\pi r}{2b}\right),& \text{if}\quad r\leq b,\\
0,&\text{otherwise},
\end{array}\right.$$ where $b=0.23$ when $d=2$ and $b=0.45$ when $d=3$, and $r=|{\mathbf{x}}-{\mathbf{x}}_c|$ denotes the distance between ${\mathbf{x}}$ and the center of the cosine bell with ${\mathbf{x}}_c=(0.75,0.5)$ for $d=2$ and ${\mathbf{x}}_c=(0.5,0.55,0.5)$ for $d=3.$ As time evolves, the cosine bell traverses along circular trajectories centered at $(1/2,1/2)$ for $d=2$ and about the axis $\{x_1=x_3\}\cap \{x_2=1/2\}$ for $d=3$ without deformation. We start with the investigation of the convergence rate of the adaptive scheme. Similar to the previous example, we run the simulation up to $T=1$ with different $\varepsilon$ and summarize the $L^2$ errors, the number of active degrees of freedom and corresponding convergence rates $R_\varepsilon$ and $R_{\text{DOF}}$ in Table \[table:solid\_d2\]. The maximum mesh level is set as $N=7$. For both $d=2,\,3$, it is observed that the rate $R_\varepsilon$ is slightly less than 1 and $R_{\text{DOF}}$ is larger than $\frac{k+1}{d}$ but smaller than $k+1$, which is similar to the previous example.
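For reference, the cosine-bell datum can be evaluated as follows (a direct transcription of the formula above; the function name is ours):

```python
import math

def cosine_bell(x, xc, b):
    """Cosine-bell initial condition: b^(d-1) cos^6(pi r / (2 b)) for
    r = |x - xc| <= b, and 0 otherwise."""
    d = len(x)
    r = math.sqrt(sum((xi - ci) ** 2 for xi, ci in zip(x, xc)))
    if r > b:
        return 0.0
    return b ** (d - 1) * math.cos(math.pi * r / (2.0 * b)) ** 6
```

At the bell center $r=0$ the value is $b^{d-1}$, e.g. $0.23$ in the two-dimensional setting above.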
$\varepsilon$ DOF $L^2$ error $R_{\text{DOF}}$ $R_\varepsilon$ DOF $L^2$ error $R_{\text{DOF}}$ $R_\varepsilon$
--------------- ------ ------------- ------------------ ----------------- -------- ------------- ------------------ -----------------
5E-04 260 6.71E-03 928 2.17E-03
1E-04 604 1.53E-03 1.76 0.92 3280 6.32E-04 0.98 1.78
5E-05 764 8.07E-04 2.72 0.92 4912 4.34E-04 0.93 0.24
1E-05 1832 2.37E-04 1.40 0.76 12744 1.14E-04 1.40 1.93
5E-06 2332 1.24E-04 2.69 0.94 20416 6.17E-05 1.31 0.38
1E-06 3440 3.71E-05 3.10 0.75 47496 1.99E-05 1.34 1.63
1E-04 747 7.15E-04 4779 2.28E-04
5E-05 855 4.43E-04 3.54 0.69 6345 1.54E-04 1.38 0.57
1E-05 1908 1.78E-04 1.14 0.57 14418 4.67E-05 1.46 0.74
5E-06 2376 8.55E-05 3.34 1.06 19845 2.35E-05 2.14 0.99
1E-06 4095 1.51E-05 3.18 1.08 37395 9.07E-06 1.50 0.60
5E-07 4914 9.12E-06 2.77 0.73 50355 4.94E-06 2.04 0.88
5E-06 1952 6.88E-05 16384 1.02E-05
1E-06 3136 1.19E-05 3.70 1.09 29440 3.36E-06 1.90 0.69
5E-07 3696 5.79E-06 4.40 1.04 39616 1.62E-06 2.45 1.05
1E-07 4992 1.53E-06 4.43 0.83 59456 6.23E-07 2.36 0.60
5E-08 6288 6.19E-07 3.92 1.30 80832 3.53E-07 1.84 0.82
1E-08 9184 1.44E-07 3.85 0.91 129088 2.62E-08 5.56 1.62
: Example \[ex:rotation\] with initial condition . Numerical errors and convergence rates; the two column groups correspond to $d=2$ and $d=3$ from left to right. $N=7$. $T=1$.
\[table:solid\_d2\]
We also use this example to compare the performance of the scheme with different configurations of $N$ and $\varepsilon$. We let $d=2$, $k=2$, compute the solutions up to ten periods, and plot the time evolution of the $L^2$ errors and the number of active degrees of freedom in Figure \[fig:sbr\]. In particular, we compare maximum mesh levels $N=5$ and $N=7,$ and run the simulations with three different values of $\varepsilon$. Since the cosine bell keeps its initial profile as time evolves, the degrees of freedom needed to resolve the solution for a fixed accuracy threshold should remain the same. We observe that if an excessively small $\varepsilon$ is taken, the number of degrees of freedom used increases, but the error may not decrease much; see Figure \[fig:sbr\] (a-b). This shows the importance of the choice of $\varepsilon$ and $N$ for the computational efficiency of the scheme.
\
\
We then consider the following discontinuous initial condition:
$$\label{eq:sbr_discontinuous}
u(0,{\mathbf{x}})=\left\{\begin{array}{ll}\displaystyle 1& (x_1,x_2)\in[\frac34-\frac{\sqrt{2}}{10},\frac34+\frac{\sqrt{2}}{10}]\times[\frac12-\frac{\sqrt{2}}{10},\frac12+\frac{\sqrt{2}}{10}],\\[2mm]
0& \text{otherwise},\end{array}\right.$$
when $d=2.$ In the simulation, we set $N=7$, $k=3$, $\varepsilon=10^{-5}$ and consider both $L^1$ and $L^2$ norm based criteria. In Figure \[fig:sbr\_n7\_dis\], we report the numerical solutions and the associated active elements at $T=2\pi$. Similar to the previous example, elements cluster toward the discontinuities, and the scheme with either criterion resolves the discontinuities well. However, more severe localized numerical oscillations are observed than in the previous example.
\
\[ex:deformational\] We consider two-dimensional deformational flow with velocity field $${{\bm{a}}}=(\sin^2(\pi x_1)\sin(2\pi x_2)g(t),-\sin^2(\pi x_2)\sin(2\pi x_1)g(t)),$$ where $g(t)=\cos(\pi t/T)$ with $T=1.5$.
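Since $\partial_{x_1}a_1=\pi\sin(2\pi x_1)\sin(2\pi x_2)g(t)=-\partial_{x_2}a_2$, the velocity field above is divergence-free, so the flow is incompressible and the exact solution returns to its initial profile at $t=T$. As a quick standalone check (an illustrative sketch, not part of the solver), central differences confirm the divergence vanishes:

```python
import math

T = 1.5

def g(t):
    return math.cos(math.pi * t / T)

def a(t, x1, x2):
    """Deformational velocity field of Example [ex:deformational]."""
    return (math.sin(math.pi * x1) ** 2 * math.sin(2 * math.pi * x2) * g(t),
            -math.sin(math.pi * x2) ** 2 * math.sin(2 * math.pi * x1) * g(t))

def divergence(t, x1, x2, h=1e-5):
    """Central-difference estimate of div a = d(a1)/dx1 + d(a2)/dx2."""
    d1 = (a(t, x1 + h, x2)[0] - a(t, x1 - h, x2)[0]) / (2 * h)
    d2 = (a(t, x1, x2 + h)[1] - a(t, x1, x2 - h)[1]) / (2 * h)
    return d1 + d2
```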
First, we choose the cosine bell as the initial condition, but with ${\mathbf{x}}_c=(0.65,0.5)$ and $b=0.35$. The cosine bell deforms into a crescent shape at $t=T/2$, then returns to its initial state at $t=T$ as the flow reverses. We perform a convergence study similar to those in the previous two examples, summarized in Table \[table:defor\_d2\]. We observe a similar convergence pattern: the rate $R_{\text{DOF}}$ is slightly smaller than $k+1$ and $R_\varepsilon$ is close to 1.
$\varepsilon$ DOF $L^2$ error $R_{\text{DOF}}$ $R_\varepsilon$
--------------- ------- ------------- ------------------ -----------------
1E-03 244 1.52E-02
5E-04 372 7.94E-03 1.53 0.93
1E-04 945 1.25E-03 1.98 1.15
5E-05 1248 1.00E-03 8.01 0.32
1E-05 2608 1.84E-04 2.30 1.05
5E-06 3508 9.96E-05 2.07 0.89
1E-06 5596 3.81E-05 2.06 0.60
5E-05 1143 5.41E-04
1E-05 2043 1.15E-04 2.67 0.96
5E-06 2736 6.91E-05 1.74 0.73
1E-06 4842 1.24E-05 3.00 1.07
5E-07 5994 8.29E-06 1.90 0.59
1E-07 9045 1.74E-06 3.79 0.97
5E-08 11142 1.08E-06 2.28 0.69
5E-05 1056 3.65E-04
1E-05 2048 8.85E-05 2.14 0.88
5E-06 2320 5.41E-05 3.94 0.71
1E-06 3904 1.45E-05 2.53 0.82
5E-07 4480 6.32E-06 6.02 1.20
1E-07 6224 1.30E-06 4.80 0.98
5E-08 7680 5.84E-07 3.82 1.16
: Example \[ex:deformational\] with initial condition . Numerical error and convergence rate. $N=7$. $T=1.5$. $d=2$.
\[table:defor\_d2\]
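The tabulated rates are consistent with computing, from successive rows, $R_{\text{DOF}}=\log(e_{j-1}/e_j)/\log(\text{DOF}_j/\text{DOF}_{j-1})$ and $R_\varepsilon=\log(e_{j-1}/e_j)/\log(\varepsilon_{j-1}/\varepsilon_j)$. A short sketch of this post-processing (our reconstruction, assuming these standard definitions) applied to the first two rows of the first block of Table \[table:defor\_d2\]:

```python
import math

def rates(rows):
    """Convergence rates from successive (eps, dof, err) rows, assuming
    R_DOF = log(err_prev/err)/log(dof/dof_prev) and
    R_eps = log(err_prev/err)/log(eps_prev/eps)."""
    return [(math.log(r0 / r1) / math.log(d1 / d0),
             math.log(r0 / r1) / math.log(e0 / e1))
            for (e0, d0, r0), (e1, d1, r1) in zip(rows, rows[1:])]

# First two rows of the first block of Table [table:defor_d2]
rows = [(1e-3, 244, 1.52e-2), (5e-4, 372, 7.94e-3)]
```

which gives $R_{\text{DOF}}\approx 1.54$ and $R_\varepsilon\approx 0.94$, matching the tabulated $1.53$ and $0.93$ up to the rounding of the errors.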
In Figure \[fig:defo\_n7\], we present the contour plots and the associated active elements of the numerical solutions computed with $N=7,\,k=3,\,\varepsilon=10^{-7}$ at $t=T/2$, when the shape of the bell is severely deformed, and at $t=T$, when the solution has returned to its initial state. The elements cluster where the solution deforms, as expected, and the shape of the cosine bell is well recovered at $t=T.$
\
We also consider the discontinuous initial condition , and use both $L^1$ and $L^2$ based refinement/coarsening criteria with $N=7$, $k=3$ and $\varepsilon=10^{-5}$. The numerical solutions and the associated active elements at $t=T/2$ and $t=T$ are plotted in Figures \[fig:defo\_dis\_t0\] and \[fig:defo\_dis\_t1\], respectively. For this challenging test, the numerical solution tends to be more oscillatory, again because no limiting mechanism is present in the scheme. The $L^2$ norm based criteria generate less oscillatory profiles but use more elements for computation.
\
\
Vlasov-Poisson simulations {#sec:kinetic}
==========================
In this section, we apply the adaptive multiresolution DG methods to solve the kinetic transport equation. Here, we consider the VP system, which is a fundamental model in plasma simulation. The solution is known to develop filamentation (fine structures) in the phase space, making it a good test problem for the adaptive algorithm. For simplicity, we restrict our attention to two-dimensional cases, but note that the algorithm readily generalizes to higher dimensions and to other types of kinetic models.
\[exa:vp\] We first consider the non-dimensionalized single-species nonlinear VP system for plasma simulations in the zero-magnetic-field limit $$\begin{aligned}
&&f_t + {{\bf v}}\cdot\nabla_{\mathbf{x}}f + {\mathbf{E}}(t,{\mathbf{x}})\cdot\nabla_{{\bf v}}f=0, \label{eq:V}\\
&&-\Delta_{\mathbf{x}}\Phi({\mathbf{x}}) = \rho - 1,\quad {\mathbf{E}}({\mathbf{x}})=-\nabla_{\mathbf{x}}\Phi \label{eq:poisson}\end{aligned}$$ where $f(t,{\mathbf{x}},{{\bf v}})$ denotes the probability distribution function of electrons. ${\mathbf{E}}(t,{\mathbf{x}})$ is the self-consistent electrostatic field given by Poisson’s equation and $\rho(t,{\mathbf{x}})=\int_{{{\bf v}}} f(t,{\mathbf{x}},{{\bf v}}) d{{\bf v}}$ denotes the electron density. Ions are assumed to form a neutralizing background.
Periodic boundary conditions are imposed in $x$-space. As is standard practice, the computational domain in $v$ is truncated to $ [-V_c,V_c]$, where $V_c$ is a constant chosen large enough that the zero boundary condition $f_h(t,x,\pm V_c) = 0$ can be imposed in the $v$-direction. The following set of initial conditions will be considered as classical benchmark numerical tests.
- Landau damping: $$f(0,x,v) = f_M(v)(1+A\cos(kx)),\quad x\in[0,L],\,v\in[-V_c,V_c],$$ where $A=0.5$, $k=0.5$, $L=4\pi$, $V_c=2\pi$, and $f_M(v)=\frac{1}{\sqrt{2\pi}}e^{-v^2/2}$.
- Bump-on-tail instability: $$f(0,x,v) = f_{BT}(v)(1+A\cos(kx)),\quad x\in[0,L],\,v\in[-V_c,V_c],$$ where $A=0.04$, $k=0.3$, $L=20\pi/3$, $V_c=13$, and $$f_{BT}(v)=n_p\exp\left(-\frac{v^2}{2}\right)+n_b\exp\left(-\frac{|v-u|^2}{2v_t^2}\right),$$ where $n_p = \frac{9}{10\sqrt{10\pi}},\,n_b = \frac{2}{10\sqrt{10\pi}},\,u=4.5,\,v_t=0.5.$
- Two-stream instability I: $$f(0,x,v) = f_{TSI}(v)(1+A\cos(kx)),\quad x\in[0,L],\,v\in[-V_c,V_c],$$ where $A=0.05$, $k=0.5$, $L=4\pi$, $V_c=2\pi$, and $f_{TSI}(v)=\frac{1}{\sqrt{2\pi}}v^2e^{-v^2/2}$.
- Two-stream instability II: $$f(0,x,v) = f_{TSII}(v)(1+A\cos(kx)),\quad x\in[0,L],\,v\in[-V_c,V_c],$$ where $A=0.05$, $k=2/13$, $L=13\pi$, $V_c=5$, and $$f_{TSII}(v)=\frac{1}{2v_t\sqrt{2\pi}}\left(\exp\left(-\frac{|u+v|^2}{2v_t^2}\right)+\exp\left(-\frac{|u-v|^2}{2v_t^2}\right)\right),$$ where $u=0.99,\,v_t=0.3.$
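The four velocity profiles above can be coded directly. The sketch below (our illustration, with a simple composite midpoint rule) checks that $f_M$, $f_{TSI}$ and $f_{TSII}$ each integrate to one over the truncated velocity domain, so the corresponding spatially averaged densities at $t=0$ are unaffected by the $\cos(kx)$ perturbation:

```python
import math

SQ2PI = math.sqrt(2 * math.pi)

def f_M(v):                      # Landau damping
    return math.exp(-v * v / 2) / SQ2PI

def f_BT(v):                     # bump-on-tail
    n_p = 9 / (10 * math.sqrt(10 * math.pi))
    n_b = 2 / (10 * math.sqrt(10 * math.pi))
    u, v_t = 4.5, 0.5
    return n_p * math.exp(-v * v / 2) + n_b * math.exp(-(v - u) ** 2 / (2 * v_t ** 2))

def f_TSI(v):                    # two-stream instability I
    return v * v * math.exp(-v * v / 2) / SQ2PI

def f_TSII(v):                   # two-stream instability II
    u, v_t = 0.99, 0.3
    c = 1 / (2 * v_t * SQ2PI)
    return c * (math.exp(-(u + v) ** 2 / (2 * v_t ** 2))
                + math.exp(-(u - v) ** 2 / (2 * v_t ** 2)))

def integrate(f, Vc, n=4000):
    """Composite midpoint rule on [-Vc, Vc]."""
    h = 2 * Vc / n
    return h * sum(f(-Vc + (i + 0.5) * h) for i in range(n))
```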
In the literature, RKDG schemes for the VP system [@Ayuso2009; @heath2012discontinuous; @cheng_vp] have been extensively studied and shown to have superior conservation properties. Our previous work on the sparse grid DG method [@guo_sparsedg] focused on the closely related Vlasov-Ampère (VA) system. The solver in [@guo_sparsedg] successfully reduced the DOFs of the equations while maintaining key conservation properties. However, when $t$ becomes large and filamentation becomes severe, the sparse grid method has difficulty resolving the fine structures in the phase space. It is therefore of interest to investigate whether the adaptive multiresolution scheme can achieve a good balance between computational cost and numerical resolution.
We apply the adaptive algorithm to the Vlasov equation as outlined in Section \[sec:method\]. The Poisson equation is solved by a standard local DG method [@Arnold_2002_SIAM_DG] on the finest level mesh in the $x$-direction. In the simulations, we use $N=7$, $\varepsilon=10^{-5}$ and $k=3$. First we investigate the conservation properties of the scheme. The VP system is known to preserve many physical invariants, including the particle number $\int_{\mathbf{x}}\int_{{\bf v}}f(t,{\mathbf{x}},{{\bf v}})\,d{\mathbf{x}}d{{\bf v}},$ momentum $\int_{\mathbf{x}}\int_{{\bf v}}{{\bf v}}f(t,{\mathbf{x}},{{\bf v}})\,d{\mathbf{x}}d{{\bf v}},$ enstrophy $\int_{\mathbf{x}}\int_{{\bf v}}|f(t,{\mathbf{x}},{{\bf v}})|^2\,d{\mathbf{x}}d{{\bf v}},$ and total energy $\frac12\int_{\mathbf{x}}\int_{{\bf v}}f(t,{\mathbf{x}},{{\bf v}})|{{\bf v}}|^2\,d{\mathbf{x}}d{{\bf v}}+\frac12 \int_{\mathbf{x}}|{\mathbf{E}}(t,{\mathbf{x}})|^2\,d{\mathbf{x}}.$ Generally speaking, it is difficult for a numerical method to preserve all of these invariants. By careful design, DG methods have been constructed that preserve the particle number and the energy of the system [@Ayuso2009; @cheng2014energy]. For our scheme, in Figure \[fig:evo\_vp\], we report the time evolution of the relative errors in total particle number, total energy and enstrophy, and of the error in momentum. It is observed that the total particle number is conserved up to the magnitude of $\varepsilon$. This is not conserved as well as by a traditional RKDG method, but it is expected: the adaptive algorithm only keeps elements above the error threshold in the hash table, which causes truncation errors at the velocity boundary on the same order of magnitude as $\varepsilon$, and these contribute to the numerical error in the particle number.
However, we do comment that the addition and removal of elements other than level ${\mathbf{l}}=\mathbf{0}$ in the refinement and coarsening steps will not change the numerical mass because the basis functions are orthogonal. Similarly, the total energy and momentum also show visible and slightly larger errors than the standard RKDG method. The enstrophy exhibits the most visible decay because of the choice of upwind numerical flux.
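The invariants above can be approximated by simple midpoint quadrature. The following standalone sketch (illustrative only, not the DG scheme itself) evaluates them for the Landau damping initial condition, for which the particle number is $4\pi$, the momentum vanishes, the enstrophy is $2.25\sqrt{\pi}$, and the total energy is $2\pi+\pi=3\pi$, using the exact initial field $E=(A/k)\sin(kx)$ obtained from Poisson's equation with $\rho=1+A\cos(kx)$:

```python
import math

A, k, L, Vc = 0.5, 0.5, 4 * math.pi, 2 * math.pi

def f0(x, v):
    """Landau damping initial condition f(0, x, v)."""
    return math.exp(-v * v / 2) / math.sqrt(2 * math.pi) * (1 + A * math.cos(k * x))

Nx, Nv = 128, 256
dx, dv = L / Nx, 2 * Vc / Nv
grid = [(i * dx + dx / 2, -Vc + j * dv + dv / 2)
        for i in range(Nx) for j in range(Nv)]

number    = dx * dv * sum(f0(x, v) for x, v in grid)
momentum  = dx * dv * sum(v * f0(x, v) for x, v in grid)
enstrophy = dx * dv * sum(f0(x, v) ** 2 for x, v in grid)
kinetic   = dx * dv * 0.5 * sum(v * v * f0(x, v) for x, v in grid)
# Initial field from -Phi'' = rho - 1 with rho = 1 + A cos(kx): E = (A/k) sin(kx),
# so the initial electrostatic energy is 0.5 * (A/k)**2 * L / 2.
field = 0.5 * (A / k) ** 2 * L / 2
```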
\
\
In Figures \[fig:con\_lan\]-\[fig:con\_two2\], we present the phase space contour plots and the associated active elements at several instances of time for all four initial conditions. In Figure \[fig:vp\_elements\], the time evolution of the number of active degrees of freedom is plotted. It is observed that when the solution has not yet developed rich filamentation structures, only a small number of degrees of freedom is used. As time evolves, thinner and thinner filaments are generated because of phase mixing. The adaptive scheme automatically adds degrees of freedom to adequately resolve the fine structures. We remark that the quality of the numerical results is quite comparable to that of the more expensive full grid DG method with similar mesh size, e.g., see [@cheng_vp], while far fewer degrees of freedom are needed, leading to computational savings.
\
\
\
\
\
\
\
\
\
\
\
\
Lastly, we present a numerical comparison between the sparse grid DG method [@guo_sparsedg] and the adaptive scheme in this paper. We consider two-stream instability I with parameter choice $N=7$, $k=3$ for both methods solving the VP system. The phase space contour plots of the sparse grid DG method at $t=10$, when the solution is very smooth, and at $t=20$, when the solution has developed filamentation, are provided in Figure \[fig:vp\_sparse\], to be compared with the results of the adaptive scheme in Figure \[fig:con\_two\]. In Figure \[fig:vp\_density\], we plot the percentage of used elements for each incremental space ${\mathbf{W}}_{\mathbf{l}}$ at times $t=10$ and $20$ for the adaptive method. While both schemes provide similarly accurate descriptions of the macroscopic moments, there is a qualitative difference in the numerical resolution of $f$ itself. As expected, when the solution is smooth at $t=10$, both the sparse grid DG method and the adaptive method generate reliable results with comparable degrees of freedom, see Figure \[fig:con\_two\](a) and Figure \[fig:vp\_density\](a). At $t=20$, the sparse grid DG method does not resolve all the fine structures when compared with the adaptive method (see Figure \[fig:con\_two\](c) versus Figure \[fig:vp\_sparse\](b)). At this time, the adaptive method uses more degrees of freedom than the sparse grid method (see Figure \[fig:vp\_density\](b)), but still far fewer than the full grid method.
\[exa:ovp\] We consider the following oscillatory VP system in polar coordinates: $$\begin{aligned}
&&f_t + \frac{v}{\epsilon} f_r + (E(t,r)+E_{ext}(t,r)) f_v=0, \label{eq:osc_v}\\
&&\partial_r(rE(t,r)) = r\rho(t,r), \label{eq:ocs_poisson}\end{aligned}$$ where the dimensionless parameter $\epsilon=0.05$ denotes the ratio between the characteristic lengths in the transverse and the longitudinal directions [@crouseilles2016]. $E_{ext}$ is the external electric field specified as $$E_{ext}(t,r) = -\frac{r}{\epsilon}+r\cos^2\left(\frac{t}{\epsilon}\right).$$ The initial condition is set to be a discontinuous function $$\label{eq:beam}
f(0,r,v)=\frac{n_0}{v_t\sqrt{2\pi}}\exp\left(-\frac{v^2}{2v_t^2}\right)\chi_{[-r_m,r_m]}(r),\quad (r,v)\in[-3,3]^2,$$ where $n_0=4$, $v_t=0.1,\, r_m=1.85,$ and $$\chi_{[-r_m,r_m]}(r) =\left\{\begin{array}{ll} 1& \text{if}\quad -r_m\leq r\leq r_m,\\
0&\text{otherwise}.\end{array}
\right.$$
This example has been intensively studied in [@crouseilles2013asymptotic; @Frenod2015169; @crouseilles2016], where several effective schemes have been developed. Note that the initial condition considered here is discontinuous and represents a semi-Gaussian beam in particle accelerator physics [@crouseilles2016]. We impose zero boundary conditions in both the $r$ and $v$ directions. An LDG method is used to solve Poisson’s equation, and the closure condition $E(0)=0$ is strongly imposed in the formulation. In the simulation, we let $\epsilon=0.05$, $k=2$, $N=7$, and $\varepsilon=10^{-4}$. We consider both $L^1$ ( ) and $L^2$ ( ) norm based criteria as the refinement and coarsening indicators.
We first present the time evolution of the relative errors in total particle number and enstrophy in Figure \[fig:evo\_vp\_conser\]. Similar to the previous VP system, the scheme with both adaptive indicators is able to conserve the particle number up to the magnitude of $\varepsilon$. The enstrophy decays due to the choice of the numerical flux.
The phase space contours from our scheme agree well with those in the literature [@crouseilles2016]. In Figure \[fig:con\_osc\_l1\], we present the contour plots and the associated active elements at three instances of time, for which the $L^1$ norm based criteria are used as the adaptive indicator. In Figure \[fig:con\_osc\_l2\], we also report the contour plot and associated adaptive mesh at the final time with the $L^2$ norm based criteria, to compare the performance of the two criteria as adaptive indicators. It is observed that the numerical results are qualitatively the same, but more elements are used by the scheme with the $L^2$ norm based criteria. In Figure \[fig:evo\_vp\_conser\], the time evolution of the number of active degrees of freedom is plotted. Again, when the solution develops filaments, more degrees of freedom are added thanks to the adaptive mechanism. In summary, for this example, the $L^1$ norm based criterion is preferred for the sake of efficiency.
\
\
\
Conclusions and future work {#sec:conclusion}
===========================
In this paper, we develop an adaptive multiresolution DG scheme for computing time-dependent transport equations. The key ingredients of the scheme are the weak formulation of the DG method and adaptive error thresholding based on hierarchical surpluses. Extensive numerical tests show that our scheme performs similarly to a sparse grid DG method when the solution is smooth, and can automatically capture fine local structures when the solution is no longer smooth. Detailed comparisons between several refinement/coarsening error indicators are performed. The method is demonstrated to work well for kinetic simulations. Future work includes the study of limiters and further improvements of the scheme, such as local time stepping and adaptivity in both the mesh and the polynomial degree.
[^1]: Department of Mathematics, Michigan State University, East Lansing, MI 48824 U.S.A. [wguo@math.msu.edu]{}
[^2]: Department of Mathematics, Michigan State University, East Lansing, MI 48824 U.S.A. [ycheng@math.msu.edu]{}. Research is supported by NSF grant DMS-1453661.
[**The Zrank Conjecture and Restricted Cauchy Matrices**]{}
Guo-Guang Yan$^{1}$, Arthur L. B. Yang$^{2}$ and Joan J. Zhou$^{3}$\
$^{1}$Information School, Zhongnan University of Economics and Law\
Wuhan 430060, P. R. China\
Email: $^{1}$[guogyan@eyou.com]{}\
$^{1,2,3}$Center for Combinatorics, LPMC\
Nankai University, Tianjin 300071, P. R. China\
Email: $^{2}$[yang@nankai.edu.cn]{}, $^{3}$[jinjinzhou@hotmail.com]{}\
March 28, 2005
[**Abstract.**]{} The rank of a skew partition $\lambda/\mu$, denoted ${\rm rank}(\lambda/\mu)$, is the smallest number $r$ such that $\lambda/\mu$ is a disjoint union of $r$ border strips. Let $s_{\lambda/\mu}(1^t)$ denote the skew Schur function $s_{\lambda/\mu}$ evaluated at $x_1=\cdots=x_t=1,\,x_i=0$ for $i>t$. The zrank of $\lambda/\mu$, denoted ${\rm zrank}(\lambda/\mu)$, is the exponent of the largest power of $t$ dividing $s_{\lambda/\mu}(1^t)$. Stanley conjectured that ${\rm rank}(\lambda/\mu)={\rm zrank}(\lambda/\mu)$. We show the equivalence between the validity of the zrank conjecture and the nonsingularity of restricted Cauchy matrices. In support of Stanley’s conjecture we give affirmative answers for some special cases.
[**Keywords:**]{} zrank, rank, outside decomposition, border strip decomposition, snakes, interval sets, restricted Cauchy matrix, reduced code.
[**MSC2000 Subject Classification:**]{} 05E10, 15A15.
[**Suggested Running Title:**]{} The Zrank Conjecture
[**Corresponding Author:**]{} Arthur L. B. Yang, yang@nankai.edu.cn
Introduction
============
Let $\lambda=(\lambda_1,\,\lambda_2,\ldots)$ be a partition of an integer $n$, i.e., $\lambda_1\geq \lambda_2\geq \cdots \geq 0$ and $\lambda_1+\lambda_2+\cdots=n$. The number of positive parts of $\lambda$ is called the length of $\lambda$, denoted $\ell(\lambda)$. The *Young diagram* of $\lambda$ may be defined as the set of points $(i,j)\in \Z^2$ such that $1\leq
j\leq\lambda_i$ and $1\leq i\leq \ell(\lambda)$. A Young diagram can also be represented in the plane by an array of squares justified from the top and left corner with $\ell(\lambda)$ rows and $\lambda_i$ squares in row $i$. A square $(i,j)$ in the diagram is the square in row $i$ from the top and column $j$ from the left. The content of $(i,j)$, denoted $\tau((i,j))$, is given by $j-i$. The *rank* of $\lambda$, denoted ${\rm
rank}(\lambda)$, is the length of the main diagonal of the diagram of $\lambda$. Given two partitions $\lambda$ and $\mu$, we say that $\mu\subseteq\lambda$ if $\mu_i\leq \lambda_i$ for all $i$. If $\mu\subseteq\lambda$, we define a *skew partition* $\lambda/\mu$, whose Young diagram is obtained from the Young diagram of $\lambda$ by peeling off the Young diagram of $\mu$ from the upper left corner.
We assume that the reader is familiar with the notation and terminology on symmetric functions in [@S1]. In connection with tensor products of Yangian modules, Nazarov and Tarasov [@NT] gave a generalization of the rank to a skew partition $\lambda/\mu$. Recently Stanley developed a general theory of minimal border strip decompositions and gave several simple equivalent characterizations of ${\rm rank}(\lambda/\mu)$ in [@S2]. One of these characterizations says that ${\rm rank}(\lambda/\mu)$ is the smallest integer $r$ such that the Young diagram of $\lambda/\mu$ is the disjoint union of $r$ border strips. Let $s_{\lambda/\mu}(1^t)$ denote the skew Schur function $s_{\lambda/\mu}$ evaluated at $x_1=\cdots=x_t=1,\,x_i=0$ for $i>t$. The *zrank* of $\lambda/\mu$, denoted ${\rm zrank}(\lambda/\mu)$, is the exponent of the largest power of $t$ dividing the polynomial $s_{\lambda/\mu}(1^t)$. Stanley conjectured that the equality ${\rm rank}(\lambda/\mu)={\rm zrank}(\lambda/\mu)$ always holds, which we call the *zrank conjecture*.
In his combinatorial approach to the zrank conjecture in [@S2], Stanley defined the snake sequence and the interval sets for a skew partition $\lambda/\mu$. In Section 2 for each interval set $\mathcal{I}$ of $\lambda/\mu$ we define an interval permutation $\sigma_{\mathcal{I}}$. Let ${\rm cr}(\mathcal{I})$ be the number of crossings of $\mathcal{I}$, and let $\rm{inv}(\sigma_{\mathcal{I}})$ be the number of inversions of $\sigma_{\mathcal{I}}$. We show that ${\rm cr}(\mathcal{I})$ and $\rm{inv}(\sigma_{\mathcal{I}})$ have the same parity.
Stanley generalized the code of a partition to the code of a skew partition, and obtained a two-line binary sequence in [@S2]. This sequence is called the *partition sequence* by Bessenrodt [@B1; @B2]. Given a minimal border strip decomposition $\mathbf{D}$ of $\lambda/\mu$, let $P_{\mathbf{D}}$ be the set of the contents of the lower left-hand squares of the border strips in $\mathbf{D}$, and let $Q_{\mathbf{D}}$ be the set of the contents of the upper right-hand squares. Using the partition sequence, we show that $P_{\mathbf{D}}$ and $Q_{\mathbf{D}}$ are uniquely determined by the shape of the skew partition $\lambda/\mu$ in Section 3, i.e., these two sets are independent of the minimal border strip decomposition $\mathbf{D}$. For a given skew partition, we find a connection between the values of these two sets and the paired integers of the interval set.
Outside decompositions were introduced by Hamel and Goulden [@HG] and used to give a unified approach to the determinantal expressions for skew Schur functions, including the Jacobi-Trudi determinant, its dual, the Giambelli determinant and the ribbon determinant. For any outside decomposition, Hamel and Goulden derived a determinantal formula with ribbon Schur functions as entries. Their proof is based on a lattice path construction and the Gessel-Viennot methodology [@GV1; @GV2]. In Section 4 we employ the determinantal formula in the case of the greedy border strip decomposition and give the evaluation of $(t^{-{\rm rank}(\lambda/\mu)}s_{\lambda/\mu}(1^t))_{t=0}$. As a consequence we obtain the combinatorial description of $(t^{-{\rm rank}(\lambda/\mu)}s_{\lambda/\mu}(1^t))_{t=0}$ in terms of the interval sets of $\lambda/\mu$ given by Stanley [@S2 Eq. (30)].
Based on the above results, we give an equivalent characterization of the zrank conjecture. Given two positive integer sequences, we define a *restricted Cauchy matrix* corresponding to these two sequences. The main objective of this paper is to show that the zrank conjecture holds for any skew partition if and only if all the restricted Cauchy matrices are nonsingular. We present a constructive proof for this equivalence in Section 5. Using some fundamental properties of determinants, we confirm the nonsingularity of the restricted Cauchy matrices for several special classes of skew partitions.
Snake sequences and interval sets
=================================
We follow the terminology of Stanley [@S2] on snake sequences and interval sets, which are helpful notions for the enumeration of the minimal border strip decompositions of a skew partition $\lambda/\mu$. Let us consider the bottom-right boundary lattice path with steps $(0,1)$ or $(1,0)$ from the bottom-leftmost point of the diagram of $\lambda/\mu$ to the top-rightmost point. We regard this path as a sequence of edges $e_1,\,e_2,\,\ldots,\,e_k$. For an edge $e$ in this path we define a subset $S_{e}$ of squares of $\lambda/\mu$, called a *snake*. If there exists no square having $e$ as an edge, then we have the set $S_e=\emptyset$. Let $(i,j)$ be the unique square of $\lambda/\mu$ having $e$ as an edge. If $e$ is horizontal, then we define $$\label{right-snake}
S_e=\lambda/\mu\cap\{(i,j),\,(i-1,j),\,(i-1,j-1),\,(i-2,j-1),\,(i-2,j-2),\,\ldots\}.$$ If $e$ is vertical, we then define $$\label{left-snake}
S_e=\lambda/\mu\cap\{(i,j),\,(i,j-1),\,(i-1,j-1),\,(i-1,j-2),\,(i-2,j-2),\,\ldots\}.$$ For example, the nonempty snakes of the skew shape $(7,6,6,3)/(3,1)$ are shown in Figure \[snake\], and the two snakes with just one square are shown with a single bullet. The *length* $\ell(S)$ of a snake $S$ is defined to be one less than its number of squares. For an empty snake $S$, let $\ell(S)=-1$. A *right snake* is a snake of even length and of the form , and a *left snake* is a snake of even length and of the form . From the boundary lattice path we obtain a sequence of snakes: $(S_{e_1},\,S_{e_2},\,\ldots,\,S_{e_k})$. The *snake sequence* of $\lambda/\mu$, denoted ${\rm SS}(\lambda/\mu)$, is defined by replacing a left snake of length $2m$ with the symbol $L_m$ in the sequence $(S_{e_1},\,S_{e_2},\,\ldots,\,S_{e_k})$, replacing a right snake of length $2m$ with $R_m$, and replacing a snake of odd length with $O$. From Figure \[snake\], we see that $${\rm SS}((7,6,6,3)/(3,1))=L_0 L_1 O O O O L_2 R_2 R_1 O R_0.$$
(Figure \[snake\]: the nonempty snakes of the skew shape $(7,6,6,3)/(3,1)$; the two snakes with just one square are marked with single bullets.)
Let ${\rm rank}(\lambda/\mu)=r$, and let ${\rm
SS}(\lambda/\mu)=q_1q_2\cdots q_k$. An *interval set* $\mathcal{I}$ of $\lambda/\mu$ is defined to be a collection of $r$ ordered pairs $\{(u_1,v_1),\,(u_2,v_2),\,\ldots,\,(u_r,v_r)\}$ such that
1. $u_i\neq u_j$ and $v_i\neq v_j$ for $1\leq i<j\leq r$.
2. $1\leq u_i<v_i\leq k$ and $u_i\neq v_j$ for $1\leq i,j\leq
r$.
3. $q_{u_i}=L_s$ and $q_{v_i}=R_{s'}$ for some $s$ and $s'$ (depending on $i$).
Let ${\rm cr}(\mathcal{I})$ denote the number of crossings of $\mathcal{I}$, i.e., the number of pairs $(i,j)$ for which $u_i<u_j<v_i<v_j$. According to [@S2 Proposition 4.3], there exists a unique interval set $\mathcal{I}_0=\{(w_1,y_1),\,(w_2,y_2),\,\ldots,\,(w_r,y_r)\}$ such that ${\rm cr}(\mathcal{I}_0)=0$. From [@S2], we see that ${\rm SS}(\lambda/\mu)$ has exactly $r$ left snakes and $r$ right snakes. For an interval set $\mathcal{I}=\{(u_1,v_1),\,(u_2,v_2),\,\ldots,\,(u_r,v_r)\}$, we may impose a linear order $u_1<u_2<\cdots<u_r$ on its elements. Then there exists a unique permutation $\sigma$ relative to $\mathcal{I}_0$ such that for each $i$ $$u_i=w_i \mbox{ and } v_i=y_{\sigma_i}.$$ Thus, each interval set $\mathcal{I}$ is associated with a permutation $\sigma_{\mathcal{I}}$, which we call the *interval permutation* of $\mathcal{I}$ with respect to $\mathcal{I}_0$. Given a permutation $\sigma$, let $\rm{inv}(\sigma)$ denote the number of inversions of $\sigma$, i.e., the number of pairs $(i,j)$ satisfying $i<j$ but $\sigma_i>\sigma_j$.
\[key-le\] Given a skew partition $\lambda/\mu$ and an interval set $\mathcal{I}$ of $\lambda/\mu$, let $\sigma_{\mathcal{I}}$ be the interval permutation with respect to $\mathcal{I}_0$. Then we have $${\rm cr}(\mathcal{I})\equiv \rm{inv}(\sigma_{\mathcal{I}})\
(\rm{mod}\ 2).$$
First we give a geometric representation of ${\rm cr}(\mathcal{I})$. For each interval $(u_i,\,v_i)$ of $\mathcal{I}$ we draw an arc on top of ${\rm SS}(\lambda/\mu)$ which connects the two snakes $q_{u_i}$ and $q_{v_i}$. Two arcs $(u_i,\,v_i)$ and $(u_j,\,v_j)$ with $i<j$ are noncrossing if they are nested ($u_i<u_j<v_j<v_i$) or disjoint; they cross if $u_i<u_j<v_i<v_j$. In this terminology ${\rm cr}(\mathcal{I})$ equals the number of crossing pairs of arcs.
To determine the inversions of $\sigma_{\mathcal{I}}$, we replace $q_{w_i}$ by $F_i$ and $q_{y_i}$ by $G_i$ in ${\rm
SS}(\lambda/\mu)$ for each $i$. Clearly, $\sigma_{\mathcal{I}}$ is a bijection from $\{F_1,\,F_2,\,\ldots,\,F_r\}$ to $\{G_1,\,G_2,\,\ldots,\,G_r\}$. We now represent the snakes of ${\rm SS}(\lambda/\mu)$ with respect to the order $F_1,F_2,\cdots,
F_r,G_r,G_{r-1},\cdots, G_1$ by moving $G_1$ to the right of the rightmost element if $G_1$ itself is not the rightmost element, and repeating this process until we achieve the desired order. It follows that $\rm{inv}(\sigma_{\mathcal{I}})$ equals the number of crossings in the above representation. Note that at each step of moving $G_i$ to the proper position, the number of crossings in the diagram can only change by an even number. This completes the proof.
For example, let $\lambda/\mu=(8,8,7,4)/(4,1,1)$. Figure \[figure-snake\] shows the snake sequence ${\rm
SS}((8,8,7,4)/(4,1,1))$, from which we see that $$\mathcal{I}_0=\{(1, 12), (3, 11), (4, 5), (8, 9)\}.$$
(Figure \[figure-snake\]: the snake sequence $L_0\,O\,L_1\,L_2\,R_2\,O\,O\,L_2\,R_2\,O\,R_1\,R_0$ with the arcs of the crossing-free interval set $\mathcal{I}_0$ drawn on top.)
Let us illustrate the proof of Proposition \[key-le\] by the example $\mathcal{I}=\{(1, 9), (3, 12), (4, 5), (8, 11)\}$, for which we have $\sigma_{\mathcal{I}}=[4, 1, 3, 2]$. The crossings of $\mathcal{I}$ are shown in Figure \[figure-b\], where we relabel the snakes as described in the proof. Figure \[figure-a\] shows the diagram after moving $G_3$, which has two more crossings. It is evident that $${\rm cr}(\mathcal{I})=2, \quad {\rm inv}(\sigma_{\mathcal{I}})=4, \quad {\rm
cr}(\mathcal{I})\equiv {\rm inv}(\sigma_{\mathcal{I}})\ (\rm{mod}\ 2).$$
(Figure \[figure-b\]: the relabeled sequence $F_1\,O\,F_2\,F_3\,\underline{G_3}\,O\,O\,F_4\,G_4\,O\,G_2\,G_1$ with the arcs of $\mathcal{I}$ drawn on top.)
(Figure \[figure-a\]: the diagram after moving $G_3$: $F_1\,O\,F_2\,F_3\,O\,O\,F_4\,G_4\,\underline{G_3}\,O\,G_2\,G_1$.)
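The quantities in Proposition \[key-le\] and the example above are easy to verify mechanically. The sketch below (our illustration; an interval set is encoded as a list of pairs $(u_i,v_i)$) recomputes ${\rm cr}(\mathcal{I})$, $\sigma_{\mathcal{I}}$ and ${\rm inv}(\sigma_{\mathcal{I}})$:

```python
def crossings(pairs):
    """cr(I): number of pairs of arcs with u_i < u_j < v_i < v_j."""
    return sum(1 for (u1, v1) in pairs for (u2, v2) in pairs
               if u1 < u2 < v1 < v2)

def interval_permutation(I, I0):
    """sigma with u_i = w_i and v_i = y_{sigma_i}; intervals ordered by left endpoint."""
    ys = [y for _, y in sorted(I0)]
    return [ys.index(v) + 1 for _, v in sorted(I)]

def inversions(sigma):
    """inv(sigma): number of pairs i < j with sigma_i > sigma_j."""
    return sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
               if sigma[i] > sigma[j])

I0 = [(1, 12), (3, 11), (4, 5), (8, 9)]
I  = [(1, 9), (3, 12), (4, 5), (8, 11)]
```

For the example, `crossings(I)` is $2$, `interval_permutation(I, I0)` is `[4, 1, 3, 2]` with $4$ inversions, and `crossings(I0)` is $0$, so the two quantities indeed agree mod $2$.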
Minimal border strip decompositions
===================================
We recall the notion of the *reduced code* of a skew partition $\lambda/\mu$, denoted ${\rm c}(\lambda/\mu)$. The reduced code ${\rm c}(\lambda/\mu)$ is also known as the *partition sequence* of $\lambda/\mu$ [@B1; @B2]. Consider the two boundary lattice paths of the diagram of $\lambda/\mu$ with steps $(0,1)$ or $(1,0)$ from the bottom-leftmost point to the top-rightmost point. Replacing each step $(0,1)$ by $1$ and each step $(1,0)$ by $0$, we obtain two binary sequences by reading the lattice paths from the bottom-left corner to the top-right corner. Denote the top-left binary sequence by $f_1,\,f_2,\,\ldots,\,f_k$, and the bottom-right binary sequence by $g_1,\,g_2,\,\ldots,\,g_k$. The reduced code ${\rm
c}(\lambda/\mu)$ is defined by the two-line array $$\begin{array}{cccc}
f_1 & f_2 & \cdots & f_k\\
g_1 & g_2 & \cdots & g_k
\end{array}.$$ The reduced code of the skew partition $(5,4,3,2)/(2,1,1)$ in Figure \[bound\] is $$\begin{array}{ccccccccc}1 & 0 & 1 & 1 & 0 &
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1
\end{array}.$$
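The two boundary sequences are easy to compute directly from the parts of $\lambda$ and $\mu$. The following sketch (the function names are ours, not from the paper) reproduces the example above:

```python
def boundary_code(parts, n_rows, width):
    # One 0 per (1,0) step and one 1 per (0,1) step, read from the
    # bottom-left corner to the top-right corner of the bounding box.
    parts = list(parts) + [0] * (n_rows - len(parts))
    code, prev = [], 0
    for row in reversed(range(n_rows)):          # bottom row first
        code += [0] * (parts[row] - prev)        # horizontal steps
        code.append(1)                           # vertical step closing the row
        prev = parts[row]
    code += [0] * (width - parts[0])             # remaining horizontal steps
    return code

def reduced_code(lam, mu):
    n, w = len(lam), lam[0]
    f = boundary_code(mu, n, w)    # top-left path (inner boundary)
    g = boundary_code(lam, n, w)   # bottom-right path (outer boundary)
    return f, g

f, g = reduced_code((5, 4, 3, 2), (2, 1, 1))
print(f)  # [1, 0, 1, 1, 0, 1, 0, 0, 0]
print(g)  # [0, 0, 1, 0, 1, 0, 1, 0, 1]
```

The output matches the two-line array displayed above.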
A *diagonal* with content $j$ of $\lambda/\mu$, denoted $d_j(\lambda/\mu)$, is the set of all the squares in $\lambda/\mu$ having content $j$. Suppose that the length of ${\rm
c}(\lambda/\mu)$ is $k$. It is obvious that $\lambda/\mu$ has $k-1$ diagonals. Let $\epsilon$ be the smallest content of $\lambda/\mu$. For each $i:1\leq i\leq k-1$, we put the diagonal $d_{\epsilon+i-1}$ between the $i$-th column and $(i+1)$-th column of ${\rm c}(\lambda/\mu)$. Then we obtain a connection between the diagonals of $\lambda/\mu$ and the reduced code ${\rm
c}(\lambda/\mu)$.
*Figure \[bound\]: the Young diagram of $(5,4,3,2)/(2,1,1)$ with its two boundary lattice paths; the top-left path is labeled $1\,0\,1\,1\,0\,1\,0\,0\,0$ and the bottom-right path $0\,0\,1\,0\,1\,0\,1\,0\,1$.*
Recall that a skew partition $\lambda/\mu$ is said to be *connected* if the interior of the Young diagram of $\lambda/\mu$ is a connected set. A *border strip* is a connected skew partition with no $2\times 2$ square. Define the size of a border strip $B$ as the number of squares of $B$, and define the *height* $ht(B)$ of $B$ as one less than its number of rows. We say that $B\subset \lambda/\mu$ is a border strip of $\lambda/\mu$ if $\lambda/\mu-B$ is a skew partition $\nu/\mu$. A border strip $B$ of $\lambda/\mu$ is said to be *maximal* if there does not exist another border strip $B'\subset \lambda/\mu$ such that $B\subset B'$. A *border strip decomposition* [@S1] of $\lambda/\mu$ is a partition of the squares of $\lambda/\mu$ into pairwise disjoint border strips. A *greedy border strip decomposition* of $\lambda/\mu$ is obtained by successively removing the maximal border strip from $\lambda/\mu$. A border strip decomposition is *minimal* if there does not exist a border strip decomposition with fewer border strips.
Stanley [@S2 Proposition 2.2] has shown that the rank of a skew partition $\lambda/\mu$ is equal to the number of border strips in a minimal border strip decomposition of $\lambda/\mu$, and it is also equal to the number of ${1 \atop 0}$ columns of ${\rm c}(\lambda/\mu)$. As a consequence, a greedy border strip decomposition is minimal, because when we successively remove the maximal border strips from $\lambda/\mu$ a column ${1\atop 0}$ of ${\rm c}(\lambda/\mu)$ changes into ${1\atop 1}$ and a column ${0\atop 1}$ changes into ${0\atop 0}$.
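Stanley's characterization can be checked by counting columns directly. For the running example $(5,4,3,2)/(2,1,1)$, with the two binary sequences read off the display above:

```python
# Reduced code of (5,4,3,2)/(2,1,1), copied from the two-line array above.
f = [1, 0, 1, 1, 0, 1, 0, 0, 0]
g = [0, 0, 1, 0, 1, 0, 1, 0, 1]

cols_10 = [i + 1 for i, (a, b) in enumerate(zip(f, g)) if (a, b) == (1, 0)]
cols_01 = [i + 1 for i, (a, b) in enumerate(zip(f, g)) if (a, b) == (0, 1)]

print(cols_10)        # [1, 4, 6]
print(cols_01)        # [5, 7, 9]
print(len(cols_10))   # 3 = rank((5,4,3,2)/(2,1,1))
```

Every minimal border strip decomposition of this shape therefore consists of exactly three strips.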
Suppose that ${\rm rank}(\lambda/\mu)=r$. Given a minimal border strip decomposition $\mathbf{D}=\{B_1,\,B_2,\,\ldots,\,B_r\}$ of $\lambda/\mu$, let $$P_{\mathbf{D}}=\{\tau({\rm init}(B_1)),\,\tau({\rm init}(B_2)),\,\ldots,
\tau({\rm init}(B_r))\}$$ and $$Q_{\mathbf{D}}=\{\tau({\rm fin}(B_1)),\,\tau({\rm fin}(B_2)),\,\ldots,
\tau({\rm fin}(B_r))\},$$ where ${\rm init}(B_i)$ is the lower left-hand square of $B_i$ and ${\rm fin}(B_i)$ is the upper right-hand square. The following proposition shows that $P_{\mathbf{D}}$ and $Q_{\mathbf{D}}$ are independent of the minimal border strip decomposition $\mathbf{D}$.
\[cont-bord\] Let $\mathcal{I}_0=\{(w_1,y_1),\,(w_2,y_2),\,\ldots,\,(w_r,y_r)\}$ be the interval set of $\lambda/\mu$ with ${\rm
cr}(\mathcal{I}_0)=0$. Let $\epsilon$ be the smallest value among the contents of the squares of $\lambda/\mu$. Let $\mathbf{D}$ be a minimal border strip decomposition of $\lambda/\mu$. Then we have $$P_{\mathbf{D}}=\{\epsilon+w_i-1\, |\, 1\leq i\leq r\} \mbox{ and }
Q_{\mathbf{D}}=\{\epsilon+y_i-2\, |\, 1\leq i\leq r\}.$$
By [@S2 Proposition 2.1], we see that the operation of removing a border strip $B$ of size $p$ from $\lambda/\mu$ corresponds to the operation of choosing $i$ with the $i$-th column being ${1\atop 0}$ and the $(i+p)$-th column being ${0\atop
1}$, and then replacing the $i$-th column with ${1\atop 1}$ and the $(i+p)$-th column with ${0\atop 0}$. Moreover, the lower left-hand square of $B$ lies on the diagonal $d_{i}$, and the upper right-hand square of $B$ lies on the diagonal $d_{i+p-1}$. Therefore $$\tau({\rm init}(B))=\epsilon+i-1 \mbox{ and }
\tau({\rm fin}(B))=\epsilon+i+p-2.$$ It follows that $P_{\mathbf{D}}$ and $Q_{\mathbf{D}}$ are determined by the indices of the columns ${1\atop 0}$ and ${0\atop 1}$ of ${\rm c}(\lambda/\mu)$ respectively. Since $\{w_i\}$ is the set of indices of columns ${1\atop 0}$ of ${\rm c}(\lambda/\mu)$, and $\{y_i\}$ is the set of indices of ${0\atop 1}$, we get the desired assertion.
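The column replacement described in the proof is easy to simulate. The driver below (our own illustration; the removal data $(i,p)=(1,8),(4,1),(6,1)$ were chosen by hand and give one minimal decomposition of $(5,4,3,2)/(2,1,1)$ into strips of sizes $8,1,1$) shows that once all strips are removed, every ${1\atop 0}$ column has become ${1\atop 1}$ and every ${0\atop 1}$ column has become ${0\atop 0}$, so the two boundary sequences coincide:

```python
def remove_strip(f, g, i, p):
    # Remove a border strip of size p whose initial square lies on diagonal i:
    # column i must be (1,0) and column i+p must be (0,1) (columns 1-indexed).
    assert (f[i - 1], g[i - 1]) == (1, 0) and (f[i + p - 1], g[i + p - 1]) == (0, 1)
    g = list(g)
    g[i - 1] = 1        # column i:   (1,0) -> (1,1)
    g[i + p - 1] = 0    # column i+p: (0,1) -> (0,0)
    return f, g

f = [1, 0, 1, 1, 0, 1, 0, 0, 0]   # reduced code of (5,4,3,2)/(2,1,1)
g = [0, 0, 1, 0, 1, 0, 1, 0, 1]
for i, p in [(1, 8), (4, 1), (6, 1)]:   # strips of sizes 8, 1, 1
    f, g = remove_strip(f, g, i, p)
print(f == g)   # True: for the empty shape the two boundary paths coincide
```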
Giambelli-type determinantal formulas
======================================
In this section, based on the Giambelli-type formula for skew Schur functions, we obtain a determinantal formula for the quantity introduced by Stanley. Let $\lambda/\mu$ be a skew diagram. A border strip decomposition of $\lambda/\mu$ is said to be an *outside decomposition* if every strip in the decomposition has an initial square on the left or bottom perimeter of the diagram and a terminal square on the right or top perimeter; see Figure \[bd1\]. It is obvious that a greedy border strip decomposition of $\lambda/\mu$ is an outside decomposition.
*Figure \[bd1\]: a. A border strip decomposition. b. An outside decomposition.*
The notion of the cutting strip of an outside decomposition was introduced by Chen, Yan and Yang [@CYY], and was used to give a transformation theorem on the Giambelli-type determinantal formulas for the skew Schur function.
We proceed to construct a cutting strip for an edgewise connected skew partition $\lambda/\mu$. Suppose that $\lambda/\mu$ has $k$ diagonals. The cutting strip of an outside decomposition is defined to be a border strip of length $k$. Given an outside decomposition, we may assign a direction to each square in the diagram. Starting with the bottom-left corner of a strip, we say that a square of a strip has up direction (resp. right direction) if the next square in the strip lies on its top (resp. to its right). Notice that the strips in any outside decomposition of $\lambda/\mu$ are nested in the sense that the squares in the same diagonal of $\lambda/\mu$ all have up direction or all have right direction. Based on this property, the cutting strip $\phi$ of an outside decomposition $\mathbf{D}$ of $\lambda/\mu$ is defined as follows: for $i=1,\,2,\,\ldots,\,k-1$ the $i$-th square in $\phi$ keeps the same direction as the $i$-th diagonal of $\lambda/\mu$ with respect to $\mathbf{D}$. For any two integers $p,q$ a strip $[p,q]$ is defined by the following rule: if $p\leq q$, then let $[p,q]$ be the segment of $\phi$ from the square with content $p$ to the square with content $q$; if $p=q+1$, then let $[p,q]$ be the empty strip; if $p>q+1$, then $[p,q]$ is undefined. Using the above notation, Hamel and Goulden’s theorem on the Giambelli-type formulas for the skew Schur function can be formulated as follows.
\[schur-dec\] For an outside decomposition $\mathbf{D}$ with $k$ border strips $B_1,\,B_2,\,\ldots,\,B_k$, we have $$s_{\lambda/\mu}=\det\left(s_{[\tau({\rm init}(B_i)),\tau({\rm
fin}(B_j))]}\right)_{i,j=1}^{k}.$$
By choosing the outside decomposition whose border strips are the rows of the diagram of $\lambda/\mu$ in the above theorem, we obtain the Jacobi-Trudi identity for the skew Schur function, which states that $$s_{\lambda/\mu}=\det\left(h_{\lambda_i-\mu_j-i+j}\right)_{i,j=1}^{\ell(\lambda)},$$ where $h_k$ denotes the $k$-th complete symmetric function, $h_0=1$ and $h_k=0$ for $k<0$.
Let $y(\lambda/\mu)=(t^{-{\rm
rank}(\lambda/\mu)}s_{\lambda/\mu}(1^t))_{t=0}$. The zrank conjecture says that $y(\lambda/\mu)\neq 0$ for any skew partition $\lambda/\mu$. Now we give the evaluation of $y(\lambda/\mu)$ by using Theorem \[schur-dec\]. First we consider the case when $\lambda/\mu$ is a border strip. In this case we have ${\rm
rank}(\lambda/\mu)=1$, $\mu_i=\lambda_{i+1}-1$ for $i\leq
\ell(\lambda)-1$ and $\mu_{\ell(\lambda)}=0$. From the Jacobi-Trudi identity one easily deduces the following lemma.
\[ribbon-lemm\] For a border strip $\lambda/\mu$ we have $$\label{ribbon-eq}
y(\lambda/\mu)=\frac{(-1)^{\ell(\lambda)+1}}{\lambda_1+\ell(\lambda)-1}.$$
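The lemma is easy to test numerically through the Jacobi-Trudi identity, using $h_k(1^t)=\binom{t+k-1}{k}$. For the border strip $(3,2)/(1)$ we have $\ell(\lambda)=2$ and $\lambda_1=3$, so the lemma predicts $y=(-1)^{3}/(3+2-1)=-1/4$. A sketch with sympy (the function names are ours):

```python
from sympy import symbols, factorial, rf, Matrix, Poly, Rational

t = symbols('t')

def h(k):
    # h_k(1^t) = C(t+k-1, k) = t(t+1)...(t+k-1)/k!;  h_0 = 1, h_k = 0 for k < 0
    return rf(t, k) / factorial(k) if k >= 0 else 0

def y_value(lam, mu, rank):
    # Coefficient of t^rank in s_{lambda/mu}(1^t), i.e. (t^{-rank} s_{lambda/mu}(1^t))_{t=0}
    n = len(lam)
    mu = list(mu) + [0] * (n - len(mu))
    M = Matrix(n, n, lambda i, j: h(lam[i] - mu[j] - i + j))
    return Poly(M.det().expand(), t).coeff_monomial(t**rank)

print(y_value((3, 2), (1,), 1))   # -1/4, as the lemma predicts
```

For a single row $(n)$ the same routine returns $1/n$, again matching the lemma with $\ell(\lambda)=1$.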
In order to compute $y(\lambda/\mu)$ for a general skew partition $\lambda/\mu$, we need to consider the greedy border strip decomposition $\mathbf{D_0}$ of $\lambda/\mu$. Suppose that ${\rm rank}(\lambda/\mu)=r$. It follows that $\mathbf{D_0}$ has $r$ border strips. We may apply Theorem \[schur-dec\] to $\mathbf{D_0}$ because it is also an outside decomposition. Furthermore, we may impose a canonical order on the strips $B_1,\,B_2,\,\ldots,\,B_r$ of $\mathbf{D_0}$ by the contents of their lower left-hand squares such that $\tau({\rm init}(B_i))< \tau({\rm init}(B_{i+1}))$ for $i<r$. Since the sum of the heights of border strips in $\mathbf{D}_0$ is uniquely determined by the shape $\lambda/\mu$, one sees that $$z(\lambda/\mu)=ht(B_1)+ht(B_2)+\cdots+ht(B_r)$$ is well defined. Let $\mathcal{I}_0=\{(w_1,y_1),\,(w_2,y_2),\,\ldots,\,(w_r,y_r)\}$ be the interval set of $\lambda/\mu$ with ${\rm
cr}(\mathcal{I}_0)=0$. By Proposition \[cont-bord\] and the properties of $\mathbf{D}_0$ and $\mathcal{I}_0$, we obtain that $$\label{con-int}
\tau({\rm init}(B_i))=\epsilon+w_i-1 \mbox{ and } \tau({\rm
fin}(B_i))=\epsilon+y_i-2,$$ where $\epsilon$ is the smallest value among the contents of the squares of $\lambda/\mu$.
The following theorem gives a determinantal formula for $y(\lambda/\mu)$ based on a matrix related to the Cauchy matrix.
\[th-main\]Let $\lambda/\mu$ be a skew partition with ${\rm rank}(\lambda/\mu)=r$, and let $\mathcal{I}_0$ be the noncrossing interval set $\{(w_1,y_1),\,(w_2,y_2),\,\ldots,\,(w_r,y_r)\}$ of $\lambda/\mu$. Then we have $$\label{th-main-eq}
y(\lambda/\mu)=(-1)^{z(\lambda/\mu)}\det(d_{ij})_{i,j=1}^r,$$ where $$d_{ij}=\left\{
\begin{array}{ll}
\displaystyle\frac{1}{y_j-w_i}, & \mbox{if } y_j>w_i\\[12pt]
0, & \mbox{if } y_j<w_i
\end{array}
\right..$$
Take the greedy outside decomposition $\textbf{D}_0=\{B_1,\,B_2,\,\ldots,\,B_r\}$ of $\lambda/\mu$, and let $\phi_0$ be the cutting strip corresponding to $\textbf{D}_0$. By Theorem \[schur-dec\] we have $$s_{\lambda/\mu}=\det\left(s_{[\tau({\rm init}(B_i)),\tau({\rm
fin}(B_j))]}\right)_{i,j=1}^{r}.$$ Suppose that the square with content $\tau({\rm init}(B_i))$ lies in the $p_i$-th row of $\phi_0$, and the square with content $\tau({\rm fin}(B_j))$ lies in the $q_j$-th row. Applying Lemma \[ribbon-lemm\], we get $$\label{j1}
(t^{-1}s_{[\tau({\rm init}(B_i)),\tau({\rm
fin}(B_j))]})_{t=0}=\displaystyle\frac{(-1)^{{p_i}-{q_j}}}{\tau({\rm
fin}(B_j))+1-\tau({\rm init}(B_i))}$$ if $[\tau({\rm init}(B_i)),\tau({\rm fin}(B_j))]$ is a substrip of $\phi_0$. Otherwise, the above entry is set to $0$. Note that $[\tau({\rm init}(B_i)),\tau({\rm fin}(B_j))]$ cannot be an empty strip for the greedy border strip decomposition. Using (\[con-int\]) we may write (\[j1\]) as $$(t^{-1}s_{[\tau({\rm init}(B_i)),\tau({\rm
fin}(B_j))]})_{t=0}=\displaystyle\frac{(-1)^{{p_i}-{q_j}}}{y_j-w_i}$$ for $y_j>w_i$, or $0$ for $y_j<w_i$. Thus, we have $$y(\lambda/\mu)=(t^{-r}s_{\lambda/\mu}(1^t))_{t=0}=\det\left((t^{-1}s_{[\tau({\rm
init}(B_i)),\tau({\rm fin}(B_j))]})_{t=0}\right)_{i,j=1}^{r}.$$ Extracting the signs from the determinant, we obtain $$y(\lambda/\mu)=(-1)^{({p_1}+\cdots+{p_r})-({q_1}+\cdots+{q_r})}\det(d_{ij})_{i,j=1}^r=(-1)^{z(\lambda/\mu)}\det(d_{ij})_{i,j=1}^r.$$ This completes the proof.
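As a sanity check of the theorem, take $\lambda/\mu=(2,2)/\emptyset$: here $r=2$, the noncrossing interval set is $\{(1,4),(2,3)\}$, and $z(\lambda/\mu)=1$ (the greedy decomposition is a hook of height $1$ plus a single square). The determinant side and a direct evaluation of $y$ from the Jacobi-Trudi identity agree (a sketch; the names `y_value`, `D` are ours):

```python
from sympy import symbols, factorial, rf, Matrix, Poly, Rational

t = symbols('t')

def h(k):
    # h_k(1^t) = C(t+k-1, k);  h_0 = 1, h_k = 0 for k < 0
    return rf(t, k) / factorial(k) if k >= 0 else 0

def y_value(lam, mu, rank):
    n = len(lam)
    mu = list(mu) + [0] * (n - len(mu))
    M = Matrix(n, n, lambda i, j: h(lam[i] - mu[j] - i + j))
    return Poly(M.det().expand(), t).coeff_monomial(t**rank)

# lambda/mu = (2,2)/0: interval set {(1,4),(2,3)}, z(lambda/mu) = 1
w, y = [1, 2], [4, 3]
D = Matrix(2, 2, lambda i, j: Rational(1, y[j] - w[i]) if y[j] > w[i] else 0)
print(D.det())                   # 1/3 - 1/4 = 1/12
print(y_value((2, 2), (), 2))    # -1/12 = (-1)^1 * det, as the theorem asserts
```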
**Remark.** Stanley [@stanleyprv] pointed out that one can also get a matrix for $y(\lambda/\mu)$ by taking the Jacobi-Trudi matrix (the matrix appearing in the Jacobi-Trudi determinant formula of $s_{\lambda/\mu}$) for the skew Schur function $s_{\lambda/\mu}$, and deleting all rows and columns that contain a $1$, and then substituting $1/i$ for $h_i$. This matrix coincides with the matrix $(d_{ij})_{i,j=1}^r$ defined in (\[th-main-eq\]), subject to permutations of rows and columns. This fact can be verified by using the transformation formula in [@CYY].
From Theorem \[th-main\] and Proposition \[key-le\] one can recover the following expansion formula of Stanley [@S2 Equation (30)].
\[coro-main\] We have $$\label{eq-comb-v}
y(\lambda/\mu)=(-1)^{z(\lambda/\mu)}\sum_{\mathcal{I}=\{(u_1, v_1
), \ldots, (u_r, v_r)\}}\frac{(-1)^{{\rm
cr}(\mathcal{I})}}{\prod_{i=1}^r(v_i-u_i)},$$ summed over all interval sets $\mathcal{I}$ of $\lambda/\mu$.
An equivalent description of the zrank conjecture
=================================================
We begin this section with the definition of a restricted Cauchy matrix. Let $a=(a_1, \ldots, a_n)$ and $b=(b_1, \ldots, b_n)$ be two integer sequences. Suppose that $a$ is strictly decreasing and $b$ is strictly increasing, and for any $i,j$ we have $a_i>b_{n+1-i}$ and $a_i\neq b_j$. We define a matrix $C(a,b)=(c_{ij})_{i,j=1}^n$ by setting $$c_{ij}=\left\{
\begin{array}{ll}
{\displaystyle \frac{1}{a_i-b_j}}, & \mbox{ if $a_i>b_j$}\\[12pt]
0, & \mbox{ if $a_i<b_j$}
\end{array}
\right..$$
A matrix $M$ is called a *restricted Cauchy matrix* if there exist two integer sequences $a$ and $b$ satisfying the above conditions such that $M=C(a,b)$.
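A restricted Cauchy matrix is straightforward to build and test exactly; the sequences below are our own illustrative choices (note $a_2<b_3$ and $a_3<b_2$, so two entries vanish):

```python
from sympy import Matrix, Rational

def restricted_cauchy(a, b):
    n = len(a)
    assert all(a[i] > a[i + 1] for i in range(n - 1))   # a strictly decreasing
    assert all(b[i] < b[i + 1] for i in range(n - 1))   # b strictly increasing
    assert all(a[i] > b[n - 1 - i] for i in range(n))   # a_i > b_{n+1-i}
    assert all(ai != bj for ai in a for bj in b)
    return Matrix(n, n, lambda i, j:
                  Rational(1, a[i] - b[j]) if a[i] > b[j] else 0)

C = restricted_cauchy((7, 5, 2), (1, 3, 6))
print(C.det())   # -1/2: nonzero, as statement (ii) requires
```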
A matrix $M$ is said to be *singular* if $\det(M)=0$, and *nonsingular* otherwise. We now come to the main result of this paper.
\[eq-des\] The following two statements are equivalent:
\(i) The zrank conjecture is true for any skew partition.
\(ii) Any restricted Cauchy matrix is nonsingular.
Suppose that (ii) is true. For a skew partition $\lambda/\mu$, consider the noncrossing interval set $\mathcal{I}_0=\{(w_1,y_1),\,(w_2,y_2),\,\ldots,\,(w_r,y_r)\}$ of $\lambda/\mu$. Clearly, $w_i\neq y_j$ for $1\leq i,j\leq r$. Let $w=(w_1',\,w_2',\ldots,\,w_r')$ be the rearrangement of $(w_1,\,w_2,\ldots,\,w_r)$ in increasing order, and let $y=(y_1',\,y_2',\ldots,\,y_r')$ be the rearrangement of $(y_1,\,y_2,\ldots,\,y_r)$ in decreasing order. For $1\leq i\leq
r$, we have $y_i'>w_{r+1-i}'$ since the number of ${1\atop 0}$ columns in the first $\ell$ columns of the reduced code ${\rm
c}(\lambda/\mu)$ is greater than or equal to the number of ${0\atop 1}$ columns for $1\leq\ell\leq k$, where $k$ is the length of ${\rm c}(\lambda/\mu)$. Notice that the determinant $\det(d_{ij})_{i,j=1}^r$ appearing in (\[th-main-eq\]) is equal to the determinant of the restricted Cauchy matrix $C(y,w)$ up to a sign. By Theorem \[th-main\] we see that $$y(\lambda/\mu)\neq 0 \Leftrightarrow \det(d_{ij})_{i,j=1}^r\neq 0
\Leftrightarrow \det(C(y,w))\neq 0.$$ Since the matrix $C(y,w)$ is nonsingular, we have ${\rm
rank}(\lambda/\mu)={\rm zrank}(\lambda/\mu)$.
Now we proceed to prove (ii) by assuming that (i) is true. Given a restricted Cauchy matrix $C(a,b)$ of order $r$, without loss of generality, we may assume that $a$ and $b$ are sequences of positive integers. Let $\lambda$ be the partition with $\lambda_i=a_i-r+i$, and let $\mu$ be the partition with $\mu_i=b_{r+1-i}-r+i$. From $a_i>b_{r+1-i}$ we may deduce $\lambda_i>\mu_i$ for all $i$. Thus we can construct a skew diagram $\lambda/\mu$. Observe that the Jacobi-Trudi matrix $(h_{\lambda_i-\mu_j-i+j})$ of $s_{\lambda/\mu}$ does not have a column containing $1$ since $$\lambda_i-\mu_j-i+j=a_i-b_{r+1-j}\neq 0, \mbox{for $1\leq i,j\leq r$}.$$ It follows that ${\rm rank}(\lambda/\mu)=r$ from [@S2 Proposition]. Therefore, we have $$y(\lambda/\mu)=(t^{-r}s_{\lambda/\mu}(1^t))_{t=0}=\det\left((t^{-1}h_{\lambda_i-\mu_j-i+j}(1^t))_{t=0}\right)_{i,j=1}^{r},$$ which is the determinant $\det(C(a,b))$ up to a sign. If the zrank conjecture is true for $\lambda/\mu$, then we have $y(\lambda/\mu)\neq 0$, implying that $C(a,b)$ is nonsingular. This completes the proof.
We remark that we may restrict our attention to irreducible restricted Cauchy matrices for the verification of the zrank conjecture. In other words, if every irreducible restricted Cauchy matrix is nonsingular, then every restricted Cauchy matrix is nonsingular.
Special Cases
=============
In this section we consider several classes of restricted Cauchy matrices $C(a,b)=(c_{ij})_{i,j=1}^r$ for which we can prove that they are nonsingular.
**Class I.** For all $i,j$ we have $c_{ij}\neq 0$.
In this case, $(c_{ij})_{i,j=1}^r$ is a Cauchy matrix. Cauchy [@Muir] showed that $$\det\left(\frac{1}{a_i-b_j}\right)_{i,j=1}^r=\prod_{i<j}(a_i-a_j)\prod_{i<j}(b_j-b_i)\prod_{i,j}\frac{1}{a_i-b_j}.$$ It follows that $$\det(c_{ij})_{i,j=1}^r>0.$$
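Cauchy's product formula is easy to confirm against a direct determinant; the sequences here are our own small example:

```python
from functools import reduce
from operator import mul
from sympy import Matrix, Rational

a, b = (7, 5, 4), (1, 2, 3)    # a_i > b_j for all i, j: a genuine Cauchy matrix
r = len(a)

C = Matrix(r, r, lambda i, j: Rational(1, a[i] - b[j]))
lhs = C.det()

# Right-hand side of Cauchy's formula, assembled term by term.
num = reduce(mul, [Rational(a[i] - a[j]) for i in range(r) for j in range(i + 1, r)]
                + [Rational(b[j] - b[i]) for i in range(r) for j in range(i + 1, r)],
             Rational(1))
den = reduce(mul, [Rational(a[i] - b[j]) for i in range(r) for j in range(r)],
             Rational(1))
rhs = num / den
print(lhs == rhs, lhs)   # True 1/1440
```

The value $1/1440$ is positive, in accordance with $\det(c_{ij})_{i,j=1}^r>0$ for Class **I**.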
From the proof of [[@S2 Theorem 3.2 (b)]]{}, we get
\[sp\] For a connected skew diagram $\lambda/\mu$, if every row of the Jacobi-Trudi matrix that contains a $0$ also contains a $1$, then the matrix $(d_{ij})_{i,j=1}^r$ appearing in (\[th-main-eq\]) satisfies $d_{ij}\neq 0$ for all $i,j$.
Theorem \[th-main\] and Proposition \[sp\] yield another proof of [[@S2 Theorem 3.2]]{} of Stanley. Some skew partitions do not have the property stated in the above proposition, but the matrices $(d_{ij})_{i,j=1}^r$ are Cauchy matrices. For instance, taking $\lambda/\mu=(8,8,7,7,7,6,1)/(5,5,3,3,2)$, the Jacobi-Trudi identity reads $$\displaystyle
s_{(8,8,7,7,7,6,1)/(5,5,3,3,2)}=\displaystyle\begin{vmatrix}
h_3 & h_4 & h_7 & h_8 & h_{10} & h_{13} & h_{14}\\[8pt]
h_2 & h_3 & h_6 & h_7 & h_9 & h_{12} & h_{13}\\[8pt]
1 & h_1 & h_4 & h_5 & h_7 & h_{10} & h_{11}\\[8pt]
0 & 1 & h_3 & h_4 & h_6 & h_9 & h_{10}\\[8pt]
0 & 0 & h_2 & h_3 & h_5 & h_8 & h_9\\[8pt]
0 & 0 & 1 & h_1 & h_3 & h_6 & h_7\\[8pt]
0 & 0 & 0 & 0 & 0 & 1 & h_1
\end{vmatrix}.$$
**Class II.** For all $(i, j)\neq (r, r)$, we have $c_{ij}\neq 0$ and $c_{rr}=0$.
Let $$M=\prod_{i=1}^{r-1}\frac{(a_r-a_i)(b_i-b_r)}{(a_r-b_i)(a_i-b_r)}.$$ Since $b_r>a_r$, it is easy to show that $M>1$. We see that the restricted Cauchy matrix in this case is of the following form: $$(c_{ij})_{i,j=1}^r=
\begin{pmatrix}
\displaystyle\frac{1}{a_1-b_1} & \ldots & \displaystyle\frac{1}{a_1-b_{r-1}} & \displaystyle\frac{1}{a_1-b_r}\\
\displaystyle\frac{1}{a_2-b_1} & \ldots & \displaystyle\frac{1}{a_2-b_{r-1}} & \displaystyle\frac{1}{a_2-b_r}\\
\vdots & \ldots & \vdots & \vdots\\
\displaystyle\frac{1}{a_{r-1}-b_1} & \ldots & \displaystyle\frac{1}{a_{r-1}-b_{r-1}} & \displaystyle\frac{1}{a_{r-1}-b_r}\\
\displaystyle\frac{1}{a_r-b_1} & \ldots &
\displaystyle\frac{1}{a_r-b_{r-1}} & 0
\end{pmatrix}.$$ Then we have $$\begin{aligned}
\det(c_{ij})_{i,j=1}^r&=& \prod_{{i,j=1 \atop i<j
}}^{r}(a_i-a_j)(b_j-b_i)\prod_{i,j=1}^{r}\frac{1}{a_i-b_j}\\&&
-\frac{1}{a_r-b_{r}}\prod_{{i,j=1 \atop i<j
}}^{r-1}(a_i-a_j)(b_j-b_i)\prod_{i,j=1}^{r-1}\frac{1}{a_i-b_j}
\\ &=&
\frac{1}{a_r-b_{r}}\prod_{{i,j=1 \atop i<j
}}^{r-1}(a_i-a_j)(b_j-b_i)\prod_{i,j=1}^{r-1}\frac{1}{a_i-b_j}(
M-1).\end{aligned}$$ It follows that $$\det(c_{ij})_{i,j=1}^r< 0.$$
**Class III.** [$c_{ij}\neq 0$ except for $c_{rr},\,
c_{r,r-1}\, \mbox{and}\, c_{r-1,r}$.]{}
In this case, we have $a_r>b_{r-2}$ but $a_r<b_{r-1}$, $a_{r-2}>b_r$ but $a_{r-1}<b_r$. Recall that the *rank* of a matrix is the maximum number of linearly independent rows or columns of the matrix. For a matrix $M=(m_{ij})_{i,j=1}^r$, let $M^*$ be the matrix $(M_{ji})_{i,j=1}^r$, where $M_{ij}$ is the cofactor of $m_{ij}$ in the expansion $\det(M)=\sum_{i=1}^rm_{ij}M_{ij}$. Recall the following property:
$$\label{bd-prop}
{\rm rank}(M^{*})=\left\{
\begin{array}{ll}
r, & \mbox{ if ${\rm rank}(M)=r$}\\[8pt]
1, & \mbox{ if ${\rm rank}(M)=r-1$}\\[8pt]
0, & \mbox{ if ${\rm rank}(M)<r-1$}
\end{array}
\right.$$
We now consider the rank of $C^*=(C_{ji})_{i,j=1}^r$, where $C_{ij}$ is the cofactor of $c_{ij}$ in the expansion $\det(C(a,b))=\sum_{i=1}^rc_{ij}C_{ij}$. Recall that the minor $C_{rr}$ is the determinant of the submatrix obtained from $C(a,b)$ by deleting row $r$ and column $r$, which turns out to be a restricted Cauchy matrix of Class **I**, and the underlying matrices of $C_{r-1, r-1},\, C_{r, r-1},\,C_{r-1, r}$ are restricted Cauchy matrices of Class **II**. Thus we have $$C_{r, r}>0,\quad C_{r-1, r-1}<0,\quad C_{r, r-1}>0 \quad \mbox{and} \quad C_{r-1, r}>0.$$ This implies that ${\rm rank}(C^*)\geq 2$. Hence ${\rm rank}(C(a,b))=r$ because of (\[bd-prop\]), namely $\det(c_{ij})_{i,j=1}^r\neq 0$.
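A concrete Class **III** instance with $r=3$ (our own sequences $a=(7,4,2)$, $b=(1,3,6)$, chosen so that exactly $c_{33}$, $c_{32}$ and $c_{23}$ vanish) confirms the nonsingularity:

```python
from sympy import Matrix, Rational

a, b = (7, 4, 2), (1, 3, 6)    # a_3 < b_2 and a_2 < b_3: the Class III zero pattern
C = Matrix(3, 3, lambda i, j: Rational(1, a[i] - b[j]) if a[i] > b[j] else 0)
print(C)        # zeros exactly at positions (2,3), (3,2), (3,3)
print(C.det())  # -1: nonzero, as the cofactor argument guarantees
```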
**Class IV.** [For all $i\leq r$, $j\leq r-1$ we have $c_{ij}\neq 0$; $c_{1r}\neq 0$, and $c_{2r}\neq 0$; $c_{ir}=0$ if $i>2$.]{}
In this case, the restricted Cauchy matrix has the form $$(c_{ij})_{i,j=1}^r=
\begin{pmatrix}
\displaystyle\frac{1}{a_1-b_1} & \ldots & \displaystyle\frac{1}{a_1-b_{r-1}} & \displaystyle\frac{1}{a_1-b_r}\\
\displaystyle\frac{1}{a_2-b_1} & \ldots & \displaystyle\frac{1}{a_2-b_{r-1}} & \displaystyle\frac{1}{a_2-b_r}\\
\displaystyle\frac{1}{a_3-b_1} & \ldots & \displaystyle\frac{1}{a_3-b_{r-1}} & 0\\
\vdots & \ldots & \vdots & \vdots\\
\displaystyle\frac{1}{a_r-b_1} & \ldots &
\displaystyle\frac{1}{a_r-b_{r-1}} & 0
\end{pmatrix}.$$ Expanding along the last column, we get $$\begin{aligned}
\det(c_{ij})_{i,j=1}^r & = & (-1)^{r+1} \frac{1}{a_1-b_r}
\prod_{2\leq i<j\leq r} (a_i-a_j)\prod_{1\leq i<j\leq
r-1}(b_j-b_i) \prod_{{2\leq i\leq r}\atop {1\leq j\leq r-1}}
\frac{1}{a_i-b_j}
\\&&+ (-1)^{r+2} \frac{1}{a_2-b_r} \prod_{{i\neq 2, j\neq 2}\atop {1\leq i<j\leq r}} (a_i-a_j)\prod_{1\leq i<j\leq r-1}(b_j-b_i) \prod_{{i\neq 2}\atop
{1\leq j\leq r-1}} \frac{1}{a_i-b_j}
\\ & = & (-1)^{r+1}\prod_{1\leq i<j\leq r}(a_i-a_j)\prod_{1\leq i<j\leq r-1}(b_j-b_i)\prod_{{i,j=1}\atop {j\neq r}}^{r}
\frac{1}{a_i-b_j}N,\end{aligned}$$ where $$N=\frac{f(a_1)-f(a_2)}{a_1-a_2}$$ and $$f(x)=\frac{(x-b_1)(x-b_2)\cdots(x-b_{r-1})}{(x-b_r)(x-a_3)\cdots(x-a_r)}.$$ Let $\delta=a_1-a_2$. We obtain $$\begin{aligned}
\frac{f(a_1)}{f(a_2)} & = &
\frac{\displaystyle\frac{(a_1-b_1)(a_1-b_2)\cdots(a_1-b_{r-1})}{(a_1-b_r)(a_1-a_3)\cdots(a_1-a_r)}}
{\displaystyle\frac{(a_2-b_1)(a_2-b_2)\cdots(a_2-b_{r-1})}{(a_2-b_r)(a_2-a_3)\cdots(a_2-a_r)}}\\[5pt]
&=&
\frac{\displaystyle\frac{(a_1-b_1)}{(a_2-b_1)}\frac{(a_1-b_2)}{(a_2-b_2)}\cdots\frac{(a_1-b_{r-1})}{(a_2-b_{r-1})}}{\displaystyle
\frac{(a_1-b_r)}{(a_2-b_r)}\frac{(a_1-a_3)}{(a_2-a_3)}\cdots\frac{(a_1-a_{r})}{(a_2-a_{r})}}\\[5pt]
&=&
\frac{\displaystyle\frac{(\delta+a_2-b_1)}{(a_2-b_1)}\frac{(\delta+a_2-b_2)}{(a_2-b_2)}\cdots\frac{(\delta+a_2-b_{r-1})}{(a_2-b_{r-1})}}{
\displaystyle\frac{(\delta+a_2-b_r)}{(a_2-b_r)}\frac{(\delta+a_2-a_3)}{(a_2-a_3)}\cdots\frac{(\delta+a_2-a_{r})}{(a_2-a_{r})}}.\end{aligned}$$ Let $s\in\{b_1, \ldots, b_{r-1}\}$ and $s'\in\{a_3, \ldots, a_{r},
b_r\}$. Then we have $s<s'$ and $$\frac{(\delta+a_2-s)}{(a_2-s)}<\frac{(\delta+a_2-s')}{(a_2-s')}.$$ It follows that $f(a_1)<f(a_2)$, namely $N<0$. Thus we have $\det(c_{ij})_{i,j=1}^r> 0$ if $r$ is even and $\det(c_{ij})_{i,j=1}^r< 0$ if $r$ is odd.
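For $r=3$ (odd) a Class **IV** example with our own sequences indeed has negative determinant:

```python
from sympy import Matrix, Rational

a, b = (7, 5, 3), (1, 2, 4)    # only c_{33} vanishes: Class IV with r = 3
C = Matrix(3, 3, lambda i, j: Rational(1, a[i] - b[j]) if a[i] > b[j] else 0)
print(C.det())   # -7/180: negative, as predicted for odd r
```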
[**Acknowledgments.**]{} This work was done under the auspices of the 973 Project on Mathematical Mechanization, the Ministry of Education, the Ministry of Science and Technology, and the National Science Foundation of China. We thank Professor Richard Stanley for bringing this problem to our attention and for valuable comments. We also thank the referee for the very pertinent comments and suggestions which helped to significantly improve this paper.
[1]{}
C. Bessenrodt, On hooks of Young diagrams, *Ann. Combin.* **2** (1998), 103-110.
C. Bessenrodt, On hooks of skew Young diagrams and bars, *Ann. Combin.* **5** (2001), 37-49.
William Y. C. Chen, G.-G. Yan, and Arthur L. B. Yang, Transformations of border strips and Schur function determinants, *J. Algebraic Combin.*, to appear.
V. Drinfeld, Hopf algebras and the Yang-Baxter equation, *Soviet Math. Dokl.* **32** (1985), 254-258.
I. Gessel and G. Viennot, Binomial determinants, paths, and hook length formulae, *Adv. Math.* **58** (1985), 300-321.
I. Gessel and G. Viennot, Determinants, paths, and plane partitions, preprint, 1989; available at [http://www.cs.brandeis.edu/\~ira]{}.
A. M. Hamel and I. P. Goulden, Planar decompositions of tableaux and Schur function determinants, *European J. Combin.* **16** (1995), 461-477.
T. Muir, A Treatise on the Theory of Determinants, revised and enlarged by W. H. Metzler, Dover, New York, 1960.
M. Nazarov and V. Tarasov, On irreducibility of tensor products of Yangian modules associated with skew Young diagrams, *Duke Math. J.* **112** (2002), 343-378.
R. P. Stanley, Enumerative Combinatorics, vol. 2, Cambridge University Press, New York/Cambridge, 1999.
R. P. Stanley, The rank and minimal border strip decompositions of a skew partition, *J. Combin. Theory Ser. A* **100** (2002), 349-375.
R. P. Stanley, private communication.
---
abstract: 'The swimming of an assembly of rigid spheres immersed in a viscous fluid of infinite extent is studied in low Reynolds number hydrodynamics. The instantaneous swimming velocity and rate of dissipation are expressed in terms of the time-dependent displacements of sphere centers about their collective motion. For small amplitude swimming with periodically oscillating displacements, optimization of the mean swimming speed at given mean power leads to an eigenvalue problem involving a velocity matrix and a power matrix. The corresponding optimal stroke permits generalization to large amplitude motion in a model of spheres with harmonic interactions and corresponding actuating forces. The method allows straightforward calculation of the swimming performance of structures modeled as assemblies of interacting rigid spheres. A model of three collinear spheres with motion along the common axis is studied as an example.'
author:
- 'B. U. Felderhof'
title: Efficient swimming of an assembly of rigid spheres at low Reynolds number
---
\[1\]Introduction
=================
In earlier work [@1] we presented a method to analyze the performance of a microswimmer modeled as an assembly of $N$ rigid spheres immersed in a viscous incompressible fluid of infinite extent, with a no-slip boundary condition on the surface of each sphere. The motion of the whole system is determined by the Stokes equations of low Reynolds number hydrodynamics. The swimming motion of such a system was discussed earlier by Alouges et al. [@2],[@3]. The particular case of collinear spheres was studied by Vladimirov [@4] using a two-timing method.
For small displacements of the spheres from fixed positions in the collective rest frame the time-averaged swimming velocity and rate of dissipation can be evaluated in terms of a $(3N-3)\times(3N-3)$ velocity matrix and a $(3N-3)\times(3N-3)$ power matrix, which can be constructed from the mobility matrix for each relative rest configuration [@1]. Optimization of the velocity at fixed power leads to a generalized eigenvalue problem involving the two matrices. Optimal efficiency corresponds to the maximum eigenvalue.
In a model with harmonically interacting spheres the optimal stroke of small amplitude motion can be used to calculate a set of corresponding actuating forces. Large amplitude motion can be studied by solving the equations of Stokesian dynamics for the same actuating forces multiplied by a factor. The mean swimming velocity and the mean power of the large amplitude motion can then be determined numerically from the limit cycle of the solution.
In the following we present an alternative method based on a purely kinematic point of view. Expressions are derived for the instantaneous swimming velocity and power in terms of the sphere displacements from the center and their instantaneous time derivative. This allows calculation of the mean swimming velocity and mean power for given periodic stroke of any amplitude. The present method also provides an alternative derivation of the velocity matrix and power matrix of small amplitude motion.
For large amplitude swimming the present method is more straightforward than the earlier one [@1], since it does not require numerical solution of the equations of Stokesian dynamics. A large amplitude stroke may be determined by amplifying the optimal stroke found from the eigenvalue problem of the small amplitude theory for a given equilibrium structure. The instantaneous swimming velocity and power are then determined from explicit expressions in terms of the given displacements. Subsequently the mean swimming velocity and mean power can be found by integration over a period.
Both methods are tested on a model of three collinear spheres with motion along the common axis, as formulated by Najafi and Golestanian [@5] and studied in detail by Golestanian and Ajdari [@6]. The two methods of calculation lead to similar numerical results for a wide range of amplitude.
\[2\]Displacement and swimming velocity
=======================================
We consider a set of $N$ rigid spheres of radii $a_1,...,a_N$ immersed in a viscous incompressible fluid of shear viscosity $\eta$. The fluid is of infinite extent in all directions. At low Reynolds number and on a slow time scale the flow velocity ${\mbox{\boldmath $v$}}$ and the pressure $p$ satisfy the Stokes equations [@7] $$\label{2.1}\eta\nabla^2{\mbox{\boldmath $v$}}-\nabla p=0,\qquad\nabla\cdot{\mbox{\boldmath $v$}}=0.$$ The flow velocity ${\mbox{\boldmath $v$}}$ is assumed to satisfy the no-slip boundary condition on the surface of the spheres. The fluid is set in motion by time-dependent motions of the spheres. At each time $t$ the velocity field ${\mbox{\boldmath $v$}}({\mbox{\boldmath $r$}},t)$ tends to zero at infinity, and the pressure $p({\mbox{\boldmath $r$}},t)$ tends to the constant ambient pressure $p_0$. We shall study periodic relative motions which lead to swimming motion of the collection of spheres.
We assume that the motion is caused by time-dependent periodic forces ${\mbox{\boldmath $F$}}_1(t),...,{\mbox{\boldmath $F$}}_N(t)$ which satisfy the condition that their sum vanishes at any time. The forces are transmitted by the spheres to the fluid. The spheres can rotate freely, so that they exert no torques on the fluid. Hence the rotational velocities ${\mbox{\boldmath $\Omega$}}_1(t),..., {\mbox{\boldmath $\Omega$}}_N(t)$ can be ignored. The translational velocities ${\mbox{\boldmath $U$}}_1,...,{\mbox{\boldmath $U$}}_N$ are linearly related to the forces, $$\label{2.2}{\mbox{\boldmath $U$}}_j=\sum^N_{k=1}{\mbox{\boldmath $\mu$}}^{tt}_{jk}\cdot{\mbox{\boldmath $F$}}_k,\qquad j=1,...,N,$$ with translational mobility tensors ${\mbox{\boldmath $\mu$}}^{tt}_{jk}$. The tensors have many-body character and depend in principle on the positions of all particles [@8]-[@10]. By translational invariance only relative distance vectors $\{{\mbox{\boldmath $R$}}_i-{\mbox{\boldmath $R$}}_j\}$ occur in the functional dependence. We abbreviate eq. (2.2) as $$\label{2.3}{{\bf\sf U}}={\mbox{\boldmath $\mu$}}\cdot{{\bf\sf F}},$$ with a symmetric $3N\times 3N$ mobility matrix ${\mbox{\boldmath $\mu$}}$. Conversely $$\label{2.4}{{\bf\sf F}}={\mbox{\boldmath $\zeta$}}\cdot{{\bf\sf U}},$$ with friction matrix ${\mbox{\boldmath $\zeta$}}$. The friction matrix is the inverse of the mobility matrix, ${\mbox{\boldmath $\zeta$}}={\mbox{\boldmath $\mu$}}^{-1}$, and is also symmetric.
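A minimal numerical sketch of eqs. (2.2)-(2.4), using the far-field Oseen (point-force) approximation for the off-diagonal mobility tensors — an assumption for illustration only; the many-body tensors [@8]-[@10] used in the paper contain higher-order corrections:

```python
import numpy as np

def mobility_matrix(centers, radii, eta=1.0):
    # 3N x 3N translational mobility, Oseen approximation:
    # mu_jj = I/(6 pi eta a_j),  mu_jk = (I + rhat rhat)/(8 pi eta r) for j != k.
    N = len(radii)
    mu = np.zeros((3 * N, 3 * N))
    for j in range(N):
        mu[3*j:3*j+3, 3*j:3*j+3] = np.eye(3) / (6 * np.pi * eta * radii[j])
        for k in range(j + 1, N):
            rvec = centers[j] - centers[k]
            r = np.linalg.norm(rvec)
            rhat = rvec / r
            blk = (np.eye(3) + np.outer(rhat, rhat)) / (8 * np.pi * eta * r)
            mu[3*j:3*j+3, 3*k:3*k+3] = blk
            mu[3*k:3*k+3, 3*j:3*j+3] = blk.T
    return mu

centers = np.array([[0.0, 0, 0], [3.0, 0, 0], [6.0, 0, 0]])
mu = mobility_matrix(centers, radii=[1.0, 1.0, 1.0])
zeta = np.linalg.inv(mu)              # friction matrix, eq. (2.4)
print(np.allclose(mu, mu.T))          # True: the mobility matrix is symmetric
print(np.allclose(zeta, zeta.T))      # True: so is the friction matrix
```

The collinear geometry above mimics the three-sphere model of Sec. 4; forces then follow from velocities via ${{\bf\sf F}}={\mbox{\boldmath $\zeta$}}\cdot{{\bf\sf U}}$.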
The positions of the centers change as a function of time. The equations of motion of Stokesian dynamics read $$\label{2.5}\frac{d{\mbox{\boldmath $R$}}_j}{dt}={\mbox{\boldmath $U$}}_j({\mbox{\boldmath $R$}}_1,...,{\mbox{\boldmath $R$}}_N,t),\qquad j=1,...,N.$$ The explicit time-dependence on the right originates in the time-dependence of the forces ${{\bf\sf F}}(t)$. In the swimming motion the forces are periodic in time with period $T$, so that ${{\bf\sf F}}(t+T)={{\bf\sf F}}(t)$. As mentioned, we impose the condition that at no time is there a net force acting on the set of spheres, so that $$\label{2.6}\sum^{N}_{j=1}{\mbox{\boldmath $F$}}_j(t)=0.$$ We look for a solution of eq. (2.5) corresponding to swimming motion, of the form $$\label{2.7}{\mbox{\boldmath $R$}}_j(t)={\mbox{\boldmath $S$}}_{j0}+\int^t_0{\mbox{\boldmath $U$}}(t')\;dt'+{\mbox{\boldmath $\delta$}}_j(t),\qquad j=1,...,N,$$ where the first two terms describe the collective motion of the configuration ${{\bf\sf S}}_0=({\mbox{\boldmath $S$}}_{10},...,{\mbox{\boldmath $S$}}_{N0})$ with swimming velocity ${\mbox{\boldmath $U$}}(t)$ caused by the displacements $\{{\mbox{\boldmath $\delta$}}_j(t)\}$. We require that the latter are periodic with period $T$, and exclude uniform displacements, so that the $3N$-dimensional vector ${{\bf\sf d}}(t)=\{ {\mbox{\boldmath $\delta$}}_1(t),...,{\mbox{\boldmath $\delta$}}_N(t)\}$ satisfies $$\label{2.8}{{\bf\sf d}}(t)\cdot{{\bf\sf u}}_\alpha=0,\qquad(\alpha=x,y,z),$$ where the symbol ${{\bf\sf u}}_x$ denotes a $3N$-dimensional vector with $1$ on the $x$ positions, $0$ on the $y,z$ positions, and cyclic. 
Periodicity implies $$\label{2.9}{\mbox{\boldmath $U$}}(t+T)={\mbox{\boldmath $U$}}(t),\qquad{{\bf\sf d}}(t+T)={{\bf\sf d}}(t).$$ The mean swimming velocity is defined as $$\label{2.10}\overline{{\mbox{\boldmath $U$}}}_{sw}=\frac{1}{T}\int^T_0{\mbox{\boldmath $U$}}(t)\;dt.$$ We require that ${{\bf\sf d}}(t)$ is purely oscillating, so that $$\label{2.11}\int^T_0{{\bf\sf d}}(t)\;dt={{\bf\sf 0}}.$$
We show in the following that the instantaneous swimming velocity ${\mbox{\boldmath $U$}}(t)$ can be calculated from the displacement vector ${{\bf\sf d}}(t)$ and its time derivative $\dot{{{\bf\sf d}}}(t)$. Later we compare the present kinematic description to a dynamical model, in which the forces are decomposed into actuating forces and elastic restoring forces.
\[3\] Swimming velocity and dissipation
======================================
By substitution of eq. (2.7) into eqs. (2.4) and (2.5) one finds $$\label{3.1}{{\bf\sf F}}={\mbox{\boldmath $\zeta$}}\cdot(U_\beta{{\bf\sf u}}_\beta+\dot{{{\bf\sf d}}}),$$ where summation over repeated greek indices is implied. The condition (2.6) can be expressed as ${{\bf\sf u}}_\alpha\cdot{{\bf\sf F}}=0$, so that $$\label{3.2}Z_{\alpha\beta}U_\beta=-{{\bf\sf u}}_\alpha\cdot{\mbox{\boldmath $\zeta$}}\cdot\dot{{{\bf\sf d}}}$$ with friction tensor $$\label{3.3}Z_{\alpha\beta}={{\bf\sf u}}_\alpha\cdot{\mbox{\boldmath $\zeta$}}\cdot{{\bf\sf u}}_\beta.$$ Hence we obtain the swimming velocity $$\label{3.4}U_\alpha=-M_{\alpha\beta}{{\bf\sf u}}_\beta\cdot{\mbox{\boldmath $\zeta$}}\cdot\dot{{{\bf\sf d}}},$$ where $M_{\alpha\beta}$ is the inverse of the friction tensor. The $3N\times 3N$ friction matrix ${\mbox{\boldmath $\zeta$}}$ depends only on the instantaneous relative positions. Therefore the friction tensor ${\mbox{\boldmath $Z$}}$ and the mobility tensor ${\mbox{\boldmath $M$}}$ depend on the displacement vector ${{\bf\sf d}}$, but not on the central coordinates $R_\alpha={{\bf\sf u}}_\alpha\cdot{{\bf\sf R}}/N$.
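As an illustration of eqs. (3.2)–(3.4), the swimming velocity follows from a single $3\times 3$ linear solve once the friction matrix is known. The following Python sketch assumes a given symmetric $3N\times 3N$ friction matrix and displacement velocity; the function name is ours, and the matrices passed to it are illustrative rather than those of a physical swimmer.

```python
import numpy as np

def swimming_velocity(zeta, ddot):
    """Instantaneous swimming velocity, eq. (3.4):
    U_alpha = -M_{alpha beta} u_beta . zeta . ddot, with M = Z^{-1}.

    zeta : (3N, 3N) symmetric friction matrix
    ddot : (3N,) time derivative of the displacement vector d(t)
    """
    n3 = zeta.shape[0]
    u = np.zeros((3, n3))
    for alpha in range(3):
        u[alpha, alpha::3] = 1.0         # the vectors u_x, u_y, u_z of eq. (2.8)
    Z = u @ zeta @ u.T                   # friction tensor, eq. (3.3)
    rhs = u @ (zeta @ ddot)              # u_alpha . zeta . ddot
    return -np.linalg.solve(Z, rhs)      # U = -M . rhs, eq. (3.4)
```

For a diagonal friction matrix the force-free condition makes a stroke with zero net displacement velocity produce no swimming, as expected.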
By series expansion of the mobility tensor ${\mbox{\boldmath $M$}}$ and the friction matrix ${\mbox{\boldmath $\zeta$}}$ in powers of the displacement vector ${{\bf\sf d}}$ we obtain a corresponding expansion of the swimming velocity $$\label{3.5}{\mbox{\boldmath $U$}}={\mbox{\boldmath $U$}}^{(1)}+{\mbox{\boldmath $U$}}^{(2)}+{\mbox{\boldmath $U$}}^{(3)}+...,$$ with first order term $$\label{3.6}U^{(1)}_\alpha=-M^0_{\alpha\beta}{{\bf\sf u}}_\beta\cdot{\mbox{\boldmath $\zeta$}}^0\cdot\dot{{{\bf\sf d}}},$$ with mobility tensor $M^0_{\alpha\beta}$ and friction matrix ${\mbox{\boldmath $\zeta$}}^0$ calculated for the configuration ${{\bf\sf S}}_0$. By periodicity of ${{\bf\sf d}}(t)$ the time average of the first order swimming velocity vanishes, $\overline{{\mbox{\boldmath $U$}}^{(1)}}={\mbox{\boldmath $0$}}$.
We introduce the friction vectors $$\label{3.7}{{\bf\sf f}}_\alpha={{\bf\sf u}}_\alpha\cdot{\mbox{\boldmath $\zeta$}}={\mbox{\boldmath $\zeta$}}\cdot{{\bf\sf u}}_\alpha,$$ where we have used the symmetry of the friction matrix ${\mbox{\boldmath $\zeta$}}$. The vectors are related to the friction tensor by $$\label{3.8}{{\bf\sf u}}_\alpha\cdot{{\bf\sf f}}_\beta={{\bf\sf u}}_\beta\cdot{{\bf\sf f}}_\alpha=Z_{\alpha\beta}.$$ From the Taylor series expansion of eq. (3.4) we find that the second order instantaneous swimming velocity can be expressed as $$\label{3.9}U^{(2)}_\alpha=-{{\bf\sf d}}\cdot {{\bf\sf V}}^\alpha\big{|}_0\cdot\dot{{{\bf\sf d}}},$$ with matrix ${{\bf\sf V}}^\alpha$ given by $$\label{3.10}{{\bf\sf V}}^\alpha={\mbox{\boldmath $\nabla$}}\big[M_{\alpha\beta}{{\bf\sf f}}_\beta\big],$$ where ${\mbox{\boldmath $\nabla$}}$ is the gradient vector in $3N$-dimensional configuration space. The notation $\big|_0$ in eq. (3.9) indicates that the matrix-function is to be evaluated at ${{\bf\sf R}}={{\bf\sf S}}_0$.
The expression on the right of eq. (3.10) may be written as a sum of two terms, $$\label{3.11}{{\bf\sf V}}^\alpha=({\mbox{\boldmath $\nabla$}}M_{\alpha\beta}){{\bf\sf f}}_\beta+M_{\alpha\beta}{{\bf\sf D}}^\beta,$$ with derivative friction matrix $$\label{3.12}{{\bf\sf D}}^\beta={\mbox{\boldmath $\nabla$}}{{\bf\sf f}}_\beta.$$ We introduce the gradient vectors $$\label{3.13}{{\bf\sf g}}^\beta_\gamma={{\bf\sf D}}^\beta\cdot{{\bf\sf u}}_\gamma={\mbox{\boldmath $\nabla$}}Z_{\beta\gamma},$$ and use the identity $$\label{3.14}Z_{\alpha\gamma}M_{\gamma\beta}=\delta_{\alpha\beta}$$ to show that $$\label{3.15}{\mbox{\boldmath $\nabla$}}M_{\alpha\beta}=-M_{\alpha\gamma}{{\bf\sf g}}^\gamma_\delta M_{\delta\beta}.$$ Then eq. (3.11) may be expressed alternatively as $$\label{3.16}{{\bf\sf V}}^\alpha=M_{\alpha\beta}\breve{{{\bf\sf D}}}^{\beta},$$ with reduced derivative friction matrix $$\label{3.17}\breve{{{\bf\sf D}}}^{\beta}={{\bf\sf D}}^\beta-{{\bf\sf g}}^\beta_\gamma M_{\gamma\delta}{{\bf\sf f}}_\delta.$$ This matrix has the property $$\label{3.18}\breve{{{\bf\sf D}}}^{\beta}\cdot{{\bf\sf u}}_\alpha=0.$$ From the fact that ${\mbox{\boldmath $\zeta$}}$ depends only on relative coordinates it follows that ${{\bf\sf u}}_\alpha\cdot{\mbox{\boldmath $\nabla$}}{\mbox{\boldmath $\zeta$}}={{\bf\sf 0}}$, and hence $$\label{3.19}{{\bf\sf u}}_\alpha\cdot{{\bf\sf D}}^{\beta}={{\bf\sf 0}},\qquad{{\bf\sf u}}_\alpha\cdot{{\bf\sf g}}^\beta_\gamma=0.$$ As a consequence $$\label{3.20}{{\bf\sf u}}_\alpha\cdot{{\bf\sf V}}^{\beta}={{\bf\sf 0}},\qquad{{\bf\sf V}}^\alpha\cdot{{\bf\sf u}}_\beta={{\bf\sf 0}}.$$
The time-dependent rate of dissipation can be expressed in the same matrix formalism. The rate of dissipation is given by $$\label{3.21}\mathcal{D}={{\bf\sf F}}\cdot{{\bf\sf U}}={{\bf\sf F}}\cdot\dot{{{\bf\sf d}}},$$ since ${{\bf\sf F}}\cdot{{\bf\sf u}}_\alpha=0$ on account of the condition eq. (2.6). Substituting eq. (3.1) we find $$\label{3.22}\mathcal{D}=\dot{{{\bf\sf d}}}\cdot{\mbox{\boldmath $\zeta$}}\cdot\dot{{{\bf\sf d}}}+U_\alpha\dot{{{\bf\sf d}}}\cdot{{\bf\sf f}}_\alpha.$$ It follows from eq. (3.4) that the rate of dissipation is at least of second order in ${{\bf\sf d}}$ and $\dot{{{\bf\sf d}}}$. To second order, by use of eq. (3.6), $$\label{3.23}\mathcal{D}^{(2)}=\dot{{{\bf\sf d}}}\cdot{{\bf\sf P}}\cdot\dot{{{\bf\sf d}}}$$ with matrix $$\label{3.24}{{\bf\sf P}}={\mbox{\boldmath $\zeta$}}^0-M^0_{\alpha\beta}{{\bf\sf f}}^0_\alpha{{\bf\sf f}}^0_\beta.$$ The matrix is symmetric and has the properties $$\label{3.25}{{\bf\sf u}}_\alpha\cdot{{\bf\sf P}}={{\bf\sf 0}},\qquad{{\bf\sf P}}\cdot{{\bf\sf u}}_\alpha={{\bf\sf 0}}.$$ The properties eq. (3.20) and (3.25) allow us to reduce the dimension of the matrix description by three by the introduction of center and relative coordinates.
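A corresponding sketch evaluates the rate of dissipation of eq. (3.21)/(3.22) directly; `dissipation` is an illustrative helper of ours, and the diagonal friction matrix below is a toy example, not that of a physical swimmer.

```python
import numpy as np

def dissipation(zeta, ddot, U):
    """Rate of dissipation, eq. (3.22):
    D = ddot . zeta . ddot + U_alpha (ddot . f_alpha), with f_alpha = zeta . u_alpha."""
    n3 = zeta.shape[0]
    u = np.zeros((3, n3))
    for alpha in range(3):
        u[alpha, alpha::3] = 1.0   # the vectors u_x, u_y, u_z of eq. (2.8)
    f = zeta @ u.T                 # friction vectors f_alpha as columns, eq. (3.7)
    return ddot @ zeta @ ddot + U @ (f.T @ ddot)

# toy check for N = 2 spheres: zeta = 2*I, stroke velocity along x of sphere 1
zeta = 2.0 * np.eye(6)
ddot = np.array([1.0, 0, 0, 0, 0, 0])
U = np.array([-0.5, 0.0, 0.0])     # swimming velocity from eq. (3.4) for this case
D = dissipation(zeta, ddot, U)
```

For this toy configuration the result coincides with ${{\bf\sf F}}\cdot\dot{{{\bf\sf d}}}$ of eq. (3.21) and is positive.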
\[4\] Velocity matrix and power matrix
============================================
The center of the assembly is given by $$\label{4.1}{\mbox{\boldmath $R$}}=\frac{1}{N}\sum_{j=1}^N{\mbox{\boldmath $R$}}_j=\frac{1}{N}\;{\mbox{\boldmath $e$}}_\alpha{{\bf\sf u}}_\alpha\cdot{{\bf\sf R}}$$ with Cartesian unit vectors ${\mbox{\boldmath $e$}}_\alpha$. We define relative coordinates $\{{\mbox{\boldmath $r$}}_j\}$ as $$\begin{aligned}
\label{4.2}{\mbox{\boldmath $r$}}_1&=&{\mbox{\boldmath $R$}}_2-{\mbox{\boldmath $R$}}_1,\qquad{\mbox{\boldmath $r$}}_2={\mbox{\boldmath $R$}}_3-{\mbox{\boldmath $R$}}_2,\qquad ...,\nonumber\\
{\mbox{\boldmath $r$}}_{N-1}&=&{\mbox{\boldmath $R$}}_N-{\mbox{\boldmath $R$}}_{N-1}, \qquad j=1,...,N-1,\end{aligned}$$ and the corresponding $(3N-3)$-vector ${{\bf\sf r}}=({\mbox{\boldmath $r$}}_1,...,{\mbox{\boldmath $r$}}_{N-1})$. The $3N$-vector $({\mbox{\boldmath $R$}},{{\bf\sf r}})$ is related to the vector ${{\bf\sf R}}$ by a transformation matrix ${{\bf\sf T}}$ according to $$\label{4.3}({\mbox{\boldmath $R$}},{{\bf\sf r}})={{\bf\sf T}}\cdot{{\bf\sf R}}$$ with explicit form given by eqs. (4.1) and (4.2).
The matrices ${{\bf\sf V}}^\alpha$ and ${{\bf\sf P}}$ are transformed to $$\label{4.4}{{\bf\sf V}}^\alpha_T={{\bf\sf T}}\cdot{{\bf\sf V}}^\alpha\cdot{{\bf\sf T}}^{-1},\qquad{{\bf\sf P}}_T={{\bf\sf T}}\cdot{{\bf\sf P}}\cdot{{\bf\sf T}}^{-1}.$$ The first three rows of ${{\bf\sf T}}$ consist of ${{\bf\sf u}}_\alpha/N$ and the first three columns of ${{\bf\sf T}}^{-1}$ consist of ${{\bf\sf u}}_\alpha$. It follows from the properties eq. (3.20) and (3.25) that the first three rows and columns of the transformed matrices ${{\bf\sf V}}^\alpha_T$ and ${{\bf\sf P}}_T$ vanish identically. Hence in this representation we can drop the center coordinates and truncate the matrices by erasing the first three rows and columns. We denote the truncated $(3N-3)\times(3N-3)$-matrices as $\hat{{{\bf\sf V}}}_T^\alpha$ and $\hat{{{\bf\sf P}}}_T$ and define displacements ${\mbox{\boldmath $\xi$}}$ in relative space by $$\label{4.5}({\mbox{\boldmath $0$}},{\mbox{\boldmath $\xi$}})={{\bf\sf T}}\cdot{{\bf\sf d}}.$$ With this notation the second order swimming velocity and rate of dissipation are given by $$\label{4.6}U^{(2)}_\alpha={\mbox{\boldmath $\xi$}}\cdot{{\bf\sf C}}_T\cdot\hat{{{\bf\sf V}}}_T^\alpha\cdot\dot{{\mbox{\boldmath $\xi$}}},\qquad \mathcal{D}^{(2)}=\dot{{\mbox{\boldmath $\xi$}}}\cdot{{\bf\sf C}}_T\cdot\hat{{{\bf\sf P}}}_T\cdot\dot{{\mbox{\boldmath $\xi$}}},$$ with the matrix $$\label{4.7}{{\bf\sf C}}_T=[\widetilde{{{\bf\sf T}}^{-1}}\cdot{{\bf\sf T}}^{-1}]\;{\mbox{\boldmath $\hat{}$}}.$$ This $(3N-3)\times(3N-3)$ dimensional matrix consists of numerical coefficients and is obtained from the corresponding $3N\times 3N$ matrix by truncation, as indicated by the final hat symbol.
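As an illustration, for spheres restricted to the $x$ axis the transformation eq. (4.3) reduces to an $N\times N$ matrix acting on the $x$ coordinates. The following Python sketch constructs it (the function name is ours; the $N=3$ case reproduces the matrix ${{\bf\sf T}}$ of eq. (5.3)).

```python
import numpy as np

def transform_matrix(N):
    """One-dimensional version of eq. (4.3): maps positions (x_1,...,x_N)
    to (R, r_1,...,r_{N-1}) with center R = mean(x) and r_j = x_{j+1} - x_j."""
    T = np.zeros((N, N))
    T[0, :] = 1.0 / N              # first row: the center coordinate
    for j in range(1, N):
        T[j, j - 1] = -1.0         # difference rows: relative coordinates
        T[j, j] = 1.0
    return T

T3 = transform_matrix(3)
Tinv = np.linalg.inv(T3)
# the first column of T^{-1} is the uniform vector, consistent with the
# statement below eq. (4.4) that the first columns of T^{-1} consist of u_alpha
```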
We consider in particular harmonically varying displacements of the form $$\label{4.8}{{\bf\sf d}}(t)={{\bf\sf d}}_s\sin\omega t+{{\bf\sf d}}_c\cos\omega t,$$ with a corresponding expression for ${\mbox{\boldmath $\xi$}}(t)$. The time-averaged second order swimming velocity and rate of dissipation are then given by $$\begin{aligned}
\label{4.9}\overline{U^{(2)}_\alpha}&=&\frac{1}{2}\omega\big[{\mbox{\boldmath $\xi$}}_s\cdot{{\bf\sf C}}_T\cdot\hat{{{\bf\sf V}}}_T^\alpha\big|_0\cdot{\mbox{\boldmath $\xi$}}_c
-{\mbox{\boldmath $\xi$}}_c\cdot{{\bf\sf C}}_T\cdot\hat{{{\bf\sf V}}}_T^\alpha\big|_0\cdot{\mbox{\boldmath $\xi$}}_s\big],\nonumber\\ \overline{\mathcal{D}^{(2)}}&=&\frac{1}{2}\omega^2\big[{\mbox{\boldmath $\xi$}}_s\cdot{{\bf\sf C}}_T\cdot\hat{{{\bf\sf P}}}_T\cdot{\mbox{\boldmath $\xi$}}_s+{\mbox{\boldmath $\xi$}}_c\cdot{{\bf\sf C}}_T\cdot\hat{{{\bf\sf P}}}_T\cdot{\mbox{\boldmath $\xi$}}_c\big].\end{aligned}$$
We introduce the complex dimensionless vector $$\label{4.10}{\mbox{\boldmath $\xi$}}^c=\frac{1}{b}({\mbox{\boldmath $\xi$}}_c+i{\mbox{\boldmath $\xi$}}_s),$$ where $b$ is a typical length scale. With the definitions $$\label{4.11}{{\bf\sf B}}^\alpha=\frac{1}{2}ib\big({{\bf\sf C}}_T\cdot\hat{{{\bf\sf V}}}_T^\alpha\big|_0-\widetilde{{{\bf\sf C}}_T\cdot\hat{{{\bf\sf V}}}_T^\alpha\big|_0}\big),\qquad
{{\bf\sf A}}=\frac{1}{b\eta}\;{{\bf\sf C}}_T\cdot\hat{{{\bf\sf P}}}_T,$$ and the scalar product $$\label{4.12}({\mbox{\boldmath $\xi$}}^c|{\mbox{\boldmath $\eta$}}^c)=\sum^{N-1}_{j=1}{\mbox{\boldmath $\xi$}}_j^{c*}\cdot{\mbox{\boldmath $\eta$}}^c_j$$ the mean swimming velocity and mean rate of dissipation can then be expressed as $$\label{4.13}\overline{U^{(2)}_\alpha}=\frac{1}{2}\omega
b({\mbox{\boldmath $\xi$}}^c|{{\bf\sf B}}^{\alpha}|{\mbox{\boldmath $\xi$}}^c),\qquad\overline{\mathcal{D}^{(2)}}=\frac{1}{2}\eta\omega^2b^3({\mbox{\boldmath $\xi$}}^c|{{\bf\sf A}}|{\mbox{\boldmath $\xi$}}^c).$$ We have normalized such that the matrix elements of ${{\bf\sf B}}^{\alpha}$ and ${{\bf\sf A}}$ are dimensionless. We call ${{\bf\sf B}}^{\alpha}$ the velocity matrix and ${{\bf\sf A}}$ the power matrix.
We ask for the stroke with maximum swimming velocity in a class of strokes with equal rate of dissipation for fixed values of the geometric parameters, fixed frequency $\omega$, and fixed viscosity $\eta$. This leads to the generalized eigenvalue problem $$\label{4.14}{{\bf\sf B}}^\alpha{\mbox{\boldmath $\xi$}}^c=\lambda^\alpha{{\bf\sf A}}{\mbox{\boldmath $\xi$}}^c.$$ The eigenvalues $\{\lambda^\alpha\}$ are real. The maximum efficiency for motion in direction $\alpha$ is given by the maximum eigenvalue as $$\label{4.15}E^\alpha_{Tmax}=\lambda^\alpha_{max}.$$ The set $\{E^x_{Tmax},E^y_{Tmax},E^z_{Tmax}\}$ depends on the choice of Cartesian coordinate system. Further optimization may be possible by a rotation of axes. In particular cases a natural choice of axes will suggest itself.
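Numerically, eq. (4.14) is a standard generalized eigenvalue problem; with a Hermitian velocity matrix and a positive definite power matrix the eigenvalues come out real. A minimal sketch with illustrative $2\times 2$ matrices standing in for ${{\bf\sf B}}^x$ and ${{\bf\sf A}}$ (toy values, not those of a physical swimmer):

```python
import numpy as np

B = np.array([[0.0, 0.5j], [-0.5j, 0.0]])  # Hermitian velocity matrix (toy values)
A = np.array([[2.0, 0.0], [0.0, 1.0]])     # positive definite power matrix (toy values)

# B xi = lambda A xi  <=>  (A^{-1} B) xi = lambda xi
lam, xi = np.linalg.eig(np.linalg.solve(A, B))
order = np.argsort(lam.real)
lam_max = lam.real[order[-1]]   # maximal efficiency E_Tmax, eq. (4.15)
xi_opt = xi[:, order[-1]]       # optimal stroke shape
```

For these toy matrices the spectrum is $\pm 1/(2\sqrt{2})$, so the optimal stroke corresponds to $\lambda_{max}=1/(2\sqrt{2})$.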
In the formulation of the mobility matrix in eq. (2.2) the nature of the forces $\{{\mbox{\boldmath $F$}}_j\}$ need not be specified. In an earlier calculation [@11] we considered microswimmers with internal harmonic interactions, driven by actuating forces. In matrix form the forces may be expressed as $$\label{4.16}{{\bf\sf F}}={{\bf\sf E}}+{{\bf\sf H}}\cdot({{\bf\sf R}}-{{\bf\sf S}}_0),$$ where ${{\bf\sf H}}$ is a real symmetric matrix with the property ${{\bf\sf H}}\cdot{{\bf\sf u}}_\alpha=0$. The actuating forces $\{{\mbox{\boldmath $E$}}_j(t)\}$ are assumed to satisfy $$\label{4.17}\sum_{j=1}^N{\mbox{\boldmath $E$}}_j(t)=0.$$ They can be generated internally or externally.
\[5\] Three-sphere swimmer
=========================
The simplest application of the theory is to a three-sphere swimmer with three spheres aligned on the $x$ axis, as studied by Golestanian and Ajdari [@6]. The spheres move along the $x$ axis, and the $y$ and $z$ coordinates can be ignored. There are only two relative coordinates $r_1=x_2-x_1$ and $r_2=x_3-x_2$, and the relevant parts of the matrices ${{\bf\sf B}}^x$ and ${{\bf\sf A}}$ are two-dimensional. The elements of the $3\times 3$ mobility matrix are approximated by use of the Oseen interaction as [@7] $$\label{5.1}\mu^{tt}_{jk}=\frac{1}{6\pi\eta}\bigg[\frac{1}{a_j}\delta_{jk}+\frac{3}{2|x_j-x_k|}(1-\delta_{jk})\bigg].$$ In the bilinear theory we consider a point ${{\bf\sf r}}_0$ in ${{\bf\sf r}}$-space with coordinates $(d_1,d_2)$, corresponding to the configuration ${{\bf\sf S}}_0$ of the rest system. As an example we consider the case of equal-sized spheres with $a_1=a_2=a_3=a$ and equal distances between centers $d_1=d_2=d$. For this case the explicit expressions for the matrices ${{\bf\sf B}}^x$ and ${{\bf\sf A}}$ are identical to those derived earlier by a different method [@1]. Explicit expressions for the eigenvectors ${\mbox{\boldmath $\xi$}}_\pm$ and eigenvalues $\lambda_\pm$ of the two-dimensional eigenvalue problem ${{\bf\sf B}}^x\cdot{\mbox{\boldmath $\xi$}}=\lambda{{\bf\sf A}}{\mbox{\boldmath $\xi$}}$, as functions of the ratio $d/a$, were derived in ref. 1.
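The mobility matrix of eq. (5.1) and the corresponding friction matrix are straightforward to assemble. A Python sketch for the equal-sphere rest configuration $(0,d,2d)$ with $d=5a$, in units with $a=\eta=1$ (the function name is ours):

```python
import numpy as np

def oseen_mobility(x, a, eta=1.0):
    """Collinear mobility matrix of eq. (5.1) in the Oseen approximation.
    x : positions of the sphere centers on the x axis; a : sphere radii."""
    N = len(x)
    mu = np.zeros((N, N))
    for j in range(N):
        for k in range(N):
            mu[j, k] = 1.0 / a[j] if j == k else 1.5 / abs(x[j] - x[k])
    return mu / (6.0 * np.pi * eta)

# equal spheres in the rest configuration (0, d, 2d) with d = 5a
mu = oseen_mobility(np.array([0.0, 5.0, 10.0]), np.array([1.0, 1.0, 1.0]))
zeta = np.linalg.inv(mu)   # friction matrix, eq. (2.4)
```

Both matrices are symmetric, and the friction matrix inverts the mobility matrix by construction.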
In the bilinear theory, corresponding to small $\varepsilon$, the orbit $(r_1(t),r_2(t))=(x_2(t)-x_1(t),x_3(t)-x_2(t))$ in relative space is given by ${\mbox{\boldmath $r$}}(t)={\mbox{\boldmath $r$}}_0+{\mbox{\boldmath $\xi$}}_0(t)$ with ${\mbox{\boldmath $r$}}_0=(d,d)$ and $$\label{5.2}{\mbox{\boldmath $\xi$}}_0(t)=\varepsilon a\;\mathrm{Re}\;{\mbox{\boldmath $\xi$}}_+\exp(-i\omega t),$$ with amplitude factor $\varepsilon$ and eigenvector ${\mbox{\boldmath $\xi$}}_+=(1,\xi_+)$ corresponding to the largest eigenvalue. In fig. 1 of ref. 1 we have shown the elliptical orbit in relative space for $d=5a$ and $\varepsilon=0.1$. The corresponding displacement vector in configuration space is given by $$\label{5.3}{{\bf\sf d}}_0(t)={{\bf\sf T}}^{-1}\cdot\left(\begin{array}{c}0\\{\mbox{\boldmath $\xi$}}_0(t)\end{array}\right),
\qquad{{\bf\sf T}}=\left(\begin{array}{ccc}\frac{1}{3}&\frac{1}{3}&\frac{1}{3}\\-1&1&0\\0&-1&1
\end{array}\right).$$ In fig. 1 we show the reduced mean swimming velocity $\overline{U}_{sw}/(\varepsilon^2\omega a)$ as a function of $\varepsilon$ for $d=5a$, as calculated from eq. (3.4). In fig. 2 we show the reduced mean rate of dissipation $\overline{\mathcal{D}}/(\varepsilon^2\eta\omega^2a^3)$, as calculated from eq. (3.22). In fig. 3 we show the efficiency $E_T=\eta\omega a^2\overline{U}_{sw}/\overline{\mathcal{D}}$ as a function of $\varepsilon$. The efficiency increases monotonically with the amplitude factor.
It is of interest to compare the above results with values obtained by the numerical solution of the Stokesian equations of motion eq. (2.5) with hydrodynamic interactions given by eq. (5.1) and prescribed oscillating actuating forces. We use harmonic interactions given by the $3\times 3$-matrix $$\label{5.4}{{\bf\sf H}}=k\left(\begin{array}{ccc}-1&1&0\\1&-2&1\\0&1&-1
\end{array}\right)$$ with elastic constant $k$. This corresponds to nearest neighbor interactions of equal strength $k$ between the three spheres. The stiffness of the swimmer is characterized by the dimensionless number $\sigma$ defined by $$\label{5.5}\sigma=\frac{k}{\pi\eta a\omega}.$$
In general, the first order forces ${{\bf\sf F}}^{(1)}_0(t)$ corresponding to the displacement vector ${{\bf\sf d}}_0(t)$ and the corresponding first order swimming velocity ${\mbox{\boldmath $U$}}^{(1)}_0(t)$, calculated from eq. (3.6), follow from eq. (3.1) as $$\label{5.6}{{\bf\sf F}}^{(1)}_0={\mbox{\boldmath $\zeta$}}^0\cdot(U^{(1)}_{0\beta}{{\bf\sf u}}_\beta+\dot{{{\bf\sf d}}}_0).$$ In the present case only the $x$ components are relevant. The corresponding actuating forces ${{\bf\sf E}}_0(t)$ are found from eq. (4.16) as $$\label{5.7}{{\bf\sf E}}_0(t)={{\bf\sf F}}^{(1)}_0(t)-{{\bf\sf H}}\cdot{{\bf\sf d}}_0(t).$$ These have the property ${{\bf\sf u}}_\alpha\cdot{{\bf\sf E}}_0(t)=0$, so that the sum of actuating forces vanishes. We choose initial conditions for the $x$ coordinates $$\label{5.8}x_1(0)=0,\qquad x_2(0)=d+\varepsilon a,\qquad x_3(0)=2d+\varepsilon a+\varepsilon a\;\mathrm{Re}\;\xi_+.$$ In fig. 4 we show the numerical solution of the equations of Stokesian dynamics eq. (2.5) with forces given by $$\label{5.9}{{\bf\sf F}}(t)={{\bf\sf E}}_0(t)+{{\bf\sf H}}\cdot({{\bf\sf R}}(t)-{{\bf\sf S}}_0)$$ for $d=5a$, stiffness $\sigma=1$, and amplitude factor $\varepsilon=2$ for the first ten periods. We compare the orbit with the ellipse given by eq. (5.2). The mean swimming velocity and mean power, calculated as time-averages over the last period for values of the amplitude factor in the range $0<\varepsilon<2$, are shown in figs. 1 and 2. The corresponding efficiency is shown in fig. 3. The dashed curves in figures $1-3$ replace figs. 3, 4, and 5 of ref. 1, which were calculated from inappropriate actuating forces. The efficiency is approximately twice as large as calculated in ref. 1.
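A minimal numerical sketch of such a calculation is a forward-Euler integration of eq. (2.5) in the Oseen approximation; the zero-sum periodic forces below are illustrative stand-ins satisfying condition (2.6), not the actuating forces of eq. (5.9), and all names are ours.

```python
import numpy as np

def mu_oseen(x, a=1.0, eta=1.0):
    """Collinear Oseen mobility matrix, eq. (5.1), for equal radii a."""
    N = len(x)
    m = np.zeros((N, N))
    for j in range(N):
        for k in range(N):
            m[j, k] = 1.0 / a if j == k else 1.5 / abs(x[j] - x[k])
    return m / (6.0 * np.pi * eta)

def integrate(x0, force, T, steps):
    """Forward-Euler solution of dx/dt = mu(x) . F(t), eq. (2.5)."""
    x, dt = np.array(x0, dtype=float), T / steps
    for n in range(steps):
        x = x + dt * mu_oseen(x) @ force(n * dt)
    return x

# periodic forces satisfying the zero-sum condition eq. (2.6)
omega = 2.0 * np.pi
F = lambda t: np.array([np.sin(omega * t), np.cos(omega * t),
                        -np.sin(omega * t) - np.cos(omega * t)])
x_final = integrate([0.0, 5.0, 10.0], F, 1.0, 2000)   # one period, d = 5a
```

In a production calculation a higher-order integrator and many periods would be used to reach the limit cycle.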
In fig. 3 the efficiency for given $\varepsilon$ calculated by the kinematic method is indeed always larger than that calculated by the dynamic method from the limit cycle with actuating forces. However, we must compare the mean swimming velocity for two different strokes of the same mean power. In fig. 5 we plot the power as a function of $\varepsilon$ in the range $1.9<\varepsilon<2$ as calculated by the two different methods. The value $\overline{\mathcal{D}}=52\;\eta\omega^2a^3$ of the mean power occurs at $\varepsilon_k=1.949$ in the kinematic method, and at $\varepsilon_d=1.970$ in the dynamic method. For these values the mean swimming velocity is found to be $\overline{U}_{sw}=0.0546\;\omega a$ for the elliptical orbit of the kinematic method, and $\overline{U}_{sw}=0.0538\;\omega a$ for the limit cycle of the dynamic method. Thus in the present case the elliptical orbit is the more efficient of the two. This does not exclude the possibility that, for the same power, an orbit with yet higher speed can be found.
At $\varepsilon=1.38$ and for $d=5a$ we have $\overline{U}_{sw}\approx 0.026\;\omega a$ from eq. (3.4) and $\overline{\mathcal{D}}\approx 25.8\;\eta\omega^2 a^3$ from eq. (3.22) for the orbit given by eq. (5.3). This can be compared with the numerical calculation of Alouges et al. [@2],[@3] on the basis of a Stokes solver. The authors used radius $a=0.05$ mm and period $T=1$ s. For the viscosity of water, $\eta=0.01$ poise, our calculation yields $\Delta=\overline{U}_{sw}T\approx$ 0.0081 mm and $\overline{\mathcal{D}}T\approx 0.127\times 10^{-12}\;\mathrm{J}$. The latter value is somewhat less than the one given in table 1 of ref. 3, and the displacement agrees well with the value $0.01$ mm of Alouges et al.
Finally we consider the efficiency calculated from eqs. (3.4) and (3.22) for displacement in relative space of the form eq. (5.2), but with the eigenvector ${\mbox{\boldmath $\xi$}}_+$ replaced by ${\mbox{\boldmath $\xi$}}=(1,A\exp(i\delta))$ with absolute value $A$ and phase $\delta$. The values of $A$ and $\delta$ can be related to the Stokes parameters of the elliptical orbit [@12]. In fig. 6 we show the efficiency for amplitude factor $\varepsilon=2$ and ratio $d/a=5$ as a function of $A$ and $\delta$. The maximum is not very pronounced.
\[6\] Discussion
===============
The swimming performance of an assembly of spheres as a function of the amplitude of a chosen stroke can be studied in a purely kinematic formulation. From eq. (3.4) we find the instantaneous swimming velocity, and from eq. (3.22) we find the instantaneous rate of dissipation or power. The mean swimming velocity and the mean power follow by averaging over a period. The ratio of these two quantities yields the efficiency of the stroke.
Alternatively one may use a dynamic approach [@1],[@11] in which the swimmer is modeled as a set of spheres bound harmonically to equilibrium positions and with harmonic interactions. The spheres are subject to actuating forces which sum to zero. The corresponding swimming motion may be found as the limit cycle of the solution of the equations of Stokesian dynamics. The mean swimming velocity and the mean power may be found numerically from the limit cycle.
We have shown in sect. 5 that for a collinear three-sphere swimmer the two methods lead to similar results over a wide range of amplitude, provided that for small amplitude the actuating forces correspond to the chosen kinematic stroke. We have chosen the latter to be the optimal one at small amplitude, as determined from the velocity matrix and the power matrix of the bilinear theory.
The kinematic method is the more straightforward one, since it does not require numerical solution of the equations of Stokesian dynamics. The dynamic approach has the advantage that it provides a physical model of the swimmer. It will be of interest to explore the difference in efficiency for given stroke or given actuating forces as a function of amplitude factor for more sophisticated model swimmers, with actuating forces chosen to agree with the optimal stroke at small amplitude.
[99]{}
B. U. Felderhof, Eur. Phys. J. E [[**37**]{}]{}, 110 (2014).
F. Alouges, A. DeSimone, and A. Lefebvre, J. Nonlinear Sci. [[**18**]{}]{}, 277 (2008).
F. Alouges, A. DeSimone, and A. Lefebvre, Eur. Phys. J. E [[**28**]{}]{}, 279 (2009).
V. A. Vladimirov, J. Fluid Mech. [[**716**]{}]{}, R1-1 (2013).
A. Najafi and R. Golestanian, Phys. Rev. E [[**69**]{}]{}, 062901 (2004).
R. Golestanian and A. Ajdari, Phys. Rev. E [[**77**]{}]{}, 036308 (2008).
J. Happel and H. Brenner, [*Low Reynolds number hydrodynamics*]{} (Noordhoff, Leyden, 1973).
B. Cichocki, B. U. Felderhof, K. Hinsen, E. Wajnryb, and J. Blawzdziewicz, J. Chem. Phys. [[**100**]{}]{}, 3780 (1994).
B. Cichocki, M. L. Ekiel-Jeżewska, and E. Wajnryb, J. Chem. Phys. [[**111**]{}]{}, 3265 (1999).
M. L. Ekiel-Jeżewska and E. Wajnryb, in [*Theoretical Methods for Micro Scale Viscous Flows*]{}, edited by F. Feuillebois and A. Sellier (Transworld Research Network, Kerala, 2009).
B. U. Felderhof, Phys. Fluids [[**18**]{}]{}, 063101 (2006).
C. F. Bohren and D. R. Huffman, [*Absorption and Scattering of Light by Small Particles*]{} (Wiley, New York, 1983).
Figure captions {#figure-captions .unnumbered}
===============
Fig. 1 {#fig.-1 .unnumbered}
------
Plot of the reduced mean swimming velocity $\overline{U}_{sw}/(\varepsilon^2\omega a)$ for $d=5a$ as a function of the amplitude $\varepsilon$ as calculated by the kinematic method (solid curve), and by the dynamic method with stiffness parameter $\sigma=1$ (dashed curve).
Fig. 2 {#fig.-2 .unnumbered}
------
Plot of the reduced mean swimming power $\overline{\mathcal{D}}/(\varepsilon^2\eta\omega^2 a^3)$ for $d=5a$ as a function of the amplitude $\varepsilon$ as calculated by the kinematic method (solid curve), and by the dynamic method with stiffness parameter $\sigma=1$ (dashed curve).
Fig. 3 {#fig.-3 .unnumbered}
------
Plot of the efficiency $E_T=\eta\omega a^2\overline{U}_{sw}/\overline{\mathcal{D}}$ for $d=5a$ as a function of the amplitude $\varepsilon$ as calculated by the kinematic method (solid curve), and by the dynamic method with stiffness parameter $\sigma=1$ (dashed curve).
Fig. 4 {#fig.-4 .unnumbered}
------
Plot of the orbit in the $r_1r_2$ plane calculated from the equations of Stokesian dynamics for $d=5a,\;\varepsilon=2,\;\sigma=1$ for ten periods. The initial values correspond to eq. (5.8) and the forces follow from eq. (5.9). We also plot the elliptical orbit for $d=5a,\;\varepsilon=2$ (dashed curve).
Fig. 5 {#fig.-5 .unnumbered}
------
Plot of the mean swimming power $\overline{\mathcal{D}}/(\eta\omega^2 a^3)$ for $d=5a$ as a function of the amplitude $\varepsilon$ in the range $1.9<\varepsilon<2$ as calculated by the kinematic method (solid curve), and by the dynamic method with stiffness parameter $\sigma=1$ (dashed curve).
Fig. 6 {#fig.-6 .unnumbered}
------
Plot of the efficiency $E_T=\eta\omega a^2\overline{U}_{sw}/\overline{\mathcal{D}}$ calculated by the kinematic method for the elliptical orbit in the $r_1r_2$ plane given by eq. (5.2) for $d=5a$ with $\varepsilon=2$ and ${\mbox{\boldmath $\xi$}}_+$ replaced by ${\mbox{\boldmath $\xi$}}=(1,A\exp(i\delta))$ as a function of amplitude $A$ and phase $\delta$.
---
abstract: 'We introduce a novel stochastic volatility model where the squared volatility of the asset return follows a Jacobi process. It contains the Heston model as a limit case. We show that the joint density of any finite sequence of log returns admits a Gram–Charlier A expansion with closed-form coefficients. We derive closed-form series representations for option prices whose discounted payoffs are functions of the asset price trajectory at finitely many time points. This includes European call, put, and digital options, forward start options, and can be applied to discretely monitored Asian options. In a numerical analysis we show that option prices can be accurately and efficiently approximated by truncating their series representations.'
author:
- 'Damien Ackerer[^1]'
- 'Damir Filipović[^2]'
- 'Sergio Pulido[^3]'
bibliography:
- 'JSVM.bib'
date: 'February 20, 2018'
title: 'The Jacobi Stochastic Volatility Model[^4]'
---
forthcoming in *Finance and Stochastics*
[**Keywords:**]{} Jacobi process, option pricing, polynomial model, stochastic volatility\
[**MSC (2010):**]{} 91B25, 91B70, 91G20, 91G60\
[**JEL Classification:**]{} C32, G12, G13
Introduction {#sec:intro}
============
Stochastic volatility models for asset returns are popular among practitioners and academics because they can generate implied volatility surfaces that match option price data to a great extent. They resolve the shortcomings of the Black–Scholes model [@black1973pricing], where the return has constant volatility. Among the most widely used stochastic volatility models is the Heston model [@heston1993closed], where the squared volatility of the return follows an affine square-root diffusion. European call and put option prices in the Heston model can be computed using Fourier transform techniques, which have their numerical strengths and limitations; see for instance @carr1999option, @bakshi2000spanning, @duffie2003, @Fang2009, and @chen2012generalized.
In this paper we introduce a novel stochastic volatility model, henceforth the Jacobi model, where the squared volatility $V_t$ of the log price $X_t$ follows a Jacobi process with values in some compact interval $[v_{min},v_{max}]$. As a consequence, Black–Scholes implied volatilities are bounded from below and above by $\sqrt{v_{min}}$ and $\sqrt{v_{max}}$. The Jacobi model $(V_t,X_t)$ belongs to the class of polynomial diffusions studied in @eri_pis_11, @cuchiero2012polpres, and @filipovic2015polpres. It includes the Black–Scholes model as a special case and converges weakly in the path space to the Heston model for $v_{max}\to\infty$ and $v_{min}=0$.
We show that the log price $X_T$ has a density $g$ that admits a Gram–Charlier A series expansion with respect to any Gaussian density $w$ with sufficiently large variance. More specifically, the likelihood ratio function $\ell =g/w$ lies in the weighted space $L^2_w$ of square-integrable functions with respect to $w$. Hence it can be expanded as a generalized Fourier series with respect to the corresponding orthonormal basis of Hermite polynomials $H_n$, $n \ge 0$. Boundedness of $V_t$ is essential, as the Gram–Charlier A series of $g$ does not converge for the Heston model.
The Fourier coefficients $\ell_n$ of $\ell$ are given by the Hermite moments of $X_T$, $\ell_n={{\mathbb E}}[H_n(X_T)]$. Due to the polynomial property of $(V_t,X_t)$ the Hermite moments admit easy to compute closed-form expressions. This renders the Jacobi model extremely useful for option pricing. Indeed, the price $\pi_f$ of a European option with discounted payoff $f(X_T)$ for some function $f$ in $L^2_w$ is given by the $L^2_w$-scalar product $\pi_f={(f,\ell)_w}=\sum_{n\ge 0} f_n \ell_n$. The Fourier coefficients $f_n$ of $f$ are given in closed-form for many important examples, including European call, put, and digital options. We approximate $\pi_f$ by truncating the price series at some finite order $N$ and derive truncation error bounds.
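As an illustration of the truncated series $\pi_f\approx\sum_{n\le N} f_n \ell_n$, the following Python sketch prices a call on $S=\mathrm{e}^X$ with strike $1$. The normal-mixture density below is a stand-in for the model density $g$, and the Hermite moments $\ell_n$ are computed by quadrature; in the Jacobi model they are instead available in closed form.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt, pi

mu_w, sig_w = 0.0, 1.2            # auxiliary Gaussian density w = N(0, 1.44)

def H(n, x):
    """Orthonormal Hermite polynomial w.r.t. w (scaled probabilists' He_n)."""
    c = np.zeros(n + 1); c[n] = 1.0
    return hermeval((x - mu_w) / sig_w, c) / sqrt(factorial(n))

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
inner = lambda u, v: float(np.sum(u * v) * dx)    # crude quadrature on the grid

w = np.exp(-0.5 * ((x - mu_w) / sig_w) ** 2) / (sig_w * sqrt(2 * pi))
g = 0.5 * np.exp(-0.5 * (x - 0.3) ** 2) / sqrt(2 * pi) \
  + 0.5 * np.exp(-0.5 * (x + 0.3) ** 2) / sqrt(2 * pi)   # stand-in for the density of X_T
f = np.maximum(np.exp(x) - 1.0, 0.0)                     # call payoff, strike K = 1

N = 20
price = sum(inner(f * H(n, x), w) *   # Fourier coefficient f_n
            inner(H(n, x), g)         # Hermite moment l_n = E[H_n(X_T)]
            for n in range(N + 1))
exact = inner(f, g)                   # direct quadrature benchmark
```

With $\sigma_w^2=1.44$ the variance condition is satisfied for the unit-variance mixture components, and the truncated series at $N=20$ already matches the benchmark to high accuracy.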
We extend our approach to price exotic options whose discounted payoff $f(Y)$ depends on a finite sequence of log returns $Y_i=(X_{t_i}-X_{t_{i-1}}), \,1\le i\le d$. As in the univariate case we derive the Gram–Charlier A series expansion of the density $g$ of $Y$ with respect to a properly chosen multivariate Gaussian density $w$. Assuming that $f$ lies in $L^2_w$ the option price $\pi_f$ is obtained as a series representation of the $L^2_w$-scalar product in terms of the Fourier coefficients of $f$ and of the likelihood ratio function $\ell=g/w$ given by the corresponding Hermite moments of $Y$. Due to the polynomial property of $(V_t,X_t)$ the Hermite moments admit closed-form expressions, which can be efficiently computed. The Fourier coefficients of $f$ are given in closed-form for various examples, including forward start options and forward start options on the underlying return.
Consequently, the pricing of these options is extremely efficient and does not require any numerical integration. Even when the Fourier coefficients of the discounted payoff function $f$ are not available in closed-form, e.g. for Asian options, prices can be approximated by integrating $f$ with respect to the Gram–Charlier A density approximation of $g$. This boils down to a numerically feasible integration with respect to the underlying Gaussian density $w$. In a numerical analysis we find that the price approximations become accurate within short CPU time. This is in contrast to the Heston model, for which the pricing of exotic options using Fourier transform techniques is cumbersome and creates numerical difficulties as reported in @kruse2005pricing, @kahl2005not, and @albrecher2006little. In view of this, the Jacobi model also provides a viable alternative to approximate option prices in the Heston model.
The Jacobi process, also known as Wright–Fisher diffusion, was originally used to model gene frequencies; see for instance @karlin1981second and @ethierkurtz86. More recently, the Jacobi process has also been used to model financial factors. For example, @delbaen2002interest model interest rates by the Jacobi process and study moment-based techniques for pricing bonds. In their framework, bond prices admit a series representation in terms of Jacobi polynomials. These polynomials are eigenfunctions of the infinitesimal generator and constitute an orthonormal basis with respect to the stationary beta distribution of the Jacobi process; additional properties of the Jacobi process can be found in @mazet97 and @demni2009large. The multivariate Jacobi process has been studied in @gourieroux2006multivariate, where the authors suggest it for modeling smooth regime shifts and give an example of a stochastic volatility model without leverage effect. The Jacobi process has also been applied recently to model stochastic correlation matrices in @AhdidaAlfonsi13 and credit default swap indexes in @bernisscotti16.
Density series expansion approaches to option pricing were pioneered by @jarrow1982approximate. They propose expansions of option prices that can be interpreted as corrections to the pricing biases of the Black–Scholes formula. They study density expansions for the law of underlying prices, not the log returns, and express them in terms of cumulants. Since convergence cannot be guaranteed in general, their study is based on strong assumptions that imply convergence. In subsequent work, @corrado1996skewness and @Corrado97impliedvolatility study Gram–Charlier A expansions of $4^{\text{th}}$ order for options on the S&P 500 index. These expansions contain skewness and kurtosis adjustments to option prices and implied volatility with respect to the Black–Scholes formula. The skewness and kurtosis correction terms, which depend on the cumulants of $3^{\text{rd}}$ and $4^{\text{th}}$ order, are estimated from data. Due to the instability of the estimation procedure, higher order expansions are not studied. Similar studies on the biases of the Black–Scholes formula using Gram–Charlier A expansions include @backus2004 and @limelnikov12. More recently, @drimus2013closed and @necula2015general study related expansions with Hermite polynomials. In order to guarantee the convergence of the Gram–Charlier A expansion for a general class of diffusions, @Ait-Sahalia2002 develop a technique based on a suitable change of measure. As pointed out in @filipovic2013density, in the affine and polynomial settings this change of measure usually destroys the polynomial property and the ability to calculate moments efficiently. More recently, a similar study has been carried out by @Xiu2014.
Gram–Charlier A expansions, under a change of measure, are also mentioned in the work of @MadanMilne94, and the subsequent studies of @Longstaff95, @AbkenMadanRamamurtie96 and @brenner-eom97, where they use these moment expansions to test the martingale property with financial data and hence the validity of a given model.
Our paper is similar to @filipovic2013density in that it provides a generic framework to perform density expansions using orthonormal polynomial bases in weighted $L^2$ spaces for affine models. They show that a bilateral Gamma density weight works for the Heston model. However, that expansion is numerically more cumbersome than the Gram–Charlier A expansion because the orthonormal basis of polynomials has to be constructed using Gram–Schmidt orthogonalization. In a related paper, @heston2016spanning study polynomial expansions of prices in the Heston, Hull–White and Variance Gamma models using logistic weight functions.
The remainder of the paper is organized as follows. In Section \[sec:model\] we introduce the Jacobi stochastic volatility model. In Section \[sec:optionprice\] we derive European option prices based on the Gram–Charlier A series expansion. In Section \[sec:exotic\] we extend this to the multivariate case, which forms the basis for exotic option pricing and contains the European options as a special case. In Section \[sec:numeric\] we give some numerical examples. In Section \[sec:conclusions\] we conclude. In Appendix \[appHm\] we explain how to efficiently compute the Hermite moments. All proofs are collected in Appendix \[a:proofs\].
Model specification {#sec:model}
===================
We study a stochastic volatility model where the squared volatility follows a Jacobi process. Fix some real parameters $0\leq v_{min}< v_{max}$, and define the quadratic function $$Q(v) = \frac{(v-v_{min})(v_{max}-v)}{(\sqrt{v_{max}}-\sqrt{v_{min}})^2} .$$ Inspection shows that $v\ge Q(v)$, with equality if and only if $v=\sqrt{v_{min}v_{max}}$, and $Q(v)\ge 0$ for all $v\in [v_{min},v_{max}]$; see Figure \[fig:varcor\] for an illustration.
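These properties of $Q$ admit a quick numerical sanity check; the bounds below are illustrative and hypothetical, not taken from any calibration in this paper.

```python
import numpy as np

# illustrative volatility bounds (hypothetical, not from the paper)
v_min, v_max = 0.01, 0.25

def Q(v):
    """Q(v) = (v - v_min)(v_max - v) / (sqrt(v_max) - sqrt(v_min))^2."""
    return (v - v_min) * (v_max - v) / (np.sqrt(v_max) - np.sqrt(v_min))**2

grid = np.linspace(v_min, v_max, 10001)
v_star = np.sqrt(v_min * v_max)   # the unique point where v = Q(v)
```

On the whole grid one finds $Q(v)\ge 0$ and $v\ge Q(v)$, with equality only at $v_\star=\sqrt{v_{min}v_{max}}$.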
We consider the diffusion process $(V_t,X_t)$ given by $$\label{sdeXV}
\begin{aligned}
dV_t &= \kappa(\theta - V_t)\,dt + \sigma\sqrt{Q(V_t)}\,dW_{1t}\\
dX_t &= \left(r-\delta-V_t/2\right)dt + \rho\, \sqrt{Q(V_t)}\,dW_{1t} + \sqrt{V_t -\rho^2 \,Q(V_t)}\,dW_{2t}
\end{aligned}$$ for real parameters $\kappa> 0$, $\theta\in (v_{min},v_{max}]$, $\sigma> 0$, interest rate $r$, dividend yield $\delta$, and $\rho\in [-1,1]$, and where $W_{1t}$ and $W_{2t}$ are independent standard Brownian motions on some filtered probability space $(\Omega,{{\mathcal F}},{{\mathcal F}}_t,{{\mathbb Q}})$. The following theorem shows that $(V_t,X_t)$ is well defined.
\[thmexiuni\] For any deterministic initial state $(V_0,X_0)\in [v_{min},v_{max}]\times {{\mathbb R}}$ there exists a unique solution $(V_t,X_t)$ of taking values in $[v_{min},v_{max}]\times {{\mathbb R}}$ and satisfying $$\label{Vtvstart}
\int_0^\infty {\mathbf{1}}_{\{V_t=v\}} dt = 0\quad\text{for all $v\in [v_{min},v_{max})$.}$$ Moreover, $V_t$ takes values in $(v_{min},v_{max})$ if and only if $V_0\in (v_{min},v_{max})$ and $$\label{eq:boundary}
\frac{\sigma^2(v_{max}-v_{min})}{(\sqrt{v_{max}}-\sqrt{v_{min}})^2}\leq 2\kappa\min\{v_{max}-\theta, \theta-v_{min}\}.$$
Property implies that no state $v\in [v_{min},v_{max})$ is absorbing. It also implies that conditional on $\{ V_{t},\, t\in [0,T]\}$, the increments $X_{t_i}-X_{t_{i-1}}$ are non-degenerate Gaussian for any $t_{i-1}<t_i\le T$, as will be shown in the proof of Theorem \[thmdensitymd\]. Taking $v_{min}=0$ and the limit as $v_{max}\to\infty$, condition coincides with the well-known Feller condition $\sigma^2\leq 2\kappa\theta$, which keeps the CIR process away from the zero boundary.
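The dynamics can be explored with a simple Euler scheme. The sketch below uses hypothetical parameter values chosen to satisfy the boundary non-attainment condition of the theorem above; the clipping of $V_t$ is a numerical safeguard, since the exact process stays in $[v_{min},v_{max}]$ while a discretized path need not.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical parameters (illustrative only); they satisfy the boundary
# non-attainment condition, so the exact V process stays in the interior
v_min, v_max = 0.01, 0.25
kappa, theta, sigma, rho = 2.0, 0.04, 0.25, -0.7
r, delta = 0.02, 0.0
V0, X0, T = 0.06, 0.0, 1.0
n_steps, n_paths = 500, 20000

def Q(v):
    return (v - v_min) * (v_max - v) / (np.sqrt(v_max) - np.sqrt(v_min))**2

dt = T / n_steps
V = np.full(n_paths, V0)
X = np.full(n_paths, X0)
for _ in range(n_steps):
    dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)
    q = Q(V)
    # update X first so that both increments use the time-t values of (V, Q(V))
    X = X + (r - delta - V/2)*dt + rho*np.sqrt(q)*dW1 \
          + np.sqrt(np.maximum(V - rho**2*q, 0.0))*dW2
    V = np.clip(V + kappa*(theta - V)*dt + sigma*np.sqrt(q)*dW1, v_min, v_max)
```

The martingale property of ${{\rm e}}^{-(r-\delta)t} S_t$ and the mean reversion of $V_t$ provide simple sanity checks on the simulated paths.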
We specify the price of a traded asset by $S_t = {{\rm e}}^{X_t}$. Then $\sqrt{V_t}$ is the stochastic volatility of the asset return, $d\langle X,X \rangle_t = V_t\,dt $. The cumulative dividend discounted price process ${{\rm e}}^{-(r-\delta)t} S_t$ is a martingale. In other words, ${{\mathbb Q}}$ is a risk-neutral measure. The parameter $\rho$ tunes the instantaneous correlation between the asset return and the squared volatility, $$\frac{d\langle V,X\rangle_t}{\sqrt{d\langle V,V\rangle_t}\sqrt{d\langle X,X\rangle_t}} = \rho\,\sqrt{Q(V_t)/V_t}.$$ This correlation is equal to $\rho$ if $V_t=\sqrt{v_{min}v_{max}}$; see Figure \[fig:varcor\]. In general, we have $\sqrt{Q(V_t)/V_t}\le 1$. Empirical evidence suggests that $\rho$ is negative when $S_t$ is a stock price or index. This is commonly referred to as the leverage effect, that is, an increase in volatility often goes along with a decrease in asset value.
Since the instantaneous squared volatility $V_t$ follows a bounded Jacobi process on the interval $[v_{min},v_{max}]$, we refer to as the *Jacobi model.* For $V_{0}=\theta=v_{max}$ we have constant volatility $V_t=V_0$ for all $t\geq 0$ and we obtain the Black–Scholes model $$\label{BSmodel}
dX_t= \left(r-\delta-V_0/2\right)dt+ \sqrt{V_0}\,dW_{2t}.$$ For $v_{min}= 0$ and the limit $v_{max}\to\infty$ we have $Q(v)\to v$, and we formally obtain the Heston model as limit case of , $$\label{sdeHeston}
\begin{aligned}
dV_t &= \kappa(\theta - V_t)\,dt + \sigma\sqrt{V_t}\,dW_{1t}\\
dX_t &= \left(r-\delta-V_t/2\right)dt + \sqrt{V_t}\left(\rho \,dW_{1t} + \sqrt{(1-\rho^2)}\,dW_{2t}\right).
\end{aligned}$$
In fact, the Jacobi model is robust with respect to perturbations, or mis-specifications, of the model parameters $v_{min}$, $v_{max}$ and initial state $(V_0,X_0)$. Specifically, the following theorem shows that the diffusion is weakly continuous in the space of continuous paths with respect to $v_{min}$, $v_{max}$ and $(V_0,X_0)$. In particular, the Heston model is indeed a limit case of our model .
Consider a sequence of parameters $0\le v_{min}^{(n)}<v_{max}^{(n)} $ and deterministic initial states $(V_0^{(n)},X_0^{(n)}) \in [v_{min}^{(n)},v_{max}^{(n)}]\times{{\mathbb R}}$ converging to ${0\le v_{min}<v_{max}\le \infty}$ and $(V_0,X_0)\in [0,\infty)\times{{\mathbb R}}$ as $n\to\infty$, respectively. We denote by $(V_t^{(n)},X_t^{(n)})$ and $(V_t,X_t)$ the respective solutions of , or if $v_{max}=\infty$. Here is our main convergence result.
\[thmconv\] The sequence of diffusions $(V_t^{(n)},X_t^{(n)})$ converges weakly in the path space to $(V_t,X_t)$ as $n\to\infty$.
As the discounted put option payoff function $f_{put}(x) = {{\rm e}}^{-rT} ({{\rm e}}^k - {{\rm e}}^x)^+$ is bounded and continuous on ${{\mathbb R}}$, it follows from the weak continuity stated in Theorem \[thmconv\] that the put option prices based on $(V_t^{(n)},X_t^{(n)})$ converge to the put option price based on the limiting model $(V_t,X_t)$ as $n\to\infty$. The put-call parity, $\pi_{call} - \pi_{put} = {{\rm e}}^{-\delta T}S_0-{{\rm e}}^{-rT+k} $, then implies that also call option prices converge as $n\to\infty$. This carries over to more complex path-dependent options with bounded continuous payoff functional.
Polynomial property {#polynomial-property .unnumbered}
-------------------
Moments in the Jacobi model are given in closed-form. Indeed, let $${{\mathcal G}}f(v,x)=b(v)^\top\nabla f(v,x)+\frac{1}{2}\operatorname{Tr}\left(a(v)\nabla^2f(v,x)\right)$$ denote the generator of $(V_t,X_t)$ with drift vector $b(v)$ and the diffusion matrix $a(v)$ given by $$\label{eqgenerator}
b(v) =\begin{pmatrix}\kappa(\theta-v)\\r-\delta-v/2\end{pmatrix},\quad a(v)=\begin{pmatrix} \sigma^2 Q(v) & \rho\sigma Q(v) \\ \rho\sigma Q(v) & v\end{pmatrix}.$$ Observe that $a(v)$ is continuous in the parameters $v_{min}$, $v_{max}$, so that for $v_{min}=0$ and $v_{max}\to\infty$ we obtain $$a(v)\to \begin{pmatrix} \sigma^2 v & \rho\sigma v \\ \rho\sigma v & v\end{pmatrix},$$ which corresponds to the generator of the Heston model . Let ${{\rm Pol}}_n$ be the vector space of polynomials in $(v,x)$ of degree less than or equal to $n$. It then follows by inspection that the components of $b(v)$ and $a(v)$ lie in ${{\rm Pol}}_1$ and ${{\rm Pol}}_2$, respectively. As a consequence, ${{\mathcal G}}$ maps any polynomial of degree $n$ onto a polynomial of degree $n$ or less, ${{\mathcal G}}\, {{\rm Pol}}_n\subset{{\rm Pol}}_n$, so that $(V_t,X_t)$ is a polynomial diffusion, see @filipovic2015polpres [Lemma 2.2]. From this we can easily calculate the conditional moments of $(V_T,X_T)$ as follows. For $N\in{{\mathbb N}}$, let $M=(N+2)(N+1)/2$ denote the dimension of ${{\rm Pol}}_N$. Let $h_1(v,x),\ldots,h_{M}(v,x)$ be a basis of polynomials of ${{\rm Pol}}_N$ and denote by $G$ the matrix representation of the linear map ${{\mathcal G}}$ restricted to ${{\rm Pol}}_N$ with respect to this basis.
\[thmoments\] For any polynomial $p\in{{\rm Pol}}_N$ and $0\le t\le T$ we have $${{\mathbb E}}\big[p(V_T,X_T)\bigm| {{\mathcal F}}_t\big]=\begin{pmatrix}h_1(V_t,X_t) & \cdots & h_{M}(V_t,X_t)\end{pmatrix}{{\rm e}}^{(T-t)G}\vv p$$ where $\vv p\in{{\mathbb R}}^{M}$ is the coordinate representation of the polynomial $p(v,x)$ with respect to the basis $h_1(v,x),\ldots,h_{M}(v,x)$.
See @filipovic2015polpres [Theorem 3.1].
The moment formula in Theorem \[thmoments\] is crucial in order to efficiently implement the numerical schemes described below.
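To illustrate Theorem \[thmoments\], the matrix $G$ can be assembled symbolically on the monomial basis of ${{\rm Pol}}_2$ and exponentiated numerically. The sketch below uses hypothetical parameter values and cross-checks the resulting moments ${{\mathbb E}}[V_T]$ and ${{\mathbb E}}[X_T]$ against their closed-form expressions for the mean-reverting variance process.

```python
import numpy as np
import sympy as sp
from scipy.linalg import expm

# hypothetical parameter values (illustrative only)
v_min, v_max = 0.01, 0.25
kappa, theta, sigma, rho = 2.0, 0.04, 0.25, -0.7
r, delta = 0.02, 0.0
V0, X0, T = 0.06, 0.0, 1.0

v, x = sp.symbols('v x')
Q = (v - v_min)*(v_max - v)/(sp.sqrt(v_max) - sp.sqrt(v_min))**2
b = [kappa*(theta - v), r - delta - v/2]                 # drift vector b(v)
a = [[sigma**2*Q, rho*sigma*Q], [rho*sigma*Q, v]]        # diffusion matrix a(v)

def generator(f):
    """G f = b(v)^T grad f + (1/2) Tr(a(v) Hess f)."""
    out = sum(b[i]*sp.diff(f, y) for i, y in enumerate((v, x)))
    out += sp.Rational(1, 2)*sum(a[i][j]*sp.diff(f, yi, yj)
                                 for i, yi in enumerate((v, x))
                                 for j, yj in enumerate((v, x)))
    return sp.expand(out)

N = 2
basis = [v**i * x**j for i in range(N + 1) for j in range(N + 1 - i)]
M = len(basis)                                           # (N+1)(N+2)/2 = 6
G = np.zeros((M, M))                                     # matrix of G restricted to Pol_N
for kk, h in enumerate(basis):
    p = sp.Poly(generator(h), v, x)
    for m, hm in enumerate(basis):
        G[m, kk] = float(p.coeff_monomial(hm))

H0 = np.array([float(h.subs({v: V0, x: X0})) for h in basis])
coords = H0 @ expm(T * G)                                # row vector H(V0, X0) e^{T G}
EV = coords[basis.index(v)]                              # E[V_T]
EX = coords[basis.index(x)]                              # E[X_T]
```

Since ${{\mathcal G}}\,{{\rm Pol}}_N\subset {{\rm Pol}}_N$, these moments are exact up to the numerical error of the matrix exponential.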
European option pricing {#sec:optionprice}
=======================
Henceforth we assume that $(V_0,X_0)\in [v_{min},v_{max}]\times {{\mathbb R}}$ is a deterministic initial state and fix a finite time horizon $T>0$. We first establish some key properties of the distribution of $X_T$. Denote the quadratic variation of the second martingale component of $X_t$ in by $$\label{eq:defC_T}
{C_t}=\int_0^t \left(V_s -\rho^2 Q(V_s)\right) ds.$$
The following theorem is a special case of Theorem \[thmdensitymd\] below.
\[thmdensity\] Let $\epsilon<1/(2 v_{max} T)$. The distribution of $X_T$ admits a density $g_T(x)$ on ${{\mathbb R}}$ that satisfies $$\label{gint}
\int_{{\mathbb R}}{{\rm e}}^{\epsilon x^2 } g_T(x)\,dx<\infty .$$ If $$\label{assint}
{{\mathbb E}}\left[ {C_T}^{-1/2-k} \right]<\infty$$ for some $k\in{{\mathbb N}}_0$ then $g_T(x)$ and ${{\rm e}}^{\epsilon x^2 } g_T(x)$ are uniformly bounded and $g_T(x)$ is $k$-times continuously differentiable on ${{\mathbb R}}$. A sufficient condition for to hold for any $k\ge 0$ is $$\label{assint_2}
v_{min}>0\,\text{ and }\,\rho^2<1.\footnote{We conjecture that \eqref{assint} holds for any $k\ge 0$ also when $v_{min}=0$ (and $\kappa\theta>0$) or $\rho^2=1$. For the Heston model~\eqref{sdeHeston} with $Q(v)=v$ and $\rho^2<1$ the conjecture follows from \citet[Theorem~4.1]{duf_01}.}$$
The condition that $\epsilon<1/(2 v_{max} T)$ is sharp for to hold. Indeed, consider the Black–Scholes model where $V_t=\theta=v_{max}$ for all $t\ge 0$. Then $X_T$ is Gaussian with variance $C_T=v_{max}T$. Hence the integral in is infinite for any $\epsilon\ge 1/(2 v_{max} T)$.
Since any uniformly bounded and integrable function on ${{\mathbb R}}$ is square integrable on ${{\mathbb R}}$, as an immediate consequence of Theorem \[thmdensity\] we have the following corollary.
\[corL2w\] Assume holds for $k=0$. Then $$\label{eq:ginL2}
\int_{{\mathbb R}}\frac{g_T(x)^2}{w(x)}\,dx <\infty$$ for any Gaussian density $w(x)$ with variance $\sigma_w^2$ satisfying $$\label{sicon}
\sigma_w^2 >\frac{v_{max}T}{2}.$$
\[remHESTON\] It follows from the proof that the statements of Theorem \[thmdensity\] also hold for the Heston model with $Q(v)=v$ and $\epsilon = 0$. However, the Heston model does not satisfy for any $\epsilon>0$. Indeed, otherwise its moment generating function $$\label{eqmmf}
\widehat{g_T}(z)=\int_{{\mathbb R}}{{\rm e}}^{ zx}g_T(x)\,dx$$ would extend to an entire function in $z\in{{\mathbb C}}$. But it is well known that $\widehat{g_T}(z)$ becomes infinite for large enough $z\in{{\mathbb R}}$, see @andersen2007moment. As a consequence, the Heston model does not satisfy for any finite $\sigma_w$. Indeed, by the Cauchy–Schwarz inequality, implies for any $\epsilon<1/(4\sigma_w^2)$.
We now compute the price at time $t=0$ of a European claim with discounted payoff $f(X_T)$ at expiry date $T>0$. We henceforth assume that holds with $k=0$, and we let $w(x)$ be a Gaussian density with mean $\mu_w$ and variance $\sigma_w^2$ satisfying . We define the weighted Lebesgue space $$L^2_w=\left\{ f(x) : \| f\|_w^2 = \int_{{\mathbb R}}f(x)^2 \,w(x)dx<\infty\right\},$$ which is a Hilbert space with scalar product $${(f,g)_w} = \int_{{\mathbb R}}f(x)g(x)\,w(x)dx .$$ The space $L^2_w$ admits the orthonormal basis of generalized Hermite polynomials $H_n(x)$, $n\ge 0$, given by $$\label{HHcal}
H_n(x) = \frac{1}{\sqrt{n!}} {{\mathcal H}}_n \left(\frac{x-\mu_w}{\sigma_w}\right)$$ where ${{\mathcal H}}_n(x)$ are the standard Hermite polynomials defined by $$\label{PHdef}
{{\mathcal H}}_n(x)=(-1)^n{{\rm e}}^{\frac{x^2}{2}}\frac{d^n}{d x^n}{{\rm e}}^{-\frac{x^2}{2}},$$ see @feller1960introduction [Section XVI.1]. In particular, the degree of $H_n(x)$ is $n$, and ${{(H_m,H_n)_w}=1}$ if $m=n$ and zero otherwise.
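The orthonormality of the generalized Hermite basis is easy to confirm numerically. The sketch below (with hypothetical $\mu_w$ and $\sigma_w$) evaluates the Gram matrix of $H_0,\dots,H_5$ by Gauss–Hermite quadrature; NumPy's `hermeval` evaluates exactly the probabilists' polynomials ${{\mathcal H}}_n$ used here.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

mu_w, sigma_w = 0.0, 0.3              # hypothetical weight parameters

def H(n, x):
    """Orthonormal generalized Hermite polynomial w.r.t. the N(mu_w, sigma_w^2) weight."""
    c = np.zeros(n + 1); c[n] = 1.0   # hermeval with this c evaluates the probabilists' H_n
    return hermeval((x - mu_w) / sigma_w, c) / np.sqrt(factorial(n))

# Gauss-Hermite nodes/weights for the weight exp(-z^2/2); normalize to the N(0,1) density
z, wq = hermegauss(40)
wq = wq / np.sqrt(2.0 * np.pi)
x = mu_w + sigma_w * z                # change of variables x = mu_w + sigma_w z

gram = np.array([[np.sum(wq * H(m, x) * H(n, x)) for n in range(6)]
                 for m in range(6)])
```

The 40-point rule is exact for polynomials up to degree 79, so the Gram matrix equals the identity to machine precision.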
Corollary \[corL2w\] implies that the likelihood ratio function $\ell(x)=g_T(x)/w(x)$ of the density $g_T(x)$ of the log price $X_T$ with respect to $w(x)$ belongs to $L^2_w$. We henceforth assume that also the discounted payoff function $f(x)$ is in $L^2_w$. This hypothesis is satisfied for instance in the case of European call and put options. It implies that the price, denoted by $\pi_f$, is well defined and equals $$\label{pihinf}
\pi_f = \int_{{\mathbb R}}f(x) g_T(x)\,dx = {(f,\ell)_w} =\sum_{n\ge 0} f_n \ell_n,$$ for the *Fourier coefficients* of $f(x)$ $$\label{eq:Fcoef}
f_n={(f,H_n)_w},$$ and the Fourier coefficients of $\ell(x)$ that we refer to as *Hermite moments* $$\label{eq:Hmoments}
\ell_n={(\ell,H_n)_w}=\int_{{\mathbb R}}H_n(x) g_T(x)\,dx.$$
We approximate the price $\pi_f$ by truncating the series in at some order $N\ge 1$ and write $$\label{eqproxyprice}
\pi_f^{(N)} = \sum_{n= 0}^N f_n \ell_n,$$ so that $\pi_f^{(N)}\to \pi_f$ as $N\to\infty$. Due to the polynomial property of the Jacobi model, induces an efficient price approximation scheme because the Hermite moments $\ell_n$ are linear combinations of moments of $X_T$ and thus given in closed-form, see Theorem \[thmoments\]. In particular, since $H_0(x)=1$, we have $\ell_0=1$. More details on the computation of $\ell_n$ are given in Appendix \[appHm\].
With the Hermite moments $\ell_n$ available, the computation of the approximation boils down to a numerical integration, $$\label{eqpihNint}
\pi_f^{(N)} = \sum_{n=0}^N {(f,\ell_n H_n)_w}=\int_{{\mathbb R}}f(x) \ell^{(N)}(x) \,w(x)dx ,$$ of $f(x)\ell^{(N)}(x)$ with respect to the Gaussian distribution $w(x)dx$, where the polynomial $\ell^{(N)}(x)=\sum_{n=0}^N \ell_n H_n(x)$ is in closed-form. The integral can be computed by quadrature or Monte-Carlo simulation. In specific cases, we find closed-form formulas for the Fourier coefficients $f_n$ and no numerical integration is needed. This includes European call, put, and digital options, as shown below.
\[R:divHeston\] Formula shows that $g_T^{(N)}(x) = \ell^{(N)}(x)w(x)$ serves as an approximation for the density $g_T(x)$. In fact, we readily see that $g_T^{(N)}(x)$ integrates to one and converges to $g_T(x)$ in $L^2_{1/w}$ as $N\to\infty$. Hence, we have convergence of the Gram–Charlier A series expansion of the density of the log price $X_T$ in $L^2_{1/w}$.[^5] In view of Remark \[remHESTON\], this does not hold for the Heston model.
Matching the first moment or the first two moments of $w(x)$ and $g_T(x)$, we further obtain $$\ell_1=\int_{{\mathbb R}}H_1(x) g_T(x)\,dx= {(H_0,H_1)_w} = 0\quad\text{if $\mu_w={{\mathbb E}}[X_T]$,}\\$$ and similarly, $$\label{mm2}
\ell_1=\ell_2=0\quad\text{if $\mu_w={{\mathbb E}}[X_T]$ and $\sigma_w^2={\rm var}[X_T]$.}$$ Matching the first moment or the first two moments of $w(x)$ and $g_T(x)$ can improve the convergence of the approximation . Note however that and imply ${\rm var}[X_T]>v_{max}T/2$, so that second moment matching is not always feasible in empirical applications.
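The effect of moment matching can be illustrated with any density whose first two moments are known. The sketch below uses a two-component Gaussian mixture as a hypothetical stand-in for $g_T$ and verifies that $\ell_0=1$ and $\ell_1=\ell_2=0$ once $\mu_w$ and $\sigma_w^2$ are matched, while $\ell_3$ picks up the skewness.

```python
import numpy as np
from math import factorial, sqrt
from numpy.polynomial.hermite_e import hermeval
from scipy.integrate import quad
from scipy.stats import norm

# a hypothetical non-Gaussian stand-in for g_T: a two-component Gaussian mixture
p, m1, s1, m2, s2 = 0.3, -0.2, 0.15, 0.1, 0.25
g = lambda x: p*norm.pdf(x, m1, s1) + (1 - p)*norm.pdf(x, m2, s2)

mean = p*m1 + (1 - p)*m2
var = p*(s1**2 + m1**2) + (1 - p)*(s2**2 + m2**2) - mean**2
mu_w, sigma_w = mean, np.sqrt(var)    # match the first two moments of g

def H(n, x):
    c = np.zeros(n + 1); c[n] = 1.0   # probabilists' Hermite polynomial, normalized
    return hermeval((x - mu_w) / sigma_w, c) / sqrt(factorial(n))

# Hermite moments ell_n = int H_n(x) g(x) dx by numerical integration
ell = [quad(lambda x, n=n: H(n, x) * g(x), -5.0, 5.0)[0] for n in range(4)]
```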
\[remBSoptionP\] If $\mu_w=X_0+(r-\delta)T-\sigma_w^2/2$, then $f_0=\int_{{{\mathbb R}}} f(x) w(x)dx$ is the Black–Scholes option price with volatility parameter $\sigma_{BS}=\sigma_w/\sqrt{T}$. Because ${{{\mathbb E}}[X_T]=X_0+(r-\delta)T-{\rm var}[X_T]/2}$, this holds in particular if the first two moments of $w(x)$ and $g_T(x)$ match, see . In this case, the higher order terms in $\pi_f^{(N)}=f_0+\sum_{n=3}^N f_n\ell_n$ can be thought of as corrections to the corresponding Black–Scholes price $f_0$ due to stochastic volatility.
The following result, which is a special case of Theorem \[thm:IVEx\] below, provides universal upper and lower bounds on the implied volatility of a European option with discounted payoff $f(X_T)$ at $T$ and price $\pi_f$. The implied volatility $\sigma_{\rm IV}$ is defined as the volatility parameter that renders the corresponding Black–Scholes option price equal to $\pi_f$.
\[thm:ivB\] Assume that the discounted payoff function $f(\log(s))$ is convex in $s>0$. Then the implied volatility satisfies $\sqrt{v_{min}} \le \sigma_{\rm IV} \le \sqrt{v_{max}}$.
Examples {#examples .unnumbered}
--------
We now present examples of discounted payoff functions $f(x)$ for which closed-form formulas for the Fourier coefficients $f_n$ exist. The first example is a call option.[^6]
\[thmoptionFC\] Consider the discounted payoff function for a call option with log strike $k$, $$\label{Eurcall}
f(x) = {{\rm e}}^{-rT} \left( {{\rm e}}^x-{{\rm e}}^k \right)^+ .$$ Its Fourier coefficients $f_n$ in are given by $$\label{E:fourier_coef} \begin{split} f_0&={{\rm e}}^{-rT+\mu_w}I_0\left(\frac{k-\mu_w}{\sigma_w};\sigma_w\right)-{{\rm e}}^{-rT+k}\Phi\left(\frac{\mu_w-k}{\sigma_w}\right);\\
f_n&={{\rm e}}^{-rT+\mu_w}\frac{1}{\sqrt{n!}}\sigma_wI_{n-1}\left(\frac{k-\mu_w}{\sigma_w};\sigma_w\right),\quad n\ge 1.
\end{split}$$ The functions $I_n(\mu;\nu)$ are defined recursively by $$\label{E:recursionI}
\begin{aligned}
I_0(\mu;\nu)&={{\rm e}}^{\frac{\nu^2}{2}} \Phi(\nu-\mu);\\
I_n(\mu;\nu)&= {{\mathcal H}}_{n-1}(\mu){{\rm e}}^{\nu \mu}\phi(\mu)+\nu I_{n-1}(\mu;\nu) ,\quad n\ge 1,
\end{aligned}$$ where $\Phi(x)$ denotes the standard Gaussian distribution function and $\phi(x)$ its density.
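The recursion for $I_n$ and the coefficients $f_n$ translate directly into code. The sketch below uses hypothetical contract and weight parameters, with $\mu_w$ chosen as in Remark \[remBSoptionP\]; it checks that $f_0$ then reproduces the Black–Scholes call price and cross-checks $f_3$ against direct numerical integration of ${(f,H_3)_w}$.

```python
import numpy as np
from math import exp, factorial, sqrt
from numpy.polynomial.hermite_e import hermeval
from scipy.integrate import quad
from scipy.stats import norm

# hypothetical contract and weight parameters
r, delta, T, k, X0 = 0.02, 0.0, 1.0, 0.0, 0.05
sigma_w = 0.25
mu_w = X0 + (r - delta)*T - sigma_w**2/2   # choice of Remark [remBSoptionP]

def Hcal(n, z):
    """Standard (probabilists') Hermite polynomial, evaluated via hermeval."""
    c = np.zeros(n + 1); c[n] = 1.0
    return hermeval(z, c)

def I(n, mu, nu):
    """I_0 = e^{nu^2/2} Phi(nu - mu); I_n = Hcal_{n-1}(mu) e^{nu mu} phi(mu) + nu I_{n-1}."""
    if n == 0:
        return exp(nu**2/2) * norm.cdf(nu - mu)
    return Hcal(n - 1, mu) * exp(nu*mu) * norm.pdf(mu) + nu * I(n - 1, mu, nu)

def f_call(n):
    """Fourier coefficient f_n of the discounted call payoff e^{-rT}(e^x - e^k)^+."""
    z0 = (k - mu_w) / sigma_w
    if n == 0:
        return exp(-r*T + mu_w)*I(0, z0, sigma_w) - exp(-r*T + k)*norm.cdf(-z0)
    return exp(-r*T + mu_w) * sigma_w * I(n - 1, z0, sigma_w) / sqrt(factorial(n))

# Black-Scholes call price with volatility sigma_w / sqrt(T)
d1 = (X0 - k + (r - delta)*T + sigma_w**2/2) / sigma_w
bs_call = exp(X0 - delta*T)*norm.cdf(d1) - exp(k - r*T)*norm.cdf(d1 - sigma_w)

# direct numerical evaluation of (f, H_3)_w for a cross-check
H3 = lambda x: Hcal(3, (x - mu_w)/sigma_w) / sqrt(factorial(3))
f3_quad = quad(lambda x: exp(-r*T)*(exp(x) - exp(k))*H3(x)*norm.pdf(x, mu_w, sigma_w),
               k, mu_w + 12*sigma_w)[0]
```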
The Fourier coefficients of a put option can be obtained from the put-call parity. For digital options, the Fourier coefficients $f_n$ are as follows.
\[thmoptionDigital\] Consider the discounted payoff function for a digital option of the form $$f(x)={{\rm e}}^{-rT}{\mathbf{1}}_{[k,\infty)}(x).$$ Its Fourier coefficients $f_n$ are given by $$\label{E:fourier_coef_dig}
\begin{split}
f_0&={{\rm e}}^{-rT}\Phi\left(\frac{\mu_w-k}{\sigma_w}\right);\\
f_n&=\frac{{\rm e}^{-rT}}{\sqrt{n!}}{{\mathcal H}}_{n-1}\left(\frac{k-\mu_w}{\sigma_w}\right)\phi\left(\frac{k-\mu_w}{\sigma_w}\right),\quad n\ge1,
\end{split}$$ where $\Phi(x)$ denotes the standard Gaussian distribution function and $\phi(x)$ its density.
For a digital option with generic payoff ${\mathbf{1}}_{[k_1,k_2)}(x)$ the Fourier coefficients can be derived using Theorem \[thmoptionDigital\] and ${\mathbf{1}}_{[k_1,k_2)}(x)={\mathbf{1}}_{[k_1,\infty)}(x)-{\mathbf{1}}_{[k_2,\infty)}(x).$
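The digital coefficients are equally direct to implement; the sketch below (hypothetical parameters) cross-checks the closed form against numerical integration of ${(f,H_n)_w}$.

```python
import numpy as np
from math import exp, factorial, sqrt
from numpy.polynomial.hermite_e import hermeval
from scipy.integrate import quad
from scipy.stats import norm

r, T, k = 0.02, 1.0, 0.1          # hypothetical rate, maturity and log strike
mu_w, sigma_w = 0.0, 0.25         # hypothetical weight parameters

def Hcal(n, z):
    c = np.zeros(n + 1); c[n] = 1.0
    return hermeval(z, c)         # probabilists' Hermite polynomial

def f_digital(n):
    """Fourier coefficient f_n of the discounted digital payoff e^{-rT} 1_{[k,inf)}(x)."""
    z0 = (k - mu_w) / sigma_w
    if n == 0:
        return exp(-r*T) * norm.cdf(-z0)
    return exp(-r*T) * Hcal(n - 1, z0) * norm.pdf(z0) / sqrt(factorial(n))

def f_quad(n):
    """Direct numerical evaluation of (f, H_n)_w."""
    Hn = lambda x: Hcal(n, (x - mu_w)/sigma_w) / sqrt(factorial(n))
    return quad(lambda x: exp(-r*T)*Hn(x)*norm.pdf(x, mu_w, sigma_w),
                k, mu_w + 12*sigma_w)[0]
```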
Error bounds and asymptotics {#error-bounds-and-asymptotics .unnumbered}
----------------------------
We first discuss an error bound of the price approximation scheme . The error of the approximation is ${\epsilon^{(N)}=\pi_f-\pi_f^{(N)}=\sum_{n=N+1}^\infty f_n \ell_n}$ for a fixed order $N\ge 1$. The Cauchy–Schwarz inequality implies the following error bound $$\label{BAERR}
\lvert\epsilon^{(N)} \rvert \le \left( \| f \|_w^2 - \sum_{n=0}^N f_n^2 \right)^{\frac{1}{2}} \left( \| \ell \|_w^2 - \sum_{n=0}^N \ell_n^2 \right)^{\frac{1}{2}}.$$ The $L^2_w$-norm of $f(x)$ has an explicit expression, $\|f\|_w^2 = \int_{{\mathbb R}}f(x)^2 \, w(x)dx $, that can be computed by quadrature or Monte–Carlo simulation. The Fourier coefficients $f_n$ can be computed similarly. The Hermite moments $\ell_n$ are given in closed-form. It remains to compute the $L^2_w$-norm of $\ell(x)$. For further use we define $$\label{eq:defM_Tnew}
M_t=X_0+\int_0^t \left(r-\delta-V_s/2\right)ds +\frac{\rho}{\sigma}\left( V_t-V_0 -\int_0^t \kappa\left(\theta-V_s\right)ds \right),$$ so that, in view of , the log price $X_t= M_t + \int_0^t \sqrt{V_s -\rho^2 Q(V_s)}\,dW_{2s}$. Recall also $C_t$ given in .
\[lemellnorm\] The $L^2_w$-norm of $\ell(x)$ is given by $$\label{eqellExp}
\|\ell\|_w^2 = \int_{{\mathbb R}}\frac{g_T(x)^2 }{w(x)}dx = {{\mathbb E}}\left[ \frac{g_T(X_T) }{w(X_T)} \right] ={{\mathbb E}}\left[ \frac{ \phi\left(X_T,\widetilde{M}_T,{\widetilde{C}_T}\right)}{\phi\left(X_T,\mu_w,\sigma_w^2\right)} \right]$$ where $\phi(x,\mu,\sigma^2)$ is the normal density function in $x$ with mean $\mu$ and variance $\sigma^2$, and the pair of random variables $(\widetilde{M}_T,\widetilde{C}_T)$ is independent from $X_T$ and has the same distribution as $(M_T,C_T)$.
In applications, we compute the right hand side of by Monte–Carlo simulation of $(X_T,\widetilde{M}_T,\widetilde{C}_T)$ and thus obtain the error bound .
We next show that the Hermite moments $\ell_n$ decay at an exponential rate under some technical assumptions.
\[lemdecayH\] Suppose that holds and $\sigma_w^2>v_{max}T$. Then there exist finite constants $C>0$ and $0<q<1$ such that $\ell_n^2\leq Cq^n$ for all $n\geq 0$.
Comparison to Fourier transform {#comparison-to-fourier-transform .unnumbered}
-------------------------------
An alternative dual expression of the price $\pi_f$ in is given by the Fourier integral $$\label{pihFourier}
\pi_f = \frac{1}{2\pi}\int_{{{\mathbb R}}} \hat f(-\mu-{{\rm i}}\lambda) \hat g_T(\mu+{{\rm i}}\lambda)d\lambda,$$
where $\widehat f(z)$ and $\widehat{g_T}(z)$ denote the moment generating functions given by , respectively. Here $\mu\in{{\mathbb R}}$ is some appropriate dampening parameter such that ${{\rm e}}^{-\mu x}f(x)$ and ${{\rm e}}^{\mu x}g_T(x)$ are Lebesgue integrable and square integrable on ${{\mathbb R}}$. Indeed, Lebesgue integrability implies that $\widehat f(z)$ and $\widehat{g_T}(z)$ are well defined for $z\in\mu+{{\rm i}}{{\mathbb R}}$ through . Square integrability and the Plancherel Theorem then yield the representation . For example, for the European call option we have $\widehat f(z)={{\rm e}}^{-rT+k(1+z)}/(z(z+1))$ for ${\rm Re}(z)<-1$.
Option pricing via is the approach taken in the Heston model , for which there exists a closed-form expression for $\widehat{g_T}(z)$. It is given in terms of the solution of a Riccati equation. The computation of $\pi_f$ boils down to the numerical integration of along with the numerical solution of a Riccati equation for every argument $z\in\mu+{{\rm i}}{{\mathbb R}}$ that is needed for the integration. The Heston model (which entails $v_{max}\to\infty$) does not adhere to the series representation that is based on condition , see Remark \[remHESTON\].
The Jacobi model, on the other hand, does not admit a closed-form expression for $\widehat{g_T}(z)$. But the Hermite moments $\ell_n$ are readily available in closed-form. In conjunction with Theorem \[thmoptionFC\], the (truncated) series representation thus provides a valuable alternative to the (numerical) Fourier integral approach for option pricing. Moreover, the approximation can be applied to any discounted payoff function $f(x)\in L^2_w$. This includes functions $f(x)$ that do not necessarily admit closed-form moment generating function $\widehat f(z)$ as is required in the Heston model approach. In Section \[sec:exotic\], we further develop our approach to price path dependent options, which could be a cumbersome task using Fourier transform techniques in the Heston model.
Exotic option pricing {#sec:exotic}
=====================
Pricing exotic options with stochastic volatility models is a challenging task. We show that the price of an exotic option whose payoff is a function of a finite sequence of log returns admits a polynomial series representation in the Jacobi model.
Henceforth we assume that $(V_0,X_0)\in [v_{min},v_{max}]\times {{\mathbb R}}$ is a deterministic initial state. Consider time points $0=t_0< t_1<t_2<\cdots<t_d$ and denote the log returns $Y_{t_i}=X_{t_i}-X_{t_{i-1}}$ for $i=1,\ldots,d$. The following theorem contains Theorem \[thmdensity\] as special case where $d=1$.
\[thmdensitymd\] Let $\epsilon_1,\ldots,\epsilon_d\in{{\mathbb R}}$ be such that $\epsilon_i< 1/(2 v_{max} (t_i-t_{i-1}))$ for ${i=1,\ldots,d}$. The random vector $(Y_{t_1},\ldots,Y_{t_d})$ admits a density ${g_{t_1,\ldots,t_d}(y)}$ on ${{\mathbb R}}^d$ satisfying $$\label{gint-md}
\int_{{{\mathbb R}}^d} {{\rm e}}^{\sum_{i=1}^d\epsilon_i y_i^2 } g_{t_1,\ldots,t_d}(y)\,dy<\infty.$$ If $$\label{assintmd}
{{\mathbb E}}\left[ \prod_{i=1}^d ({C_{t_{i}}}-C_{t_{i-1}})^{-1/2-n_i} \right]<\infty$$ for all $(n_1\ldots,n_d)\in{{\mathbb N}}_0^d$ with $\sum_{i=1}^d n_i\leq k\in{{\mathbb N}}_0$, for some $k\in{{\mathbb N}}_0$, then $g_{t_1,\ldots,t_d}(y)$ and ${{\rm e}}^{\sum_{i=1}^d\epsilon_i y_i^2 } g_{t_1,\ldots,t_d}(y)$ are uniformly bounded and $g_{t_1,\ldots,t_d}(y)$ is $k$-times continuously differentiable on ${{\mathbb R}}^d$. Property implies for any $k\ge 0$.
Since any uniformly bounded and integrable function on ${{\mathbb R}}^d$ is square integrable on ${{\mathbb R}}^d$, as an immediate consequence of Theorem \[thmdensitymd\] we have the following corollary.
\[corL2wmd\] Assume holds for $k=0$. Then $$\label{E:L2md}
\int_{{{\mathbb R}}^d} \frac{g_{t_1,\ldots,t_d}(y)^2}{\prod_{i=1}^d w_i(y_i)}\,dy <\infty$$ for all Gaussian densities $w_i(y_i)$ with variances $\sigma_{w_i}^2$ satisfying $$\label{siconmd}
\sigma_{w_i}^2 >\frac{v_{max}(t_i-t_{i-1})}{2},\quad i=1,\dots,d.$$
\[R:finitedimdensities\] There is a one-to-one correspondence between the vector of log returns $(Y_{t_1},\ldots,Y_{t_d})$ and the vector of log prices $(X_{t_1},\ldots,X_{t_d})$. Indeed, $$X_{t_i}=X_0+\sum_{j=1}^i Y_{t_j}.$$ Hence, a crucial consequence of Theorem \[thmdensitymd\] is that the finite-dimensional distributions of the process $X_t$ admit densities with nice decay properties. More precisely, the density of $(X_{t_1},\ldots,X_{t_d})$ is $g_{t_1,\ldots,t_d}(x_1-X_0,\ldots,x_d-x_{d-1})$.
Suppose that the discounted payoff of an exotic option is of the form $f(X_{t_1},...,X_{t_d})$. Assume that holds with $k=0$. Set the weight function ${w(y)=\prod_{i=1}^d w_i(y_i)}$, where $w_i(y)$ is a Gaussian density with mean $\mu_{w_i}$ and variance $\sigma_{w_i}^2$ satisfying . Define $$\widetilde{f}(y)=f(X_0+y_1,X_0+y_1+y_2,\ldots,X_0+y_1+\cdots+y_d).$$ Then by similar arguments as in Section \[sec:optionprice\] the price of the option is $$\pi_f={{\mathbb E}}\left[f(X_{t_1},...,X_{t_d})\right]=\sum_{n_1,\ldots,n_d\geq 0}\widetilde{f}_{n_1,\ldots,n_d}\ell_{n_1,\ldots,n_d}$$ where the Fourier coefficients $\widetilde{f}_{n_1,\ldots,n_d}$ and the Hermite moments $\ell_{n_1,\ldots,n_d}$ are given by $$\widetilde{f}_{n_1,\ldots,n_d}={(\widetilde{f},H_{n_1,\ldots,n_d})_w}=\int_{{{\mathbb R}}^d}\widetilde{f}(y)H_{n_1,\ldots,n_d}(y)w(y)\,dy$$ and $$\label{E:Hmommentmd}
\ell_{n_1,\ldots,n_d}={{\mathbb E}}\big[H_{n_1,\ldots,n_d}(Y_{t_1},\ldots,Y_{t_d})\big]$$ with $H_{n_1,\ldots,n_d}(y_1,\ldots,y_d)=\prod_{i=1}^d H^{(i)}_{n_i}(y_i)$, where $H^{(i)}_{n_i}(y_i)$ is the generalized Hermite polynomial of degree $n_i$ associated to parameters $\mu_{w_i}$ and $\sigma_{w_i}$, see . The price approximation at truncation order $N\ge 1$ is given, in analogy to , by $$\label{eqproxypriceMD}
\pi_f^{(N)} = \sum_{n_1+\cdots+n_d=0}^N \widetilde{f}_{n_1,\ldots,n_d} \ell_{n_1,\ldots,n_d},$$ so that $\pi_f^{(N)}\to \pi_f$ as $N\to\infty$.
We now derive universal upper and lower bounds on the implied volatility for the exotic option with discounted payoff function $f(X_{t_1},...,X_{t_d})$ and price $\pi_f$. We denote by $$\label{SBSdN}
dS^{{\rm BS}}_t = S^{{\rm BS}}_t (r-\delta)\,dt + S^{{\rm BS}}_t \sigma_{\rm BS}\,dB_t$$ the Black–Scholes price process with volatility $\sigma_{\rm BS}>0$ where $B_t$ is some Brownian motion. The Black–Scholes price is defined by $$\pi^{\sigma_{\rm BS}}_f ={{\mathbb E}}\Big[ f\left(\log S^{{\rm BS}}_{t_1},\dots,\log S^{{\rm BS}}_{t_d}\right) \Big] .$$ The implied volatility $\sigma_{\rm IV}$ is the volatility parameter $\sigma_{\rm BS}$ that renders the Black–Scholes option price $\pi^{\sigma_{\rm BS}}_f$ equal to $\pi_f$. The following theorem provides bounds on the values that $\sigma_{\rm IV}$ may take.
\[thm:IVEx\] Assume that the payoff function $f(\log(s_1),\dots,\log(s_d))$ is convex in the prices $(s_1,\dots,s_d)\in (0,\infty)^d$. Then the implied volatility satisfies ${\sqrt{v_{min}} \le \sigma_{\rm IV} \le \sqrt{v_{max}}}$.
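Because the theorem brackets the implied volatility in $[\sqrt{v_{min}},\sqrt{v_{max}}]$, a root search for $\sigma_{\rm IV}$ can be safely confined to that interval. A minimal bisection sketch, assuming a user-supplied callable `bs_price(sigma)` that is continuous and increasing in $\sigma$ (as holds for convex payoffs); the function names are illustrative.

```python
import math

def implied_vol(price, bs_price, v_min, v_max, tol=1e-10):
    """Bisection for sigma_IV in [sqrt(v_min), sqrt(v_max)].

    bs_price(sigma): Black-Scholes price of the payoff as a function of
    volatility, assumed continuous and increasing in sigma.
    price: the model price pi_f to be matched.
    """
    lo, hi = math.sqrt(v_min), math.sqrt(v_max)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_price(mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The guaranteed bracket is what makes the bisection reliable: no initial guess or derivative of the pricing function is needed.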
Examples {#examples-1 .unnumbered}
--------
We provide some examples of exotic options on the asset with price $S_t={{\rm e}}^{X_t}$ for which our method applies.
The payoff of a [*forward start call option on the underlying return*]{} between dates $t$ and $T$, and with strike $K$ is $( S_T/S_t - K )^+$ and its discounted payoff function is given by $$\widetilde{f}(y)={{\rm e}}^{-r T} \left( {{\rm e}}^{y_2} - K\right)^+$$ with the times $t_1=t$ and $t_2=T$. Note that $\widetilde{f}(y)= \widetilde{f}(y_2)$ only depends on $y_2$, so that this example reduces to the univariate case. In particular, the Fourier coefficients $\tilde{f}_n$ coincide with those of a call option and, as we shall see in Theorem \[thmhermitemoments\_md\], the *forward* Hermite moments $\ell_n^\ast={{\mathbb E}}[H_n(X_{t_2}-X_{t_1})]$ can be computed efficiently. Theorem \[thm:IVEx\] applies in particular to the forward start call option on the underlying return, so that its implied volatility is uniformly bounded for all maturities $T>t$. On the other hand, we know from @jacquier2015asymptotics that in the Heston model the same implied volatility explodes (except at the money) when $T\to t$.
The payoff of a *forward start call option* with maturity $T$, strike fixing date $t$ and proportional strike $K$ is $(S_T - K S_t )^+$ and its discounted payoff function is given by $$\widetilde{f}(y)= {{\rm e}}^{-r T} \left( {{\rm e}}^{X_{0} + y_1 + y_2} - K{{\rm e}}^{X_{0} + y_1 }\right)^+$$ with the times $t_1=t$ and $t_2=T$. In this case the Fourier coefficients have the form $$\label{eq:fourier_fwdoptionmd}
\begin{aligned}
\tilde{f}_{n_1,n_2}&={{\rm e}}^{X_0-rT}\int_{{{\mathbb R}}^2}{{\rm e}}^{y_1}H_{n_1}(y_1)w_1(y_1)({{\rm e}}^{y_2}-K)^+H_{n_2} (y_2)w_2(y_2)\,dy_1\,dy_2\\
&={{\rm e}}^{X_0-rT}f_{n_1}^{(0,-\infty)}f_{n_2}^{(0,\log K)} =f_{n_2}^{(0,\log K)}\frac{\sigma_{w_1}^{n_1}}{\sqrt{n_1!}}{{\rm e}}^{X_0-rT+\mu_{w_1}+\sigma_{w_1}^2/2},
\end{aligned}$$ where $f_n^{(r,k)}$ denotes the Fourier coefficient of a call option for interest rate $r$ and log strike $k$ as in . Here we have used – to deduce that $f_{n_1}^{(0,-\infty)}=\frac{\sigma_{w_1}^{n_1}}{\sqrt{n_1!}}{{\rm e}}^{\mu_{w_1}+\sigma_{w_1}^2/2}$. In particular no numerical integration is needed. Additionally, the Hermite moments $$\label{eq:hermitemommd}
\ell_{n_1,n_2}={{\mathbb E}}\big[H_{n_1}(Y_{t_1})H_{n_2}(Y_{t_2})\big]$$ can be calculated efficiently as explained in Theorem \[thmhermitemoments\_md\]. The pricing of forward start call options (on the underlying return) in the Black–Scholes model is straightforward. Analytical expressions for forward start call options (on the underlying return) have been provided in the Heston model by @kruse2005pricing. However, these integral expressions involve the Bessel function of first kind and are therefore rather difficult to implement numerically.
The payoff of an *Asian call option* with maturity $T$, discrete monitoring dates $t_{1}< \cdots <t_{d}= T$, and fixed strike $K$ is $( \sum_{i=1}^d S_{t_i}/d - K)^+$ and its discounted payoff function is given by $$\widetilde{f}(y)= {{\rm e}}^{-r T} \left(\frac{1}{d}\sum_{i=1}^d {{\rm e}}^{X_0+\sum_{j=1}^{i} y_j} - K\right)^+.$$ The payoff of an *Asian call option with floating strike* is ${(S_T - K\sum_{i=1}^d S_{t_i}/d)^+}$ and its discounted payoff function is given by $$\widetilde{f}(y)= {{\rm e}}^{-r T} \left( {{\rm e}}^{X_0+\sum_{j=1}^{d} y_j} - \frac{K}{d}\sum_{i=1}^d {{\rm e}}^{X_0+\sum_{j=1}^{i} y_j} \right)^+.$$ The valuation of Asian options with continuous monitoring in the Black–Scholes model has been studied in @rogers1995value and @yor2001bessel among others.
\[rem:cuba\] The Fourier coefficients may not be available in closed-form for some exotic options, such as the Asian options. In this case, we compute the multi-dimensional version of the approximation via numerical integration of with respect to a Gaussian density $w(x)$ in ${{\mathbb R}}^d$. This can be efficiently implemented using Gauss–Hermite quadrature, see for example @jackel2005note. Specifically, denote by $z_m\in{{\mathbb R}}^d$ and $w_m\in(0,1)$ the $m$-th point and weight of a $d$-dimensional standard Gaussian cubature rule with $M$ points. The price approximation can then be computed as follows $$\label{eq:quadapp}
\begin{aligned}
\pi_f^{(N)} &= \int_{{{\mathbb R}}^d} \tilde{f}\big( \mu + \Sigma z\big) \; \ell^{(N)}\big( \mu + \Sigma z\big) \; \frac{1}{(2\pi)^{\frac d 2}}{{\rm e}}^{-\frac{\lVert z \rVert^2}{2}} dz \\
& \approx \sum_{m=1}^M w_m \, \tilde{f}_m \; \sum_{n_1+\dots+n_d\le N} \, \ell_{n_1,\dots,n_d} \; \prod_{i=1}^d \, \frac{1}{\sqrt{n_i !}}{{\mathcal H}}_{n_i}(z_{m,i})
\end{aligned}$$ where $\mu=(\mu_{w_1},\dots,\mu_{w_d})^\top$, $\Sigma = \operatorname{diag}(\sigma_{w_1},\dots,\sigma_{w_d})$, $\tilde{f}_m=\tilde{f}( \mu + \Sigma z_m)$, and $ {{\mathcal H}}_n$ denotes the standard Hermite polynomial . We emphasize that many elements in the above expression can be precomputed. A numerical example is given for the Asian option in Section \[sec:exotic\_num\] below.
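A minimal sketch of the tensor-product standard Gaussian cubature underlying the above approximation: one-dimensional Gauss–Hermite nodes (for weight ${{\rm e}}^{-x^2}$) are rescaled to the standard normal and combined across dimensions. Function names are illustrative.

```python
import numpy as np
from itertools import product

def gauss_hermite_cubature(d, n_1d):
    """Nodes and weights approximating E[f(Z)], Z ~ N(0, I_d), via a
    tensor product of 1-d Gauss-Hermite rules with n_1d points each."""
    x, w = np.polynomial.hermite.hermgauss(n_1d)
    z = np.sqrt(2.0) * x          # rescale nodes to the standard normal
    p = w / np.sqrt(np.pi)        # rescaled weights now sum to one
    nodes = np.array(list(product(z, repeat=d)))
    weights = np.array([np.prod(c) for c in product(p, repeat=d)])
    return nodes, weights

# sanity check: E[||Z||^2] = d for d = 2
nodes, weights = gauss_hermite_cubature(2, 10)
approx = float(weights @ (nodes ** 2).sum(axis=1))
```

Since an $n$-point one-dimensional rule is exact for polynomials up to degree $2n-1$, low-order moments are reproduced exactly up to floating-point error.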
Numerical analysis {#sec:numeric}
==================
We analyse the performance of the price approximation with closed-form Fourier coefficients and numerical integration of for European call options, forward start and Asian options. This includes the price approximation error, model implied volatility, and computational time. The model parameters are fixed as: ${r=\delta=X_0=0}$, ${\kappa=0.5}$, ${\theta=V_0=0.04}$, ${v_{min}=10^{-4}}$, ${v_{max}=0.08}$, ${\rho=-0.5}$, and ${\sigma=1}$. The parameter values are in line with what could be obtained from a calibration to market prices, such as S&P500 option prices, with the exception of $v_{max}$, which is set smaller than the typical fitted value. The choice $v_{max}=0.08$ makes it possible to match the first two moments of $w(x)$ and $g(x)$ as in , which improves the convergence of the approximation . We refer to @ackerer2017option for an extension of the polynomial option pricing method, which works well for arbitrary parameter values.
European call option
--------------------
Figure \[fig:fnlnpin\] displays the Hermite moments $\ell_n$, Fourier coefficients $f_n$, and approximate option prices $\pi^{(N)}_f$ for a European call option with maturity $T=1/12$ and log strike $k=0$ (ATM) as functions of the truncation order $N$. The first two moments of the Gaussian density $w(x)$ match the first two moments of $X_T$, see .[^7] We observe that the $\ell_n$ and $f_n$ sequences oscillate and converge toward zero. The amplitudes of these oscillations negatively impact the speed at which the approximate price sequence converges. The gray lines surrounding the price sequence are the upper and lower price error bounds computed as in and Lemma \[lemellnorm\], using $10^5$ Monte-Carlo samples. The price approximation converges rapidly.
Table \[tab:conv\] reports the implied volatility values and absolute errors in percentage points for the log strikes $k=\{-0.1,\,0,\,0.1\}$ and for various truncation orders. The reference option prices have been computed at truncation order $N=50$. For all strikes the truncation order $N=10$ is sufficient to be within 10 basis points of the reference implied volatility.
Figure \[fig:smile\] displays the implied volatility smile for various $v_{min}$ and $v_{max}$ such that $\sqrt{v_{min} v_{max}}=\theta$, and for the Heston model . We observe that the smile of the Jacobi model approaches the Heston smile when $v_{min}$ is small and $v_{max}$ is large. Somewhat surprisingly, a relatively small value for $v_{max}$ seems to be sufficient for the two smiles to coincide for options around the money. Indeed, although the variance process has an unbounded support in the Heston model, the probability that it will visit values beyond some large threshold can be extremely small. Figure \[fig:smile\] also illustrates how the implied volatility smile flattens when the variance support shrinks, $v_{max}\downarrow \theta$. In the limit $v_{max}=\theta$, we obtain the flat implied volatility smile of the Black–Scholes model. This shows that the Jacobi model lies between the Black–Scholes model and the Heston model and that the parameters $v_{min}$ and $v_{max}$ offer additional degrees of flexibility to model the volatility surface.
As reported in Figure \[fig:cputimes\], the Fourier coefficients can be computed in less than a millisecond thanks to the recursive scheme -. Computing the Hermite moments is more costly; however, they can be used to price all options with the same maturity. The most expensive task appears to be the construction of the matrix $G$, which however is a one-off. The Hermite moment $\ell_n$ in turn derives from the vector $v_{n,T}={{\rm e}}^{G T}{\bm e}_{\pi(0,n)}$ which can be used for any initial state $(V_0,X_0)$. Note that specific numerical methods have been developed to compute the action of the matrix exponential ${{\rm e}}^{G T}$ on the basis vector ${\bm e}_{\pi(0,n)}$, see for example @al2011computing, @hochbruck1997krylov, and references therein. The running times were obtained on a standard desktop computer using a single 3.5 GHz 64-bit CPU and the programming language.
Forward start and Asian options {#sec:exotic_num}
-------------------------------
The left panels of Figure \[fig:fwdasian\] display the approximation prices of a forward start call option with strike fixing time $t_1=1/52$ and maturity $t_2=5/52$, so that $d=2$, and of an Asian call option with weekly discrete monitoring and maturity four weeks, $t_i=i/52$ for $i\le d=4$. Both options have log strike $k=0$. The price approximations at order $N$ have been computed using . For the forward start call option, we match the first two moments of $w_i(y_i)$ and $Y_{t_i}$. For the Asian call option, we chose $\sigma_{w_i}=\sqrt{v_{max}/104}+10^{-4}$ and $\mu_{w_i}=E[X_{1/52}]$, which is in line with but does not match the first two moments of $Y_{t_i}$. The Fourier coefficients are not available in closed-form for the Asian call option; we therefore integrated its payoff function with respect to the density approximation using Gaussian cubature as described in Remark \[rem:cuba\]. We observe that with exotic payoffs the price approximation sequence may require a larger order before stabilizing. For example, for the forward start price approximation it seems necessary to truncate beyond $N=15$ in order to obtain an accurate price approximation.
The Asian option price is approximated by whose computational cost depends on the number of elements in the double summation. Therefore, in order to efficiently approximate the price, we used a truncation of the 4-dimensional product of the one-dimensional Gaussian quadrature with 20 points. More precisely, we selected the quadrature points having a weight larger than the $90\%$ quantile of all the weights. This means that, out of the $20^4$ initial points, $M=16\,000$ points were selected and their weights normalized. Note that the $144\,000$ removed points had a total weight of $7.2\times10^{-4}$ percent, which is extremely small. Hence, the selected points cover most of the non-negligible part of the multivariate Gaussian density support. An alternative approach would be to use optimal Gaussian quantizers, see @pages2003optimal.
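The pruning step just described amounts to discarding the cubature points whose weight falls below a quantile of the weight distribution and renormalizing the remainder. A short sketch, with illustrative names:

```python
import numpy as np

def prune_cubature(nodes, weights, keep_quantile=0.9):
    """Keep only the points whose weight exceeds the given quantile of
    all weights, then renormalize the retained weights to sum to one."""
    threshold = np.quantile(weights, keep_quantile)
    mask = weights > threshold
    w = weights[mask]
    return nodes[mask], w / w.sum()

# toy example: 100 points with increasing weights; the top decile survives
nodes = np.arange(100.0).reshape(100, 1)
weights = np.arange(1.0, 101.0)
kept_nodes, kept_weights = prune_cubature(nodes, weights)
```

Renormalizing the surviving weights preserves the total mass of the rule, which keeps the pruned cubature a valid (approximate) expectation operator.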
The right panels of Figure \[fig:fwdasian\] display the multi-index Hermite moments $\ell_{n_1,\dots,n_d}$ with multi-orders $n_1+\cdots+n_d=1,\dots,10$. Note that there are $\binom{N+d}{N}$ Hermite moments $\ell_{n_1,\dots,n_d}$ of total order $n_1+\cdots+n_d\le N$. In practice, we observe that a significant proportion of the Hermite moments is negligible, so that they may simply be set to zero if they are smaller than a certain threshold computed online. As for the quadrature points, doing so reduces the computational cost of approximating the option price. Therefore, when approximating the Asian option price, we removed the Hermite moments having an absolute value smaller than the corresponding $10\%$ quantile. For example, when $N=20$, this implies removing all the Hermite moments with an absolute value $|\ell_{n_1,\dots,n_d}|$ smaller than $2.35\times10^{-6}$.
Conclusion {#sec:conclusions}
==========
The Jacobi model is a highly tractable and versatile stochastic volatility model. It contains the Heston stochastic volatility model as a limit case. The moments of the finite dimensional distributions of the log prices can be calculated explicitly thanks to the polynomial property of the model. As a result, the series approximation techniques based on the Gram–Charlier A expansions of the joint distributions of finite sequences of log returns allow us to efficiently compute prices of options whose payoff depends on the underlying asset price at finitely many time points. Compared to the Heston model, the Jacobi model offers additional flexibility to fit a large range of Black–Scholes implied volatility surfaces. Our numerical analysis shows that the series approximations of European call, put and digital option prices in the Jacobi model are computationally comparable to the widely used Fourier transform techniques for option pricing in the Heston model. The truncated series of prices, whose computations do not require any numerical integration, can be implemented efficiently and reliably up to orders that guarantee accurate approximations as shown by our numerical analysis. The pricing of forward start options, which does not involve any numerical integration, is significantly simpler and faster than the iterative numerical integration method used in the Heston model. The minimal and maximal volatility parameters are universal bounds for Black–Scholes implied volatilities and provide additional stability to the model. In particular, Black–Scholes implied volatilities of forward start options in the Jacobi model do not experience the explosions observed in the Heston model. Furthermore, our density approximation technique in the Jacobi model circumvents some limitations of the Fourier transform techniques in affine models and allows us to price discretely monitored Asian options.
Hermite moments {#appHm}
===============
We apply Theorem \[thmoments\] to describe more explicitly how the Hermite moments $\ell_0,\dots,\ell_N$ in can be efficiently computed for any fixed truncation order $N\ge 1$. We let $M=\dim{{\rm Pol}}_N$ and $\pi:{{\mathcal E}}\rightarrow \{1,\ldots, M\}$ be an enumeration of the set of exponents $${{\mathcal E}}=\{(m,n): m,n\ge 0;\,m+n\le N\}.$$ The polynomials $$\label{defhmn}
h_{\pi(m,n)}(v,x) = v^m H_n(x),\quad (m,n)\in{{\mathcal E}}$$ then form a basis of ${{\rm Pol}}_N$. In view of the elementary property $$H_n'(x)=\frac{\sqrt{n}}{\sigma_w}H_{n-1}(x),\quad n\ge 1,$$ we obtain that the $M\times M$–matrix $G$ representing ${{\mathcal G}}$ on ${{\rm Pol}}_N$ has at most 7 nonzero elements in column $\pi(m,n)$ with $(m,n)\in{{\mathcal E}}$ given by $$\label{eqmatrixG}
\begin{aligned}
G_{\pi(m-2,n),\pi(m,n)}&=-\frac{\sigma^2 m(m-1) v_{max}v_{min}}{2(\sqrt{v_{max}}-\sqrt{v_{min}})^2},\quad m\ge 2;\\
G_{\pi(m-1,n-1),\pi(m,n)}&=-\frac{\sigma\rho m\sqrt{n} v_{max}v_{min}}{\sigma_w(\sqrt{v_{max}}-\sqrt{v_{min}})^2}, \quad m,n\ge 1;\\
G_{\pi(m-1,n),\pi(m,n)}&=\kappa\theta m+\frac{\sigma^2m(m-1) (v_{max}+v_{min})}{2(\sqrt{v_{max}}-\sqrt{v_{min}})^2}, \quad m\ge 1;\\
G_{\pi(m,n-1),\pi(m,n)}&=\frac{(r-\delta)\sqrt{n}}{\sigma_w}+\frac{\sigma\rho m\sqrt{n} (v_{max}+v_{min})}{\sigma_w(\sqrt{v_{max}}-\sqrt{v_{min}})^2}, \quad n\ge 1;\\
G_{\pi(m+1,n-2),\pi(m,n)}&=\frac{\sqrt{n(n-1)}}{2\sigma_w^2}, \quad n\ge 2;\\
G_{\pi(m,n),\pi(m,n)}&=-\kappa m-\frac{\sigma^2 m(m-1)}{2(\sqrt{v_{max}}-\sqrt{v_{min}})^2}\\
G_{\pi(m+1,n-1),\pi(m,n)}&=-\frac{\sqrt{n}}{2\sigma_w}-\frac{\sigma\rho m\sqrt{n}}{\sigma_w(\sqrt{v_{max}}-\sqrt{v_{min}})^2}, \quad n\ge 1.
\end{aligned}$$
Theorem \[thmoments\] now implies the following result.
\[thmhermitemoments\] The coefficients $\ell_n$ are given by $$\label{eqell}
\ell_n=\begin{pmatrix}
h_1(V_0,X_0) & \cdots & h_M(V_0,X_0)
\end{pmatrix} \,{{\rm e}}^{TG}\, \mathbf{e}_{\pi(0,n)},\quad 0\le n\le N,$$ where $\bm e_{i}$ is the $i$–th standard basis vector in ${{\mathbb R}}^{M}$.
The choice of the basis polynomials $h_{\pi(m,n)}$ in is convenient for our purposes because: 1) each column of the $M\times M$-matrix $G$ has at most seven nonzero entries; 2) the coefficients $\ell_n$ in the expansion of prices , can be obtained directly from the action of ${{\rm e}}^{G T}$ on ${\bm e}_{\pi(0,n)}$ as specified in . In practice, it is more efficient to compute this action directly, rather than computing the matrix exponential ${{\rm e}}^{G T}$ and then selecting the $\pi(0,n)$-column.
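To illustrate computing the action ${{\rm e}}^{T G}{\bm e}_{\pi(0,n)}$ without forming the matrix exponential, the naive truncated-Taylor sketch below uses only matrix–vector products; production code would instead use a dedicated routine such as `scipy.sparse.linalg.expm_multiply` or a Krylov method from the cited references. The rotation-generator example at the end is purely a sanity check.

```python
import numpy as np

def expm_action(G, v, T, order=30):
    """Approximate e^{T G} v by a truncated Taylor series.

    Only matrix-vector products with G are needed, which is cheap when
    G is sparse; for large ||T G|| one would substep or switch to a
    Krylov method rather than use a single truncated series.
    """
    out = v.astype(float).copy()
    term = v.astype(float).copy()
    for k in range(1, order + 1):
        term = (T / k) * (G @ term)
        out += term
    return out

# sanity check on a rotation generator, where e^{tG} is known in closed form:
# e^{tG} (1, 0) = (cos t, sin t)
G = np.array([[0.0, -1.0], [1.0, 0.0]])
v = expm_action(G, np.array([1.0, 0.0]), np.pi / 2)
```

This mirrors the remark above: the vector ${{\rm e}}^{TG}{\bm e}_{\pi(0,n)}$ is obtained in $O(\text{nnz}(G))$ work per Taylor term, never touching the dense $M\times M$ exponential.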
We now extend Theorem \[thmhermitemoments\] to a multi-dimensional setting. The following theorem provides an efficient way to compute the multi-dimensional Hermite moments defined in . Before stating the theorem we fix some notation. Set $N=\sum_{i=1}^d n_i$ and $M=\dim{{\rm Pol}}_N$. Let $G^{(i)}$ be the matrix representation of the linear map ${{\mathcal G}}$ restricted to ${{\rm Pol}}_N$ with respect to the basis, in row vector form, $$h^{(i)}(v,x)=\begin{pmatrix}h^{(i)}_1(v,x) & \cdots & h^{(i)}_M(v,x)\end{pmatrix},$$ with $h^{(i)}_{\pi(m,n)}(v,x)=v^mH^{(i)}_n(x)$ as in where $H_n^{(i)}$ is the generalized Hermite polynomial of degree $n$ associated to the parameters $\mu_{w_i}$ and $\sigma_{w_i}$, see . Define the $M\times M$-matrix $A^{(k,l)}$ by $$A^{(k,l)}_{i,j} =
\begin{cases}
H_n^{(l)}(0) & \text{if $i=\pi(m,k)$ and $j=\pi(m,n)$ for some $m,n\in{{\mathbb N}}$} \\
0 & \text{otherwise.}
\end{cases}$$
\[thmhermitemoments\_md\] For any $n_1,\ldots,n_d\in{{\mathbb N}}_0$, the multi-dimensional Hermite moment in can be computed through $$\ell_{n_1,\ldots,n_d}=h^{(1)}(V_0,0) \left(\prod_{i=1}^{d-1}{{\rm e}}^{G^{(i)} \Delta t_i}A^{(n_{i},i+1)} \right){{\rm e}}^{G^{(d)}\Delta t_d}{\bm e}_{\pi(0,n_d)},$$ where $\Delta t_i=t_i-t_{i-1}$.
By an inductive argument it is sufficient to illustrate the case $d=2$. Applying the law of iterated expectation we obtain $$\ell_{n_1,n_2}= {{\mathbb E}}\left[H^{(1)}_{n_1}(Y_{t_1})H^{(2)}_{n_2}(Y_{t_2})\right] = {{\mathbb E}}\left[H^{(1)}_{n_1}(X_{t_1}-X_{0}){{\mathbb E}}_{t_1}\big[H^{(2)}_{n_2}(X_{t_2}-X_{t_1})\big]\right].$$ Since the increment $X_{t_2}-X_{t_1}$ does not depend on $X_{t_1}$ we can rewrite, using Theorem \[thmoments\], $${{\mathbb E}}_{t_1}\left[H^{(2)}_{n_2}(X_{t_2}-X_{t_1})\right] = {{\mathbb E}}\Big[H^{(2)}_{n_2}(X_{\Delta t_2})\Bigm| X_{0}=0, V_0=V_{t_1}\Big] = h^{(2)}(V_{t_1},0) v^{(n_2,2)}$$ where $v^{(n_2,2)}=e^{G^{(2)} \Delta t_2}{\bm e}_{\pi(0,n_2)}$. Note that this last expression is a polynomial in $V_{t_1}$ alone: $$h^{(2)}(V_{t_1},0) v^{(n_2,2)} = \sum_{n=0}^{n_2} a_n \, V_{t_1}^n, \quad \text{with } a_n = \sum_{n+j\le n_2} \, H_j^{(2)}(0) \, v^{(n_2,2)}_{\pi(n,j)}.$$ Theorem \[thmoments\] now implies that the Hermite moment is given by $$\ell_{n_1,n_2}= {{\mathbb E}}\big[p(V_{t_1},X_{t_1})\bigm| X_0=0\big] = h^{(1)}(V_0,0) {{\rm e}}^{G^{(1)} \Delta t_1} \vec{p}$$ where $\vec{p}$ is the vector representation in the basis $h^{(1)}(v,x)$ of the polynomial $$p(v,x) = \sum_{n=0}^{n_2} a_n \, v^n \, H^{(1)}_{n_1}(x) = h^{(1)}(v,x) \vec{p} .$$ We conclude by observing that the coordinates of the vector $\vec{p}$ are given by ${{\rm e}}_i^\top \, \vec{p} = a_n$ if $i=\pi(n, n_1)$ for some integer $n\le n_2$ and equal to zero otherwise, which in turn shows that $\vec{p} = A^{(n_1,2)} \, v^{(n_2,2)} $.
Proofs {#a:proofs}
======
This appendix contains the proofs of all theorems and propositions in the main text.
Proof of Theorem \[thmexiuni\] {#proof-of-theoremthmexiuni .unnumbered}
------------------------------
For strong existence and uniqueness of , it is enough to show strong existence and uniqueness for the SDE for $V_t$, $$\label{SDEofV}dV_t = \kappa(\theta - V_t)\,dt + \sigma\sqrt{Q(V_t)}\,dW_{1t}.$$ Since the interval $[0,1]$ is an affine transformation of the unit ball in ${{\mathbb R}}$, weak existence of a $[v_{min},v_{max}]$-valued solution can be deduced from @LarssonPulido2015 [Theorem 2.1]. Path-wise uniqueness of solutions follows from @yamada1971 [Theorem 1]. Strong existence of solutions for the SDE is a consequence of path-wise uniqueness and weak existence of solutions, see for instance @yamada1971 [Corollary 1].
Now let $v\in [v_{min},v_{max})$. The occupation times formula @revuz1999continuous [Corollary VI.1.6] implies $$\int_0^\infty {\mathbf{1}}_{\{V_t=v\}} \sigma^2 Q(v)\,dt =0,\quad v>v_{min}.$$ Since $\sigma^2 Q(v)>0$ this proves for $v>v_{min}$. We can show that the local time at $v_{min}$ of $V_t$ is zero as in @filipovic2015polpres [Theorem 5.3] which in turn proves for $v=v_{min}$ by applying [@filipovic2015polpres Lemma A.1].
To conclude, Proposition 2.2 in @LarssonPulido2015 shows that $V_t\in(v_{min},v_{max})$ if and only if $V_0\in (v_{min},v_{max})$ and condition holds.
Proof of Theorem \[thmconv\] {#proof-of-theoremthmconv .unnumbered}
----------------------------
The proof of Theorem \[thmconv\] builds on the following four lemmas.
\[lemweakconvmoments\] Suppose that $Y$ and $Y^{(n)}$, $n\ge 1$, are random variables in ${{\mathbb R}}^d$ for which all moments exist. Assume further that $$\lim_n{{\mathbb E}}\big[p(Y^{(n)})\big]={{\mathbb E}}\big[p(Y)\big],$$ for any polynomial $p(y)$ and that the distribution of $Y$ is determined by its moments. Then the sequence $Y^{(n)}$ converges weakly to $Y$ as $n\to\infty$.
Theorem 30.2 in @billingsley1995probability proves this result for the case $d=1$. Inspection shows that the proof is still valid for the general case.
\[lemfindimmom\] The moments of the finite-dimensional distributions of the diffusions $(V_t^{(n)},X_t^{(n)})$ converge to the respective moments of the finite-dimensional distributions of $(V_t,X_t)$. That is, for any $0\le t_1<\cdots<t_d<\infty$ and for any polynomials $p_1(v,x),\ldots,p_d(v,x)$ we have $$\label{eqlemmomentconv0}
\lim_{n}{{\mathbb E}}\left[\prod_{i=1}^d p_i(V^{(n)}_{t_i},X^{(n)}_{t_i})\right]={{\mathbb E}}\left[\prod_{i=1}^d p_i(V_{t_i},X_{t_i})\right].$$
Let $N=\sum_{i=1}^d \deg p_i$. Throughout the proof we fix a basis of ${{\rm Pol}}_N$, $h_j(v,x)$ where $1\le j\le M=\dim{{\rm Pol}}_N$, and for any polynomial $p(v,x)$ we denote by $\vv{p}$ its coordinates with respect to this basis. We denote by $G$ and $G^{(n)}$ the respective $M\times M$-matrix representations of the generators restricted to ${{\rm Pol}}_N$ of $(V_t,X_t)$ and $(V^{(n)}_t,X^{(n)}_t)$, respectively. We then define recursively the polynomials $q_i(v,x)$ and $q^{(n)}_i(v,x)$ for $1\le i\le d$ by $$\begin{aligned}
q_d(v,x)&=q^{(n)}_d(v,x)=p_d(v,x),\\
q_i(v,x)&= p_i(v,x) \begin{pmatrix}h_1(v,x) & \cdots & h_{M}(v,x)\end{pmatrix}{{\rm e}}^{(t_{i+1}-t_i)G}\vv{q_{i+1}} ,\quad 1\le i<d,\\
q^{(n)}_{i}(v,x)&= p_i(v,x) \begin{pmatrix}h_1(v,x) & \cdots & h_{M}(v,x)\end{pmatrix}{{\rm e}}^{(t_{i+1}-t_i)G^{(n)}}\vv{q^{(n)}_{i+1}},\quad 1\le i<d.\\
\end{aligned}$$ As in the proof of Theorem \[thmhermitemoments\_md\], a successive application of Theorem \[thmoments\] and the law of iterated expectation implies that $$\begin{aligned}
{{\mathbb E}}\left[\prod_{i=1}^d p_i(V_{t_i},X_{t_i})\right]&={{\mathbb E}}\left[\prod_{i=1}^{d-1} p_i(V_{t_i},X_{t_i}) {{\mathbb E}}\big[ p_d(V_{t_d},X_{t_d}) \bigm| {{\mathcal F}}_{t_{d-1}}\big]\right]\\
&=\cdots=\begin{pmatrix}h_1(V_0,X_0) & \cdots & h_{M}(V_0,X_0)\end{pmatrix}{{\rm e}}^{t_1G}\vv{q_1}.
\end{aligned}$$ and similarly, $${{\mathbb E}}\left[\prod_{i=1}^d p_i(V^{(n)}_{t_i},X^{(n)}_{t_i})\right]=\begin{pmatrix}h_1(V_0^{(n)},X_0^{(n)}) & \cdots & h_{M}(V_0^{(n)},X_0^{(n)})\end{pmatrix}{{\rm e}}^{t_1G^{(n)}}\vv{q_1^{(n)}}.$$
We deduce from that $$\label{eqlemmomentconv1}
\lim_n G^{(n)}=G.$$ This is valid also for the limit case $v_{max}=\infty$, that is ${Q(v)=v-v_{min}}$. This fact together with an inductive argument shows that $\lim_n\vv{q_1^{(n)}}=\vv{q_1}$. This combined with proves .
\[lemfindimdet\] The finite-dimensional distributions of $(V_t,X_t)$ are determined by their moments.
The proof of this result is contained in the proof of .
\[lemtight\] The family of diffusions $(V_t^{(n)},X_t^{(n)})$ is tight.
Fix a time horizon $N\in{{\mathbb N}}$. We first observe that by @karatzas1991brownian [Problem V.3.15] there is a constant $K$ independent of $n$ such that $$\label{Kolmd}
{{\mathbb E}}\left[\|(V_t^{(n)},X_t^{(n)})-(V_s^{(n)},X_s^{(n)})\|^4\right]\leq K|t-s|^2,\quad 0\le s<t\le N.$$ Now fix any positive $\alpha<1/4$. Kolmogorov’s continuity theorem (see @revuz1999continuous [Theorem I.2.1]) implies that $${{\mathbb E}}\left[\left(\sup_{0\le s<t\le N} \frac{\|(V_t^{(n)},X_t^{(n)})-(V_s^{(n)},X_s^{(n)})\|}{|t-s|^\alpha}\right)^4\right]\le J$$ for a finite constant $J$ that is independent of $n$. The modulus of continuity $$\Delta(\delta,n)=\sup\Big\{ \|(V_t^{(n)},X_t^{(n)})-(V_s^{(n)},X_s^{(n)})\| : 0\le s<t\le N,\, |t-s|<\delta\Big\}$$ thus satisfies $${{\mathbb E}}\left[ \Delta(\delta,n)^4\right] \le \delta^{4\alpha} J .$$ Using Chebyshev’s inequality we conclude that, for every $\epsilon>0$, $${{\mathbb Q}}\left[ \Delta(\delta,n)>\epsilon\right] \le \frac{{{\mathbb E}}[ \Delta(\delta,n)^4]}{\epsilon^4}\le \frac{ \delta^{4\alpha} J}{\epsilon^4},$$ and thus $\sup_{n} {{\mathbb Q}}[ \Delta(\delta,n)>\epsilon]\to 0$ as $\delta\to 0$. This together with the property that the initial states $(V_0^{(n)},X_0^{(n)})$ converge to $(V_0,X_0)$ as $n\to\infty$ proves the lemma, see @rogers2000diffusions [Theorem II.85.3].[^8]
Kolmogorov’s continuity theorem (see @revuz1999continuous [Theorem I.2.1]) and imply that the paths of $(V_t,X_t)$ are $\alpha$-Hölder continuous for any $\alpha<1/4$.
Lemmas \[lemweakconvmoments\]–\[lemfindimdet\] imply that the finite-dimensional distributions of the diffusions $(V_t^{(n)},X_t^{(n)})$ converge weakly to those of $(V_t,X_t)$ as $n\to\infty$. Theorem \[thmconv\] thus follows from Lemma \[lemtight\] and @rogers2000diffusions [Lemma II.87.3].
Proof of Theorem \[thmoptionFC\] {#proof-of-theoremthmoptionfc .unnumbered}
--------------------------------
We claim that the solution of the recursion is given by $$\label{E:functionI}
I_n(\mu;\nu)=\int_\mu^{\infty}{{\mathcal H}}_n(x){{\rm e}}^{\nu x}\phi(x)\,dx,\quad n\ge 0.$$ Indeed, for $n=0$ the right hand side of equals $$\int_\mu^{\infty}{{\mathcal H}}_0(x){{\rm e}}^{\nu x}\phi(x)\,dx={{\rm e}}^{\frac{\nu^2}{2}}\int_{\mu-\nu}^{\infty}\phi(x)\,dx,$$ which is $I_0(\mu;\nu)$. For $n\ge 1$, we recall that the standard Hermite polynomials ${{\mathcal H}}_{n}(x)$ satisfy $$\label{E:recursion}
{{\mathcal H}}_{n}(x)= x{{\mathcal H}}_{n-1}(x)-{{\mathcal H}}_{n-1}'(x).$$ Integration by parts and then show that $$\begin{aligned}
\int_\mu^{\infty}{{\mathcal H}}_n(x){{\rm e}}^{\nu x}\phi (x)\,dx&= \int_{\mu}^{\infty}{{\mathcal H}}_{n-1}(x){{\rm e}}^{\nu x}x\phi (x)\,dx-\int_{\mu}^{\infty}{{\mathcal H}}_{n-1}'(x){{\rm e}}^{\nu x}\phi (x)\,dx \\
&=- {{\mathcal H}}_{n-1}(x){{\rm e}}^{\nu x}\phi (x)\big|_{\mu}^{\infty}+ \int_{\mu}^{\infty}{{\mathcal H}}_{n-1}(x)\nu {{\rm e}}^{\nu x}\phi (x)\,dx.\\
&= {{\mathcal H}}_{n-1}(\mu){{\rm e}}^{\nu \mu}\phi (\mu)+\nu\int_{\mu}^{\infty}{{\mathcal H}}_{n-1}(x){{\rm e}}^{\nu x}\phi (x)\,dx ,
\end{aligned}$$ which proves .
A change of variables, using and , shows $$\begin{aligned}
f_n &= {{\rm e}}^{-rT} \int_k^\infty \left( {{\rm e}}^x -{{\rm e}}^k\right) H_n(x) w(x)\,dx\\
&= {{\rm e}}^{-rT} \int_{\frac{k-\mu_w}{\sigma_w}}^\infty \left( {{\rm e}}^{\mu_w+\sigma_w z} -{{\rm e}}^k\right) H_n(\mu_w+\sigma_w z) w(\mu_w+\sigma_w z)\sigma_w\,dz\\
&= {{\rm e}}^{-rT} \frac{1}{\sqrt{n!}}\int_{\frac{k-\mu_w}{\sigma_w}}^\infty \left( {{\rm e}}^{\mu_w+\sigma_w z} -{{\rm e}}^k\right) {{\mathcal H}}_n(z) \phi(z)\,dz\\
&={{\rm e}}^{-rT+\mu_w}\frac{1}{\sqrt{n!}}I_n\left(\frac{k-\mu_w}{\sigma_w};\sigma_w\right)-{{\rm e}}^{-rT+k}\frac{1}{\sqrt{n!}}I_n\left(\frac{k-\mu_w}{\sigma_w};0\right).
\end{aligned}$$ Formulas follow from the recursion formula .
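The recursion for $I_n$ translates directly into code. The sketch below is a minimal Python implementation (illustrative function names; $\phi$ and $\Phi$ are the standard normal density and distribution function, and $H_n$ here are the probabilists' Hermite polynomials satisfying $H_n(x)=xH_{n-1}(x)-H_{n-1}'(x)$).

```python
from math import erf, exp, pi, sqrt

def phi(x):
    """Standard normal density."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def hermite_prob(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the three-term recursion."""
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(2, n + 1):
        h0, h1 = h1, x * h1 - (k - 1) * h0
    return h1

def I(n, mu, nu):
    """I_n(mu; nu) = int_mu^inf He_n(x) e^{nu x} phi(x) dx.

    Starts from I_0 = e^{nu^2/2} (1 - Phi(mu - nu)) and applies the
    recursion I_k = He_{k-1}(mu) e^{nu mu} phi(mu) + nu I_{k-1}.
    """
    val = exp(0.5 * nu * nu) * (1.0 - Phi(mu - nu))
    for k in range(1, n + 1):
        val = hermite_prob(k - 1, mu) * exp(nu * mu) * phi(mu) + nu * val
    return val
```

As simple consistency checks, $I_1(\mu;0)=\phi(\mu)$, and as $\mu\to-\infty$ the integral tends to the full-line value $\nu^n{{\rm e}}^{\nu^2/2}$ (from the Hermite generating function).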
Proof of Theorem \[thmoptionDigital\] {#proof-of-theoremthmoptiondigital .unnumbered}
-------------------------------------
As before, a change of variables, using and , shows $$\begin{aligned}
f_n&={{\rm e}}^{-rT} \int_k^\infty H_n(x) w(x)\,dx = \frac{{{\rm e}}^{-rT}}{\sqrt{n!}} \int_{\frac{k-\mu_w}{\sigma_w}}^\infty {{\mathcal H}}_n(z) \phi(z)\,dz \\&=\frac{{{\rm e}}^{-rT}}{\sqrt{n!}} I_n\left(\frac{k-\mu_w}{\sigma_w};0\right).
\end{aligned}$$ Formulas follow directly from .
Proof of Lemma \[lemellnorm\] {#proof-of-lemmalemellnorm .unnumbered}
-----------------------------
We use similar notation as in the proof of Theorem \[thmdensitymd\]. In particular, with $C_T$ as in and $M_T$ as in , we denote by $$\label{GxdefN}
G_T(x)=(2\pi C_T)^{-\frac12}\exp\left(-\frac{(x-M_T)^2}{2C_T}\right)$$ the conditional density of $X_T$ given $\{V_t:t\in[0,T]\}$, so that $g_T(x)={{\mathbb E}}[G_T(x)]$ is the unconditional density of $X_T$. Lemma \[lemellnorm\] now follows from observing that $G_T(x)=\phi(x,M_T,C_T)$ and $w(x) = \phi(x,\mu_w,\sigma_w^2)$.
Proof of Lemma \[lemdecayH\] {#proof-of-lemmalemdecayh .unnumbered}
----------------------------
We first recall that by Cramér’s inequality (see for instance @erd_53 [Section 10.18]) there exists a constant $K>0$ such that for all $n\ge 0$ $$\label{cramer}
{{\rm e}}^{-(x-\mu_w)^2/4\sigma_w^2}|H_n(x)|=(n!)^{-1/2}{{\rm e}}^{-(x-\mu_w)^2/4\sigma_w^2}\left|{{\mathcal H}}_n\left(\frac{x-\mu_w}{\sigma_w}\right)\right|\leq K.$$ Additionally, as in the proof of Theorem \[thmdensitymd\], since $1/4\sigma^2_w<1/(2v_{max}T)$, $${{\mathbb E}}\left[\int_{{{\mathbb R}}}{{\rm e}}^{(x-\mu_w)^2/4\sigma_w^2}G_T(x)\,dx\right]<\infty,$$ where $G_T(x)$ is given in . This implies $$\begin{aligned}
{{\mathbb E}}&\left[\int_{R}|H_n(x)|G_T(x)\,dx\right]\\
&\quad={{\mathbb E}}\left[\int_{R}|H_n(x)|{{\rm e}}^{-(x-\mu_w)^2/4\sigma_w^2}{{\rm e}}^{(x-\mu_w)^2/4\sigma_w^2}G_T(x)\,dx\right]\\
&\quad\leq K {{\mathbb E}}\left[ \int_R{{\rm e}}^{(x-\mu_w)^2/4\sigma_w^2}G_T(x)\, dx\right]<\infty.
\end{aligned}$$ We can therefore use Fubini’s theorem to deduce $$\label{eq:lnexplicit}
\ell_n=\int_{{{\mathbb R}}}H_n(x)g_T(x)\,dx
={{\mathbb E}}\left[\int_{{{\mathbb R}}}H_n(x)G_T(x)\,dx\right]={{\mathbb E}}[Y_n].$$
We now analyze the term inside the expectation in . A change of variables shows $$Y_n=\int_{R}H_n(x)G_T(x)\,dx=(2\pi n!)^{-1/2}\int_{{{\mathbb R}}}{{\mathcal H}}_n(\alpha y+\beta){{\rm e}}^{-y^2/2}\,dy,$$ where we define $\alpha=\frac{\sqrt{C_T}}{\sigma_w}$ and $\beta=\frac{M_T-\mu_w}{\sigma_w}$. We recall that $$\label{E:boundsC}
0<(1-\rho^2)v_{min}T\leq C_T\leq v_{max}T<\sigma_w^2.$$ The inequalities in together with the fact that $V_t$ is a bounded process yield the following uniform bounds for $\alpha,\beta$, $$\label{E:alphabounds}
1-q=\frac{(1-\rho^2)v_{min}T}{\sigma_w^2}\leq \alpha^2\leq v_{max}T/\sigma_w^2<1,\quad|\beta|\leq R,$$ with constants $0<q<1$ and $R>0$. Define $$x_n=(2\pi)^{-1/2}\int_{{{\mathbb R}}}{{\mathcal H}}_n(\alpha y+\beta){{\rm e}}^{-y^2/2}\,dy,$$ so that $$Y_n=\int_{R}H_n(x)G_T(x)\,dx=(n!)^{-1/2}x_n.$$ An integration by parts argument using and the identity $${{\mathcal H}}'_n(x)=n{{\mathcal H}}_{n-1}(x)$$ shows the following recursion formula $$x_n=\beta x_{n-1}-(n-1)(1-\alpha^2)x_{n-2},$$ with $x_0=1$ and $x_1=\beta$. This recursion formula is closely related to the recursion formula of the Hermite polynomials which helps us deduce the following explicit expression $$\label{eq:lnexplicit3}
x_n=n!\sum_{m=0}^{\lfloor n/2\rfloor}\frac{(\alpha^2-1)^m}{m!(n-2m)!}\frac{\beta^{n-2m}}{2^m}.$$ Recall that $$\label{eq:lnexplicit3.5}
{{\mathcal H}}_n(x)=n!\sum_{m=0}^{\lfloor n/2\rfloor}\frac{(-1)^m}{m!(n-2m)!}\frac{x^{n-2m}}{2^m}.$$ By and we have $$\begin{aligned}
x_n & =n!(1-\alpha^2)^{\frac n2}\sum_{m=0}^{\lfloor n/2\rfloor}\frac{(-1)^m}{m!(n-2m)!}\frac{((1-\alpha^2)^{-\frac12}\beta)^{n-2m}}{2^m} \\
& = (1-\alpha^2)^{\frac n2}{{\mathcal H}}_n\left((1-\alpha^2)^{-\frac12}\beta\right)\end{aligned}$$ and $$\ell_n={{\mathbb E}}\left[(1-\alpha^2)^{\frac n2}n!^{-\frac12}{{\mathcal H}}_n\left((1-\alpha^2)^{-\frac12}\beta\right)\right].$$ Cauchy-Schwarz inequality and yield $$\label{E:estimateelln}
\begin{split}
\ell_n^2 & \leq {{\mathbb E}}\left[\left(n!^{-\frac 12}{{\mathcal H}}_n\big((1-\alpha^2)^{-\frac12}\beta\big)\right)^2\right]{{\mathbb E}}\left[(1-\alpha^2)^n\right] \\
& \leq K^2{{\mathbb E}}\left[\exp\Big(\beta^2/\big(2(1-\alpha^2)\big)\Big)\right]{{\mathbb E}}\left[(1-\alpha^2)^n\right].
\end{split}$$ Inequalities and imply the existence of constants $C>0$ and $0<q<1$ such that $\ell_n^2\leq Cq^n$.
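As a sanity check, the recursion for $x_n$ and its closed-form expression in terms of Hermite polynomials can be compared numerically. The following sketch (illustrative only, not part of the proof) uses NumPy's `hermite_e` basis, which implements the probabilist Hermite polynomials with the same normalization as ${{\mathcal H}}_n$ above:

```python
# Numerical check of the recursion
#   x_n = beta*x_{n-1} - (n-1)*(1-alpha^2)*x_{n-2},  x_0 = 1, x_1 = beta,
# against the closed form x_n = (1-alpha^2)^{n/2} * He_n(beta/sqrt(1-alpha^2)),
# where He_n is the probabilist Hermite polynomial (NumPy's hermite_e basis).
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def x_recursive(n, alpha, beta):
    x = [1.0, beta]
    for k in range(2, n + 1):
        x.append(beta * x[k-1] - (k-1) * (1 - alpha**2) * x[k-2])
    return x[n]

def x_closed(n, alpha, beta):
    s = np.sqrt(1 - alpha**2)
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # selects He_n in the hermite_e basis
    return s**n * hermeval(beta / s, coeffs)
```

Both functions agree to machine precision for any $|\alpha|<1$, consistent with the derivation above.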
Proof of Theorem \[thmdensitymd\] {#proof-of-theoremthmdensitymd .unnumbered}
---------------------------------
In order to shorten the notation we write $\Delta Z_{t_i}=Z_{t_i}-Z_{t_{i-1}}$ for any process $Z_t$. From we infer that the log price $X_t= M_t + \int_0^t \sqrt{V_s -\rho^2 Q(V_s)}\,dW_{2s}$ where $M_t$ is defined in . In particular the log returns $Y_{t_i}=\Delta X_{t_i}$ have the form $$Y_{t_i}=\Delta M_{t_i}+ \int_{t_{i-1}}^{t_i} \sqrt{V_s -\rho^2 Q(V_s)}\,dW_{2s}.$$ In view of property we infer that $\Delta C_{t_i}>0$ for $i=1,\ldots,d$. Motivated by @BroadieKaya2006, we notice that, conditional on $\{ V_{t},\, t\in [0,T]\}$, the random variable $(Y_{t_1},\ldots,Y_{t_d})$ is Gaussian with mean vector $(\Delta M_{t_1},\ldots,\Delta M_{t_d})$ and covariance matrix ${\rm diag}(\Delta C_{t_1},\ldots,\Delta C_{t_d})$. Its density $G_{t_1,\ldots,t_d}(y)$ has the form $$G_{t_1,\ldots,t_d}(y) =(2\pi)^{-d/2}\prod_{i=1}^d (\Delta C_{t_i})^{-1/2}\exp\left[-\sum_{i=1}^d \frac{ (y_i-\Delta M_{t_i})^2}{2\Delta C_{t_i}}\right].$$ Fubini’s theorem implies that $g_{t_1,\ldots,t_d}(y) = {{\mathbb E}}[ G_{t_1,\ldots,t_d}(y)]$ is measurable and satisfies, for any bounded measurable function $f(y)$, $${{\mathbb E}}\left[ f(Y_{t_1},\ldots,Y_{t_d})\right] ={{\mathbb E}}\left[ \int_{{{\mathbb R}}^d} f(y)G_{t_1,\ldots,t_d}(y) \,dy\right] = \int_{{{\mathbb R}}^d} f(y)g_{t_1,\ldots,t_d}(y) \,dy.$$ Hence the distribution of $(Y_{t_1},\ldots,Y_{t_d})$ admits the density $g_{t_1,\ldots,t_d}(y)$ on ${{\mathbb R}}^d$. Dominated convergence implies that $g_{t_1,\ldots,t_d}(y)$ is uniformly bounded and $k$–times continuously differentiable on ${{\mathbb R}}^d$ if holds. The arguments so far do not depend on $\epsilon_i$ and also apply to the Heston model, which proves Remark \[remHESTON\].
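The representation $g_{t_1,\ldots,t_d}(y)={{\mathbb E}}[G_{t_1,\ldots,t_d}(y)]$ expresses the return density as a mixture of Gaussian densities over variance paths, which suggests a direct Monte Carlo approximation. A minimal sketch for a single date ($d=1$), using a toy bounded variance path in place of the actual Jacobi dynamics (which are not reproduced here); only the mixture structure comes from the text:

```python
# Monte Carlo sketch (illustrative only) of g(y) = E[G(y)]:
# average the conditional Gaussian densities over simulated variance paths.
# The variance path below is a clamped random walk standing in for the
# Jacobi dynamics; the conditional mean is a toy martingale drift.
import numpy as np

rng = np.random.default_rng(0)

def mixture_density(ys, n_paths=4000, T=1.0, n_steps=100, vmin=0.04, vmax=0.25):
    dt = T / n_steps
    # toy variance paths, clamped to [vmin, vmax] (stand-in dynamics)
    dv = 0.05 * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    v = np.clip(0.1 + np.cumsum(dv, axis=1), vmin, vmax)
    C = v.sum(axis=1) * dt          # integrated variance C_T, one per path
    M = -0.5 * C                    # toy conditional mean (martingale drift)
    y = np.asarray(ys)[:, None]
    # conditional Gaussian density G(y) for each path, then average
    G = np.exp(-(y - M) ** 2 / (2 * C)) / np.sqrt(2 * np.pi * C)
    return G.mean(axis=1)
```

Because each conditional density integrates to one, the Monte Carlo average is itself a probability density regardless of the number of paths.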
For the rest of the proof we assume, without loss of generality, that $\epsilon_i>0$ for $i=1,\ldots,d$. Observe that the mean vector and covariance matrix of $G_{t_1,\ldots,t_d}(y)$ admit the uniform bounds $$|\Delta M_{t_i}| \le K,\quad {|\Delta C_{t_i}|}\le v_{max}(t_i-t_{i-1}),$$ for some finite constant $K$. Define $\Delta_i=1-2\epsilon_i {\Delta C_{t_i}}$ and $\delta_i=1-2\epsilon_i v_{max}(t_i-t_{i-1})$. Then $\delta_i\in (0,1)$ and $\Delta_i\geq \delta_i$. Completing the square implies $$\begin{aligned}
&{{\rm e}}^{\sum_{i=1}^d \epsilon_i y_i^2 } G_{t_1,\ldots,t_d}(y) =\prod_{i=1}^d (2\pi\Delta C_{t_i})^{-\frac12}\exp\left[\epsilon_i y_i^2-\frac{ (y_i-\Delta M_{t_i})^2}{2{\Delta C_{t_i}}}\right]\notag \\
& \quad =\prod_{i=1}^d (2\pi\Delta C_{t_i})^{-\frac12} \exp\left[ - \frac{\Delta_i}{2{\Delta C_{t_i}}}\left( y_i-\frac{\Delta M_{t_i}}{\Delta_i}\right)^2 + \frac{\Delta M_{t_i}^2}{2\Delta C_{t_i}}\left(\frac{1}{\Delta_i}-1\right)\right] \notag\\
& \quad =\prod_{i=1}^d (2\pi\Delta C_{t_i})^{-\frac12} \exp\left[ - \frac{\Delta_i}{2\Delta C_{t_i}}\left( y_i-\frac{\Delta M_{t_i}}{\Delta_i}\right)^2 + \frac{\epsilon_i \Delta M_{t_i}^2}{\Delta_i}\right].\label{ex2GTx}
\end{aligned}$$ Integration of then gives $$\int_{{{\mathbb R}}^d} {{\rm e}}^{\sum_{i=1}^d \epsilon_i y_i^2 } G_{t_1,\ldots,t_d}(y) \,dy = \prod_{i=1}^d \frac{1}{\sqrt{\Delta_i}} \exp\left[\frac{\epsilon_i \Delta M_{t_i}^2}{\Delta_i}\right]\leq \prod_{i=1}^d \frac{1}{\sqrt{\delta_i }} \exp\left[\frac{\epsilon_i K^2}{\delta_i }\right] .$$ Hence follows by Fubini’s theorem after taking expectation on both sides. We also derive from that $$\begin{aligned}
e^{\sum_{i=1}^d \epsilon_i y_i^2 } g_{t_1,\ldots,t_d}(y)&={{\mathbb E}}\left[{{\rm e}}^{\sum_{i=1}^d \epsilon_i y_i^2 } G_{t_1,\ldots,t_d}(y)\right]\\
&\leq {{\mathbb E}}\left[ \prod_{i=1}^d (2\pi\Delta C_{t_i})^{-\frac12}\right]\prod_{i=1}^d \exp\left[\frac{\epsilon_i K^2}{\delta_i }\right].\end{aligned}$$ Hence ${{\rm e}}^{\sum_{i=1}^d \epsilon_i y_i^2 } g_{t_1,\ldots,t_d}(y)$ is uniformly bounded and continuous on ${{\mathbb R}}^d$ if holds. In fact, for this to hold it is enough to suppose that holds with $k=0$. Moreover, implies that $\Delta C_{t_i}\ge (t_i-t_{i-1})(1-\rho^2) v_{min}>0$ and follows.
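The single-factor completing-the-square identity used in the display above can be verified symbolically; the following check (illustrative) confirms, with $\Delta=1-2\epsilon c$, that $\epsilon y^2-\frac{(y-m)^2}{2c}=-\frac{\Delta}{2c}\left(y-\frac{m}{\Delta}\right)^2+\frac{\epsilon m^2}{\Delta}$:

```python
# Symbolic check (illustrative) of the completing-the-square identity:
#   eps*y^2 - (y-m)^2/(2c) = -(D/(2c))*(y - m/D)^2 + eps*m^2/D,  D = 1 - 2*eps*c
import sympy as sp

y, m, c, eps = sp.symbols('y m c epsilon')
Delta = 1 - 2 * eps * c
lhs = eps * y**2 - (y - m)**2 / (2 * c)
rhs = -(Delta / (2 * c)) * (y - m / Delta)**2 + eps * m**2 / Delta
assert sp.simplify(lhs - rhs) == 0
```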
Proof of Theorem \[thm:IVEx\] {#proof-of-theoremthmivex .unnumbered}
-----------------------------
We assume the Brownian motions $B_t$ and $(W_{1t},W_{2t})$ in and are independent. We denote by $\pi_{f,t}$ the time-$t$ price of the exotic option in the Jacobi model.
For any $t_{i-1}\le t<t_i$ and given a realization $X_{t_1},\dots,X_{t_{i-1}}$, the time-$t$ Black–Scholes price of the option is a function $\pi^{\sigma_{\rm BS}}_f(t,S_t)$ of $t$ and the spot price $S_t$ defined by $$\begin{aligned}
{{\rm e}}^{-rt} \pi^{\sigma_{\rm BS}}_f(t,s)&={{\mathbb E}}\big[ f\left(X_{t_1},\dots,X_{t_{i-1}},\log S^{{\rm BS}}_{t_i},\dots,\log S^{{\rm BS}}_{t_d}\right) \bigm| {{\mathcal F}}_t,\, S^{{\rm BS}}_t=s\big] \\
&= {{\mathbb E}}\Big[ f\Big(X_{t_1},\dots,X_{t_{i-1}},\log \left( s R^{\rm BS}_{t,t_i}\right),\dots,\log \left( s R^{\rm BS}_{t,t_d}\right)\Big) \Bigm| {{\mathcal F}}_t\Big]
\end{aligned}$$ where we write $$R^{\rm BS}_{t,t_i}={{\rm e}}^{\left(r-\delta-\frac{1}{2}\sigma_{\rm BS}^2\right)(t_i-t)+\sigma_{\rm BS}\left(B_{t_i}-B_t\right)}.$$ By assumption, we infer that $\pi^{\sigma_{\rm BS}}_f(t,s)$ is convex in $s>0$. Moreover, $\pi_{f}^{\sigma_{\rm BS}}(t,s)$ satisfies the following PDE $$\label{BSPDE}
r \pi_{f}^{{\sigma_{\rm BS}}}(t,s) = \frac{\partial \pi_{f}^{{\sigma_{\rm BS}}}(t,s)}{\partial t} + (r-\delta) s \frac{\partial \pi_{f}^{{\sigma_{\rm BS}}}(t,s)}{\partial s} + \frac{1}{2} \sigma_{\rm BS}^2 s^2\frac{\partial^2 \pi_{f}^{{\sigma_{\rm BS}}}(t,s)}{\partial s^2}$$ and has terminal value satisfying $\pi_{f}^{{\sigma_{\rm BS}}}(T,S_T)=\pi_{f,T}$. Write $$\begin{aligned}
& \pi_{f,t}^{\sigma_{\rm BS}} = \pi^{\sigma_{\rm BS}}_f(t,S_t),\quad \Theta^{\sigma_{\rm BS}}_{f,t}= - \frac{\partial \pi_{f}^{{\sigma_{\rm BS}}}(t,S_t)}{\partial t}, \\
& \Delta_{f,t}^{\sigma_{\rm BS}} = \frac{\partial \pi_{f}^{{\sigma_{\rm BS}}}(t,S_t)}{\partial s},\quad \Gamma^{\sigma_{\rm BS}}_{f,t}= \frac{\partial^2 \pi_{f}^{{\sigma_{\rm BS}}}(t,S_t)}{\partial s^2} \end{aligned}$$ and $dN_t =\rho\, \sqrt{Q(V_t)}\,dW_{1t} + \sqrt{V_t -\rho^2 \,Q(V_t)}\,dW_{2t} $ for the martingale driving the asset return in such that, using , $$\begin{aligned}
d(e^{-rt}\pi_{f,t}^{{\sigma_{\rm BS}}}) &= {{\rm e}}^{-rt} \left( -r \pi_{f,t}^{{\sigma_{\rm BS}}} -\Theta^{\sigma_{\rm BS}}_{f,t} +(r-\delta) S_t \Delta^{\sigma_{\rm BS}}_{f,t} +\frac{1}{2} V_t S_t^2 \Gamma^{\sigma_{\rm BS}}_{f,t}\right)dt \\
& \quad + {{\rm e}}^{-rt} \Delta_{f,t}^{{\sigma_{\rm BS}}} S_t \,dN_t\\
&= \frac{1}{2} {{\rm e}}^{-rt} (V_t-\sigma_{\rm BS}^2) S_t^2 \Gamma^{\sigma_{\rm BS}}_{f,t}\,dt + {{\rm e}}^{-rt}\Delta_{f,t}^{{\sigma_{\rm BS}}} S_t \,dN_t.\end{aligned}$$
Consider the self-financing portfolio with zero initial value, long one unit of the exotic option, and short $\Delta_{f,t}^{{\sigma_{\rm BS}}} $ units of the underlying asset. Let $\Pi_t$ denote the time-$t$ value of this portfolio. Its discounted value dynamics then satisfy
$$\begin{aligned}
d(e^{-rt}\Pi_t) &= d(e^{-rt}\pi_{f,t}) - \Delta_{f,t}^{{\sigma_{\rm BS}}} \left( d(e^{-rt}S_t) + {{\rm e}}^{-rt}S_t\delta\,dt\right)\\
&= d(e^{-rt}\pi_{f,t}) - \Delta_{f,t}^{{\sigma_{\rm BS}}}e^{-rt} S_t\,dN_t \\
&= d(e^{-rt}\pi_{f,t}) - d(e^{-rt}\pi_{f,t}^{{\sigma_{\rm BS}}}) + \frac{1}{2}e^{-rt}(V_t-\sigma_{\rm BS}^2)S_t^2\Gamma_{f,t}^{{\sigma_{\rm BS}}}\,dt.\end{aligned}$$
Integrating in $t$ gives $$\label{E:BShedging}
{{\rm e}}^{-rT} \Pi_T = -\pi_{f,0} + \pi_{f,0}^{{\sigma_{\rm BS}}} + \frac{1}{2}\int_0^Te^{-rt}(V_t-\sigma_{\rm BS}^2)S_t^2\Gamma_{f,t}^{{\sigma_{\rm BS}}}\,dt$$ as $\pi_{f,T} - \pi_{f,T}^{{\sigma_{\rm BS}}}=0$.
We now claim that the time-$0$ option price $\pi_{f,0}=\pi_f$ lies between the Black–Scholes option prices for $\sigma_{\rm BS}=\sqrt{v_{min}}$ and $\sigma_{\rm BS}=\sqrt{v_{max}}$, $$\label{claimD}
\pi^{\sqrt{v_{min}}}_{f,0} \le \pi_{f} \le \pi^{\sqrt{v_{max}}}_{f,0}.$$ Indeed, let $\sigma_{\rm BS}=\sqrt{v_{min}}$. Because $\Gamma_{f,t}^{{\sigma_{\rm BS}}}\ge 0$ by assumption, it follows from that ${{\rm e}}^{-rT} \Pi_T\ge - \pi_{f,0}+\pi_{f,0}^{\sqrt{v_{min}}}$. Since the portfolio has zero initial value, absence of arbitrage implies that this deterministic lower bound cannot be strictly positive, hence $ - \pi_{f,0}+\pi_{f,0}^{\sqrt{v_{min}}}\le 0$. This proves the left inequality in . The right inequality follows similarly, whence the claim is proved.
A similar argument shows that the Black–Scholes price $\pi^{\sigma_{\rm BS}}_{f,0}$ is non-decreasing in $\sigma_{{\rm BS}}$, whence $\sqrt{v_{min}}\le \sigma_{{\rm IV}}\le \sqrt{v_{max}}$, and the theorem is proved.
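The two facts used here, that the Black–Scholes price is non-decreasing in $\sigma_{\rm BS}$ and that the option price is bracketed by $\pi^{\sqrt{v_{min}}}_{f,0}$ and $\pi^{\sqrt{v_{max}}}_{f,0}$, make the implied volatility well defined and computable by bisection on $[\sqrt{v_{min}},\sqrt{v_{max}}]$. A minimal numerical sketch for a European call (an illustrative payoff satisfying the convexity assumption; the helpers `bs_call` and `implied_vol` are not from the text):

```python
# Implied volatility by bisection (illustrative sketch).
import math

def bs_call(s, k, r, delta, sigma, T):
    """Black-Scholes call price with dividend yield delta."""
    d1 = (math.log(s / k) + (r - delta + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s * math.exp(-delta * T) * N(d1) - k * math.exp(-r * T) * N(d2)

def implied_vol(price, s, k, r, delta, T, lo, hi, tol=1e-10):
    """Bisection on [lo, hi]; valid because the price increases in sigma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(s, k, r, delta, mid, T) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With the bounds of the theorem one would take `lo = sqrt(v_min)` and `hi = sqrt(v_max)`; the bracketing inequalities guarantee the bisection interval contains the root.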
----- ------- ------- ------- ------- ------- -------
$N$ IV error IV error IV error
0–2 20.13 2.62 20.09 0.86 20.08 0.83
3 22.12 0.63 19.96 0.73 16.60 2.65
4 23.02 0.27 19.27 0.04 18.88 0.37
5 23.03 0.28 19.27 0.04 18.88 0.37
6 22.93 0.18 19.33 0.10 18.72 0.53
7 22.76 0.01 19.32 0.09 19.11 0.14
8 22.83 0.08 19.22 0.01 19.18 0.07
9 22.82 0.07 19.22 0.01 19.19 0.06
10 22.83 0.08 19.25 0.02 19.22 0.03
15 22.74 0.01 19.23 0.00 19.32 0.07
20 22.75 0.00 19.23 0.00 19.28 0.03
30 22.75 0.00 19.23 0.00 19.25 0.00
----- ------- ------- ------- ------- ------- -------
: Implied volatility values and absolute errors in percentage points for European call option price approximations at various truncation orders $N$ and log strikes $k$. []{data-label="tab:conv"}
![Variance and correlation.\
The quadratic variation of the Jacobi model (black line) and of the Heston model (gray line) are displayed in the left panel as a function of the instantaneous variance. The right panel displays the instantaneous correlation between the processes $X_t$ and $V_t$ as a function of the instantaneous variance. We denote $v_* = \sqrt{v_{min}v_{max}}$ and assume that $\rho<0$.[]{data-label="fig:varcor"}](varcor.pdf)
![European call option.\
Hermite moments $\ell_n$, Fourier coefficients $f_n$, and approximation prices $\pi_f^{(N)}$ with error bounds as functions of the order $n$ (truncation order $N$).[]{data-label="fig:fnlnpin"}](call.pdf)
![Implied volatility smile: from Heston to Black–Scholes.\
The first row displays the variance process’ diffusion function in the Jacobi model (black line) and in the Heston model (gray line). The second row displays the implied volatility as a function of the log strike $k$ in the Jacobi model (black line) and in the Heston model (gray line). []{data-label="fig:smile"}](smiles.pdf)
![Computational performance.\
The left panel displays the computing time to derive the Hermite moments $\ell_n$ (black line) and the matrix $G$ (gray line) as functions of the order $n$. The right panel displays the same relation for the Fourier coefficients $f_n$ (black line). []{data-label="fig:cputimes"}](cputimes.pdf)
![Forward start and Asian options.\
The left panels display the approximation prices as functions of the truncation order $N$. The right panels display the corresponding Hermite moments for multi-orders $n_1+\cdots+n_d=1,\dots,10$. []{data-label="fig:fwdasian"}](fwdasian.pdf)
[^1]: Swissquote Bank, Gland, Switzerland. E-mail: <damien.ackerer@swissquote.ch>
[^2]: EPFL and Swiss Finance Institute, Lausanne, Switzerland. E-mail: <damir.filipovic@epfl.ch>
[^3]: Laboratoire de Mathématiques et Modélisation d’Évry (LaMME), Université d’Évry-Val-d’Essonne, ENSIIE, Université Paris-Saclay, Évry, France. E-mail: <sergio.pulidonino@ensiie.fr>
[^4]: We thank the participants at the 2014 Stochastic Analysis in Finance and Insurance Conference in Oberwolfach, the 2015 AMaMeF and Swissquote Conference in Lausanne, the 2016 ICMS Workshop in Edinburgh, and the seminar at Mannheim Mathematics Department, as well as Stefano De Marco, Julien Hugonnier, Wahid Khosrawi-Sardroudi, Martin Larsson, and Peter Tankov for their comments. We thank an anonymous referee, an anonymous associate editor, and Chris Rogers (co-editor) for their careful reading of the manuscript and suggestions. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) ERC Grant Agreement n. 307465-POLYTE. The research of Sergio Pulido benefited from the support of the Chair Markets in Transition (Fédération Bancaire Française) and the project ANR 11-LABX-0019.
[^5]: A Gram–Charlier A series expansion of a density function $g(x)$ is formally defined as $g(x)=\sum_{n\ge 0} c_n H_n(x) w(x)$ for some real numbers $c_n$, $n\ge0$.
[^6]: Similar recursive relations of the Fourier coefficients for the physicist Hermite polynomial basis can be found in @drimus2013closed. The physicist Hermite polynomial basis is the orthogonal polynomial basis of the $L^2_w$ space equipped with the weight function $w(x)={{\rm e}}^{-x^2}$ so that ${(H_n,H_n)_w}=\sqrt{2\pi}2^nn!$.
[^7]: In practice, depending on the model parameters, this may not always be feasible, in which case the truncation order $N$ should be increased.
[^8]: The derivation of the tightness of $(V_t^{(n)},X_t^{(n)})$ from is also stated without proof in @rogers2000diffusions [Theorem II.85.5]. For the sake of completeness we give a short self-contained argument here.
---
abstract: 'We present a conjecture in Diophantine geometry concerning the construction of line bundles over smooth projective varieties over ${{\overline {\mathbb Q}}}.$ This conjecture, closely related to the Grothendieck Period Conjecture for cycles of codimension 1, is also motivated by classical algebraization results in analytic and formal geometry and in transcendence theory. Its formulation involves the consideration of $D$-group schemes attached to abelian schemes over algebraic curves over ${{\overline {\mathbb Q}}}.$ We also derive the Grothendieck Period Conjecture for cycles of codimension 1 in abelian varieties over ${{\overline {\mathbb Q}}}$ from a classical transcendence theorem *à la* Schneider-Lang.'
address: 'J.-B. Bost, D[é]{}partement de Math[é]{}matiques, Universit[é]{} Paris-Sud, B[â]{}timent 425, 91405 Orsay cedex, France'
author:
- 'Jean-Benoît Bost'
title: 'Algebraization, Transcendence, and $D$-group schemes'
---
Foreword
========
My aim, in this largely expository article, is to present a conjecture in Diophantine geometry, concerning the construction of line bundles over smooth projective varieties over ${{\overline {\mathbb Q}}}.$ This conjecture is motivated by the classical Grothendieck Period Conjecture ([[*cf.* ]{}]{}Section \[GPC\]) and by the philosophy, already advocated in diverse places (see for instance [@Bost01], [@Chambert01], [@BostChambert-Loir07], [@Gasbarri10]), that various results in Diophantine approximation and transcendence theory are arithmetic counterparts, valid in varieties over number fields, or rather in their models of finite type over $\Z$, of geometric algebraicity criteria, concerning formal objects inside algebraic varieties over some (algebraically closed) field $k$.
Most of the presently known results in transcendence appear actually to be analogues of geometric algebraicity criteria concerning germs $\widehat{V}$ of formal subvarieties along a projective subvariety $Y$ of some ambient variety $X$ over $k$ — by such a $\widehat{V}$, we mean a smooth formal subscheme $\widehat{V}$ of the completion $\widehat{X}_{Y}$ admitting $Y$ as scheme of definition. (Any such $\widehat{V}$ may be written as the limit $$\widehat{V} = \lim_{\stackrel{{\longrightarrow}}{i}} V_{i}$$ of the successive infinitesimal neighbourhoods $V_{i},$ $i \in \N$, of $Y$ in $\widehat{V}$, which are closed subschemes of $X$, of support $\vert V_{i}\vert = Y.$ ) These criteria assert that, if $Y$ is smooth, *of dimension at least one*, and if the normal bundle $N_{Y}\widehat{V}$ of $Y$ in $\widehat{V}$ satisfies some suitable positivity condition, then $\widehat{V}$ is algebraic — roughly speaking, this means that $\widehat{V}$ is a “branch” along $Y$ of some subvariety $W$ of $X$ containing $Y$.
When the base field $k$ is the field $\C$ of complex numbers, that kind of result may be stated in the following terms, which avoid an explicit appeal to formal geometry and so may look more familiar. In the situation when $k =\C,$ any germ of $\C$-analytic submanifold ${{\mathcal V}}$ of $X$ along $Y$ defines a smooth formal germ $\widehat{V}:= \widehat{{{\mathcal V}}}_{Y}$ along $Y$ (namely, the limit $\lim_{i} {{\mathcal V}}_{i}$ of the successive infinitesimal neighbourhood of $Y$ inside ${{\mathcal V}}$; these are projective analytic subspaces in $X,$ which may be identified to projective subschemes over $\C$). Then the above-mentioned algebraicity criteria assert that, *when the normal bundle of $Y$ in ${{\mathcal V}}$* satisfies a suitable positivity condition, for instance when it *is ample, then ${{\mathcal V}}$ is contained in some algebraic subvariety $W$ of $X$ of the same* (complex) *dimension as* ${{\mathcal V}}$. That type of geometric result goes back to Andreotti [@Andreotti63].
In transcendence theory, one deals with algebraicity criteria concerning smooth formal germs of subvarieties $\widehat{V}$ through some $K$-rational point $P$ in a variety $X$ over a number field $K$. According to a viewpoint that goes back to Kronecker, it is appropriate to consider a model ${{\mathcal X}}$ of $X$ of finite type over the ring of integers ${{\mathcal O}_K}$ of $K$ (hence over $\Z$), in which $P$ extends to a point ${{\mathcal P}}$ in ${{\mathcal X}}({{\mathcal O}_K})$. The algebraicity criteria established in transcendence turn out to deal with a formal germ in the completion $\widehat{{{\mathcal X}}}_{{{\mathcal P}}}$ along the “arithmetic curve” ${{\mathcal P}}\simeq {{\rm Spec\, }}{{\mathcal O}_K}.$ In this Kroneckerian perspective, transcendence results are indeed algebraicity criteria concerning formal germs along *curves*, analogous to the geometric algebraicity criteria *à la* Andreotti.
It turns out that, in the context of analytic and formal geometry, algebraicity criteria have been established that concern, not only subvarieties, but also coherent sheaves (for example, line bundles or vector bundles), notably by Grothendieck ([@GrothendieckFGA], [@GrothendieckSGA2]) in the context of formal geometry. In their most basic geometric version, for instance, the algebraization results in [@GrothendieckSGA2] (also presented in [@Hartshorne70]) deal with germs of formal (or analytic) vector bundles along suitable ample projective subvarieties $Y$ of some algebraic variety $X$ over some base field $k$. Their validity requires $Y$ to be *of dimension at least two*. The Kroneckerian viewpoint mentioned above — in which the arithmetic counterpart of a surface over some base field is an “arithmetic surface”, that is an integral model of a curve over a number field — leads one to expect that one could formulate, and possibly establish, some significant arithmetic algebraization criterion, concerning *formal line or vector bundles over the completion $\widehat{X}_{Y}$ of some algebraic variety $X$ over a number field along some projective curve $Y$.*
In this article, I present a conjectural transcendence statement of this kind (Conjecture \[Main\] *infra*), the validity of which would actually imply some new cases of the classical Grothendieck Period Conjecture.
An interesting feature of this conjectural statement is that it introduces differential algebraic groups in a classical Diophantine context, concerning algebraic varieties over number fields. Recall that the role of differential algebra in Diophantine geometry over function fields is well established since the work of Manin on algebraic curves over function fields, culminating with his proof of the geometric Mordell conjecture ([@Manin58], [@Manin61], [@Manin63]), and has more recently considerably expanded, in a series of works initiated by the contributions of Buium ([@Buium92Annals], [@Buium93], [@Buium93eff], [@BuiumVoloch93]) and Hrushovski ([@Hrushovski96]), which make conspicuous the role of differential algebraic groups in the Diophantine geometry of abelian varieties over function fields[^1]. The occurrence of nonlinear differential algebraic groups over curves over number fields in Conjecture \[Main\], which reflects the two-dimensional nature of the problem at hand, has appeared to me worthy of attention, and I took the opportunity of the Oléron conference to present it to experts in model theory and differential algebra gathered at the occasion of Anand Pillay’s 60th birthday.
Actually, although the content of this work has presently no explicit link with model theory, it turns out to involve several of the mathematical themes so successfully explored by Anand Pillay during the recent years, notably the interplay between the analytic geometry of compact complex manifolds and algebraic geometry, and the study of algebraic $D$-groups, especially in relation to abelian varieties and their universal vector extensions. This article is dedicated to him, as a token of appreciation and confidence in his mathematical vision.
This paper, like my oral presentation in Oléron, is to a large extent expository: I seriously attempted to discuss the classical facts relevant to the formulation of Conjecture \[Main\] in a form accessible to mathematicians of diverse backgrounds (with possibly a limited success, notably in the last sections of this article). Especially I tried to avoid any real knowledge of formal geometry, by putting forward the analytic variants of diverse results usually formulated in terms of formal geometry, or by translating statements in formal geometry into equivalent statements involving systems of successive thickenings, to stay in the realm of algebraic geometry. I also tried to present various themes from some unconventional point of view, for instance in emphasizing the role of moduli spaces of vector bundles with integrable connections.
However, besides Conjecture \[Main\] itself, I also included some original content, notably in Part 5 a proof of the Grothendieck Period Conjecture in codimension 1 for abelian varieties. Readers interested in this result may only read Parts 4 and 5, independently of the rest of the article.
I heartily thank Daniel Bertrand for generously sharing his insights of transcendence theory and differential algebraic groups over the years, and for helpful remarks on a preliminary version. I am grateful to the referee for useful comments and to J.P. Serre for his remarks on Section 2.1. I also thank Zoé Chatzidakis for her gentle insistence that I transform my oral presentation in Oléron into some written contribution, and the Centro di Ricerca Matematica Ennio di Giorgi (Pisa) for its hospitality during the completion of this article.
During the preparation of this article, the author has been partially supported by the ANR project [MODIG]{}[^2] and by the Institut Universitaire de France.
Algebraization of analytic objects I {#sec:AlgAna}
====================================
Algebraization of compact Riemann surfaces and of projective analytic sets {#PRChow}
--------------------------------------------------------------------------
Algebraization of analytic objects (such as varieties and their morphisms, vector bundles, coherent sheaves, etc.) is a central theme in the development of algebraic and analytic geometry at least since the 1830s. Already recognizable in the pioneering work of Abel and Jacobi on elliptic functions and elliptic curves, it appears in a form familiar to modern mathematicians in the work of Puiseux and Riemann.
For instance, in the first part of his memoir on abelian functions [@Riemann57IV] — devoted to a systematic study of what today would be called “compact Riemann surfaces realized as a finite covering of the projective complex line $\PP^1(\C)$” — Riemann establishes the *algebraicity* of any pair $(C,\nu)$ where $C$ is *a compact connected Riemann surface* and $\nu : C \longrightarrow \PP^1(\C)$ *a ramified analytic covering* (or equivalently, a nonconstant $\C$-analytic map).
Namely, he proves that, for any such pair $(C, \nu)$, there exists an irreducible polynomial $P$ in $\C[X,Y]$ (of positive degree in $Y$), and an isomorphism from $C$ to the compact Riemann surface associated to the plane algebraic curve of equation $P(X,Y) = 0$ such that, through this isomorphism, the map $\nu$ (seen as a meromorphic function on $C$) gets identified with the meromorphic function defined by the first coordinate $X$. To achieve this, Riemann constructs a suitable meromorphic function on $C$ (which ultimately will become the second coordinate $Y$) by appealing to the Dirichlet principle.
An important step in the development of algebraization theorems has been the theorem of Chow ([@Chow49]), which asserts that *any closed $\C$-analytic subset $X$ of the projective space $\PP^N(\C)$ is algebraic.* In other words, there exists a finite family $(P_{\alpha})_{1\leq \alpha \leq A}$ of homogeneous polynomials in $\C[X_{0},\cdots,X_{N}]$ such that, for any point $(x_{0}:\cdots:x_{N})$ in $\PP^N(\C),$ $$(x_{0}:\cdots:x_{N}) \in X \Longleftrightarrow \mbox{for $\alpha = 1,\ldots, A,$ } P_{\alpha}(x_{0},\ldots,x_{N})= 0.$$
The statement of Chow’s theorem clearly did not come as a surprise at the time of the publication of [@Chow49] (see for instance H. Cartan’s summary of [@Chow49] in *Mathematical Reviews*). A significant point in [@Chow49] is the formal rigour of its proofs — based on some algebraicity criterion formulated in terms of intersections numbers with algebraic subvarieties of $\PP^N(\C)$ — which links the theme of algebraization of analytic objects to the development of rigorous foundations for algebraic topology and geometry, in the line of earlier works by Lefschetz, van der Waerden, and Chevalley.
Algebraization of line bundles over complex projective varieties {#subsec:algline}
----------------------------------------------------------------
Actually, more than forty years before Chow’s work, a remarkable variation on this theme of algebraization was initiated by Poincaré and Lefschetz during their investigation of *algebraic cycles on complex surfaces* by means of the so-called normal functions. Motivated by techniques and problems of the Italian school of algebraic geometry and by Picard’s contributions to the theory of algebraic surfaces, they basically established the following theorem, when $\dim X = 2$ :
*Let $X$ be a smooth closed $\C$ analytic subvariety of $\PP^N(\C)$ *(necessarily algebraic, according to Chow’s theorem).* Then any analytic line bundle $L$ over $X$ is algebraic.*
This result was extended by Hodge ([@Hodge41], p. 214-216) to higher-dimensional smooth projective varieties. Kodaira and Spencer [@KodairaSpencer53II] gave a new “modern” proof of this theorem in 1953, in what probably constitutes the first application of sheaf theory and cohomological techniques to projective complex varieties.
Let us formulate a few comments on the content of the Poincaré-Lefschetz-Hodge theorem.
We shall denote ${{\mathcal O}}^{{\rm an}}_{X}$ and ${{\mathcal C}}_{X}$ (resp. ${{\mathcal O}}_{X}$) the sheaf of analytic and complex-valued continuous functions (resp. of regular functions) on $X$ equipped with the usual “analytic” topology (resp. with the Zariski topology).
Recall that, for any analytic line bundle $L$ over $X$, there exist an open covering ${{\mathcal U}}:= (U_{\alpha})_{\alpha \in A}$ of $X$ (in the analytic topology) and, for every $\alpha \in A,$ an analytic trivialisation of $L$ over $U_{\alpha}$ : $$s_{\alpha} : {{\mathcal O}}^{{\rm an}}_{U_{\alpha}} \stackrel{\sim}{\longrightarrow} L_{U_{\alpha}}.$$ By comparing the trivialisations — namely by introducing the functions $\phi_{\alpha \beta}$ in ${{\mathcal O}}^{{\rm an}}_X(U_{\alpha}\cap U_{\beta})^\ast$ defined by $$s_{\alpha} = \phi_{\alpha \beta} s_{\beta} \mbox{ over } U_{\alpha}\cap U_{\beta}$$ — one defines a 1-cocycle $(\phi_{\alpha \beta})$ in $Z^1({{\mathcal U}}, {{\mathcal O}}^{{{\rm an}}\ast}_X)$. The class of this cocycle in $H^1(X,{{\mathcal O}}^{{{\rm an}}\ast}_{X})$ determines the isomorphism class of $L$, and any cohomology class in $H^1(X,{{\mathcal O}}^{{{\rm an}}\ast}_X)$ arises through this construction from a suitable analytic line bundle $L$.
The line bundle $L$ is *algebraic* precisely when the above covering ${{\mathcal U}}:= (U_{\alpha})_{\alpha \in A}$ and trivialisations $(s_{\alpha})_{\alpha \in A}$ may be chosen in such a way that every $U_{\alpha}$ is *Zariski* open in $X$ and every function $\phi_{\alpha \beta}$ is *regular*[^3] over $U_{\alpha}\cap U_{\beta}$; then $(\phi_{\alpha \beta})$ defines a 1-cocycle in $Z^1({{\mathcal U}}, {{\mathcal O}}_{X}^\ast).$
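As a concrete illustration (a standard example, not spelled out in the text), consider $X=\PP^1(\C)$ with the Zariski cover $U_{0}=\{x_{0}\neq 0\}$ and $U_{1}=\{x_{1}\neq 0\}$. The function $$\phi_{01}=\frac{x_{1}}{x_{0}}$$ is regular and nowhere vanishing on $U_{0}\cap U_{1}$, hence defines a 1-cocycle in $Z^1(\{U_{0},U_{1}\},{{\mathcal O}}_{X}^\ast)$; the associated line bundle is ${{\mathcal O}}(1)$ (or its dual, depending on the sign convention), which is therefore algebraic by the above criterion.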
The above formulation of the theorem of Poincaré-Lefschetz-Hodge, in terms of algebraicity of analytic line bundles, is basically its “modern” formulation by Kodaira and Spencer. Let us recall how it translates into its “classical” formulation à la Lefschetz-Hodge, involving (co)homology classes of divisors. The following arguments, now classical, appear in [@KodairaSpencer53I].
Consider the short exact sequences of sheaves of abelian groups over $X$ defined by the “exponential” map ${{\mathbf e}}: = \exp (2\pi i .)$ : $$0 \longrightarrow \Z_{X} \longrightarrow {{\mathcal C}}_{X} \stackrel{{{\mathbf e}}}{\longrightarrow} {{\mathcal C}}_{X}^\ast \longrightarrow 0$$ and $$0 \longrightarrow \Z_{X} \longrightarrow {{\mathcal O}}_{X}^{{\rm an}}\stackrel{{{\mathbf e}}}{\longrightarrow} {{\mathcal O}}_{X}^{{{\rm an}}\ast} \longrightarrow 0.$$ The abelian group of isomorphism classes of topological (resp. analytic line) bundles over $X$ is naturally identified with $H^1(X, {{\mathcal C}}_{X}^\ast)$ (resp. $H^1(X,{{\mathcal O}}_{X}^{{{\rm an}}\ast})$). The long exact sequences of cohomology groups associated to the above short exact sequences of sheaves fit into a commutative diagram : $$\label{diagChern}
\begin{CD}
H^1(X, {{\mathcal C}}_{X})@>{{{\mathbf e}}}>> H^1(X,{{\mathcal C}}_{X}^\ast) @>{\delta}>> H^2(X, \Z) @>>> H^2(X,{{\mathcal C}}_{X}) \\
@. @AAA @AAA @AAA \\
@. H^1(X,{{\mathcal O}}^{{{\rm an}}\ast}_X) @>{\delta^{{\rm an}}}>> H^2(X, \Z) @>>> H^2(X, {{\mathcal O}}_{X}^{{\rm an}}).
\end{CD}$$
The exactness of the first line and the vanishing of $H^1(X,{{\mathcal C}}_{X})$ and $H^2(X,{{\mathcal C}}_{X})$ define an isomorphism $$\label{deftopChern}
c_{1, {{\rm top}}}:= \delta : H^1(X,{{\mathcal C}}_{X}^\ast) \stackrel{\sim}{\longrightarrow} H^2(X, \Z),$$ which maps the isomorphism class of some topological line bundle $L$ to its so-called *first Chern class*. The exactness of the second line in (\[diagChern\]) precisely asserts that a class $\alpha$ in $H^2(X, \Z)$ belongs to the image of $\delta^{{\rm an}}$ — or equivalently, is the first Chern class $c_{1}(L)$ of some *analytic* line bundle — if and only if $\alpha$ belongs to the kernel $$\ker (H^2(X, \Z) \longrightarrow H^2(X,{{\mathcal O}}_{X}^{{\rm an}}))$$ of the map induced by the inclusion of sheaves $\Z_{X} {{\lhook\joinrel\longrightarrow}}{{\mathcal O}}_{X}^{{\rm an}},$ or equivalently, if the real cohomology class $\alpha_{\R}$ in $H^2(X, \R)$ belongs to $$\ker (H^2(X, \R) \longrightarrow H^2(X,{{\mathcal O}}^{{\rm an}}_X)).$$ In the classical notation of Hodge theory, this is precisely the space $H^2(X, \R) \cap H^{1,1}(X)$ of real 2-cohomology classes on $X$ of type $(1,1).$ In the case of surfaces, considered by Lefschetz, this space may be defined by the classical vanishing condition $$\int_X \alpha \wedge \omega = 0$$ of the integrals along $\alpha$ of the global regular algebraic 2-forms $\omega$ on $X$.
Besides, an *algebraic* line bundle $L$ may be described in terms of the divisor $D$ of some nonzero rational section $s$ : the section $s$ establishes an isomorphism from $L$ to the line bundle ${{\mathcal O}}(D)$, and the class $c_{1}(L)=c_{1}({{\mathcal O}}(D))$ coincides with the class $[D]$ in $H^2(X, \Z)$ Poincaré dual to the divisor $D$, seen as a codimension 1 algebraic cycle on $X$.
Taking the above facts into account, Kodaira-Spencer’s version of the theorem of Poincaré-Lefschetz-Hodge admits the following consequence, which is actually its original version due to Lefschetz and Hodge[^4] : *a class $\alpha$ in $H^2(X,\Z)$ is algebraic — namely, the class $[D]$ of some algebraic cycle $D$ of codimension 1 on $X$ — if and only if $\alpha_{\R}$ is of type $(1,1)$.*
GAGA
----
The diverse algebraicity statements in the previous sections appear today as special instances of Serre’s GAGA Theorem (1956, [@Serre56]).
To formulate Serre’s results, consider a complex algebraic variety $X$. From any algebraic coherent sheaf $F$ over $X$ equipped with the Zariski topology — for example, an algebraic vector bundle $E$ over $X$, defined by some 1-cocycle $(\phi_{\alpha \beta}) \in Z^1((U_{\alpha}), GL_{N}({{\mathcal O}}_{X}))$, attached to some Zariski-open covering $(U_{\alpha})$ of $X$, with values in invertible matrices of regular functions — we deduce an analytic coherent sheaf $F^{\rm an}$ on $X$ equipped with the analytic topology — for instance, $E^{\rm an}$ is the analytic vector bundle defined by the cocycle $(\phi_{\alpha \beta})$ seen as an analytic cocycle (that is, as an element of $Z^1((U_{\alpha}), GL_{N}({{\mathcal O}}_{X}^{{\rm an}}))$). This is a straightforward consequence of the facts that the analytic topology of $X$ is finer than its Zariski topology, and that, for every Zariski open subset $U$ of $X$, ${{\mathcal O}}_{X}(U)$ is a subring of ${{\mathcal O}}_{X}^{{\rm an}}(U).$
These facts also imply the existence of canonical “analytification maps” between cohomology groups : $$\label{anmaps}
H^i(X,F) \longrightarrow H^i(X^{\rm an}, F^{\rm an}).$$ Here $X$ (resp. $X^{{\rm an}}$) denotes the variety $X$ equipped with the Zariski topology (resp. the underlying analytic space, which topologically is the set of complex points of $X$ equipped with the usual “analytic” topology).
Serre’s GAGA Theorem is the conjunction of the following two statements :
**GAGA Comparison Theorem.** *For any projective complex variety $X$ and any coherent algebraic sheaf $F$ on $X$, the “analytification maps” (\[anmaps\]) are isomorphisms:* $$\label{anmapsiso}
H^i(X,F) \stackrel{\sim}{\longrightarrow} H^i(X^{\rm an}, F^{\rm an}).$$
**GAGA Existence Theorem.** *For any projective complex variety $X$ and for any analytic coherent sheaf ${{\mathcal F}}$ on $X^{\rm an}$, there exists some algebraic coherent sheaf $F$ over $X$ *(unique up to unique isomorphism)* such that ${{\mathcal F}}$ is isomorphic to $F^{\rm an}$* (as analytic coherent sheaf over $X^{{\rm an}}$).
Let us stress that the projectivity assumption in the GAGA Theorem is essential (see Section \[subscec:alganstructures\] for a discussion of counterexamples in the quasi-projective situation).
The Poincaré-Lefschetz-Hodge Theorem is nothing but the special case of the GAGA Existence Theorem concerning line bundles over smooth varieties.
Chow’s Theorem also follows from the GAGA Existence Theorem — with the notation of paragraph (\[PRChow\]), it follows from this theorem applied to ${{\mathcal O}}^{\rm an}_{X}$, seen as a coherent analytic sheaf over $\PP^N(\C)^{\rm an}.$ Observe also that conversely, by considering graphs, Chow’s theorem implies the comparison isomorphism (\[anmapsiso\]) when $i = 0$ and $F$ is a vector bundle.
Serre’s proof of the GAGA Theorems is the archetype of “modern cohomological proofs” and, besides its considerable importance in itself, has also played an important role as a model for the development of cohomological techniques in algebraic and formal geometry.
To establish the GAGA Comparison Theorem, using that $X$ may be embedded into some projective space $\PP^N_{\C}$, one reduces to the special case $X = \PP^N_{\C}$. In that case, Serre’s proof relies on some “algebraic dévissage of $F$” by means of a left resolution by algebraic coherent sheaves that are direct sums of line bundles of the form ${{\mathcal O}}_{\PP^N}(k)$, $k \in \Z,$ combined with a direct computation of the algebraic and analytic cohomology groups in (\[anmapsiso\]) when $F = {{\mathcal O}}_{\PP^N}(k).$
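For the reader's convenience, here is the outcome of that direct computation — a classical fact, recalled here without proof; the point of the comparison is that the algebraic and analytic computations give identical answers:

```latex
H^0(\PP^N, {{\mathcal O}}_{\PP^N}(k)) \simeq \C[X_0, \dots, X_N]_k
  \quad \text{(homogeneous polynomials of degree $k$; zero for $k < 0$)},
H^i(\PP^N, {{\mathcal O}}_{\PP^N}(k)) = 0 \quad \text{for } 0 < i < N,
H^N(\PP^N, {{\mathcal O}}_{\PP^N}(k)) \simeq
  H^0(\PP^N, {{\mathcal O}}_{\PP^N}(-k-N-1))^\vee
  \quad \text{(Serre duality)}.
```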
The proof of the GAGA Existence Theorem may be seen as a deep amplification and simplification of Kodaira-Spencer’s proof in [@KodairaSpencer53II]. Besides the Comparison Theorem previously established, it relies on the finite dimensionality of the analytic cohomology groups $H^i(X^{\rm an}, {{\mathcal F}})$ attached to an arbitrary analytic coherent sheaf ${{\mathcal F}}$ on $X^{\rm an}$. This result, of analytic nature, was established by Cartan and Serre ([@CartanSerre53]) with $X^{\rm an}$ an arbitrary compact complex analytic space. Actually, only the degree $i=1$ case of the finiteness theorem of Cartan-Serre is used in the proof of the Existence Theorem. When $X$ is smooth and ${{\mathcal F}}$ is a line bundle, this finiteness was established by Kodaira and Spencer as a consequence of the description of $H^i(X^{\rm an}, {{\mathcal F}})$ by means of harmonic forms and of the fact that elliptic differential operators on compact manifolds are Fredholm.
Algebraization of analytic objects II : comments and applications {#sec:AlgAnaII}
=================================================================
Un peu d’histoire {#subsec:histoire}
-----------------
I would like to stress that the content of the previous sections provides a very fragmentary image of the history of algebraization theorems, a topic especially rich in results and techniques, where the evolution of ideas over the long term seems rather difficult to untangle.
To illustrate this last point, let me indicate that algebraicity theorems *à la* Chow may be derived from Bézout-type bounds on intersection multiplicities. That line of argument appears for instance in Poincaré’s survey article on abelian functions [@Poincare02], when he proves that a compact complex torus imbedded in a complex projective space is actually algebraic (see *loc. cit.*, Section 2, 53–56). It constitutes the central point in Chow’s proof in [@Chow49], and more recently, plays a key role in the work of Hrushovski and Zilber on Zariski geometries (see [@HrushovskiZilber96], section 7). The influence of Poincaré’s work on [@Chow49] and [@HrushovskiZilber96] seems unclear, and [@Poincare02] could be a striking example of double *plagiat par anticipation* by Poincaré.
Another approach to Chow’s Theorem, due to Serre — which appears as an anonymous contribution in [@Anonymous56] — consists in deriving it from the fact that the transcendence degree over $\C$ of the field ${{\mathcal M}}(X)$ of meromorphic functions on some compact connected complex manifold $X$ is at most its (complex) dimension : $$\label{degtrleqdim}
{\rm degtr}_{\C} {{\mathcal M}}(X) \leq \dim X.$$
Indeed, if $X$ is analytically embedded in $\PP^N(\C)$, its Zariski closure $\overline{X}^{\rm Zar}$ is irreducible, the field $\C(\overline{X}^{\rm Zar})$ of rational functions on $\overline{X}^{\rm Zar}$ may be identified with a subfield of the field of meromorphic functions ${{\mathcal M}}(X)$, and the upper bound (\[degtrleqdim\]) implies that the Zariski closure $\overline{X}^{\rm Zar}$ of $X$ in $\PP^N(\C)$ has dimension at most $\dim X$, hence equal to $\dim X$. Besides, the irreducibility of $\overline{X}^{\rm Zar}$ implies its connectedness and the connectedness of its subset $\overline{X}^{\rm Zar}_{\rm reg}$ of smooth points in the analytic topology. This connectedness is a GAGA-type statement which goes back to Puiseux [@Puiseux51], Section I, in the case of plane curves; Puiseux’s original proof actually extends to higher-dimensional varieties (see for instance [@Shafarevich77], Section VII.2), and probably constitutes, with other arguments in [@Puiseux50] and [@Puiseux51], the first proof of such results that is satisfactory by modern standards. The connectedness of $\overline{X}^{\rm Zar}_{\rm reg}$ and its density in $\overline{X}^{\rm Zar}$ for the analytic topology, together with the inclusion $X \subset \overline{X}^{\rm Zar}$ and the equality of dimensions $\dim X = \dim \overline{X}^{\rm Zar}$, imply the equality $X =\overline{X}^{\rm Zar},$ that is, the algebraicity of $X$.
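The chain of (in)equalities underlying this dimension argument may be displayed as follows; the first equality holds because $\overline{X}^{\rm Zar}$ is an irreducible projective variety:

```latex
\dim \overline{X}^{\rm Zar}
  = {\rm degtr}_{\C}\, \C(\overline{X}^{\rm Zar})
  \leq {\rm degtr}_{\C}\, {{\mathcal M}}(X)
  \leq \dim X
  \leq \dim \overline{X}^{\rm Zar},
```

where the last inequality follows from the inclusion $X \subset \overline{X}^{\rm Zar}$; consequently all these inequalities are equalities.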
In turn, proofs of the upper bound (\[degtrleqdim\]) appear to have a complicated history — this bound seems to have been established for the first time in a completely satisfactory way by Serre ([@Serre53], §3) and Thimm ([@Thimm54]). In [@Siegel55], Siegel discusses the history of the question and gives an ingenious “elementary” proof, directly influenced by Poincaré’s article[^5] [@Poincare02] and actually very close to the proof in [@Serre53]. Conversely, as observed in [@Remmert56], (\[degtrleqdim\]) is an easy consequence of Chow’s Theorem and Remmert’s proper image theorem. In turn, both these theorems may be derived from the fundamental extension theorems concerning complex analytic sets, due to Thullen, Remmert, and Stein (see for instance [@Mumford76], Section 4A, or [@Gunning90], Chapters K and M).
Concerning the history of the Poincaré-Lefschetz-Hodge theorem, I refer to the classical analysis by Zariski and to the additional comments by Mumford in [@Zariski71] Chapter VII[^6].
Algebraic de Rham cohomology {#subsec:algebdeRham}
----------------------------
In this section, we apply the GAGA Comparison Theorem to the study of the algebraic de Rham cohomology, in the “easy” case of projective smooth varieties. The formalism below seems to have first appeared in print in the famous letter of Grothendieck to Atiyah [@Grothendieck66], although algebraic de Rham cohomology already occurs implicitly in diverse classical works on algebraic curves, surfaces, and abelian varieties. See [@Hartshorne75] for a systematic presentation of the de Rham cohomology of algebraic varieties and for references.
### {#GAGAdR}
Let $X$ be a smooth projective complex algebraic variety. It is equipped with the algebraic de Rham complex $$\label{algdR}
\Omega^\bullet_{X/\C} : 0 \longrightarrow
\Omega^0_{X/\C} ={{\mathcal O}}_{X} \stackrel{d}{\longrightarrow}
\Omega^1_{X/\C} \stackrel{d}{\longrightarrow}
\Omega^2_{X/\C} \stackrel{d}{\longrightarrow} \cdots$$ and the hypercohomology groups of this complex of sheaves over $X$ equipped with the Zariski topology define the *algebraic de Rham cohomology groups* of $X$ : $${H_{\rm dR}}^i(X/\C) := \H^i (X, \Omega^\bullet_{X/\C}).$$
By “analytification”, the algebraic de Rham complex (\[algdR\]) becomes the analytic de Rham complex of the $\C$-analytic manifold $X^{{{\rm an}}}$: $$\label{andR}
\Omega^\bullet_{X^{{{\rm an}}}} : 0 \longrightarrow
\Omega^0_{X^{{{\rm an}}}} = {{\mathcal O}}^{{\rm an}}_{X^{{{\rm an}}}} \stackrel{d}{\longrightarrow}
\Omega^1_{X^{{{\rm an}}}} \stackrel{d}{\longrightarrow}
\Omega^2_{X^{{{\rm an}}}} \stackrel{d}{\longrightarrow} \cdots$$ The hypercohomology groups of $\Omega^\bullet_{X^{{{\rm an}}}}$ define the *analytic de Rham cohomology groups* of $X^{{{\rm an}}}$ $\H^i (X^{{{\rm an}}}; \Omega^\bullet_{X^{{{\rm an}}}}),$ and “analytification” defines canonical $\C$-linear maps: $$\label{analytifdR}
\H^i (X, \Omega^\bullet_{X/\C}) \longrightarrow \H^i (X^{{{\rm an}}}, \Omega^\bullet_{X^{{{\rm an}}}}).$$ The algebraic (resp. analytic) de Rham cohomology groups are related to the algebraic (resp. analytic) “Hodge cohomology groups” $H^q(X,\Omega^p_{X/\C})$ (resp. $H^q(X^{{{\rm an}}},\Omega^p_{X^{{{\rm an}}}})$) by the usual spectral sequences $$E_{1}^{p,q} = H^q(X,\Omega^p_{X/\C}) \Rightarrow \H^{p+q}(X, \Omega^\bullet_{X/\C})$$ $$\mbox{(resp. $E_{1}^{p,q} = H^q(X^{{{\rm an}}},\Omega^p_{X^{{{\rm an}}}}) \Rightarrow \H^{p+q}(X^{{{\rm an}}}, \Omega^\bullet_{X^{{{\rm an}}}})$)}.$$ The formation of these spectral sequences is compatible with analytification. Consequently, from the GAGA comparison isomorphisms $$H^q(X,\Omega^p_{X/\C}) \stackrel{\sim}{\longrightarrow} H^q(X^{{{\rm an}}},\Omega^p_{X^{{{\rm an}}}}),$$ we deduce that the analytification maps (\[analytifdR\]) from algebraic to analytic de Rham cohomology groups are isomorphisms.
Besides, according to the analytic Poincaré Lemma, the inclusion of the locally constant sheaf $\C_{X^{{{\rm an}}}}$ into ${{\mathcal O}}^{{\rm an}}_{X^{{{\rm an}}}}$ defines a quasi-isomorphism of complexes of sheaves on $X^{{{\rm an}}}$: $$\C_{X^{{{\rm an}}}} \stackrel{q.i.}{\longrightarrow} \Omega^\bullet_{X^{{{\rm an}}}},$$ and consequently an isomorphism of (hyper)cohomology groups: $$\label{PdR}
H^i (X^{{{\rm an}}}, \C) \stackrel{\sim}{\longrightarrow} \H^i (X^{{{\rm an}}}, \Omega^\bullet_{X^{{{\rm an}}}}).$$ The isomorphisms (\[analytifdR\]) and (\[PdR\]) define by composition an isomorphism of finite-dimensional $\C$-vector spaces: $$\label{dRB}
\begin{array}{rcl}
{H_{\rm dR}}^i(X/\C) & \longrightarrow & H^i(X^{{{\rm an}}}, \C) \\
\beta & \longmapsto & \beta^{{{\rm an}}}.
\end{array}$$
### {#subsub:algdRk}
Observe that the definition of the algebraic de Rham cohomology makes sense for any smooth projective variety $X_{0}$ defined over an arbitrary base field $k$. Indeed we may consider the algebraic de Rham complex $$\label{algdRk}
\Omega^\bullet_{X_{0}/k} : 0 \longrightarrow
\Omega^0_{X_{0}/k} ={{\mathcal O}}_{X_{0}} \stackrel{d}{\longrightarrow}
\Omega^1_{X_{0}/k} \stackrel{d}{\longrightarrow}
\Omega^2_{X_{0}/k} \stackrel{d}{\longrightarrow} \cdots$$ and define $${H_{\rm dR}}^i(X_{0}/k) := \H^i (X_{0}, \Omega^\bullet_{X_{0}/k}).$$
These are finite-dimensional $k$-vector spaces, and when $k$ is a subfield of $\C$, this construction defines a natural “form over $k$” of the cohomology with complex coefficients $H^i(X^{{{\rm an}}}, \C)$ of the $\C$-analytic manifold $X^{{{\rm an}}}$ attached to the complex algebraic variety $X := X_{0} \otimes_{k} \C$ deduced from $X_{0}$ by extending the base field from $k$ to $\C$. Indeed, by composing a straightforward base change isomorphism and the comparison isomorphism (\[dRB\]), we obtain a canonical isomorphism $$\label{dRkB}
{H_{\rm dR}}^i(X_{0}/k) \otimes_{k}\C \stackrel{\sim}{\longrightarrow} {H_{\rm dR}}^i(X/\C) {\stackrel{\sim}{\longrightarrow}}H^i(X^{{{\rm an}}}, \C).$$
### Example I. Smooth projective curves.
Let $X_{0}$ be a smooth, projective, geometrically connected curve, of genus $g$, over $k$. Then ${H_{\rm dR}}^i(X_{0}/k)$ vanishes if $i>2$ and is canonically isomorphic to $k$ when $i =0$ or $2$. The first de Rham cohomology group ${H_{\rm dR}}^1(X_{0}/k)$ is a $2g$-dimensional $k$-vector space. It may be identified with the quotient of the space of meromorphic 1-forms over $X_{0}/k$ of the second kind (that is, with vanishing residues) by its subspace $dk(X_{0})$ formed by the differentials of the rational functions in $k(X_{0})$ over $X_{0}.$
For instance, when $k$ is a field of characteristic $\neq 2, 3$, if $X_{0}$ is an elliptic curve $E$ of plane equation $$y^2 = 4 x^3 -g_{2}x -g_{3},$$ then ${H_{\rm dR}}^1(E/k)$ is a $2$-dimensional $k$-vector space with basis $([\alpha],[\beta])$, where $\alpha := dx/y$ and $\beta := x\,dx/y.$
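When $k = \C$, one may check directly that $\beta$ is of the second kind by means of the Weierstrass parametrization $x = \wp(z)$, $y = \wp'(z)$ of $E^{\rm an}$:

```latex
\alpha = \frac{dx}{y} = \frac{\wp'(z)\, dz}{\wp'(z)} = dz
  \quad \text{(holomorphic)}, \qquad
\beta = \frac{x\, dx}{y} = \wp(z)\, dz
  = \left( \frac{1}{z^2} + O(z^2) \right) dz,
```

so that $\beta$ has a double pole at $z = 0$ — that is, at the point at infinity of $E$ — with vanishing residue.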
### Example II. The first Chern class in algebraic de Rham cohomology.
The morphism of sheaves of abelian groups over $X_{0}$ $$\begin{array}{rccc}
d\log : & {{\mathcal O}}^\ast_{X_{0}} & {\longrightarrow}&\Omega ^1_{X_{0}/k} \\
& \phi & {\longmapsto}& d\phi/\phi
\end{array}$$ takes its values in the subsheaf $\Omega ^{1{\rm closed}}_{X_{0}/k}$ of closed 1-forms. Therefore it defines a morphism of complexes of sheaves $$d\log : {{\mathcal O}}^\ast_{X_{0}} {\longrightarrow}\Omega^\bullet_{X_{0}/k}[1],$$ and finally a morphism of (hyper)cohomology groups $$H^1(X_{0}, {{\mathcal O}}^\ast_{X_{0}}) {\longrightarrow}\H^1(X_{0}, \Omega^\bullet_{X_{0}/k}[1]) = \H^2(X_{0}, \Omega^\bullet_{X_{0}/k}).$$
The map so defined will be denoted: $$c_{1, {{\rm dR}}} : {{\rm Pic}}(X_{0}) := H^1(X_{0}, {{\mathcal O}}^\ast_{X_{0}}) {\longrightarrow}H^2_{{{\rm dR}}}(X_{0}/k).$$ It sends the class of the line bundle $L$ over $X_{0}$ defined by a cocycle $(\phi_{\alpha \beta})$ in $Z^1({{\mathcal U}}, {{\mathcal O}}^\ast_{X_{0}})$, for some open covering ${{\mathcal U}}$ of $X_{0}$, to the class of the (hyper)cocycle $(d\phi_{\alpha \beta}/\phi_{\alpha \beta})$ in $Z^1({{\mathcal U}}, \Omega ^{1{\rm closed}}_{X_{0}/k})$, identified with a subspace of $Z^2({{\mathcal U}}, \Omega^\bullet_{X_{0}/k}).$
This construction of the first Chern class in algebraic de Rham cohomology is compatible with the topological first Chern class defined in (\[deftopChern\]):
\[compChern\] Assume that $k$ is a subfield of $\C$, and consider a smooth projective variety $X_{0}$ over $k$, the complex algebraic projective variety $X:= X_{0}\otimes_{k}\C$, and the associated $\C$-analytic manifold $X^{{{\rm an}}}$, as in \[subsub:algdRk\]. Let $L$ be a line bundle over $X_{0}$, let $L_{\C}$ be the algebraic line bundle over $X$ deduced from $L$ by extension of scalars from $k$ to $\C$, and let $L^{{{\rm an}}}_{\C}$ be the associated analytic line bundle over $X^{{{\rm an}}}.$
The morphism $$\begin{array}{rcccl}
{H_{\rm dR}}^i(X_{0}/k) & {\longrightarrow}& {H_{\rm dR}}^i(X/\C) & {\stackrel{\sim}{\longrightarrow}}& H^i(X^{{{\rm an}}}, \C) \\
\alpha & {\longmapsto}& \alpha_{\C}:= \alpha \otimes_{k}1_{\C} & {\longmapsto}& \alpha^{{{\rm an}}}_{\C}
\end{array}$$ maps $c_{1, {{\rm dR}}}(L)$ to $2\pi i\, c_{1,{{\rm top}}}(L_{\C}^{{{\rm an}}}).$
To prove this Lemma, it is enough to consider the case $k = \C.$ Then it follows from the fact that the composite morphism of sheaves over $X^{{{\rm an}}}$ $${{{\mathcal O}^{\rm{an}}}}\stackrel{{{\mathbf e}}}{{\longrightarrow}} {{{\mathcal O}^{\rm{an}}}}^\ast \stackrel{d\log}{{\longrightarrow}} \Omega^1_{X^{{{\rm an}}}}$$ is[^7] $2\pi i \, d.$
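Explicitly, for any local section $f$ of ${{\mathcal O}}^{\rm an}$:

```latex
(d\log \circ\, {{\mathbf e}})(f)
  = \frac{d\bigl( e^{2\pi i f} \bigr)}{e^{2\pi i f}}
  = \frac{2\pi i\, e^{2\pi i f}\, df}{e^{2\pi i f}}
  = 2\pi i\, df .
```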
### Amplification: modules with integrable connections and de Rham cohomology {#deRhamcoeff}
In the last sections of this article, we shall use a generalization of the previous results, concerning cohomology with coefficients not only in $\C$, but in local systems of finite-dimensional $\C$-vector spaces.
Let $(E,\nabla)$ be a “module with integrable connection” over $X$, namely a vector bundle $E$ over $X$ equipped with a connection $$\nabla: E {\longrightarrow}E \otimes_{{{\mathcal O}}_{X}}\Omega_{X/\C}^1$$ with vanishing curvature. Then $\nabla$ canonically extends to morphisms of sheaves over $X$ $$\nabla : E \otimes_{{{\mathcal O}}_{X}}\Omega_{X/\C}^l {\longrightarrow}E \otimes_{{{\mathcal O}}_{X}}\Omega_{X/\C}^{l+1}$$ which satisfy the Leibniz rule — namely, for any sections $\omega$ of $\Omega^k_{X/\C}$ and $\alpha$ of $E \otimes_{{{\mathcal O}}_{X}}\Omega_{X/\C}^\ast$, $$\nabla(\omega \wedge \alpha) = d\omega \wedge \alpha + (-1)^k \omega \wedge \nabla \alpha$$ — and the relation $$\nabla \circ \nabla = 0.$$ Consequently we may define: $$\label{coeffalg}
H^i_{{{\rm dR}}}(X/\C,(E,\nabla)):=\H^i(X, (\Omega^\bullet_{X/\C}\otimes_{{{\mathcal O}}_{X}}E, \nabla)).$$
By analytification, we obtain a complex of sheaves $(\Omega^\bullet_{X^{{\rm an}}}\otimes_{{{\mathcal O}}^{{\rm an}}_{X}}E^{{\rm an}}, \nabla)$ on $X^{{\rm an}}$ from $(\Omega^\bullet_{X/\C}\otimes_{{{\mathcal O}}_{X} }E, \nabla)$, and we may define $$\label{coeffan}
H^i_{{{\rm dR}}}(X^{{\rm an}},(E^{{\rm an}},\nabla)):=\H^i(X^{{\rm an}}, (\Omega^\bullet_{X^{{\rm an}}}\otimes_{{{\mathcal O}}^{{\rm an}}_{X}}E^{{\rm an}}, \nabla)).$$ An application of GAGA similar to the one in paragraph \[GAGAdR\] shows that (\[coeffalg\]) and (\[coeffan\]) are finite-dimensional vector spaces and that the analytification morphisms $$\label{GAGAdRcoeff}
H^i_{{{\rm dR}}}(X/\C,(E,\nabla)) {\longrightarrow}H^i_{{{\rm dR}}}(X^{{\rm an}},(E^{{\rm an}},\nabla))$$ are isomorphisms.
Besides, the “analytic de Rham complex with coefficients” $(\Omega^\bullet_{X^{{\rm an}}}\otimes_{{{\mathcal O}}^{{\rm an}}_{X}}E^{{\rm an}}, \nabla)$ is a resolution of the locally constant sheaf $E^h$ of finite-dimensional complex vector spaces (of dimension the rank of $E$) defined by the $\C$-analytic sections of $E^{{\rm an}}$ which are “horizontal”, that is, in the kernel of $\nabla.$ In other words, we have an “analytic Poincaré lemma with coefficients” over $X^{{\rm an}}$, $$E^h \stackrel{q.i.}{{\longrightarrow}} (\Omega^\bullet_{X^{{\rm an}}}\otimes_{{{\mathcal O}}^{{\rm an}}_{X}}E^{{\rm an}}, \nabla),$$ and consequently an isomorphism of (hyper)cohomology groups: $$\label{dRthcoeff}
H^i(X^{{\rm an}}, E^h) {\stackrel{\sim}{\longrightarrow}}H^i_{{{\rm dR}}}(X^{{\rm an}},(E^{{\rm an}},\nabla)).$$
The isomorphisms (\[GAGAdRcoeff\]) and (\[dRthcoeff\]) define by composition an isomorphism $$H^i_{{{\rm dR}}}(X/\C,(E,\nabla)) {\stackrel{\sim}{\longrightarrow}}H^i(X^{{\rm an}}, E^h).$$
When $X=X_{0}\otimes_{k}\C$ and $(E,\nabla)$ are defined over some subfield $k$ of $\C$, we may define $$H^i_{{{\rm dR}}}(X_{0}/k,(E,\nabla)):=\H^i(X_{0}, (\Omega^\bullet_{X_{0}/k}\otimes_{{{\mathcal O}}_{X_{0}}}E, \nabla)).$$ It is a finite-dimensional $k$-vector space, which defines a natural “form over $k$” of the cohomology $H^i(X^{{\rm an}}, E^h)$ with coefficients in the local system $E^h.$
Algebraic and analytic structures, and moduli spaces of vector bundles with integrable connections {#subscec:alganstructures}
--------------------------------------------------------------------------------------------------
### {#section}
Applied to graphs of morphisms, Chow’s Theorem shows that, for any two *projective* complex varieties $X_{1}$ and $X_{2}$ (say smooth for simplicity), the analytification map defines a bijection : $$\begin{array}{rcl}
\left\{\substack{\mbox{morphisms $\phi : X_{1} \rightarrow X_{2}$}\\ \mbox{of complex algebraic varieties}}\right\} &
{\stackrel{\sim}{\longrightarrow}}&
\left\{\substack{\mbox{morphisms $\psi : X^{{{\rm an}}}_{1} \rightarrow X^{{{\rm an}}}_{2}$}\\ \mbox{of complex analytic manifolds}}\right\} \\
\phi & {\longmapsto}& \phi^{{{\rm an}}}.
\end{array}$$
(See for instance [@Mumford76], Section 4B, for details.)
In particular, $X_{1}$ and $X_{2}$ are isomorphic as complex algebraic varieties if and only if $X^{{\rm an}}_{1}$ and $X^{{\rm an}}_{2}$ are isomorphic as complex analytic manifolds. Moreover, for any smooth projective complex algebraic variety $X$, the algebraic variety structure of $X$ is uniquely determined by the structure of $\C$-analytic manifold $X^{{\rm an}}$ it induces.
This does not hold anymore for general quasi-projective varieties. In this section, we want to discuss a remarkable family of counterexamples, namely pairs $(X_{1}, X_{2})$ of smooth quasi-projective complex algebraic varieties such that $X_{1}^{{\rm an}}$ and $X_{2}^{{\rm an}}$ are “naturally” isomorphic complex manifolds, although $X_{1}$ and $X_{2}$ are not algebraically isomorphic.
The GAGA Existence Theorem will actually play a crucial role in the construction of these counterexamples, which are built from moduli spaces of vector bundles with integrable connections of a given rank $N$ on a smooth projective variety $M$, and from spaces of representations of degree $N$ of the fundamental group of $M^{{\rm an}}.$ When $N =1$, these spaces have been classically considered by Severi and Conforto, and then by Rosenlicht and Serre, during the decades around 1950. For arbitrary $N \geq 1$, they have been investigated thoroughly by Simpson ([@Simpson94I], [@Simpson94II]; see also [@LePotier91] for a survey).
### {#cor(i)(ii)}
Let $M$ be a smooth connected projective complex algebraic variety, and let $o$ be a (complex) point of $M$. Choose a positive integer $N$, and consider the following kinds of data :
\(i) Triples $(E, \nabla, \psi)$ consisting of *a vector bundle $E$ of rank $N$ over $M,$ an integrable connection $\nabla$ on $E$, and a “rigidification”* $\psi$ of $E$ at $o$, namely an isomorphism of $\C$-vector spaces $$\psi : E_{o} {\stackrel{\sim}{\longrightarrow}}\C^N.$$
\(ii) *Representations of degree $N$ $$\rho : \Gamma {\longrightarrow}GL_{N}(\C)$$ of the fundamental group* $\Gamma := \pi_{1}(M^{{\rm an}}, o)$ of the complex analytic manifold $M^{{\rm an}}$ with base point $o$.
Observe that we may consider $\C$-analytic versions of data of type (i), namely:
$\mbox{(i)}^{{\rm an}}$ Triples $(E^{{\rm an}}, \nabla^{{\rm an}}, \psi)$ consisting of *an analytic vector bundle $E^{{\rm an}}$ of rank $N$ over $M^{{\rm an}},$ an integrable analytic connection $\nabla^{{\rm an}}$ on $E^{{\rm an}}$, and a rigidification* $\psi$ of $E^{{\rm an}}$ at $o$.
The notion of isomorphism between two data of type (i), or between two data of type $\mbox{(i)}^{{\rm an}}$, is defined in the obvious manner as an isomorphism of (algebraic or analytic) vector bundles, compatible with the connections and rigidifications. Observe that, when such an isomorphism exists, it is actually unique.
Through analytification, any data $(E, \nabla, \psi)$ of type (i) determines a data $(E^{{\rm an}}, \nabla^{{\rm an}}, \psi)$ of type $\mbox{(i)}^{{\rm an}}$. Conversely, the GAGA Theorems show that any data of type $\mbox{(i)}^{{\rm an}}$ may be obtained by analytification from some data of type (i), which is uniquely determined (up to unique algebraic isomorphism)[^8].
In turn, to any data of type $\mbox{(i)}^{{\rm an}}$ is associated its monodromy representation in the fiber $E_{o}$ of the flat vector bundle $(E^{{\rm an}}, \nabla^{{\rm an}})$, which may be identified with a $GL_{N}(\C)$-representation by means of the rigidification $\psi$: $$\rho : \Gamma {\longrightarrow}GL(E_{o}) \xrightarrow{\psi \,\cdot\, \psi^{-1}} GL_{N}(\C).$$
Conversely, we may introduce the universal covering $(\tilde{M},\tilde{o})$ of the pointed connected complex manifold $(M^{{\rm an}}, o)$ — it is a $\Gamma$-covering of $M^{{\rm an}}$ — and the trivial vector bundle $\tilde{E} := \tilde{M}\times \C^N$ of rank $N$ over $\tilde{M}$, equipped with the “trivial” integrable analytic connection $\tilde{\nabla} := d \otimes Id_{\C^N}.$ If $\rho : \Gamma {\longrightarrow}GL_{N}(\C)$ denotes an arbitrary representation, the action of $\Gamma$ on $\C^N$ defined by $\rho$ makes $(\tilde{E}, \tilde{\nabla})$ a $\Gamma$-equivariant analytic vector bundle with integrable connection, which moreover is naturally rigidified at $\tilde{o}$. This equivariant rigidified vector bundle with integrable connection over $(\tilde{M}, \tilde{o})$ descends to some rigidified vector bundle of rank $N$ with integrable connection $(E^{{\rm an}}, \nabla^{{\rm an}}, \psi)$ on the pointed complex manifold $(M^{{\rm an}}, o)$.
These last two constructions are clearly inverses of each other and establish a natural bijection between isomorphism classes of data of type $\mbox{(i)}^{{\rm an}}$ and representations of type (ii). Combined with the above GAGA correspondence between data of type (i) and $\mbox{(i)}^{{\rm an}}$, this becomes a natural bijection between isomorphism classes of data of type (i) and representations of type (ii).
### {#MIC}
The set of isomorphism classes of data of type (i) coincides with the set of complex points ${\mathbf{MIC}}_{N}(M,o)(\C)$ of some quasi-projective scheme ${\mathbf{MIC}}_{N}(M,o)$ over $\C$, which represents the functor which maps a $\C$-scheme (of finite type) $S$ to the isomorphism classes of “data of type (i) over $S$”, defined as triples $(E,\nabla, \psi)$ where $E$ denotes a locally free coherent sheaf of rank $N$ over $M \times S,$ $\nabla$ an integrable connection on $E$, relative to the projection $M \times S \rightarrow S,$ and $\psi$ a rigidification $E_{\vert o\times S} {\stackrel{\sim}{\longrightarrow}}{{\mathcal O}}_{S}^{\oplus N}$.
At this level of generality, the existence of the quasi-projective scheme ${\mathbf{MIC}}_{N}(M,o)$ representing this functor is one of the main results of Simpson in [@Simpson94I; @Simpson94II], where it is denoted $\mathbf{R}_{\rm DR}(M,o,N)$. A central point in the construction of ${\mathbf{MIC}}_{N}(M,o)$ is the fact that the vector bundles $E$ of rank $N$ over $M$ admitting an integrable connection $\nabla$ constitute a bounded family (see [@LePotier91], Lemme 9, for a concise presentation of Simpson’s argument in this specific situation).
The set of representations of type (ii) coincides with the set of complex points ${\mathbf{Rep}}_{N}(\Gamma)(\C)$ of the quasi-projective (actually affine) scheme ${\mathbf{Rep}}_{N}(\Gamma)$ over $\C$ which represents the functor which sends a $\C$-scheme of finite type $S$ to the set of representations $$\rho : \Gamma {\longrightarrow}GL_{N}(\Gamma(S, {{\mathcal O}}_{S})).$$ The existence of the scheme ${\mathbf{Rep}}_{N}(\Gamma)$ is a straightforward consequence of the existence of a finite presentation for the fundamental group $\Gamma$ (see for instance [@Simpson94II], Section 5, where this scheme is denoted $\mathbf{R}(\Gamma,N)$ or $\mathbf{R}_{\rm B}(M,o,N)$).
The bijection constructed in \[cor(i)(ii)\], by associating the monodromy representation of its analytification to some data of type (i), defines a bijection: $$\label{anisopoints}
{\mathbf{MIC}}_{N}(M,o)(\C) {\stackrel{\sim}{\longrightarrow}}{\mathbf{Rep}}_{N}(\Gamma)(\C),$$ which turns out to be defined by a canonical isomorphism of $\C$-analytic spaces $$\label{aniso}
{{\rm mon}}_{o} : {\mathbf{MIC}}_{N}(M,o)^{{\rm an}}{\stackrel{\sim}{\longrightarrow}}{\mathbf{Rep}}_{N}(\Gamma)^{{\rm an}}.$$ (Compare [@Simpson94II], Section 7. This formally expresses the fact that the construction in \[cor(i)(ii)\] “analytically depends on parameters” in an arbitrary analytic space.)
### {#section-1}
However, in general, the analytic isomorphism (\[aniso\]) is *not* induced by an algebraic isomorphism from ${\mathbf{MIC}}_{N}(M,o)$ to ${\mathbf{Rep}}_{N}(\Gamma)$.
This is already the case when $M$ is a smooth connected projective curve $C$ of positive genus $g$ and $N=1$. Then $${{\rm Pic}}^\natural (C) := {\mathbf{MIC}}_{1}({C},o)$$ may be identified with the *universal vector extension* $E({{\rm Pic}}_0(C))$ of the connected Picard variety ${{\rm Pic}}_{0}(C)$ of $C$ (see for instance [@Messing73], [@MazurMessing74], [@BK09]). Actually, ${{\rm Pic}}^\natural (C)$ classifies pairs $(L,\nabla)$ consisting of a line bundle $L$ of degree 0 over $C$ and a (necessarily integrable) connection $\nabla$ on $L$. The tensor product of line bundles with connections induces a structure of commutative algebraic group on ${{\rm Pic}}^\natural (C)$. It fits into the following exact sequence of connected commutative group schemes over $\C$, which displays it as a vector extension of ${{\rm Pic}}_{0}(C)$: $$\label{Picnaturalext}
\begin{array}{crcccclc}
0 {\longrightarrow}& \Omega^1(C) & {\longrightarrow}& {{\rm Pic}}^\natural(C) & {\longrightarrow}& {{\rm Pic}}_{0}(C) & {\longrightarrow}0 \\
& \alpha &{\longmapsto}& [({{\mathcal O}}_{C}, d + \alpha)] & & & \\
& & & [(L,\nabla)] & {\longmapsto}& [L] & &
\end{array}$$
Besides, the representation space ${\mathbf{Rep}}_{1}(\pi_{1}(C^{{\rm an}}, o))$ may be identified with the torus $$H^1(C^{{\rm an}}, \Z) \otimes_{\Z}\G_{m} \simeq \G_{m}^{2g},$$ and the monodromy isomorphism (\[aniso\]) takes the form of an isomorphism of complex Lie groups : $${{\rm Pic}}^\natural(C)^{{\rm an}}{\stackrel{\sim}{\longrightarrow}}\C^{\ast 2g}.$$
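As a consistency check (not needed for the argument), the dimensions on both sides of this monodromy isomorphism agree, by the exact sequence (\[Picnaturalext\]):

```latex
\dim_{\C} {{\rm Pic}}^\natural(C)
  = \dim_{\C} \Omega^1(C) + \dim_{\C} {{\rm Pic}}_{0}(C)
  = g + g = 2g
  = \dim_{\C} \C^{\ast\, 2g}.
```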
However, the description of ${{\rm Pic}}^\natural(C)$ as a vector extension of an abelian variety easily implies that every morphism of algebraic varieties from ${{\rm Pic}}^\natural(C)$ to $\G_{m}$ is constant. *A fortiori*, the algebraic varieties $ {\mathbf{MIC}}_{1}({C},o) = {{\rm Pic}}^\natural(C)$ and ${\mathbf{Rep}}_{1}(\pi_{1}(C^{{\rm an}}, o)) \simeq \G_{m}^{2g}$ are not isomorphic[^9].
### {#VarMic}
For later reference, let us indicate diverse variants of the previous constructions.
First of all, for any base field $k$ of characteristic zero and any pointed connected smooth projective variety $(M,o)$ over $k$, the construction of the quasi-projective scheme ${\mathbf{MIC}}_{N}(M,o)$ makes sense over $k$ : it classifies data of type (i) over varying $k$-schemes $S$. This follows from a straightforward generalization of the arguments in [@Simpson94I], or (say, when $k$ is a subfield of $\C$) from a descent argument.
When $N=1,$ the tensor product of line bundles with (necessarily integrable) connections makes the quasi-projective scheme ${\mathbf{MIC}}_{1}(M,o)$ a group scheme, necessarily smooth over $k$. Moreover its connected component ${\mathbf{MIC}}_{1}(M,o)_{0}$ may be identified with the universal vector extension $E({{\rm Pic}}_{0}(M))$ of the connected Picard variety ${{\rm Pic}}_{0}(M)$ of $M$. Indeed the obvious analogue of the short exact sequence (\[Picnaturalext\]) still holds in this setting (see for instance [@BK09], Appendix B).
When $M$ is the abelian variety $\hat{A}$ dual to some abelian variety $A$ over $k$, this construction identifies the universal vector extension $E(A)$ of $A$ with the $k$-algebraic group $${{\rm Pic}}^\natural (\hat{A}) := {\mathbf{MIC}}_{1}(\hat{A}, 0_{\hat{A}}),$$ which classifies line bundles with (necessarily integrable) connections over $\hat{A}$, and the short exact sequence (\[Picnaturalext\]) becomes the extension defining $E(A)$: $$0 {\longrightarrow}\E_{{\hat{A}}} := ({{\rm Lie\,}}{\hat{A}})^\vee {\longrightarrow}E(A) \stackrel{p_{A}}{\longrightarrow}A {\longrightarrow}0.$$
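To fix ideas, let us record a sketch of standard facts (*cf.* [@MazurMessing74]) illustrating this identification: applying the Lie functor to the extension defining $E(A)$ recovers the Hodge filtration on de Rham cohomology.

```latex
% Sketch (standard facts): for an abelian variety A of dimension g over k,
% E(A) has dimension 2g, and applying Lie to the defining extension yields
% the Hodge filtration exact sequence
0 \longrightarrow H^0(\hat{A},\Omega^1_{\hat{A}})
  \longrightarrow {\rm Lie\,}E(A)
  \longrightarrow {\rm Lie\,}A \simeq H^1(\hat{A},{\mathcal O}_{\hat{A}})
  \longrightarrow 0,
% together with a canonical isomorphism
{\rm Lie\,}E(A) \simeq H^1_{\rm dR}(\hat{A}/k).
```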
Second, it is convenient to have at one’s disposal diverse generalizations of the moduli spaces ${\mathbf{MIC}}_{N}(M,o)$. For instance, if $(M, o, o')$ denotes a connected smooth projective variety over $k$, endowed with two (possibly equal) “base points” $o$ and $o'$ in $M(k)$, we may construct a quasi-projective scheme ${\mathbf{MIC}}_{N}(M,o,o')$ that classifies vector bundles $E$ of rank $N$ over $M$, equipped with an integrable connection $\nabla$ and with rigidifications $\psi : E_{o} {\stackrel{\sim}{\longrightarrow}}k^N$ and $\psi': E_{o'} {\stackrel{\sim}{\longrightarrow}}k^N$ at $o$ and $o'$ ([[*cf.* ]{}]{}[@Simpson94I], Remark p. 109). Thanks to the morphism $$\digamma : {\mathbf{MIC}}_{N}(M,o,o') {\longrightarrow}{\mathbf{MIC}}_{N}(M,o)$$ defined by forgetting the rigidifications $\psi'$ at $o'$ and to the action by composition of $GL_{N,k}$ on these rigidifications, ${\mathbf{MIC}}_{N}(M,o,o')$ becomes a $GL_{N,k}$-torsor over ${\mathbf{MIC}}_{N}(M,o)$. When $N=1,$ the tensor product again makes ${\mathbf{MIC}}_{N}(M,o,o')$ a commutative algebraic group over $k$, and the above structure of $GL_{N,k}$-torsor becomes an extension of commutative algebraic groups: $$0 {\longrightarrow}\G_{m,k} {\longrightarrow}{\mathbf{MIC}}_{1}(M,o,o') {\longrightarrow}{\mathbf{MIC}}_{1}(M,o) {\longrightarrow}0.$$
When $M=\hat{A}$ as above, $o = 0_{\hat{A}},$ and $o'$ is a point $P$ in ${\hat{A}}(k)$ parameterizing some line bundle $L$ over $A$ (equipped with a rigidification $\epsilon : k\simeq L_{0_{A}}$ and algebraically equivalent to zero), one gets an extension $$\label{extEA}
0 {\longrightarrow}\G_{m,k} {\longrightarrow}{\mathbf{MIC}}_{1}({\hat{A}},0_{\hat{A}},P) {\longrightarrow}E(A) {\longrightarrow}0$$ which may be described as follows. The $\G_{m,k}$-torsor $L^\times$ over $A$, deduced from the total space of $L$ by deleting its zero section, may be endowed with a unique structure of $k$-algebraic group which makes the diagram $$\label{extA}
0 {\longrightarrow}\G_{m,k} \stackrel{\epsilon}{{\longrightarrow}} L^\times {\longrightarrow}A {\longrightarrow}0$$ a short exact sequence of commutative algebraic groups over $k$, and the extension (\[extEA\]) coincides with the pullback of the extension (\[extA\]) by $p_{A}: E(A) {\longrightarrow}A.$
Algebraization of formal objects
================================
A Theorem of Grauert-Grothendieck
---------------------------------
Since the work of Zariski on “holomorphic functions” ([@Zariski51]) and its amplification in Grothendieck’s new foundations of algebraic geometry ([@GrothendieckFGA]), *formal schemes* and coherent sheaves over them play a central role in modern algebraic geometry. Grothendieck notably established some comparison and existence theorems that relate algebraic and formal geometry over a suitable complete “adic” base ring ([[*cf.* ]{}]{}[@GrothendieckFGA], [@EGAIII1], [@Illusie05]). In SGA2 ([@GrothendieckSGA2]), motivated by some earlier work of Grauert, he also used formal geometry to investigate the classical Lefschetz theorems comparing the geometry of projective varieties and of their hyperplane sections.
In the sequel, we shall be concerned with the algebraization theorems of “Lefschetz type” established in SGA2 rather than with the earlier “fundamental” comparison and existence theorems discussed in [@GrothendieckFGA], [@EGAIII1] and [@Illusie05].
For the sake of simplicity, we first state a (weaker) analytic version of these theorems of Lefschetz type in a special simple case.
\[GrGr\] Let $X\hookrightarrow \PP^N_{\C}$ be a smooth projective complex variety of dimension $d$, and let $Y:= X \cap \PP^{N-1}_{\C}$ be a hyperplane section of $X$ of dimension $d-1$.
[**Gr1.**]{} If $d \geq 2,$ then for every algebraic vector bundle $E$ over $X$, the restriction map $$\Gamma(X,E) \longrightarrow \left\{\text{germs of analytic sections of $E$ along $Y$}\right\}$$ is an isomorphism.
[**Gr2.**]{} If $d \geq 3,$ any germ of analytic vector bundle ${{\mathcal E}}$ on some analytic neighbourhood of $Y$ in $X$ “extends” to some coherent sheaf $E$ over $X$.
Observe that, like GAGA, this theorem decomposes into two parts: a “comparison theorem” [**Gr1**]{}, and an “existence theorem” [**Gr2**]{}.
Observe also that, according to Serre’s GAGA, the vector bundle $E$ in [**Gr1**]{} and its space of global sections $\Gamma(X,E)$ may be equivalently taken in the algebraic or in the analytic category. The same remark applies to the coherent sheaf $E$ whose existence is asserted in [**Gr2**]{}. Accordingly, when the conclusion of [**Gr2**]{} holds, we shall say that ${{\mathcal E}}$ is *algebraizable*.
Let us emphasize that the assumptions on the dimension $d$ are crucial in Theorem \[GrGr\].
Indeed [**Gr1**]{} trivially fails for $X=\PP^1$, $Y=\{\mbox{point}\}$, and $E={{\mathcal O}}_{X}.$
The existence theorem [**Gr2**]{} already fails for line bundles when $X$ is the projective plane $\PP_{\C}^2$ and $Y= \PP_{\C}^1$ a projective line in $X$. This follows from Proposition \[gerlineb\] below, which is a simple consequence of [**Gr1**]{}.
Let $X_{\infty}$ denote a projective line in $X$ distinct from $Y$, and let us consider the affine plane $\A_{\C}^2 := X \setminus X_{\infty}$ and the affine line $\A_{\C}^1:= \A_{\C}^2 \cap Y$. Choose affine coordinates $(x,y)$ on $\A_{\C}^2$ such that $\A_{\C}^1 = (x=0)$. For any converging power series $f$ in $\C\{T\}$, the equation $$y = f(x)$$ defines a germ $T_{f}$ of smooth analytic curve in $X=\PP_{\C}^2$ transverse to $Y= \PP_{\C}^1.$
\[gerlineb\] The germ of analytic line bundle ${{{\mathcal O}^{\rm{an}}}}(T_{f})$ along $\PP_{\C}^1$ in $\PP_{\C}^2$ is algebraizable if and only if the series $f$ belongs to $\C T + \C.$
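The “only if” direction of this proposition may be sketched as follows (a hedged outline, relying on [**Gr1**]{} and on the classical fact that ${{\rm Pic}}(\PP^2_{\C})$ is generated by ${{\mathcal O}}(1)$):

```latex
% If O^{an}(T_f) is algebraizable, the algebraizing line bundle is some O(d);
% restricting to Y and computing the intersection number T_f . Y = 1 forces
% d = 1. The canonical section of O^{an}(T_f) with divisor T_f is then a germ
% of analytic section of O(1) along Y; by Gr1 (here d = 2 >= 2) it is the
% germ of a global section, i.e. of a linear form
s = a X_0 + b X_1 + c X_2 \in \Gamma(\PP^2_{\C}, {\mathcal O}(1)).
% Hence T_f is the germ along Y of the line (s = 0), and in the affine
% coordinates (x, y) this reads
f(x) = \alpha x + \beta, \quad \text{i.e.} \quad f \in \C T + \C .
```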
Observe also that Theorem \[GrGr\] admits striking elementary geometric applications. For instance, it implies that *any germ of analytic hypersurface along $\PP^2_{\C}$ in $\PP^3_{\C}$ extends to a global algebraic hypersurface, defined by the vanishing of some homogeneous polynomial in $\C[X_{0},X_{1},X_{2},X_{3}]$.*
Formal geometry
---------------
In SGA2, Theorem \[GrGr\] is stated and proved in a more general formulation, in which (i) concerns *formal* sections and vector bundles instead of analytic germs of sections and vector bundles, (ii) makes sense over an arbitrary base field $k$ — indeed over an arbitrary Noetherian base $S$ — instead of $\C$, and (iii) holds under regularity assumptions weaker than the smoothness of $X$, formulated in terms of depth. In this paragraph, we want to give some indication of the generalizations (i) and (ii), while keeping minimal the prerequisites from formal geometry.
Recall (see for instance [@Illusie05]) that, for any Noetherian scheme $X$ and any closed subscheme $Y$ in $X$, a coherent formal sheaf ${{\mathcal E}}$ over the formal scheme $\widehat{X}_{Y}$, completion of $X$ along $Y$, “is” nothing else than the data of a system $({{\mathcal E}}_{n})_{n \in \N}$ of coherent sheaves on the successive infinitesimal neighbourhoods $Y_{n}$ of $Y$ in $X$ ($Y_{0}:=Y;$ $Y_{n}$ is defined by the $(n+1)$-th power ${{\mathcal I}}_{Y}^{n+1}$ of the ideal sheaf ${{\mathcal I}}_{Y}$ of $Y$ in ${{\mathcal O}}_{X}$), equipped with isomorphisms $$\label{forsyst}
{{\mathcal E}}_{n+1\vert Y_{n}} {\stackrel{\sim}{\longrightarrow}}{{\mathcal E}}_{n}.$$ The coherent formal sheaf ${{\mathcal E}}$ is locally free — and then called a *vector bundle* — if and only if, for every $n$, ${{\mathcal E}}_{n}$ is a locally free coherent sheaf of ${{\mathcal O}}_{Y_{n}}$-modules.
By definition, the space of sections of ${{\mathcal E}}$ over $\widehat{X}_{Y}$ “is” precisely the projective limit $$\Gamma(\widehat{X}_{Y}, {{\mathcal E}}) := \lim_{\stackrel{\longleftarrow}{n}}\Gamma(Y_{n}, {{\mathcal E}}_{n}),$$ defined by means of the isomorphisms (\[forsyst\]) and of the induced projective system of spaces of sections: $$\Gamma(Y_{n+1}, {{\mathcal E}}_{n+1}) \stackrel{._{\vert Y_{n}}}{{\longrightarrow}} \Gamma(Y_{n}, {{\mathcal E}}_{n+1 \vert Y_{n}}) {\stackrel{\sim}{\longrightarrow}}\Gamma(Y_{n}, {{\mathcal E}}_{n}).$$
A coherent sheaf $E$ over $X$ defines a formal coherent sheaf $E_{\vert \widehat{X}_{Y}} := (E_{\vert Y_{n}})$ over $\widehat{X}_{Y}$. A formal coherent sheaf on $\widehat{X}_{Y}$ will be called *algebraizable* if, up to isomorphism, it is of the form $E_{\vert \widehat{X}_{Y}}$ for some coherent sheaf $E$ over $X$.
Using these definitions, we may state a generalized version of Theorem \[GrGr\] valid for a smooth projective scheme over an arbitrary base field $k$.
\[GrGrfor\] Let $X\hookrightarrow \PP^N_{k}$ be a smooth projective scheme over $k$, of pure dimension $d$, and let $Y := X \cap \PP^{N-1}_{k}$ be some hyperplane section, of dimension $d-1$.
[**Gr1.**]{} If $d \geq 2,$ then for any vector bundle $E$ over $X$, the restriction map $$\Gamma(X,E) {\longrightarrow}\Gamma(\widehat{X}_{Y}, E_{\vert \widehat{X}_{Y}}) := \lim_{\stackrel{\longleftarrow}{n}}\Gamma(Y_{n}, E_{\vert Y_{n}})$$ is an isomorphism.
[**Gr2.**]{} If $d \geq 3,$ then any vector bundle ${{\mathcal E}}$ over $\widehat{X}_{Y}$ is algebraizable.
Like the proof of Serre’s GAGA and of Grothendieck’s Comparison and Existence Theorems in [@GrothendieckFGA], [@EGAIII1], [@Illusie05], the proofs in SGA2 are cohomological. For instance, a key point in the proof of [**Gr2**]{} is that, since $d\geq 3,$ the Cartier divisor $Y$ has depth $\geq 2$ and the ampleness of ${{\mathcal O}}_{X}(Y)_{\vert Y}$ implies that, for every vector bundle $E_{0}$ over $Y$, the cohomology group $H^1(Y,E_{0}\otimes {{\mathcal O}}_{X}(-Y)^{\otimes n}_{\vert Y})$ vanishes for $n$ a sufficiently large positive integer (Lemma of Enriques-Severi-Zariski). This implies that, for any vector bundle ${{\mathcal E}}= (E_{n})$ over $\widehat{X}_{Y}$, the system $(H^1(Y_{n},E_{n}))$ is essentially constant, and consequently $$H^1(\widehat{X}_{Y}, {{\mathcal E}}) = \lim_{\stackrel{\longleftarrow}{n}}H^1(Y_{n}, {{\mathcal E}}_{n})$$ is a *finite-dimensional* $k$-vector space. The finite dimensionality of a first cohomology group plays the same role here as in the proofs of the Poincaré-Lefschetz-Hodge Theorem by Kodaira-Spencer, and of the GAGA Existence Theorem by Serre.
Let us also indicate that the results in SGA2 have been extended in diverse directions by Michèle Raynaud ([@RaynaudMe75]) and Faltings ([@Faltings79]), and that, besides the original cohomological proofs, it is possible to give more “classical” proofs of Theorems \[GrGr\] and \[GrGrfor\], based on Theorem \[thAndreotti\] *infra* and its formal variant, which ultimately rely on the use of “auxiliary polynomials”, familiar in Diophantine approximation and transcendence.
A Theorem of Andreotti and Hartshorne
-------------------------------------
Let us mention that diverse algebraization results concerning formal meromorphic functions along subvarieties have also been established, notably by Hironaka-Matsumura ([@HironakaMatsumura68]), Faltings ([@Faltings80], [@Faltings81]), and Chow ([@Chow86]).
We want to discuss briefly an algebraization result, concerning formal germs along curves, that is related both to the results in *loc. cit.* and to the Grauert-Grothendieck Theorems \[GrGr\] and \[GrGrfor\]. For the sake of simplicity, we state it in the analytic framework, in which it goes back to Andreotti [@Andreotti63]:
\[thAndreotti\] Let $C \hookrightarrow \PP^N_{\C}$ be a smooth connected projective complex algebraic curve, and let ${{\mathcal V}}$ be a germ of smooth $\C$-analytic submanifold along $C$ in $\PP^N(\C).$
If the normal bundle $N_{C}{{\mathcal V}}$ to $C$ in ${{\mathcal V}}$ is ample, then ${{\mathcal V}}$ is algebraic.
Observe that the normal bundle $N_{C}{{\mathcal V}}$ is an analytic vector bundle over $C$, which by GAGA defines an algebraic vector bundle over $C$. When $\dim {{\mathcal V}}= 2$, it is a line bundle, and its ampleness is equivalent to the positivity of its degree $\deg_{C}N_{C}{{\mathcal V}}.$
In Theorem \[thAndreotti\], the algebraicity of ${{\mathcal V}}$ precisely means that the dimension $\dim {\overline}{{{\mathcal V}}}^{\rm Zar}$ of its Zariski closure ${\overline}{{{\mathcal V}}}^{\rm Zar}$ in $\PP^N_{\C}$, which is at least equal to the complex dimension $\dim {{\mathcal V}}$ of the complex manifold ${{\mathcal V}}$, actually coincides with $\dim {{\mathcal V}}$. This is equivalent to the fact that the germ ${{\mathcal V}}$ is a “branch” along $C$ of some (irreducible) algebraic subset of $\PP^N_{\C}$ containing $C$.
Here again, Theorem \[thAndreotti\] admits a formal generalization, valid over any base field, where ${{\mathcal V}}$ is a smooth formal subscheme, containing $C$, of the formal completion of $\PP^N_{k}$ along a smooth projective $k$-curve. It may also be extended to higher-dimensional situations: the curve $C$ may be replaced by any smooth projective subvariety $Y$, of dimension at least $1$. This dimension condition is similar to the one in the assertions [**Gr1**]{} in Theorems \[GrGr\] and \[GrGrfor\]. Actually [**Gr1**]{} may be derived from Theorem \[thAndreotti\] and its higher-dimensional and formal generalization by considering the graphs of analytic or formal sections (see [@BostChambert-Loir07]).
In its analytic (resp. formal) form, Theorem \[thAndreotti\] is a direct consequence — by the “anonymous” argument recalled in Section \[subsec:histoire\] — of a result of Andreotti [@Andreotti63] (resp. of Hartshorne [@Hartshorne68]) which asserts that the field of meromorphic functions (resp. of formal meromorphic functions) on ${{\mathcal V}}$ is a field of transcendence degree at most $\dim {{\mathcal V}}$ over $\C$ (resp. over $k$).
Theorem \[thAndreotti\] may also be established by directly estimating the Hilbert function of the Zariski closure of ${{\mathcal V}}$, with no recourse to the (formal) meromorphic functions ([[*cf.* ]{}]{}[@Bost01], Section 3.3, and [@Bost06]). This type of argument may be seen as a geometric counterpart of the use of auxiliary polynomials in Diophantine approximation and transcendence proofs.
Algebraization criteria in the style of Theorem \[thAndreotti\] have recently been reconsidered in [@BogomolovMcQuillan01] and [@Bost01] in relation to algebraicity properties of leaves of algebraic foliations; see [@KebekusSolaToma07] for geometric applications and references, and [@Bost04] for similar geometric applications to group schemes over projective curves.
Algebraization over function fields
-----------------------------------
The above algebraization theorems, concerning formal “objects” over projective varieties over some base field $k$, may be used to derive algebraization theorems over projective varieties over function fields of the form $k(C)$, where $C$ denotes some projective variety over $k$.
We illustrate this general principle by formulating an application of Theorem \[thAndreotti\] to the algebraicity of formal germs in varieties over the function field $\C(C)$ defined by some smooth projective complex curve $C$. The details of its derivation, which is straightforward, will be left to the reader, as well as the derivation from the formal variant of Theorem \[thAndreotti\] of a similar algebraicity criterion for formal germs in varieties over a general function field $k(C)$.
Let $C$ be a smooth projective complex curve and let $\pi : {{\mathcal X}}\rightarrow C$ be a projective complex variety fibered over $C$. (In other words, $\pi$ is a flat surjective morphism of complex schemes.)
Let $K:= \C(C)$ be the function field of $C$, and let $X := {{\mathcal X}}_{K}$ be the generic fiber of $\pi$. It is a projective $K$-variety, and conversely, any projective $K$-variety may be realized as the generic fiber of a suitable projective model ${{\mathcal X}}$ fibered over $C$ as above.
Let $P$ be a $K$-point of $X$. By the projectivity of $\pi,$ it extends to a section ${{\mathcal P}}$ of $\pi$ over $C$.
Consider a smooth formal germ of a subvariety through $P$ in $X$, $$\widehat{V} := \lim_{\stackrel{{\longrightarrow}}{i}} V_{i},$$ namely a smooth formal subscheme of the completion $\widehat{X}_{P}$. Here again it is said to be algebraic when its Zariski closure ${\overline}{\widehat{V}}^{{\rm Zar}_{X}}$ in the $K$-scheme $X$ has the same dimension as $\widehat{V}.$
The $V_{i}$’s are zero-dimensional subschemes of $X ={{\mathcal X}}_{K}$ supported by $P$. Their closures in ${{\mathcal X}}$ $${{\mathcal V}}_{i}:= {\overline}{V_{i}}^{{\rm Zar}_{{{\mathcal X}}}}$$ are one-dimensional subschemes of ${{\mathcal X}}$ with support ${{\mathcal P}}$, and constitute an inductive system $${{\mathcal V}}_{0} = {{\mathcal P}}{{\lhook\joinrel\longrightarrow}}{{\mathcal V}}_{1} {{\lhook\joinrel\longrightarrow}}{{\mathcal V}}_{2} {{\lhook\joinrel\longrightarrow}}\ldots {{\lhook\joinrel\longrightarrow}}{{\mathcal V}}_{i} {{\lhook\joinrel\longrightarrow}}{{\mathcal V}}_{i+1} {{\lhook\joinrel\longrightarrow}}\ldots$$ In general this system $({{\mathcal V}}_{i})_{i \in \N}$ does *not* define a formal subscheme of the completion $\hat{{{\mathcal X}}}_{{{\mathcal P}}}$ smooth over $C$. However, this is the case when there exists a germ ${{\mathcal V}}$ of analytic submanifold of ${{\mathcal X}}^{{\rm an}}$ along ${{\mathcal P}}$ that “extends” $({{\mathcal V}}_{i})_{i \in \N}$ in the sense that ${{\mathcal V}}_{i}$ is the $i$th infinitesimal neighbourhood of ${{\mathcal P}}$ in ${{\mathcal V}}$.
\[corthA\] With the above notation, if $\widehat{V}$ extends to a germ ${{\mathcal V}}$ of a smooth analytic submanifold of ${{\mathcal X}}^{{\rm an}}$ along ${{\mathcal P}}$ and if the normal bundle $N_{{{\mathcal P}}}{{\mathcal V}}$ to ${{\mathcal P}}$ in ${{\mathcal V}}$ is ample, then $\widehat{V}$ is algebraic.
A generalization of this corollary, formulated in terms of formal geometry only, holds when the base field $\C$ is replaced by an arbitrary base field $k$. Namely, *$\widehat{V}$ is algebraic when it extends to a formal subscheme $\hat{{{\mathcal V}}}$ of $\hat{{{\mathcal X}}}_{{{\mathcal P}}}$ smooth over the base curve $C$ and when the normal bundle $N_{{{\mathcal P}}}\hat{{{\mathcal V}}}$ is ample.*
Algebraization and transcendence
================================
Various classical results in transcendence theory and Diophantine approximation may be rephrased in geometric terms as algebraization results, asserting the algebraicity of certain formal or analytic subvarieties inside algebraic varieties defined over number fields, provided suitable arithmetic and analytic conditions are satisfied (see for instance [@Bost01], [@Chambert01], [@Bost06], [@Gasbarri10]).
In this article, we are concerned with transcendence results of “Schneider-Lang type”, in the line of the classical theorems of Schneider about the transcendence of values of abelian functions ([@Schneider41], [@Schneider57]) and of their modern amplification by Lang ([@Lang62; @Lang65; @Lang66]). We shall content ourselves with two instances of these transcendence theorems, whose proofs involve only elementary analytic techniques. We refer the reader to [@Bombieri70], [@Waldschmidt79], [@Demailly82], [@Gasbarri10], [@Herblot12] for more general higher-dimensional situations and references to related work.
In the sequel, ${{\overline {\mathbb Q}}}$ will denote the algebraic closure of $\Q$ in $\C$ — or equivalently, an algebraic closure of $\Q$ equipped with some preferred embedding in $\C$.
Algebraicity of leaves of rank one algebraic foliations
-------------------------------------------------------
Let $K$ be a number field, embedded in $\C$, let $X$ be a smooth quasi-projective variety over $K$, and let $L \hookrightarrow T_{X/K}$ be a sub-vector bundle of rank 1 of its tangent bundle.
By base field extension from $K$ to $\C$ and analytification, we obtain a complex analytic manifold $X_{\C}^{{\rm an}}$ and an analytic sub-vector bundle $L^{{\rm an}}_{\C} \hookrightarrow T_{X_{\C}^{{\rm an}}}.$ Since $L^{{\rm an}}_{\C}$ has rank 1, it is integrable (in other words, its sheaf of sections is stable under Lie bracket), and defines a $\C$-analytic foliation of $X_{\C}^{{\rm an}}$. Consider some analytic leaf ${{\mathcal F}}$ of this foliation — it is a connected Riemann surface, equipped with an injective analytic immersion into $X_{\C}^{{\rm an}}$ — and assume that, for some closed discrete subset $\Delta$ of $\C$, we are given a nonconstant analytic map: $$f : \C \setminus \Delta {\longrightarrow}{{\mathcal F}}.$$
The map $f$ defines an analytic map from $\C \setminus \Delta$ into the quasi-projective complex variety $X_{\C}^{{\rm an}}\hookrightarrow \PP^N(\C)$. As such, it is said to be meromorphic on $\C$ when it extends to an analytic map, which we will still denote $f$, from $\C$ to $\PP^N(\C).$ When this holds, it is said to be *of order* $\leq \rho$ for some $\rho \in \R_{+}$ when, for every $\epsilon >0,$ it admits an analytic lift[^10] $$F = (F_{0},\ldots,F_{N}) : \C {\longrightarrow}\C^{N+1}\setminus\{0\}$$ such that $$\log^+ \max_{0\leq i \leq N} \vert F_{i}(t)\vert = O(\vert t \vert^{\rho + \epsilon}) \mbox{ when $\vert t \vert \rightarrow + \infty$.}$$
Here is a first instance of a transcendence theorem *à la* Schneider–Lang (see for instance [@Herblot12], notably Section 6, for a proof and for a discussion of earlier variants):
\[SL1\] Let $K, X, {{\mathcal F}}, \Delta$, and $f$ be as above. If
\(1) $f$ is meromorphic of finite order $\leq \rho$, and
\(2) there exists a subset $A$ of $\C\setminus \Delta$ such that $f(A) \subset X(K)$, whose cardinality $\vert A \vert$ satisfies $$\vert A \vert > 2 \rho [K:\Q],$$ then ${{\mathcal F}}$ is algebraic.
Here the algebraicity of ${{\mathcal F}}$ precisely means that the Riemann surface ${{\mathcal F}}$, injectively immersed in $X^{{\rm an}}_{\C}$, is actually a (necessarily closed and smooth) complex algebraic curve in $X_{\C}$. It is equivalent to the algebraicity of the formal germ $\widehat{{{\mathcal F}}}_{f(z)}$ of ${{\mathcal F}}$ through $f(z)$, for any $z \in A.$ The formal germ $\widehat{{{\mathcal F}}}_{f(z)} \hookrightarrow \widehat{X}_{\C,f(z)}$ is indeed defined[^11] over $K$, and consequently its Zariski closure in $X_{\C}$ is also. Finally, when conditions (1) and (2) hold, ${{\mathcal F}}$ is the set of complex points of some smooth closed $K$-curve in $X.$
Classically a transcendence theorem *à la* Schneider–Lang like Theorem \[SL1\] is rather expressed in the following contrapositive formulation: *if $f$ is meromorphic of finite order $\rho$ and if ${{\mathcal F}}$ is not algebraic, then the cardinality of the subset $f^{-1}(X(K))$ of $\C\setminus \Delta$ is at most $2\rho [K:\Q].$*
A simple but nontrivial instance of Theorem \[SL1\] arises when $$X := \A^1 \times \G_{m},$$ $$L:=(\partial/\partial x + y \,\partial/\partial y) {{\mathcal O}}_{X}$$ (where $x$ and $y$ denote the standard coordinates on $\A^1 \times \G_{m} \hookrightarrow \A^2$), and ${{\mathcal F}}$ is the image of $$\begin{array}{crcl}
f : & \C & {\longrightarrow}& X^{{\rm an}}_{\C} \\
& t & {\longmapsto}& (t,e^t).
\end{array}$$ Clearly $f$ is of order $\leq 1$ and ${{\mathcal F}}$ is not algebraic, and Theorem \[SL1\] asserts that, for any number field $K$ in $\C$, the intersection $f^{-1}(X(K))$ is finite, of cardinality $\leq 2 [K:\Q].$ Besides, if for some $z$ in $K$, $f(z)$ belongs to $X(K)$, then for any $n\in \Z,$ $f(nz)$ belongs to $X(K)$. Consequently in this case Theorem \[SL1\] boils down to the *Theorem of Hermite-Lindemann*, which asserts that *for any non-zero complex number $z$, $(z, e^z)$ does not belong to ${{\overline {\mathbb Q}}}^2$.*
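The counting underlying this deduction may be spelled out as follows (a routine verification, using $\rho = 1$ and the contrapositive form of the theorem):

```latex
% Suppose, by contradiction, that z_0 is a non-zero complex number with
% (z_0, e^{z_0}) algebraic, and put K := Q(z_0, e^{z_0}). For every n in Z,
f(n z_0) = \bigl(n z_0,\, (e^{z_0})^n\bigr) \in X(K),
% so that f^{-1}(X(K)) contains the infinite set Z z_0. Since f has order
% <= 1 and the leaf F is not algebraic, Theorem SL1 bounds the cardinality
% of f^{-1}(X(K)) by 2 [K:Q] < infinity: a contradiction.
```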
Algebraic Lie subalgebras
-------------------------
Let $G$ be a (quasi-projective) algebraic group over ${{\overline {\mathbb Q}}}$, and let ${{\rm Lie\,}}G$ denote its Lie algebra. Observe that $${{\rm Lie\,}}G_{\C} := {{\rm Lie\,}}G \otimes_{{{\overline {\mathbb Q}}}}\C \simeq {{\rm Lie\,}}(G_{\C})$$ may be identified with the Lie algebra of the complex Lie group $G^{{\rm an}}_{\C}.$ In particular, we may consider the exponential map of this Lie group: $$\exp_{G_{\C}} : {{\rm Lie\,}}G_{\C} {\longrightarrow}G^{{\rm an}}_{\C}.$$ It is a $\C$-analytic map, étale at $0$, and of finite order.
We may also consider the formal variant of this exponential map: $$\widehat{\exp}_{G}: ({{\rm Lie\,}}G)^{\wedge}_{0} {\stackrel{\sim}{\longrightarrow}}\widehat{G}_{e},$$ which is an isomorphism between the formal completion of ${{\rm Lie\,}}G$ at $0$ — defined as the formal spectrum of the completion of the symmetric algebra ${\rm Sym}^\bullet({{\rm Lie\,}}G)^\vee$, $$({{\rm Lie\,}}G)^{\wedge}_{0} := {\rm Spf}[{\rm Sym}^\bullet({{\rm Lie\,}}G)^\vee]^\wedge$$ — and the formal completion $\widehat{G}_{e}$ of $G$ at its unit element $e$.
A ${{\overline {\mathbb Q}}}$-Lie subalgebra $V$ of ${{\rm Lie\,}}G$ will be called *algebraic* when the formal subgroup $\widehat{\exp}_{G} V_{0}^\wedge$ that it defines may be algebraized, or equivalently, when *there exists a ${{\overline {\mathbb Q}}}$-algebraic subgroup $H$ of $G$ such that $V = {{\rm Lie\,}}H$.*
Transcendence techniques *à la* Schneider-Lang may be used to derive “arithmetic criteria” for a Lie subalgebra of ${{\rm Lie\,}}G$ to be algebraic. For instance, when $G$ is commutative — so that any ${{\overline {\mathbb Q}}}$-vector subspace of ${{\rm Lie\,}}G$ is a Lie subalgebra — they lead to the following result, which appears as a vast generalization of Schneider’s original result in [@Schneider41] (see [@Lang66b], IV, §4, Th. 2, when $G$ is a linear group or an abelian variety, and [@Waldschmidt79], Th. 5.2.1, for a general commutative algebraic group $G$):
\[SL2\] For any commutative algebraic group $G$ over ${{\overline {\mathbb Q}}}$ and any ${{\overline {\mathbb Q}}}$-vector subspace $V$ of ${{\rm Lie\,}}G$, the following two conditions are equivalent:
*(1)* $V$ is an algebraic Lie subalgebra;
*(2)* there exists a family $(w_{i})_{i \in I}$ of elements of $V_{\C}$ which generates the $\C$-vector space $V_{\C}$ and satisfies, for any $i \in I,$ $$\exp_{G_{\C}} w_{i} \in G({{\overline {\mathbb Q}}}).$$
The direct implication ${\rm (1)} \Rightarrow {\rm (2)}$ is straightforward. The converse implication ${\rm (2)} \Rightarrow {\rm (1)}$ is a transcendence statement. Consider for instance the case where $G = \G_{m}^2$. Then the (connected) algebraic subgroups of $G$ are defined by monomial equations, and consequently the algebraic Lie subalgebras $V$ of $${{\rm Lie\,}}G = {{\rm Lie\,}}\G_{m}^2= {{\overline {\mathbb Q}}}.x\partial/\partial x \oplus {{\overline {\mathbb Q}}}.y\partial/\partial y$$ are precisely the ${{\overline {\mathbb Q}}}$-vector subspaces of ${{\rm Lie\,}}G$ which are $\Q$-rational in the basis $(x\partial/\partial x, y\partial/\partial y)$. Therefore Theorem \[SL2\] for $G=\G_{m}^2$ becomes the *Theorem of Gelfond-Schneider*, which asserts that *for any $\alpha$ in ${{\overline {\mathbb Q}}}^\ast$ and any *non-zero* complex number $\log \alpha$ such that $\exp (\log \alpha) = \alpha$, and for any $\beta$ in ${{\overline {\mathbb Q}}}\setminus \Q,$ $\alpha^\beta := \exp (\beta \log \alpha)$ does not belong to ${{\overline {\mathbb Q}}}$.*
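Let us make this deduction explicit (a hedged worked example): assume $\alpha \in {{\overline {\mathbb Q}}}^\ast$, $\log \alpha \neq 0$, $\beta \in {{\overline {\mathbb Q}}}\setminus \Q$, and suppose by contradiction that $\alpha^\beta \in {{\overline {\mathbb Q}}}$.

```latex
% Consider the line in Lie G_m^2 defined by beta:
V := \overline{\Q}\,\bigl(x\partial/\partial x + \beta\, y\partial/\partial y\bigr)
   \subset {\rm Lie\,}\G_m^2 .
% The single vector w := \log\alpha \cdot (x\partial/\partial x
% + \beta\, y\partial/\partial y) generates V over C, and since the
% exponential map sends u.xd/dx + v.yd/dy to (e^u, e^v),
\exp_{\G^2_{m,\C}} w = (\alpha, \alpha^\beta) \in \G_m^2(\overline{\Q}).
% By the implication (2) => (1), V is an algebraic Lie subalgebra, hence
% Q-rational in the basis (xd/dx, yd/dy), which forces beta to lie in Q:
% a contradiction.
```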
Observe also that, when $\dim V =1,$ Theorem \[SL2\] follows from Theorem \[SL1\] applied to the translation invariant sub-vector bundle $L$ in $T_{G/{{\overline {\mathbb Q}}}}$ such that $L_{e}= V$. (Choose $K$ large enough to have $G$ and $V$ defined over $K$.) In general, Theorem \[SL2\] may be seen as an algebraic integrability criterion for translation-invariant algebraic foliations on the algebraic groups $G$.
Let me point out that Theorem \[SL2\] is now subsumed in stronger transcendence results on commutative algebraic groups, such as the theorems of Baker on linear forms in logarithms and the analytic subgroup theorem of Wüstholz. The reader may find a recent survey of these results in the monograph [@BakerWuestholz07].
Morphisms of commutative algebraic groups {#Morag}
-----------------------------------------
In the sequel, we shall use a corollary of Theorem \[SL2\] which describes morphisms of connected commutative algebraic groups over ${{\overline {\mathbb Q}}}$ in terms of Lie theoretic data. This type of consequence was already pointed out by Bertrand in [@Bertrand83], Section 5, Prop. 2B, where Theorem \[SL2\] is applied in a similar way to investigate the ring of endomorphisms of a commutative algebraic group.
If $G$ is a connected commutative algebraic group over $\C$, we may introduce its group of “periods” $${{\rm Per}\,}G := \ker \exp_{G},$$ defined as the kernel of its exponential map. It is a discrete subgroup of its Lie algebra ${{\rm Lie\,}}G$, and fits into an exact sequence of commutative complex Lie groups $$0 {\longrightarrow}{{\rm Per}\,}{G}\, {{\lhook\joinrel\longrightarrow}}{{\rm Lie\,}}G \xrightarrow{\exp_{G}} G^{{\rm an}}{\longrightarrow}0.$$
We shall say that *$G$ satisfies Condition* $\mathbf{LP}$ when the group of periods ${{\rm Per}\,}G$ generates ${{\rm Lie\,}}G$ as a complex vector space.
Observe that this condition is preserved by isogenies, and by forming quotients and products, and is satisfied by the multiplicative group $\G_{m\C}$, complex abelian varieties, and universal vector extensions. Actually, a connected commutative algebraic group $G$ over $\C$ satisfies Condition $\mathbf{LP}$ precisely when $G$ is “almost semi-abelian” or “anti-additive” in the sense of [@BertrandPillay10], Section 3.1, namely when the torsion points of $G(\C)$ are Zariski dense in $G$, or equivalently when there is no nontrivial morphism of algebraic groups from $G$ to the additive group $\G_{a\C}$ ([[*cf.* ]{}]{}*loc. cit.*, Appendix I). In particular, Condition $\mathbf{LP}$ is a purely algebraic condition, invariant under the automorphisms of the field $\C$.
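For concreteness, the period groups of the basic examples above read as follows (immediate from the definitions):

```latex
% Multiplicative group: exp is t -> e^t, so
{\rm Per}\,\G_{m,\C} = 2\pi i\,\Z \ \subset\ \C = {\rm Lie\,}\G_{m,\C},
% which generates Lie G_m as a complex vector space: Condition LP holds.
% Abelian variety A over C: Per A = H_1(A^{an}, Z) is a lattice of rank
% 2 dim A in Lie A, so Condition LP holds as well. Additive group:
% exp_{G_a} is the identity, so
{\rm Per}\,\G_{a,\C} = 0,
% and Condition LP fails, in accordance with the existence of the
% (nontrivial) identity morphism from G_a to G_a.
```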
\[CorSL2\] Let $G_{1}$ and $G_{2}$ be connected commutative algebraic groups over ${{\overline {\mathbb Q}}}$.
1\) For any $\phi$ in the $\Z$-module ${\operatorname{Hom}}_{{\rm{gp}}/{{\overline {\mathbb Q}}}}(G_{1}, G_{2})$ of morphisms of algebraic groups over ${{\overline {\mathbb Q}}}$ from $G_{1}$ to $G_{2}$, the ${{\overline {\mathbb Q}}}$-linear map $${{\rm Lie\,}}\phi := D\phi (e) : {{\rm Lie\,}}G_{1} {\longrightarrow}{{\rm Lie\,}}G_{2}$$ satisfies $$({{\rm Lie\,}}\phi)_{\C}({{\rm Per}\,}G_{1\C}) \subset {{\rm Per}\,}G_{2\C}.$$
The map $$\label{LieGamma}
{{\rm Lie\,}}: {\operatorname{Hom}}_{{\rm{gp}}/{{\overline {\mathbb Q}}}}(G_{1}, G_{2}) {\longrightarrow}\{ \psi \in {\operatorname{Hom}}_{{{\overline {\mathbb Q}}}}({{\rm Lie\,}}G_{1},{{\rm Lie\,}}G_{2})\vert \psi_{\C}({{\rm Per}\,}G_{1\C}) \subset {{\rm Per}\,}G_{2\C} \}$$ so defined is an injective morphism of $\Z$-modules.
2\) When the group $G_{1\C}$ satisfies condition $\mathbf{LP}$, then the morphism (\[LieGamma\]) is bijective.
[[**Proof.**]{}]{} Assertion 1) follows from the identification of $({{\rm Lie\,}}\phi)_{\C}$ with the differential ${{\rm Lie\,}}\phi_{\C}:= D\phi_{\C}(e)$ of the complexification $\phi_{\C}: G_{1\C}\rightarrow G_{2\C}$ of the morphism of ${{\overline {\mathbb Q}}}$-algebraic groups $\phi,$ together with the commutativity of the diagram: $$\begin{CD}
{{\rm Lie\,}}G_{1\C} @>{{{\rm Lie\,}}\phi_{\C}}>> {{\rm Lie\,}}G_{2\C} \\
@V{\exp_{G_{1\C}}}VV @VV{\exp_{G_{2\C}}}V \\
G_{1\C}^{{\rm an}}@>{\phi_{\C}}>> G_{2\C}^{{\rm an}}.
\end{CD}$$
To prove 2), assume that condition $\mathbf{LP}$ is satisfied by $G_{1\C},$ and consider some ${{\overline {\mathbb Q}}}$-linear map $$\psi : {{\rm Lie\,}}G_{1} {\longrightarrow}{{\rm Lie\,}}G_{2}$$ such that $\psi_{\C}({{\rm Per}\,}G_{1\C}) \subset {{\rm Per}\,}G_{2\C}$. We need to establish the existence of a morphism of ${{\overline {\mathbb Q}}}$-algebraic groups $\phi: G_{1} {\longrightarrow}G_{2}$ such that $$\label{psiphi}
\psi = {{\rm Lie\,}}\phi.$$
To achieve this, we will apply Theorem \[SL2\] to the group $G := G_{1} \times G_{2}$, and to the subspace $V$ of $${{\rm Lie\,}}G = {{\rm Lie\,}}G_{1} \oplus {{\rm Lie\,}}G_{2}$$ defined as the graph of $\psi$.
Indeed, as $G$ is commutative, $V$ is a Lie subalgebra of ${{\rm Lie\,}}G$. Moreover the complex vector space $V_{\C}$ is the graph of $\psi_{\C}$ and therefore contains $$\widetilde{{{\rm Per}\,}G_{1\C}} := \{ (\gamma, \psi_{\C}(\gamma)), \gamma \in {{\rm Per}\,}G_{1\C} \},$$ which is included in ${{\rm Per}\,}G_{1\C} \times {{\rm Per}\,}G_{2\C} = {{\rm Per}\,}G_{\C}$. Besides, the condition $\mathbf{LP}$ on $G_{1\C}$ shows that $\widetilde{{{\rm Per}\,}G_{1\C}}$ generates this $\C$-vector space. According to Theorem \[SL2\], $V$ is algebraic and is the Lie algebra of some connected ${{\overline {\mathbb Q}}}$-algebraic subgroup $H$ of $G$.
The first projection $p :={{{\rm pr}}}_{1\vert H} : H {\longrightarrow}G_{1}$ is étale, since ${{\rm Lie\,}}p$ may be identified with the isomorphism from the graph $V$ of $\psi$ onto ${{\rm Lie\,}}G_{1}$. Moreover $H_{\C}^{{\rm an}}$ is the image of $V_{\C}$ by $\exp_{G_{\C}}$. This immediately implies that ${p}_{\C} : H_{\C} {\longrightarrow}G_{1\C}$ is injective, and finally that $p$ is an isomorphism. In other words, $H$ is the graph of some morphism $\phi$ of algebraic groups from $G_{1}$ to $G_{2}$. Clearly it satisfies (\[psiphi\]). [\
${\square}_{\tiny{\mbox{}}}$]{}
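To make the bijectivity in 2) concrete, here is a minimal worked example (added for illustration) in the simplest case $G_{1} = G_{2} = \G_{m}$, for which condition $\mathbf{LP}$ holds:

```latex
% Hom_{gp}(G_m, G_m) = Z, the morphism attached to an integer n being
% t |--> t^n, whose differential at the identity is multiplication
% by n on Lie G_m.  Conversely, a Qbar-linear map psi on Lie G_m,
% i.e. multiplication by a scalar c in Qbar, preserves the periods
% precisely when
c \cdot 2\pi i\,\Z \subset 2\pi i\,\Z
\iff c \in \Z ,
% so the map (LieGamma), sending n to multiplication by n, is
% indeed bijective in this case.
```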
Transcendence theorems and the analogy between number fields and function fields {#subsec:analogy}
---------------------------------------------------------------------------------
Theorems \[SL1\] and \[SL2\] may be seen as arithmetic counterparts of algebraization theorems such as Andreotti’s Theorem \[thAndreotti\], or $\mathbf{Gr1}$ in Theorems \[GrGr\] and \[GrGrfor\], or more specifically, of their consequences concerning algebraization over function fields, such as Corollary \[corthA\] and its formal variant. The role of the function field $\C(C)$ or $k(C)$ is now played by ${{\overline {\mathbb Q}}}$ or by a number field $K$ over which the geometric data $X$ and $L$, or $G$ and $V$, are defined.
Observe that the so-called Kronecker dimension of $K$ — namely the Krull dimension of ${{\rm Spec\, }}{{\mathcal O}_K}$ — is *one*, and that the algebraization Theorems \[SL1\] and \[SL2\], which are algebraicity criteria for smooth formal germs of subvarieties through $K$-rational *points*, isomorphic to ${{\rm Spec\, }}K$, are indeed algebraization theorems concerning smooth formal germs along some *arithmetic curves* ${{\rm Spec\, }}{{\mathcal O}_K}$ in some integral model of the given $K$-variety.
The classical proofs of Theorems \[SL1\] and \[SL2\] may be understood in a way that makes this geometric analogy precise. This geometric approach even suggests the formulation and the proof of new transcendence theorems, as demonstrated by the recent works of Gasbarri [@Gasbarri10] and Herblot [@Herblot12] who have established sophisticated generalizations of previously known transcendence theorems *à la* Schneider-Lang. I might also refer the reader to [@Chambert01] and [@Bost06] for discussions of this geometric approach and of some of its applications in the framework of Diophantine results *à la* Chudnovsky ([@ChudnovskysGroth85], [@ChudnovskysAcad85]) instead of Schneider-Lang. The arithmetic counterparts of the ampleness conditions in the geometric theorem of Andreotti-Hartshorne and $\mathbf{Gr1}$ appear more clearly in this somewhat simpler framework.
At the present stage, in this analogy, there is no known counterpart in transcendence theory of the general Existence Theorems, such as $\mathbf{Gr2}$ in Theorems \[GrGr\] and \[GrGrfor\]. This absence appears especially regrettable when one considers the important geometric applications of these theorems: we have discussed at length several consequences of the GAGA Existence Theorem in Sections \[subsec:algline\], \[subsec:algebdeRham\], and \[subscec:alganstructures\]; as demonstrated in [@GrothendieckSGA2], $\mathbf{Gr2}$ is the key to a modern approach to “Lefschetz-type theorems”, which compare invariants of projective varieties, such as their fundamental group or their Picard group, to those of their hyperplane sections.
The dimension condition $$\dim Y \geq 2$$ in $\mathbf{Gr2}$ leads one, in a Kroneckerian perspective, to expect a suitable arithmetic counterpart of $\mathbf{Gr2}$ to be an algebraization criterion concerning formal line or vector bundles over the completion $\widehat{X}_{Y}$ of some algebraic variety $X$ over a number field $K$, along a smooth projective embedded curve $Y$ over $K$, or if one prefers, over the completion $\widehat{{{\mathcal X}}}_{{{\mathcal Y}}}$ of some scheme of finite type ${{\mathcal X}}$ over ${{\rm Spec\, }}{{\mathcal O}_K}$ along a projective arithmetic surface ${{\mathcal Y}}.$
In the spirit of transcendence theorems *à la* Schneider–Lang like Theorems \[SL1\] and \[SL2\], this criterion would also require some “differential algebraic” conditions (comparable to the occurrence of algebraic foliations in these theorems) and some “analytic control” on the considered formal vector bundles.
The remainder of this article is devoted to presenting such a criterion, in a conjectural form, and its relation to the Grothendieck Period Conjecture in codimension 1.
The proof of this last conjecture for abelian varieties may actually be derived from Theorem \[SL2\] and its Corollary \[CorSL2\]. As it provides a further illustration of the “concrete geometric content” of transcendence theorems *à la* Schneider–Lang, we begin with a discussion of this material in Part \[GPCAb\]. Then, in Sections \[basicD\] to \[ExtD\], we review the formalism of $D$-group schemes and of their extensions that will be used in the last part to formulate our conjectural algebraization criterion.
The Grothendieck Period Conjecture for cycles of codimension 1 in abelian varieties {#GPCAb}
===================================================================================
Grothendieck’s conjecture $GPC^1(X)$ {#GPC}
------------------------------------
Let $X$ be a smooth projective algebraic variety over ${{\overline {\mathbb Q}}},$ and let $X_{\C}$ denote the smooth complex projective variety $X \otimes_{{{\overline {\mathbb Q}}}}\C,$ and $X^{{\rm an}}$ the corresponding compact complex manifold.
As discussed in Section \[subsec:algebdeRham\], the Picard groups of $X$, $X_{\C}$, and $X_{\C}^{{\rm an}}$ — which classify the algebraic line bundles over $X$ and $X_{\C}$, and the analytic line bundles over $X_{\C}^{{\rm an}}$ — fit into the following commutative diagram: $$\label{bigdiag}
\begin{CD}
{{\rm Pic}}(X) @>{c_{1{{\rm dR}}/{{\overline {\mathbb Q}}}}}>> {H_{\rm dR}}^2(X/{{\overline {\mathbb Q}}}) \\
@VVV @VV{.\otimes_{{{\overline {\mathbb Q}}}}1_{\C}}V \\
{{\rm Pic}}(X_{\C}) @>{c_{1{{\rm dR}}/\C}}>> {H_{\rm dR}}^2(X_{\C}/\C) \\ @VV{.^{{\rm an}}}V @VV{.^{{\rm an}}}V \\
{{\rm Pic}}(X_{\C}^{{\rm an}}) @>{c_{1{{\rm dR}}}^{{\rm an}}}>> {H_{\rm dR}}^2(X^{{\rm an}}_{\C}/\C) \\
@VV{c_{1{{\rm top}}}}V @VV{\text{de Rham isomorphism}}V \\
H^2(X^{{\rm an}}_{\C}, \Z) @>{2\pi i (. \otimes_{\Z}1_\C)}>> H^2(X_{\C}^{{\rm an}},\C).
\end{CD}$$
The upper vertical arrows are induced by the field extension ${{\overline {\mathbb Q}}}\hookrightarrow \C$. The map ${{\rm Pic}}(X) {\longrightarrow}{{\rm Pic}}(X_{\C})$ maps the class of some line bundle $L$ over $X$ to the class of the line bundle $L_{\C}$ over $X_{\C}$, and is injective, but not surjective when the connected Picard variety ${{\rm Pic}}_{0}(X/{{\overline {\mathbb Q}}})$ has positive dimension[^12]. However, since any line bundle over $X_{\C}$ is algebraically equivalent to some line bundle defined over ${{\overline {\mathbb Q}}}$, the images of ${{\rm Pic}}(X)$ and ${{\rm Pic}}(X_{\C})$ by the first Chern class coincide. The map ${H_{\rm dR}}^2(X/{{\overline {\mathbb Q}}}) {\longrightarrow}{H_{\rm dR}}^2(X_{\C}/\C)$ induces an isomorphism ${H_{\rm dR}}^2(X/{{\overline {\mathbb Q}}}) \otimes_{{{\overline {\mathbb Q}}}}\C {\stackrel{\sim}{\longrightarrow}}{H_{\rm dR}}^2(X_{\C}/\C)$. The image in ${H_{\rm dR}}^2(X_{\C}/\C)$ of an element $\alpha$ in ${H_{\rm dR}}^2(X/{{\overline {\mathbb Q}}})$ will be denoted $\alpha \otimes_{{{\overline {\mathbb Q}}}}1_{\C}$.
The two middle vertical arrows $.^{{\rm an}}$, defined by analytification, are isomorphisms according to GAGA. The analytification isomorphism ${H_{\rm dR}}^2(X_{\C}/\C) {\stackrel{\sim}{\longrightarrow}}{H_{\rm dR}}^2(X^{{\rm an}}_{\C}/\C)$ will be written as an equality.
The image of some class $\beta \in H^2(X^{{\rm an}}_{\C}, \Z)$ by the natural map $H^2(X^{{\rm an}}_{\C}, \Z) {\longrightarrow}H^2(X^{{\rm an}}_{\C}, \C)$ (defined by extending the coefficients from $\Z$ to $\C$) will be denoted $\beta \otimes_{\Z} 1_\C$, and the image of some class $\gamma$ in ${H_{\rm dR}}^2(X^{{\rm an}}_{\C}/\C)$ by the de Rham isomorphism will be denoted $\gamma^{\rm B}$.
We may define the subgroup $H^2_{\rm Gr}(X)$ of “Grothendieck’s classes” in $H^2_{{{\rm dR}}}(X/{{\overline {\mathbb Q}}}) \oplus H^2(X_{\C}^{{\rm an}}, \Z)$ by the condition that, for any $\alpha \in {H_{\rm dR}}^2(X/{{\overline {\mathbb Q}}})$ and any $\beta \in H^2(X_{\C}^{{\rm an}}, \Z)$: $$\label{defGr} (\alpha,\beta) \in H^2_{\rm Gr}(X) \Longleftrightarrow (\alpha \otimes_{{{\overline {\mathbb Q}}}} 1_{\C})^{{{\rm B}}} = 2\pi i \, \beta\otimes_{\Z}1_{\C}.$$ The commutativity of the diagram above shows that the algebraic and topological first Chern classes define a morphism of abelian groups: $$\begin{array}{crcl}
c_{1{{\rm dRB}}}: & {{\rm Pic}}(X) & {\longrightarrow}& H^2_{\rm Gr}(X) \\
& [L] & {\longmapsto}& (c_{1{{\rm dR}}}(L), c_{1{{\rm top}}}(L^{{\rm an}}_{\C})).
\end{array}$$
The classical Grothendieck Period Conjecture[^13] leads one to conjecture that *the morphism $c_{1{{\rm dRB}}}$ is onto*, namely that *a class $\gamma$ in $H^2(X^{{\rm an}}_{\C}, \Z)$ such that $2 \pi i \, \gamma \otimes_{\Z}1_{\C}$ is ${{\overline {\mathbb Q}}}$-rational in $$H^2(X^{{\rm an}}_{\C},\C) \simeq {H_{\rm dR}}^2(X/{{\overline {\mathbb Q}}}) \otimes_{{{\overline {\mathbb Q}}}} \C$$ is algebraic* in the sense of Section \[subsec:algline\].
This conjectural assertion may be called *the Grothendieck Period Conjecture in codimension 1* for the smooth projective variety $X$ over ${{\overline {\mathbb Q}}}$ and will be denoted $GPC^1(X)$ in the sequel.
Conjecture $GPC^1(X)$ admits a $\Q$-rational version, *a priori* weaker, that asserts the surjectivity of the map $$c_{1{{\rm dRB}}\Q}: {{\rm Pic}}(X)_{\Q} {\longrightarrow}H^2_{\rm Gr}(X)_{\Q}$$ deduced from $c_{1{{\rm dRB}}}$ by tensoring with $\Q$. (The tensor product $H^2_{\rm Gr}(X)_{\Q}:= H^2_{\rm Gr}(X)\otimes{\Q}$ may be identified with the $\Q$-vector subspace of $H^2_{{{\rm dR}}}(X/{{\overline {\mathbb Q}}}) \oplus H^2(X_{\C}^{{\rm an}}, \Q)$ defined by the right-hand side of (\[defGr\]), with $.\otimes_{\Z}.$ replaced by $.\otimes_{\Q}.$) A special feature of the codimension 1 case of the Grothendieck Period Conjecture is that this rational version of the conjecture — which is the one that appears in *loc. cit.* — actually implies the above “integral” version. Indeed, for any positive integer $n$, a class $\gamma$ in $H^2(X_{\C}^{{\rm an}}, \Z)$ is algebraic if $n\gamma$ is algebraic.
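As a sanity check, added here as a worked example, $GPC^1(X)$ holds trivially whenever $H^2$ is generated by divisor classes, for instance for $X = \mathbb{P}^1_{{{\overline {\mathbb Q}}}}$:

```latex
% Pic(P^1) = Z.[O(1)],
% H^2_dR(P^1/Qbar) = Qbar . c_1dR(O(1)),
% H^2(P^1(C)^an, Z) = Z . c_1top(O(1)),
% and, by the commutative diagram (bigdiag), the comparison
% isomorphism sends c_1dR(O(1)) (tensored with 1_C) to
% 2 pi i . c_1top(O(1)) (tensored with 1_C).  Hence
H^{2}_{\rm Gr}(\mathbb{P}^{1})
= \Z\,\bigl(c_{1{\rm dR}}(\mathcal{O}(1)),\, c_{1{\rm top}}(\mathcal{O}(1))\bigr),
% and the morphism c_1dRB : Pic(P^1) --> H^2_Gr(P^1), which sends
% [O(n)] to n times the generator, is already surjective.
```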
More generally, for any positive integer $k$, we may consider the Grothendieck Period Conjecture in codimension $k$, $GPC^k(X)$: it asserts that any class $\gamma$ in $H^{2k}(X^{{\rm an}}_{\C}, \Q)$ such that $(2\pi i)^k \gamma \otimes_{\Q}1_{\C}$ is ${{\overline {\mathbb Q}}}$-rational in $H^{2k}(X^{{\rm an}}_{\C},\C) \simeq {H_{\rm dR}}^{2k}(X/{{\overline {\mathbb Q}}}) \otimes_{{{\overline {\mathbb Q}}}} \C$ is algebraic. See [@AndreMotives04], Section 7.5, for a discussion of the close relation between the original version of the Grothendieck Period Conjecture and the fullness conjecture for the “de Rham–Betti realization”, namely the conjunction of Conjectures $GPC^k(X)$ for all smooth projective varieties $X$ over ${{\overline {\mathbb Q}}}$ and all integers $k$[^14]. To my knowledge, the known results concerning these conjectures may be summarized as follows:
\(i) the original Grothendieck Period Conjecture is known to be valid for a motive in the Tannakian category generated by the Tate motive (transcendence of $\pi$) or for an elliptic curve with complex multiplication (Chudnovsky);
\(ii) the fullness of the de Rham-Betti realization is known for $H^1$ ([[*cf.* ]{}]{}[@AndreMotives04], 7.2.3, where it is derived from the transcendence results in [@Wuestholz84]; this fullness is basically the content of Theorem \[HomAB\] *infra*, and as shown in the next paragraphs, may be derived from Schneider-Lang’s Theorem \[SL2\] and its Corollary \[CorSL2\]).
In the next sections, we shall establish the validity of Grothendieck’s Period Conjecture in codimension 1 for abelian varieties:
\[GPCA\] For any abelian variety $A$ over ${{\overline {\mathbb Q}}},$ $GPC^1(A)$ holds.
The proof of Theorem \[GPCA\] will be based on the “transcendental” characterization of algebraic Lie subalgebras in Theorem \[SL2\], via its Corollary \[CorSL2\] applied to universal vector extensions of abelian varieties, and on the identification of the Néron-Severi group of an abelian variety with the group of symmetric morphisms from the abelian variety to its dual (compare [@Bost06], Theorem 6.4). We present the details of this proof in Section \[GPCAb\]. As a preliminary, in Section \[PrelAb\] we recall classical facts concerning abelian varieties, their duality, and their universal vector extensions, and in Section \[CDRB\] we introduce the elementary, but convenient, formalism of the category ${{\mathcal C}_{\rm dRB}}$ of the “de Rham–Betti realisations” (in the spirit of the realisation categories *à la* Deligne–Jannsen [@Jannsen90]; see also [@AndreMotives04], Section 7.5.).
Abelian varieties, duality, and universal extensions {#PrelAb}
----------------------------------------------------
In this section, we work over an algebraically closed field $k$ of characteristic zero.
### Dual abelian varieties and de Rham (co)homology {#subsubPoinc}
If $A$ is an abelian variety over $k$, we shall denote ${\hat{A}}:= {{\rm Pic}}_{0}(A/k)$ the dual abelian variety. The group ${\hat{A}}(k)$ of its $k$-rational points may be identified with the subgroup ${{\rm Pic}}^{0}(A)$ of ${{\rm Pic}}(A)$ of isomorphism classes of line bundles algebraically equivalent to zero, or equivalently, with the kernel of $$c_{1{{\rm dR}}}: {{\rm Pic}}(A) {\longrightarrow}{H_{\rm dR}}^2(A/k).$$
To any morphism $\phi : A {\longrightarrow}B$ of abelian varieties over $k$ is attached the dual morphism $\hat{\phi} :{\hat{B}}{\longrightarrow}{\hat{A}}.$ It maps the class of some line bundle $L$ over $B$ algebraically equivalent to zero to the class of $\phi^\ast(L).$ This construction is additive and (contravariantly) functorial.
Let ${{\mathcal P}}_{A}$ denote the Poincaré line bundle over $A \times {\hat{A}}$. Its restriction to $0_{A}\times {\hat{A}}$ is trivial, and for any ${\hat{a}}\in {\hat{A}}(k),$ the isomorphism class of its restriction to $A \times {\hat{a}}$ is precisely ${\hat{a}}$ itself, and these properties characterize ${{\mathcal P}}_{A}$ up to isomorphism. By mapping a point $a$ in $A(k)$ to the class $\iota_{A}(a)$ of ${{\mathcal P}}_{A\vert a \times {\hat{A}}}$, one defines a canonical isomorphism $$\iota_{A} : A {\stackrel{\sim}{\longrightarrow}}\hat{{\hat{A}}},$$ which is sometimes written as an equality.
Recall that the following “biduality” properties are satisfied (compare [@BerthelotBreenMessing82], Section V.1, or [@Coleman91], Section 1). For any $\phi : A {\longrightarrow}B$ as above, $\hat{\hat{\phi}} :\hat{{\hat{A}}} {\longrightarrow}\hat{\hat{B}}$ and $\phi$ (or more exactly $\iota_{B}\circ \phi \circ \iota_{A}$) coincide. Moreover, under the composite isomorphism $$\begin{array}{rclc}
A \times {\hat{A}}& \stackrel{\sigma}{{\longrightarrow}} & {\hat{A}}\times A & \xrightarrow{Id_{{\hat{A}}}\times\iota_{A}} {\hat{A}}\times \hat{{\hat{A}}} \\
(a,{\hat{a}}) & {\longmapsto}& ({\hat{a}},a) &
\end{array}$$ the Poincaré bundle ${{\mathcal P}}_{A}$ of $A$ becomes the Poincaré bundle ${{\mathcal P}}_{{\hat{A}}}$ of ${\hat{A}}$: $$\label{Poincdual}
((Id_{{\hat{A}}}\times \iota_{A})\circ \sigma)^\ast {{\mathcal P}}_{{\hat{A}}} {\stackrel{\sim}{\longrightarrow}}{{\mathcal P}}_{A}.$$
Moreover $c_{1{{\rm dR}}}({{\mathcal P}}_{A})$ belongs to the Künneth component $H^1_{{{\rm dR}}}(A/k) \otimes H^1_{{{\rm dR}}}({\hat{A}}/k)$ of $H^2_{{{\rm dR}}}(A\times {\hat{A}}/k).$ If we define $$H_{1{{\rm dR}}}(A/k) := H^1_{{{\rm dR}}}(A/k)^\vee = {\operatorname{Hom}}_{k}(H^1_{{{\rm dR}}}(A/k),k),$$ then $c_{1{{\rm dR}}}({{\mathcal P}}_{A})$ defines an element $\varpi_{A}$ in $$H_{1{{\rm dR}}}(A/k)^\vee \otimes_{k} {H_{\rm dR}}^1({\hat{A}}/k) \simeq {\operatorname{Hom}}_{k}(H_{1{{\rm dR}}}(A/k),{H_{\rm dR}}^1({\hat{A}}/k))$$ which actually is an isomorphism: $$\varpi_{A} : H_{1{{\rm dR}}}(A/k) {\stackrel{\sim}{\longrightarrow}}{H_{\rm dR}}^1({\hat{A}}/k) = H_{1{{\rm dR}}}({\hat{A}}/k)^\vee.$$
The duality isomorphism $\varpi_{A}$ satisfies the following functoriality property.
Let $\phi: A {\longrightarrow}B$ be a morphism of abelian varieties over $k$. It induces a $k$-linear map between de Rham cohomology groups: $$H^1_{{{\rm dR}}}(\phi) :=\phi^\ast : H^1_{{{\rm dR}}}(B/k) {\longrightarrow}H^1_{{{\rm dR}}}(A/k),$$ and then by duality, between homology groups: $$H_{1{{\rm dR}}}(\phi) := H^1_{{{\rm dR}}}(\phi)^t : H_{1{{\rm dR}}}(A/k) {\longrightarrow}H_{1{{\rm dR}}}(B/k).$$ Then the dual morphism of abelian varieties $$\hat{\phi}: {\hat{B}}{\longrightarrow}{\hat{A}}$$ satisfies $$\label{dualdual}
H_{1{{\rm dR}}}(\hat{\phi}) = \varpi_{A}^{\vee -1} \circ H_{1}(\phi)^\vee \circ \varpi_{B}^\vee.$$ This follows from the isomorphism of line bundles over $A \times \hat{B}$: $$(Id_{A}\times \hat{\phi})^\ast {{\mathcal P}}_{A} \simeq (\phi \times Id_{{\hat{B}}})^\ast {{\mathcal P}}_{B},$$ and from the implied equality between first Chern classes.
Observe however that the isomorphism $$\varpi_{{\hat{A}}}: H_{1{{\rm dR}}}({\hat{A}}/k) {\stackrel{\sim}{\longrightarrow}}H^1_{{{\rm dR}}}(\hat{{\hat{A}}}/k) \simeq H_{1{{\rm dR}}}(\hat{{\hat{A}}}/k)^\vee$$ differs by a sign from the transpose of $\varpi_{A}$: $$\label{sign}
\varpi_{{\hat{A}}} = - H_{1{{\rm dR}}}(\iota_{A})^\vee \circ \varpi_{A}^\vee.$$ This follows from the equality of first Chern classes implied by the isomorphism (\[Poincdual\]), and from the fact that switching the factors $A \simeq \hat{{\hat{A}}}$ and ${\hat{A}}$ introduces a sign in the Künneth morphism $$H^1_{{{\rm dR}}}(A/k) \otimes_{k} H^1_{{{\rm dR}}}({\hat{A}}/k) {{\lhook\joinrel\longrightarrow}}H^2_{{{\rm dR}}}(A\times{\hat{A}}/k).$$
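A quick consistency check of the functoriality formula (\[dualdual\]), added here as an example: take $\phi = [n]_{A}$, the multiplication by an integer $n$ on $A$, so that $B = A$.

```latex
% The dual of [n]_A is [n]_{Ahat}, and on de Rham homology
% H_1dR([n]_A) = n . Id.  Formula (dualdual) then reads
H_{1{\rm dR}}([n]_{\hat{A}})
= \varpi_{A}^{\vee\,-1} \circ (n\,{\rm Id})^{\vee} \circ \varpi_{A}^{\vee}
= n\,{\rm Id},
% which is indeed the map induced by [n]_{Ahat} on H_1dR(Ahat/k).
```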
### Néron-Severi groups and symmetric morphisms {#NSsym}
To any line bundle $L$ over $A$ is attached a morphism of abelian varieties over $k$, $$\phi_{L}: A {\longrightarrow}\hat{A},$$ that is defined by $$\phi_{L}(a) := [\tau_{a}^\ast L \otimes L^\vee]$$ for any $a \in A(k)$, where $\tau_{a}$ denotes the translation by $a$ on $A$. Moreover $\phi_{L}$ is zero if and only if $L$ is algebraically equivalent to zero, and, for any two line bundles $L_{1}$ and $L_{2}$ on $A,$ $\phi_{L_{1}\otimes L_{2}}=\phi_{L_{1}}+\phi_{L_{2}}.$ Consequently this construction induces an injective morphism of $\Z$-modules: $$\begin{array}{rcl}
NS(A) := {{\rm Pic}}(A)/{{\rm Pic}}_{0}(A) & {\longrightarrow}& {\operatorname{Hom}}_{{{\rm gp}}/k}(A,{\hat{A}}) \\
{[L]} & {\longmapsto}& \phi_{L} .
\end{array}$$ Its image is the subgroup ${\operatorname{Hom}}_{{{\rm gp}}/k}(A,{\hat{A}})^{\text{sym}}$ of *symmetric* morphisms, namely the subgroup of morphisms $\phi : A {\longrightarrow}{\hat{A}}$ such that $$\label{symphi}
\hat{\phi} \circ \iota_{A} = \phi.$$
This actually holds for abelian schemes over an arbitrary base, as established by Nishi and Oda ([[*cf.* ]{}]{}[@Oda69], p. 77, note $(^2)$).
Observe that, at the level of de Rham (co)homology groups, the symmetry condition (\[symphi\]) translates into a *skew-symmetry* condition on $$\varpi_{A}^\vee \circ H_{1}(\phi) : H_{1{{\rm dR}}}(A/k) {\longrightarrow}H_{1{{\rm dR}}}(A/k)^\vee.$$ Indeed the “duality” formulas (\[dualdual\]) and (\[sign\]) imply the relation: $$\label{altphi}\varpi_{A}^\vee \circ H_{1}(\hat{\phi}\circ \iota_{A}) = - (\varpi_{A}^\vee \circ H_{1}(\phi))^\vee.$$
In particular, when the base field $k$ is $\C$, the above identification of $NS(A)$ with ${\operatorname{Hom}}_{{{\rm gp}}/k}(A,{\hat{A}})^{\text{sym}}$ is basically the classical theory of Riemann forms attached to line bundles over complex abelian varieties.
### Universal vector extensions
([[*cf.* ]{}]{}[@Rosenlicht58], [@Serre59], [@Messing73], [@MazurMessing74], [@Coleman91], [@BK09]).
For any abelian variety $A$ over $k$, we shall denote $\E_{A}$ the $k$-vector space $$\Gamma(A,\Omega^1_{A/k}) \simeq \Omega^1_{A/k, 0_{A}} \simeq ({{\rm Lie\,}}A)^\vee.$$ Observe that we have a canonical identification $$\E_{{\hat{A}}} \simeq ({{\rm Lie\,}}{\hat{A}})^\vee \simeq H^1(A,{{\mathcal O}}_{A})^\vee.$$
Let $V$ be a finite-dimensional $k$-vector space, and let $V^{{{\rm gp}}}$ denote the associated $k$-vector group (namely the commutative algebraic group over $k$, such that the group $V^{{{\rm gp}}}(k)$ “is” the additive group $(V,+)$). Recall that any extension of commutative algebraic groups over $k$ $$\label{VGA}
0 {\longrightarrow}V^{{{\rm gp}}} {\longrightarrow}G {\longrightarrow}A {\longrightarrow}0$$ of some abelian variety $A$ over $k$ by $V^{{\rm gp}}$ determines an ${{\mathcal O}}_{A}\otimes_{k}V$-torsor over $A$, and that this construction defines a canonical isomorphism[^15] $$\label{canext}
{{\rm Ext}}^1_{{\rm{c-gp}}/k}(A, V^{{\rm gp}}) {\stackrel{\sim}{\longrightarrow}}{{\rm Ext}}^1_{{{\mathcal O}}_{A}-\text{mod}}({{\mathcal O}}_{A}, {{\mathcal O}}_{A}\otimes_{k}V) \simeq H^1(A,{{\mathcal O}}_{A}) \otimes_{k} V \simeq {\operatorname{Hom}}_{k}(\E_{{\hat{A}}},V).$$ Moreover an extension (\[VGA\]) of commutative algebraic groups of an abelian variety by a vector group admits no nontrivial automorphism. Consequently the isomorphism (\[canext\]) with $V= \E_{{\hat{A}}}$ shows that to the element $Id_{\E_{{\hat{A}}}}$ is canonically associated a vector extension of $A$ by the vector group defined by $\E_{{\hat{A}}}$, which we shall denote $$\label{defEA}
0 {\longrightarrow}\E_{{\hat{A}}} {{\lhook\joinrel\longrightarrow}}E(A) \stackrel{p_{A}}{{\longrightarrow}} A {\longrightarrow}0.$$ It is the *universal vector extension* of $A$: any vector extension (\[VGA\]) may be realized uniquely as a pushout of (\[defEA\]), namely, as the pushout by its “classifying element” in the right-hand side of (\[canext\]).
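For an elliptic curve $A$, the universal vector extension admits the following classical analytic description, recalled here as an example (up to the choices of normalization made in the references quoted at the beginning of this subsection):

```latex
% dim A = 1, so E_{Ahat} = Gamma(Ahat, Omega^1) is one-dimensional
% and E(A) is a two-dimensional extension of A by G_a:
0 \longrightarrow \G_{a} \longrightarrow E(A)
\stackrel{p_{A}}{\longrightarrow} A \longrightarrow 0 .
% Over C, writing A^an = C/(Z omega_1 + Z omega_2), the exponential
% of E(A)_C may be expressed through the Weierstrass functions of
% the lattice: the A-coordinates of exp(z, w) are (p(z), p'(z)),
% and the G_a-coordinate involves w and the (non-periodic) zeta
% function of Weierstrass, so that the periods of E(A)_C are the
% pairs (omega_i, eta_i), the eta_i being the classical
% quasi-periods attached to the omega_i.
```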
### The functor $E$ {#functE}
Let $\phi : A {\longrightarrow}B$ be a morphism of abelian varieties over $k$. We may consider the pullback by $\phi$ of the universal vector extension of $B$, and use the universal property of the universal vector extension of $A$. We thus get the existence and uniqueness of a morphism $E(\phi)$ of $k$-algebraic groups which makes the following diagram commutative: $$\begin{CD}
E(A) @>{E(\phi)}>> E(B) \\
@VV{p_{A}}V @VV{p_{B}}V \\
A @>{\phi}>> B.
\end{CD}$$ The construction of $E(\phi)$ is clearly additive and functorial in $\phi.$ Moreover it is easily seen to be fully faithful:
For any two abelian varieties $A$ and $B$ over $k$, the morphism of $\Z$-modules $$\label{Eff}
\begin{array}{rcl}
{\operatorname{Hom}}_{{{\rm gp}}/k} (A, B) & {\longrightarrow}& {\operatorname{Hom}}_{{{\rm gp}}/k} (E(A), E(B)) \\
\phi & {\longmapsto}& E(\phi).
\end{array}$$ is an isomorphism.
### Biduality and universal vector extensions {#biduve}
We shall also use the fact that the biduality isomorphism $$\iota_{A} : A(k) {\stackrel{\sim}{\longrightarrow}}\hat{{\hat{A}}}(k) = \ker c_{1{{\rm dR}}}: H^1({\hat{A}}, {{\mathcal O}}_{{\hat{A}}}^\ast) {\longrightarrow}H^1_{{{\rm dR}}}({\hat{A}}, \Omega^\bullet_{{\hat{A}}/k})$$ may be lifted to an isomorphism $$\iota_{E(A)} : E(A)(k) {\stackrel{\sim}{\longrightarrow}}H^1({\hat{A}}, \Omega^\times_{{\hat{A}}/k}),$$ where $\Omega^\times_{{\hat{A}}/k}$ denotes the complex $${{\mathcal O}}^\ast_{{\hat{A}}} \stackrel{d\log}{{\longrightarrow}} \Omega^1_{{\hat{A}}/k} \stackrel{d}{{\longrightarrow}} \Omega^2_{{\hat{A}}/k}\stackrel{d}{{\longrightarrow}} \cdots,$$ which makes the following diagram with exact lines[^16] commutative: $$\label{horrible}
\begin{CD}
0 @>>> \E_{{\hat{A}}} @>>> E(A)(k) @>{p_{A}}>> A(k) @>>> 0 \\
@. @V{\simeq}VV @V{\simeq}V{\iota_{E(A)}}V @V{\simeq}V{\iota_{A}}V @. \\
0 @>>> H^1({\hat{A}}, \sigma^{\geq 1} \Omega^\bullet_{{\hat{A}}/k}) @>>> H^1({\hat{A}}, \Omega^\times_{{\hat{A}}/k})
@>>> \hat{{\hat{A}}}(k) @>>> 0. \\
\end{CD}$$ (For constructing the second line, recall that $F^1H^2_{{{\rm dR}}}({\hat{A}}/k)
:= H^2({\hat{A}}, \sigma^{\geq 1} \Omega^\bullet_{{\hat{A}}/k})
$ injects into $H^2_{{{\rm dR}}}({\hat{A}}/k)$, and that $c_{1{{\rm dR}}}: H^1({\hat{A}}, {{\mathcal O}}_{{\hat{A}}}^\ast) \rightarrow H^1_{{{\rm dR}}}({\hat{A}}, \Omega^\bullet_{{\hat{A}}/k})$ coincides with $d\log: H^1({\hat{A}}, {{\mathcal O}}_{{\hat{A}}}^\ast) \rightarrow F^1H^2_{{{\rm dR}}}({\hat{A}}/k).$)
Moreover the “infinitesimal” version[^17] of $\iota_{E(A)}$ defines an isomorphism $$I_{A}:= {{\rm Lie\,}}\iota_{E(A)} : {{\rm Lie\,}}E(A) {\longrightarrow}H^1({\hat{A}}, \Omega^\bullet_{{\hat{A}}/k}) = H^1_{{{\rm dR}}}({\hat{A}}/k),$$ and the infinitesimal version of (\[horrible\]) is an isomorphism of exact sequences of finite-dimensional $k$-vector spaces: $$\label{moinshorrible}
\begin{CD}
0 @>>> \E_{{\hat{A}}} @>>> {{\rm Lie\,}}E(A) @>{{{\rm Lie\,}}p_{A}}>> {{\rm Lie\,}}A @>>> 0 \\
@. @V{=}VV @V{\simeq}V{I_{A}}V @V{\simeq}V{{{\rm Lie\,}}\iota_{A}}V @. \\
0 @>>> \E_{{\hat{A}}} @>>> H^1_{{{\rm dR}}}({\hat{A}}/k) @>>> H^1({\hat{A}}, {{\mathcal O}}_{{\hat{A}}}) @>>> 0.\\
\end{CD}$$ (The second line defines the Hodge filtration on $H^1_{{{\rm dR}}}({\hat{A}}/k)$.)
Finally we get an isomorphism of $k$-vector spaces $$J_{A}:=\varpi_{A}^{-1}\circ I_{A} : {{\rm Lie\,}}E(A) {\stackrel{\sim}{\longrightarrow}}H_{1{{\rm dR}}}(A/k).$$ It is easily checked to be functorial. Namely, for any morphism $\phi: A {\longrightarrow}B$ of abelian varieties over $k$, the diagram $$\begin{CD}
{{\rm Lie\,}}E(A) @> {{{\rm Lie\,}}E(\phi)}>> {{\rm Lie\,}}E(B) \\
@V{\simeq}V{J_{A}}V @V{\simeq}V{J_{B}}V \\
H_{1{{\rm dR}}}(A/k) @>{H_{1{{\rm dR}}}(\phi)}>> H_{1{{\rm dR}}}(B/k)
\end{CD}$$ is commutative.
The category ${{\mathcal C}_{\rm dRB}}$ {#CDRB}
---------------------------------------
### Definitions
We define an additive category ${{\mathcal C}_{\rm dRB}}$ — where $\mathcal{C}$ stands for “category” or “comparison” and ${{\rm dRB}}$ stands for “de Rham – Betti” — in the following way.
Its objects are triples $$M = (M_{{{\rm dR}}}, M_{{{\rm B}}}, c_{M}),$$ where $M_{{{\rm dR}}}$ is a finite-dimensional ${{\overline {\mathbb Q}}}$-vector space, $M_{{{\rm B}}}$ a free $\Z$-module of finite rank, and $c_{M}$ an isomorphism of $\C$-vector spaces: $$c_{M}: M_{{{\rm dR}}}\otimes_{{{\overline {\mathbb Q}}}}\C {\stackrel{\sim}{\longrightarrow}}M_{{{\rm B}}}\otimes_{\Z}\C.$$
In other words, an object $M$ of ${{\mathcal C}_{\rm dRB}}$ may be seen as the data of the finite-dimensional $\C$-vector space $$M_{\C} := M_{{{\rm dR}}}\otimes_{{{\overline {\mathbb Q}}}}\C \simeq M_{{{\rm B}}}\otimes_{\Z}\C,$$ together with a “${{\overline {\mathbb Q}}}$-form” $M_{{{\rm dR}}}$ and a “$\Z$-form” $M_{{{\rm B}}}$ of $M_{\C}$.
If $M$ and $N$ are objects in ${{\mathcal C}_{\rm dRB}}$, the additive group of morphisms from $M$ to $N$ in ${{\mathcal C}_{\rm dRB}}$ is the subgroup ${\operatorname{Hom}}_{{{\rm dRB}}}(M,N)$ in ${\operatorname{Hom}}_{{{\overline {\mathbb Q}}}}(M_{{{\rm dR}}},N_{{{\rm dR}}}) \oplus {\operatorname{Hom}}_{\Z}(M_{{{\rm B}}},N_{{{\rm B}}})$ consisting of pairs of maps $\phi = (\phi_{dR},\phi_{{{\rm B}}})$ such that the following diagram is commutative: $$\begin{CD}
M_{{{\rm dR}}}\otimes_{{{\overline {\mathbb Q}}}}\C @>{\phi_{{{\rm dR}}}\otimes_{{{\overline {\mathbb Q}}}}Id_{\C}}>> N_{{{\rm dR}}}\otimes_{{{\overline {\mathbb Q}}}}\C \\
@V{\simeq}V{c_{M}}V @V{\simeq}V{c_{N}}V \\
M_{{{\rm B}}}\otimes_{\Z}\C @>{\phi_{{{\rm B}}}\otimes_{\Z}Id_{\C}}>> N_{{{\rm B}}}\otimes_{\Z}\C.
\end{CD}$$
These morphisms may be identified with the $\C$-linear maps $\phi_{\C}$ from $M_{\C}$ to $N_{\C}$ which are compatible with both their ${{\overline {\mathbb Q}}}$-forms and their $\Z$-forms. The composition of these morphisms is the obvious one, defined by the composition of the “de Rham”, “Betti”, and “complex” realizations $\phi_{{{\rm dR}}},$ $\phi_{{{\rm B}}}$, and $\phi_{\C}$ respectively.
The category ${{\mathcal C}_{\rm dRB}}$ is endowed with an internal tensor product, defined by $$M\otimes N := (M_{{{\rm dR}}}\otimes_{{{\overline {\mathbb Q}}}}N_{{{\rm dR}}},M_{{{\rm B}}}\otimes_{\Z}N_{{{\rm B}}}, c_{M}\otimes_{\C}c_{N}),$$ and with an internal duality functor, defined by $$M^\vee := ({\operatorname{Hom}}_{{{\overline {\mathbb Q}}}}(M_{{{\rm dR}}},{{\overline {\mathbb Q}}}), {\operatorname{Hom}}_{\Z}(M_{{{\rm B}}}, \Z), c^t),$$ and $$\phi^\vee := (\phi_{{{\rm dR}}}^{t},\phi_{{{\rm B}}}^t) = (. \circ \phi_{{{\rm dR}}}, . \circ \phi_{{{\rm B}}}).$$
For any integer $k$, we denote $\Z(k)$ the object of ${{\mathcal C}_{\rm dRB}}$ defined by $\Z(k)_{{{\overline {\mathbb Q}}}}= {{\overline {\mathbb Q}}}$ and $\Z(k)_{{{\rm B}}} = (2\pi i)^k \Z$ in $\Z(k)_{\C}= \C$. Observe that $\Z(0)$ and the obvious isomorphism $\Z(0)\otimes\Z(0) {\stackrel{\sim}{\longrightarrow}}\Z(0)$, mapping $1\otimes 1$ to $1$, define a unit object of ${{\mathcal C}_{\rm dRB}}$, which, endowed with $\otimes$ and $.^\vee$, becomes a rigid tensor category. In particular, for any two objects $M$ and $N$ of ${{\mathcal C}_{\rm dRB}}$, we have a natural isomorphism: $$\label{dualhom}
\begin{array}{rcl}
{\operatorname{Hom}}_{{{\rm dRB}}}(M,N) & {\stackrel{\sim}{\longrightarrow}}& {\operatorname{Hom}}_{{{\rm dRB}}}(\Z(0), M^\vee\otimes N) \\
(\phi_{{{\rm dR}}}, \phi_{{{\rm B}}}) & {\longmapsto}& (1\mapsto \phi_{{{\rm dR}}}, 1 \mapsto \phi_{{{\rm B}}}).
\end{array}$$
Moreover, for every integer $k$, we get an identification $$\label{homtwist}
{\operatorname{Hom}}_{{{\rm dRB}}}(\Z(0), M\otimes \Z(k)) {\stackrel{\sim}{\longrightarrow}}M_{{{\rm dR}}} \cap (2\pi i)^k M_{{{\rm B}}},$$ where the intersection is taken in $M_{\C}$, by mapping a morphism $\phi:\Z(0) {\longrightarrow}M\otimes\Z(k)$ to $\phi_{\C}(1)$.
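For instance, taking $M = \Z(0)$ in (\[homtwist\]) gives the following computation, added here as an example:

```latex
% Hom_dRB(Z(0), Z(k)) is identified with the intersection,
% computed inside C,
{\operatorname{Hom}}_{\rm dRB}(\Z(0), \Z(k))
\simeq {{\overline {\mathbb Q}}} \cap (2\pi i)^{k}\,\Z
= \begin{cases}
\Z & \text{if } k = 0, \\
0  & \text{if } k \neq 0,
\end{cases}
% the vanishing for k nonzero being exactly the transcendence of pi
% (Lindemann's theorem): there are no nonzero morphisms from Z(0)
% to Z(k) in C_dRB when k is not 0.
```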
### Examples, I: The (co)homology of smooth projective varieties over ${{\overline {\mathbb Q}}}$.
For any smooth projective variety $X$ over ${{\overline {\mathbb Q}}}$ and for any integer $i\geq 0,$ the algebraic de Rham cohomology of $X$ and the Betti cohomology of $X^{{\rm an}}_{\C}$ determine an object $H^i_{{{\rm dRB}}}(X)$ in ${{\mathcal C}_{\rm dRB}}$ defined as follows: $$H^i_{{{\rm dRB}}}(X):= (H^i_{{{\rm dR}}}(X/{{\overline {\mathbb Q}}}), H^i_{{{\rm B}}}(X^{{\rm an}}_{\C},\Z)/\text{torsion}, c),$$ where $c$ denotes the composition of the comparison isomorphism defined by the base change isomorphism, analytification, and the de Rham isomorphism $$H^i_{{{\rm dR}}}(X/{{\overline {\mathbb Q}}})\otimes_{{{\overline {\mathbb Q}}}}{\C} {\stackrel{\sim}{\longrightarrow}}H^i_{{{\rm dR}}}(X_{\C}/\C) {\stackrel{\sim}{\longrightarrow}}H^i_{{{\rm dR}}}(X^{{\rm an}}_{\C}) {\stackrel{\sim}{\longrightarrow}}H^i (X_{\C}^{{\rm an}}, \C)$$ and of the inverse of the isomorphism defined by extension of coefficients $$(H^i(X^{{\rm an}}_{\C},\Z)/\text{torsion}) \otimes_{\Z} \C \simeq H^i(X^{{\rm an}}_{\C},\Z)\otimes_{\Z}\C {\stackrel{\sim}{\longrightarrow}}H^i(X^{{\rm an}}_{\C},\C).$$
To a morphism $$\phi : X \longrightarrow Y$$ of smooth projective varieties over ${{\overline {\mathbb Q}}}$ is attached a morphism in “de Rham–Betti cohomology”: $$H^i_{{{\rm dRB}}}(\phi):=(H^i_{{{\rm dR}}}(\phi),H^i_{B}(\phi))$$ defined by the “pullback” morphisms $$H^i_{{{\rm dR}}}(\phi) := \phi^\ast : {H_{\rm dR}}^i(Y/{{\overline {\mathbb Q}}}) {\longrightarrow}{H_{\rm dR}}^i(X/{{\overline {\mathbb Q}}})$$ and $$H^i_{{{\rm B}}}(\phi) := \phi_{\C}^{{{\rm an}}\ast} : H^i(Y^{{\rm an}}_{\C},\Z)/\text{torsion} {\longrightarrow}H^i(X^{{\rm an}}_{\C},\Z)/\text{torsion}$$ in algebraic de Rham and Betti cohomology. This construction is clearly functorial.
Observe that, as an instance of (\[homtwist\]), we have a natural identification: $$\label{HGrHom}
H^2_{\rm Gr}(X) \simeq {\operatorname{Hom}}_{{{\rm dRB}}}(\Z(0), H^2_{{{\rm dRB}}}(X)\otimes \Z(1)).$$
We shall also define the de Rham–Betti *homology* functor by duality in ${{\mathcal C}_{\rm dRB}}$: $$H_{i {{\rm dRB}}}(X) := H^i_{{{\rm dRB}}}(X)^\vee \mbox{ and } H_{i {{\rm dRB}}}(\phi) := H^i_{{{\rm dRB}}}(\phi)^\vee.$$ Observe that $H_{i {{\rm dRB}}}(X)_{{{\rm B}}}$ and $H_{i {{\rm dRB}}}(X)_{\C}$ may be identified with the Betti homology groups $H_{i}(X_{\C}^{{\rm an}}, \Z)$ modulo torsion and $H_{i}(X_{\C}^{{\rm an}}, \C)$ of $X_{\C}^{{\rm an}}$.
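(The `add_after` anchor listed at 15484 is realized at 15480 above; see the degree-$0$ example there.)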
### Examples, II: The homology of abelian varieties. {#ExamHom}
Let $A$ be an abelian variety of dimension $g$ over ${{\overline {\mathbb Q}}}$, and $E(A)$ its universal vector extension.
Consider the exponential map of the associated complex Lie group: $$\exp_{E(A)_{\C}} : {{\rm Lie\,}}E(A)_{\C} {\longrightarrow}E(A)_{\C}^{{\rm an}}.$$ Its kernel, the group of periods ${{\rm Per}\,}E(A)_{\C}$ of $E(A)_{\C}$, is a free $\Z$-module of rank $2g$, and the inclusion ${{\rm Per}\,}E(A)_{\C} \hookrightarrow {{\rm Lie\,}}E(A)_{\C}$ extends to an isomorphism $$\label{perlie}
{{\rm Per}\,}E(A)_{\C} \otimes_{\Z} \C {\stackrel{\sim}{\longrightarrow}}{{\rm Lie\,}}E(A)_{\C}.$$ Consequently we may attach the following object of ${{\mathcal C}_{\rm dRB}}$ to the abelian variety $A$: $${{\rm LiePer}}E(A) := ({{\rm Lie\,}}E(A), {{\rm Per}\,}E(A)_{\C}, c),$$ where $c$ denotes the inverse of the isomorphism (\[perlie\]).
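As an illustration, in classical notation used only in this aside: assume that $A$ is an elliptic curve $E$ over ${{\overline {\mathbb Q}}}$ given by a Weierstrass equation $y^2 = 4x^3 - g_{2}x - g_{3}$ with $g_{2}, g_{3} \in {{\overline {\mathbb Q}}}$. Then ${{\rm Lie\,}}E(E)$ is a $2$-dimensional ${{\overline {\mathbb Q}}}$-vector space and, in terms of the periods $\omega_{i} := \oint_{\gamma_{i}} dx/y$ and quasi-periods $\eta_{i} := \oint_{\gamma_{i}} x\,dx/y$ attached to a basis $(\gamma_{1},\gamma_{2})$ of $H_{1}(E^{{\rm an}}_{\C},\Z)$, the lattice of periods admits the description (for a suitable basis of ${{\rm Lie\,}}E(E)_{\C}$, up to signs depending on normalizations): $${{\rm Per}\,}E(E)_{\C} = \Z\,(\omega_{1},\eta_{1}) \oplus \Z\,(\omega_{2},\eta_{2}) \subset \C^2.$$ The Legendre relation $\omega_{1}\eta_{2}-\omega_{2}\eta_{1} = \pm 2\pi i$ accounts for the factor $2\pi i$ appearing in the Betti structure of ${{\rm LiePer}}\,E(E)$.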
As recalled in \[biduve\] above, the construction of $E(A)$ as the moduli space of line bundles with (integrable) connections over the dual abelian variety $\hat{A}$ provides a canonical isomorphism of ${{\overline {\mathbb Q}}}$-vector spaces: $$I_{A} : {{\rm Lie\,}}E(A) {\stackrel{\sim}{\longrightarrow}}H^1_{{{\rm dR}}}({\hat{A}}/{{\overline {\mathbb Q}}}).$$ Moreover the isomorphism of complex vector spaces $$\begin{CD} {{\rm Lie\,}}E(A)_{\C} @>{I_{A,\C}= I_{A_{\C}}}>> {H_{\rm dR}}^1({\hat{A}}/{{\overline {\mathbb Q}}})\otimes_{{{\overline {\mathbb Q}}}}\C \simeq {H_{\rm dR}}^1({\hat{A}}_{\C}/\C) @>{\text{GAGA + de Rham}}>> H^1({\hat{A}}^{{\rm an}}_{\C}, \C)
\end{CD}$$ maps ${{\rm Per}\,}E(A)_{\C}$ onto $H^1({\hat{A}}_{\C}^{{\rm an}}, 2\pi i \Z).$ This follows from the description of $E(A)^{{\rm an}}_{\C}$ as $H^1({\hat{A}}^{{\rm an}}_{\C}, \Omega^\times_{{\hat{A}}^{{\rm an}}_{\C}}),$ where $\Omega^\times_{{\hat{A}}^{{\rm an}}_{\C}}$ denotes the complex ${{\mathcal O}}^{{{\rm an}}\times}_{{\hat{A}}^{{\rm an}}_{\C}} \stackrel{d\log}{{\longrightarrow}} \Omega^1_{{\hat{A}}^{{\rm an}}_{\C}}\stackrel{d}{{\longrightarrow}} \Omega^2_{{\hat{A}}^{{\rm an}}_{\C}}\stackrel{d}{{\longrightarrow}} \cdots $.
In other words, $I_{A}$ defines an isomorphism in ${{\mathcal C}_{\rm dRB}}$: $$I_{A,{{\rm dRB}}}: {{\rm LiePer}}E(A) {\stackrel{\sim}{\longrightarrow}}H^1_{{{\rm dRB}}}({\hat{A}}) \otimes \Z(1).$$
Besides, the isomorphism $\varpi_{A,{{\rm dR}}}$ constructed in paragraph \[subsubPoinc\] above admits an obvious analogue $\varpi_{A_{\C},{{\rm B}}}$ involving the Betti (co)homology of $A^{{\rm an}}_{\C}$ and ${\hat{A}}^{{\rm an}}_{\C},$ which are defined by means of $c_{1{{\rm B}}}({{\mathcal P}}_{A_{\C}}).$ Up to a factor $2\pi i$ coming from the relation $$c_{1{{\rm dR}}}({{\mathcal P}}_{A})_{\C} = 2 \pi i \; c_{1{{\rm B}}}({{\mathcal P}}_{A_{\C}}),$$ it is compatible with the isomorphism $\varpi_{A,{{\rm dR}}}$ in algebraic de Rham (co)homology. In other words, they define an isomorphism in ${{\mathcal C}_{\rm dRB}}$: $$\varpi_{A,{{\rm dRB}}} := (\varpi_{A,{{\rm dR}}}, \varpi_{A_{\C},{{\rm B}}}) : H_{1,{{\rm dRB}}}(A) {\stackrel{\sim}{\longrightarrow}}H^1_{{{\rm dRB}}}({\hat{A}})\otimes \Z(1).$$
Finally, we get a canonical isomorphism in ${{\mathcal C}_{\rm dRB}}$: $$\label{JA}
J_{A,{{\rm dRB}}}:= \varpi_{A,{{\rm dRB}}}^{-1} \circ I_{A,{{\rm dRB}}} : {{\rm LiePer}}E(A) {\stackrel{\sim}{\longrightarrow}}H_{1,{{\rm dRB}}}(A).$$ This construction is easily seen to be functorial in $A$. Namely, for any morphism $\phi: A {\longrightarrow}B$ of abelian varieties over ${{\overline {\mathbb Q}}},$ $${{\rm LiePer}}E(\phi) := ({{\rm Lie\,}}E(\phi), {{\rm Lie\,}}E(\phi)_{\C \vert {{\rm Per}\,}E(A)_{\C}})$$ is an element of ${\operatorname{Hom}}_{{{\rm dRB}}}({{\rm LiePer}}E(A), {{\rm LiePer}}E(B)),$ and the following diagram commutes in ${{\mathcal C}_{\rm dRB}}$: $$\begin{CD}
{{\rm LiePer}}E(A) @>{{{\rm LiePer}}E(\phi)}>> {{\rm LiePer}}E(B) \\
@V{\simeq}V{J_{A,{{\rm dRB}}}}V @V{\simeq}V{J_{B,{{\rm dRB}}}}V \\
H_{1,{{\rm dRB}}}(A) @>{H_{1{{\rm dRB}}}(\phi)}>> H_{1{{\rm dRB}}}(B).
\end{CD}$$
### Extensions
For any two objects $M$ and $N$ in ${{\mathcal C}_{\rm dRB}},$ we may consider the set ${{\rm Ext}}^1_{{{\rm dRB}}}(M,N)$ of 1-extensions of $M$ by $N$ in ${{\mathcal C}_{\rm dRB}},$ namely of diagrams in ${{\mathcal C}_{\rm dRB}}$ of the form $${{\mathcal E}}: \;\; 0 {\longrightarrow}N \stackrel{\alpha}{{\longrightarrow}} X \stackrel{\beta}{{\longrightarrow}} M {\longrightarrow}0$$ such that $\beta \circ \alpha = 0$ and the diagrams $${{\mathcal E}}_{{{\rm dR}}} : \;\; 0 {\longrightarrow}N_{{{\rm dR}}} \stackrel{\alpha_{{{\rm dR}}}}{{\longrightarrow}} X_{{{\rm dR}}} \stackrel{\beta_{{{\rm dR}}}}{{\longrightarrow}} M_{{{\rm dR}}} {\longrightarrow}0$$ and $${{\mathcal E}}_{{{\rm B}}} : \;\; 0 {\longrightarrow}N_{{{\rm B}}} \stackrel{\alpha_{{{\rm B}}}}{{\longrightarrow}} X_{{{\rm B}}} \stackrel{\beta_{{{\rm B}}}}{{\longrightarrow}} M_{{{\rm B}}} {\longrightarrow}0$$ are short exact sequences of ${{\overline {\mathbb Q}}}$-vector spaces and of $\Z$-modules respectively.
Equipped with the Baer sum, ${{\rm Ext}}^1_{{{\rm dRB}}}(M,N)$ becomes an abelian group. Actually, for any extension ${{\mathcal E}}$ as above, we may choose a ${{\overline {\mathbb Q}}}$-linear splitting $\sigma_{{{\rm dR}}}: M_{{{\rm dR}}} \rightarrow X_{{{\rm dR}}}$ of ${{\mathcal E}}_{{{\rm dR}}}$ and a $\Z$-linear splitting $\sigma_{{{\rm B}}} : M_{{{\rm B}}} \rightarrow X_{{{\rm B}}}$ of ${{\mathcal E}}_{{{\rm B}}}.$ Then $\sigma_{{{\rm dR}}\C}:= \sigma_{{{\rm dR}}} \otimes_{{{\overline {\mathbb Q}}}}1_{\C}$ and $\sigma_{{{\rm B}}\C}:= \sigma_{{{\rm B}}}\otimes_{\Z}1_{\C}$ are $\C$-linear splittings of $${{\mathcal E}}_{\C} : \;\; 0 {\longrightarrow}N_{\C} \stackrel{\alpha_{\C}}{{\longrightarrow}} X_{\C} \stackrel{\beta_{\C}}{{\longrightarrow}} M_{\C} {\longrightarrow}0,$$ and consequently $\sigma_{{{\rm dR}}\C}-\sigma_{{{\rm B}}\C}$ may be written $\alpha_{\C}\circ \phi$ for some uniquely determined $\phi$ in $(M^\vee \otimes N)_{\C}.$ The map $$\label{Extiso}
\begin{array}{rcl}
{{\rm Ext}}^1_{{{\rm dRB}}}(M,N) & {\stackrel{\sim}{\longrightarrow}}& (M^\vee \otimes N)_{\C}/[(M^\vee \otimes N)_{{{\rm dR}}}+(M^\vee \otimes N)_{{{\rm B}}}] \\
{[{{\mathcal E}}]}& {\longmapsto}& [\phi]
\end{array}$$ so defined is easily seen to be an isomorphism of abelian groups.
In particular, we get the usual isomorphisms: $$\label{Extisobis}
{{\rm Ext}}^1_{{{\rm dRB}}}(M,N) {\stackrel{\sim}{\longrightarrow}}{{\rm Ext}}^1_{{{\rm dRB}}}(\Z(0), M^\vee\otimes N) {\stackrel{\sim}{\longrightarrow}}{{\rm Ext}}^1_{{{\rm dRB}}}(M\otimes N^\vee, \Z(0)).$$
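For instance, taking $M = \Z(0)$ and $N = \Z(1)$, the isomorphism (\[Extiso\]) yields $${{\rm Ext}}^1_{{{\rm dRB}}}(\Z(0),\Z(1)) {\stackrel{\sim}{\longrightarrow}}\C/({{\overline {\mathbb Q}}}+2\pi i\,\Z),$$ since $(\Z(0)^\vee\otimes\Z(1))_{\C} = \C$, with de Rham structure ${{\overline {\mathbb Q}}}$ and Betti structure $2\pi i\,\Z$. It is a classical observation that, for $q$ in ${{\overline {\mathbb Q}}}^\times$, the class of any branch of $\log q$ in this quotient vanishes only when $q = 1$, by the Hermite–Lindemann theorem.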
Abelian varieties over ${{\overline {\mathbb Q}}}$ satisfy $GPC^1$
------------------------------------------------------------------
We are now in a position to complete the proof of Theorem \[GPCA\].
As already observed, universal vector extensions of abelian varieties satisfy Condition $\mathbf{LP}$. Corollary \[CorSL2\] therefore implies that, for any two abelian varieties $A$ and $B$ over ${{\overline {\mathbb Q}}}$, the map $$\begin{array}{rcl}
{{\rm LiePer}}:{\operatorname{Hom}}_{{{\rm gp}}/{{\overline {\mathbb Q}}}} (E(A), E(B)) & {\longrightarrow}& {\operatorname{Hom}}_{{{\rm dRB}}} ({{\rm LiePer}}E(A), {{\rm LiePer}}E(B)) \\
\psi & {\longmapsto}& {{\rm LiePer}}\,\psi := ({{\rm Lie\,}}\psi, {{\rm Lie\,}}\psi_{\C \vert {{\rm Per}\,}E(A)_{\C}})
\end{array}$$ is an isomorphism of $\Z$-modules.
Together with the isomorphism (\[Eff\]), which identifies morphisms between abelian varieties and between their universal vector extensions, this establishes the first assertion in the following theorem; the second assertion follows from the existence of a functorial isomorphism (\[JA\]) between ${{\rm LiePer}}E(A)$ and $H_{1,{{\rm dRB}}}(A)$:
\[HomAB\] For any two abelian varieties $A$ and $B$ over ${{\overline {\mathbb Q}}}$, the maps $$\begin{array}{rcl}
{\operatorname{Hom}}_{{{\rm gp}}/{{\overline {\mathbb Q}}}}(A,B) & {\longrightarrow}& {\operatorname{Hom}}_{{{\rm dRB}}} ({{\rm LiePer}}E(A), {{\rm LiePer}}E(B)) \\
\phi & {\longmapsto}& {{\rm LiePer}}E(\phi)
\end{array}$$ and $$H_{1,{{\rm dRB}}}: {\operatorname{Hom}}_{{{\rm gp}}/{{\overline {\mathbb Q}}}}(A,B) {\longrightarrow}{\operatorname{Hom}}_{{{\rm dRB}}}(H_{1,{{\rm dRB}}}(A),H_{1,{{\rm dRB}}}(B))$$ are isomorphisms of $\Z$-modules.
In other words, the realization functor $H_{1,{{\rm dRB}}}$ from the category of abelian varieties over ${{\overline {\mathbb Q}}}$ to the category ${{\mathcal C}_{\rm dRB}}$ is fully faithful. (Compare with [@AndreMotives04], 7.5.3, where a “rational” version of this isomorphism is established, by a reference to some advanced transcendence results of Wüstholz [@Wuestholz84].)
To complete the proof of Theorem \[GPCA\], we consider an abelian variety $A$ over ${{\overline {\mathbb Q}}}$ and we apply Theorem \[HomAB\] to $A$ and its dual abelian variety $\hat{A}$. In this way, we get an isomorphism $$H_{1,{{\rm dRB}}}: {\operatorname{Hom}}_{{{\rm gp}}/{{\overline {\mathbb Q}}}}(A,\hat{A}) {\stackrel{\sim}{\longrightarrow}}{\operatorname{Hom}}_{{{\rm dRB}}}(H_{1,{{\rm dRB}}}(A),H_{1,{{\rm dRB}}}({\hat{A}})).$$ Composing this isomorphism with the transpose of $$\varpi_{A,{{\rm dRB}}} : H_{1,{{\rm dRB}}}(A) {\stackrel{\sim}{\longrightarrow}}H^1_{{{\rm dRB}}}({\hat{A}})\otimes \Z(1),$$ and with the natural identification (\[dualhom\]), we get an isomorphism $${\operatorname{Hom}}_{{{\rm gp}}/{{\overline {\mathbb Q}}}}(A,\hat{A}) {\stackrel{\sim}{\longrightarrow}}{\operatorname{Hom}}_{dRB}(\Z(0), H^1_{{{\rm dRB}}}(A)\otimes H^1_{{{\rm dRB}}}(A)\otimes \Z(1)).$$ The discussion on signs in paragraph \[NSsym\] (notably the identity (\[altphi\])) shows that this isomorphism maps the subgroup of *symmetric* morphisms from $A$ to $\hat{A}$ onto the subgroup of skew-symmetric, or *alternating*, elements[^18] in ${\operatorname{Hom}}_{{{\rm dRB}}}(\Z(0), H^1_{{{\rm dRB}}}(A)\otimes H^1_{{{\rm dRB}}}(A)\otimes \Z(1))$: $$\label{isosymalt}
{\operatorname{Hom}}_{{{\rm gp}}/{{\overline {\mathbb Q}}}}(A,\hat{A})^{\text{sym}} {\stackrel{\sim}{\longrightarrow}}{\operatorname{Hom}}_{{{\rm dRB}}}(\Z(0), H^1_{{{\rm dRB}}}(A)\otimes H^1_{{{\rm dRB}}}(A)\otimes \Z(1))^{\text{alt}}.$$
The fact that the morphism of $\Z$-modules in (\[isosymalt\]) is an isomorphism is nothing but, in a disguised form, the validity of $GPC^1(A)$. Indeed, by composition with the isomorphism $$\begin{array}{rcl}
NS(A) := {{\rm Pic}}(A)/{{\rm Pic}}_{0}(A) & {\stackrel{\sim}{\longrightarrow}}& {\operatorname{Hom}}_{{{\rm gp}}/{{\overline {\mathbb Q}}}}(A,{\hat{A}})^{\text{sym}} \\
{[L]} & {\longmapsto}& \phi_{L},
\end{array}$$ the isomorphism (\[isosymalt\]) becomes the isomorphism $$\label{isosymaltbis}
NS(A) {\stackrel{\sim}{\longrightarrow}}{\operatorname{Hom}}_{{{\rm dRB}}}(\Z(0), H^1_{{{\rm dRB}}}(A)\otimes H^1_{{{\rm dRB}}}(A)\otimes \Z(1))^{\text{alt}}.$$ The “Betti” component of (\[isosymaltbis\]) takes its values in $(H^1_{B}(A_{\C})\otimes_{\Z} H^1_{B}(A_{\C}))^{\text{alt}}$ and is well known to coincide with the classical “Riemann form” of elements of the Néron-Severi group (see for instance [@BirkenhakeLange04], Chapter 2). Consequently, after the identification of $${\operatorname{Hom}}_{{{\rm dRB}}}(\Z(0), H^1_{{{\rm dRB}}}(A)\otimes H^1_{{{\rm dRB}}}(A)\otimes \Z(1))^{\text{alt}}$$ and $${\operatorname{Hom}}_{{{\rm dRB}}}(\Z(0), H^2_{{{\rm dRB}}}(A) \otimes \Z(1)) = H^2_{\text{Gr}}(A),$$ the isomorphism (\[isosymalt\]) may be read as asserting that the map $$c_{1{{\rm dRB}}}: NS(A) {\longrightarrow}H^2_{\text{Gr}}(A)$$ is an isomorphism. This is precisely the content of $GPC^1(A)$.
${{\overline {\mathbb Q}}}$-points of abelian varieties and extensions in ${{\mathcal C}_{\rm dRB}}$
----------------------------------------------------------------------------------------------------
[^19]\[kappadRB\]
Let $A$ denote an abelian variety over ${{\overline {\mathbb Q}}}$.
Consider some line bundle $L$ over $A$, algebraically equivalent to zero, equipped with some rigidification $\epsilon : {{\overline {\mathbb Q}}} \simeq L_{0_{A}}.$ Recall that the $\G_{m}$-torsor $L^\times \xrightarrow{\pi_{L}} A$ over $A$, deduced from the total space of $L$ by deleting its zero section, may be endowed with a unique structure of ${{\overline {\mathbb Q}}}$-algebraic group which makes the diagram $$0 {\longrightarrow}\G_{m{{\overline {\mathbb Q}}}} \stackrel{\epsilon}{{\longrightarrow}} L^\times \stackrel{\pi_{L}}{{\longrightarrow}} A {\longrightarrow}0$$ a short exact sequence of commutative ${{\overline {\mathbb Q}}}$-algebraic groups, and that this construction establishes an isomorphism of groups: $${\hat{A}}({{\overline {\mathbb Q}}}) {\stackrel{\sim}{\longrightarrow}}{{\rm Ext}}^1_{{\rm{c-gp}}/{{\overline {\mathbb Q}}}}(A, \G_{m{{\overline {\mathbb Q}}}}).$$
The fiber product $$E(L^\times) \simeq L^\times \times_{A} E(A)$$ defines a commutative ${{\overline {\mathbb Q}}}$-algebraic group which fits into the following commutative diagram with exact lines: $$\begin{CD}
0 @>>> \G_{m{{\overline {\mathbb Q}}}} @>{\tilde{\epsilon}}>> E(L^\times) @>{\tilde{\pi}_{L}}>> E(A) @>>> 0 \\
@. @VV{=}V @VVV @VV{p_{A}}V @. \\
0 @>>> \G_{m{{\overline {\mathbb Q}}}} @>{\epsilon}>> L^\times @>{\pi_{L}}>> A @>>> 0.
\end{CD}$$
By considering the Lie algebra (over ${{\overline {\mathbb Q}}}$) and the periods (over $\C$) of the first line, we get a 1-extension in ${{\mathcal C}_{\rm dRB}}$: $$\label{ExtdRBL}
0 {\longrightarrow}\Z(1) \xrightarrow{{{\rm LiePer}}\, \tilde{\epsilon}} {{\rm LiePer}}E(L^\times) \xrightarrow{{{\rm LiePer}}\, \tilde{\pi}_{L}} {{\rm LiePer}}E(A) {\longrightarrow}0.$$ Thanks to the canonical isomorphisms in ${{\mathcal C}_{\rm dRB}}$ $${{\rm LiePer}}E(A) {\stackrel{\sim}{\longrightarrow}}H_{1{{\rm dRB}}}(A) {\stackrel{\sim}{\longrightarrow}}H^1_{{{\rm dRB}}}({\hat{A}})\otimes \Z(1) {\stackrel{\sim}{\longrightarrow}}H_{1{{\rm dRB}}}(A)^\vee \otimes \Z(1),$$ its class may be seen as an element $\kappa_{{{\rm dRB}}}(L)$ in $${{\rm Ext}}_{{{\rm dRB}}}^1(H_{1{{\rm dRB}}}(A), \Z(1)) {\stackrel{\sim}{\longrightarrow}}{{\rm Ext}}_{{{\rm dRB}}}^1(\Z(0), H_{1{{\rm dRB}}}({\hat{A}}))$$ and defines a morphism of abelian groups $$\kappa_{{{\rm dRB}}} : {\hat{A}}({{\overline {\mathbb Q}}}) {\longrightarrow}{{\rm Ext}}_{{{\rm dRB}}}^1(\Z(0), H_{1{{\rm dRB}}}({\hat{A}})).$$
The proof of the following proposition is again an application of Corollary \[CorSL2\]:
\[KummerTransc\] The map $\kappa_{{{\rm dRB}}}$ is injective.
We leave the details to the reader, and only emphasize that giving a direct description of the subgroup $\kappa_{{{\rm dRB}}}( {\hat{A}}({{\overline {\mathbb Q}}}))$ of ${{\rm Ext}}_{{{\rm dRB}}}^1(\Z(0), H_{1{{\rm dRB}}}({\hat{A}}))$ appears to be an intriguing and difficult issue.
$D$-group schemes
=================
In this part, we introduce $D$-schemes and $D$-group schemes in a geometric setting, suitable for the application to Diophantine geometry we want to discuss in the sequel. These definitions are variants of the original definitions by Buium ([@Buium86], [@Buium92], [@Buium94] Chapter 3), which make sense over some fixed differential base field (of characteristic zero). Here we shall consider $D$-schemes and group schemes over some smooth base variety instead: this framework is that of Malgrange in [@Malgrange10], with the field of complex numbers replaced by an arbitrary field of characteristic zero.
For simplicity, we shall make smoothness and quasi-projectivity assumptions which could actually be relaxed in many places. In fact, on a base scheme of finite type over a field of characteristic zero, $D$-schemes are nothing but the “crystals in relative schemes” mentioned in a famous letter of Grothendieck to Tate[^20]. The approach to $D$-schemes as “crystals”, defined in terms of infinitesimal sites and stratifications, has much to recommend it (see for instance [@Simpson94II], Section 8), but I have preferred to stick to a more naive approach in the spirit of classical differential geometry, at the expense of extra regularity assumptions, based on a definition of $D$-schemes that mimics that of integrable Ehresmann connections on differentiable fiber bundles ([@Ehresmann51]).
In the following sections we denote $k$ a fixed field *of characteristic zero*.
Basic definitions {#basicD}
-----------------
Let $S$ denote a smooth quasi-projective scheme over $k$.
### $D$-schemes {#Ds}
By a *$D$-scheme* over $S$, we shall mean a pair $(X,{{\mathcal F}})$ where $X\stackrel{\pi}{\longrightarrow} S$ is a smooth, quasi-projective scheme over $S$ (hence over $k$), and ${{\mathcal F}}$ is an integrable[^21] sub-vector bundle of the “absolute” tangent bundle $T_{X/k}$ of $X$ such that $$T_{X/k} = T_{X/S} \oplus {{\mathcal F}}.$$ This last condition means precisely that ${{\mathcal F}}$ determines a splitting of the exact sequence of vector bundles over the $k$-scheme $X$ $$0 {\longrightarrow}T_{X/S} {{\lhook\joinrel\longrightarrow}}T_{X/k} \stackrel{D\pi}{{\longrightarrow}} \pi^\ast T_{S/k} {\longrightarrow}0$$ defined by the differential of $\pi,$ or equivalently that the restriction of $D\pi$ to ${{\mathcal F}}$ is an isomorphism: $$\label{Dpiiso}
D\pi_{\vert {{\mathcal F}}} : {{\mathcal F}}{\stackrel{\sim}{\longrightarrow}}\pi^\ast T_{S/k}.$$
A *morphism of $D$-schemes* over $S$ $$\label{morD}
\phi : (X_{1},{{\mathcal F}}_{1}) {\longrightarrow}(X_{2},{{\mathcal F}}_{2})$$ is a morphism of $S$-schemes $\phi : X_{1} \rightarrow X_{2}$ whose “absolute” differential $$D\phi : T_{X_{1}/k} {\longrightarrow}\phi^\ast T_{X_{2}/k}$$ maps ${{\mathcal F}}_{1}$ to $\phi^\ast{{\mathcal F}}_{2}$.
Observe that, if $\phi$ is a morphism of $D$-schemes over $S$ from $(X_{1},{{\mathcal F}}_{1})$ to $(X_{2},{{\mathcal F}}_{2})$, then Conditions (\[Dpiiso\]) for $(X_{1},{{\mathcal F}}_{1})$ and $(X_{2},{{\mathcal F}}_{2})$ imply that $D\phi $ maps ${{\mathcal F}}_{1}$ isomorphically onto $\phi^\ast{{\mathcal F}}_{2}$.
Morphisms of $D$-schemes may be obviously composed and define the category of (smooth, quasi-projective) $D$-schemes over $S$. Clearly, this category admits finite products: $(S, T_{S/k})$ is a final object, and the product of two $D$-schemes $(X_{1},{{\mathcal F}}_{1})$ and $(X_{2},{{\mathcal F}}_{2})$ over $S$ may be constructed as the $D$-scheme $(X,{{\mathcal F}})$ consisting of their product as schemes over $S$, $$X:=X_{1} \times_{S} X_{2},$$ equipped with the sub-vector bundle ${{\mathcal F}}$ of $T_{X/k}$ which is the “direct sum of ${{\mathcal F}}_{1}$ and ${{\mathcal F}}_{2}$ over $T_{S/k}$,” formally defined as the kernel of the surjective morphism of vector bundles over $X$: $$(D_{\pi_{1}}, -D_{\pi_{2}}) : ({{\mathcal F}}_{1}\boxplus {{\mathcal F}}_{2})_{\vert X} {\longrightarrow}\pi^\ast T_{S/k}.$$ (It lies inside the kernel of $$(D_{\pi_{1}}, -D_{\pi_{2}}) : (T_{X_{1}/k}\boxplus T_{X_{2}/k})_{\vert X} {\longrightarrow}\pi^\ast T_{S/k},$$ which may be identified with $T_{X/k}$.)
A *closed $D$-subscheme* of a $D$-scheme $(X,{{\mathcal F}})$ over $S$ is the image of a morphism of $D$-schemes with range $(X,{{\mathcal F}})$ that is also a closed immersion. Equivalently, it is a closed, smooth subscheme $Y$ of $X$ such that its tangent bundle $T_{Y/k},$ which may be identified with a sub-vector bundle of $T_{X/k\vert Y},$ contains ${{\mathcal F}}_{\vert Y}.$
A *horizontal section* of some $D$-scheme $(X,{{\mathcal F}})$ over $S$ is a right inverse of the structural morphism $X {\longrightarrow}S$ in the category of $D$-schemes over $S$. In other words, it is a section ${{\mathcal P}}$ of this morphism over $S$, the differential of which $D{{\mathcal P}}:T_{S/k}{\longrightarrow}{{\mathcal P}}^\ast T_{X/k}$ takes its values in ${{\mathcal P}}^\ast {{\mathcal F}},$ or equivalently, the image of which is a $D$-subscheme of $(X,{{\mathcal F}}).$
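In concrete terms, here is a sketch in the simplest case, not needed in the sequel: for $S$ a smooth affine curve whose tangent bundle is trivialized by a vector field $\partial/\partial s$, and $X = S \times_{k} \mathbb{A}^1_{k}$ with fiber coordinate $x$, the $D$-structures on $X$ over $S$ are exactly the sub-vector bundles of the form $${{\mathcal F}}= {{\mathcal O}}_{X}\cdot\big(\partial/\partial s + a(s,x)\,\partial/\partial x\big), \quad a \in {{\mathcal O}}(X),$$ integrability being automatic since ${{\mathcal F}}$ has rank one. A horizontal section $s \mapsto (s, y(s))$ of such a $D$-scheme is then precisely an algebraic solution of the differential equation $y' = a(s,y)$.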
From the integrable sub-vector bundle ${{\mathcal F}}$ of $T_{X/k}$, the normal bundle ${{\mathcal P}}^\ast T_{X/S}$ of any horizontal section ${{\mathcal P}}$ inherits an integrable connection.
### $D$-group schemes {#Dgs}
A (smooth, quasi-projective) *$D$-group scheme* over $S$ is defined as a group object in the category of $D$-schemes over $S$.
A $D$-group scheme $\bG$ over $S$ may be identified with a pair $(G,{{\mathcal F}})$ where $G$ is a smooth, quasi-projective group scheme over $S$ and ${{\mathcal F}}$ a sub-vector bundle of $T_{G/k}$ which makes $(G,{{\mathcal F}})$ a $D$-scheme over $S$, in such a way that the graphs of the unit section $e_{G}$, of the inverse map, and of the composition map of the group scheme $G$ become $D$-subschemes of the $D$-schemes $G$, $G^2$, and $G^3$ over $S$.
In intuitive terms, a $D$-group scheme may be thought of as a smooth group scheme over $S$ equipped with some “algebraic connection” compatible with its group structure.
Since its unit section $e_{G}$ is horizontal, the relative Lie algebra ${{\rm Lie}}_{S}G := e_{G}^\ast T_{G/S}$ of the group scheme $G$ over $S$ underlying some $D$-group scheme $\bG$ over $S$ becomes endowed with a natural integrable connection. The so-defined module with integrable connection shall be denoted ${{\rm Lie}}_{S}\bG$.
Assume that $S$ is integral (or equivalently, connected), of dimension $s$, and consider its field of rational functions $k(S).$ Let us choose some $k(S)$-basis $(v_{1},\ldots,v_{s})$ of the $k(S)$-vector space of rational sections of $T_{S/k}$ such that the Lie brackets $[v_{i},v_{j}]$ all vanish[^22]. Then the field $k(S)$, equipped with the derivations $(\delta_{1},\ldots,\delta_{s})$ defined by $(v_{1},\ldots,v_{s})$, becomes a differential field in the classical sense of Ritt and Kolchin. Let us finally choose a differential closure $(K; \delta_{1},\ldots,\delta_{s})$ of $(k(S); \delta_{1},\ldots,\delta_{s}).$ Through the base changes $${{\rm Spec\, }}K {\longrightarrow}{{\rm Spec\, }}k(S) {{\lhook\joinrel\longrightarrow}}S,$$ any $D$-group scheme $(G,{{\mathcal F}})$ over $S$ in our sense defines $D$-group schemes in the sense of Buium over the differential fields $(k(S); \delta_{1},\ldots,\delta_{s})$ and $(K; \delta_{1},\ldots,\delta_{s})$, and a $\Delta_{0}$-group, that is, a differential algebraic group of finite dimension in the sense of Kolchin, by considering the subgroup of the group $G(K)$ of $K$-points of $G$ consisting of its “horizontal points”. (We refer the reader to [@Buium92], Chapter 5, [@Pillay97], [@Pillay04], and [@BertrandPillay10] for discussions of the relations between Buium’s $D$-groups and differential algebraic groups.)
### Extensions {#subsubsec:Ext}
Let $\bG_{1} =(G_{1}, {{\mathcal F}}_{1})$ and $\bG_{2}=(G_{2},{{\mathcal F}}_{2})$ be two commutative $D$-group schemes over $S$. An *extension* of $\bG_{1}$ by $\bG_{2}$ in the category of commutative $D$-group schemes over $S$ is a diagram $$\label{extDtyp}
0 {\longrightarrow}\bG_{2} \stackrel{i}{{\longrightarrow}} \bG \stackrel{p}{{\longrightarrow}} \bG_{1} {\longrightarrow}0$$ in this category such that the underlying diagram of commutative group schemes over $S$ $$0 {\longrightarrow}G_{2} \stackrel{i}{{\longrightarrow}} G \stackrel{p}{{\longrightarrow}} G_{1} {\longrightarrow}0$$ is a short exact sequence[^23] (compare [@KowalskiPillay06]).
The Baer sum of two extensions of $\bG_{1}$ by $\bG_{2}$ may be defined in an obvious way. Equipped with this operation, the set ${{\rm Ext}}^1_{{\text{c$D$-gp}}/S}(\bG_{1},\bG_{2})$ of isomorphism classes of these extensions defines an abelian group, which satisfies the usual functorialities in $S$, $\bG_{1}$, and $\bG_{2}$.
We may apply the functor ${{\rm Lie}}_{S}$ to the extension (\[extDtyp\]). We obtain a short exact sequence of modules with integrable connections over $S$: $$0 {\longrightarrow}{{\rm Lie}}_{S}\bG_{2} \xrightarrow{{{\rm Lie}}_{S} i}{{\rm Lie}}_{S}\bG \xrightarrow{{{\rm Lie}}_{S}p}{{\rm Lie}}_{S}\bG_{1} {\longrightarrow}0.$$ This construction defines an additive map, say when $S$ is projective: $${{\rm Lie}}_{S}^1 : {{\rm Ext}}^1_{{\text{c$D$-gp}}/S}(\bG_{1},\bG_{2}) {\longrightarrow}{{\rm Ext}}^1_{{\text{mic}}/S}({{\rm Lie}}_S{\bG_{1}}, {{\rm Lie}}_{S}\bG_{2}) \simeq
H^1_{{{\rm dR}}}(S, ({{\rm Lie}}_S{\bG_{1}})^\vee\otimes{{\rm Lie}}_{S}\bG_{2}),$$ where we use the notation introduced in paragraph \[deRhamcoeff\], formula (\[coeffalg\]).
### Functoriality in $S$
If $\phi: S' {\longrightarrow}S$ is a morphism of projective schemes over $k$, then, from any $D$-scheme $(X,{{\mathcal F}})$ over $S$, we may deduce a $D$-scheme $(X',{{\mathcal F}}')$ over $S'$ by “pulling it back” by $\phi$ as follows: $X'$ is the smooth, quasi-projective $S'$-scheme defined as the fiber product $X\times_{S}S'$; if $\tilde{\phi} : X' {\longrightarrow}X$ denotes the canonical “first projection” morphism and $D\tilde{\phi}: T_{X'/k} {\longrightarrow}\tilde{\phi}^\ast T_{X/k}$ its differential, the $D$-structure on $X'$ over $S'$ is defined by the integrable sub-vector bundle of $T_{X'/k}$ $${{\mathcal F}}' := D\tilde{\phi}^{-1} (\tilde{\phi}^\ast {{\mathcal F}}).$$
This construction of “base change” is functorial, and transforms $D$-group schemes over $S$ into $D$-group schemes over $S'$. It satisfies an obvious compatibility with the Lie algebra functor (from $D$-group schemes to modules with integrable connections) and the pullback of modules with integrable connections.
The $D$-schemes over ${{\rm Spec\, }}k$ are nothing but the smooth, quasi-projective schemes over $k$. A *constant* $D$-scheme over $S$ is a $D$-scheme isomorphic to the pullback by the $k$-morphism $S{\longrightarrow}{{\rm Spec\, }}k$ of some smooth, quasi-projective scheme over $k$. In the sequel, we shall denote $\bG_{m,S}$ the constant multiplicative group scheme over $S$, defined as the pullback of the algebraic group $\G_{m,k}$. After the change of base $S{\longrightarrow}{{\rm Spec\, }}k$, the isomorphism $$\begin{array}{rcl}
{{\rm Lie\,}}\G_{m,k} & {\stackrel{\sim}{\longrightarrow}}& k \\
X.\partial /\partial X & \longmapsto & 1
\end{array}$$ becomes an isomorphism of modules with integrable connections: $${{\rm Lie}}_{S} \bG_{m,S} {\stackrel{\sim}{\longrightarrow}}({{\mathcal O}}_{S}, d).$$
### Change of base fields {#kk'}
If $k'$ is a field extension of $k$, the extension of scalars from $k$ to $k'$ associates a $D$-scheme $(X_{k'}, {{\mathcal F}}_{k'})$ over $S_{k'}$, defined over the base field $k'$, to any $D$-scheme $(X, {{\mathcal F}})$ over $S$. This operation satisfies obvious functoriality properties that we shall use freely in the sequel. In particular, it attaches $D$-group schemes over $S_{k'}$ to $D$-group schemes over $S$, and defines morphisms of extension groups: $${{\rm Ext}}^1_{{\text{c$D$-gp}}/S}(\bG_{1},\bG_{2}) {\longrightarrow}{{\rm Ext}}^1_{{\text{c$D$-gp}}/S_{k'}}(\bG_{1k'},\bG_{2k'}).$$
$D$-schemes and analytification {#Dan}
-------------------------------
When the base field $k$ is $\C,$ a $D$-scheme $(X,{{\mathcal F}})$ (resp. a $D$-group scheme $(G,{{\mathcal F}})$) over $S$ determines, through analytification, a “$D$-analytic space” $(X^{{\rm an}}, {{\mathcal F}}^{{\rm an}})$ (resp. a “$D$-complex Lie group” $(G^{{\rm an}}, {{\mathcal F}}^{{\rm an}})$) over the complex manifold $S^{{\rm an}}$. We shall omit the formal definitions of these notions — just “copy” the above ones in the analytic context — and content ourselves with a few observations.
First, after analytification, a $D$-scheme $(X,{{\mathcal F}})$ *projective* over $S$ becomes locally constant in the analytic category. Namely, for any point $s_{0}$ of $S^{{\rm an}}$, there exists an open neighbourhood $\Omega$ of $s_{0}$ in $S^{{\rm an}}$ and an isomorphism of $\C$-analytic spaces over $\Omega$ $$\label{psiun}
\Psi_{s_{0}}: \Omega \times X^{{\rm an}}_{s_{0}} {\stackrel{\sim}{\longrightarrow}}X^{{\rm an}}_{\Omega}$$ such that $$\label{init}
\Psi_{s_{0}}(s_{0}, . ) = Id_{X^{{\rm an}}_{s_{0}}}$$ and, for any $(s,x)$ in $\Omega \times X^{{\rm an}}_{s_{0}},$ $$\label{horiz}
{{\mathcal F}}_{\Psi_{s_{0}}(s,x)} = D\Psi_{s_{0}}(s,x)(T_{s}\Omega \oplus 0).$$ This follows from the analytic integrability of ${{\mathcal F}}^{{\rm an}}$, together with the properness of the structural morphism $X^{{\rm an}}{\longrightarrow}S^{{\rm an}}$ in the analytic topology. (Observe that Conditions (\[init\]) and (\[horiz\]) uniquely determine $\Psi_{s_{0}}$ for $\Omega$ connected.)
Second, as pointed out by Hamm ([[*cf.* ]{}]{}[@Buium92], Chapter 2, 1.3), a similar statement holds for any $D$-group scheme $(G, {{\mathcal F}})$ over $S$. Thus we get a (unique) isomorphism of complex Lie groups over[^24] $\Omega$ (assumed to be small enough and connected) $$\label{psideux}
\Psi_{s_{0}}: \Omega \times G^{{\rm an}}_{s_{0}} {\stackrel{\sim}{\longrightarrow}}G^{{\rm an}}_{\Omega}$$ which satisfies the initial condition (\[init\]) and the horizontality condition (\[horiz\]).
Consider in particular the case of a commutative $D$-group scheme $\bG = (G,{{\mathcal F}})$ over $S$, with connected fibers. Then the “relative” exponential map $$\exp_{G/S} : {{\rm Lie}}_{S} G {\longrightarrow}G^{{\rm an}}$$ defines a surjective morphism of complex Lie groups over $S^{{\rm an}}$. It is compatible with the “horizontal” structures defined by the integrable connection on ${{\rm Lie}}_{S} \bG$ and by (\[horiz\]), and consequently its kernel $${{\rm Per}}_{S} G := \ker \exp_{G/S}$$ is a local system (that is, a locally free sheaf) of $\Z$-modules of finite rank over $S^{{\rm an}}$, which fits into a short exact sequence in the category of commutative complex Lie groups over $S^{{\rm an}}$: $$0 {\longrightarrow}{{\rm Per}}_{S} G {{\lhook\joinrel\longrightarrow}}{{\rm Lie}}_{S} G \xrightarrow{\exp_{G/S}} G^{{\rm an}}{\longrightarrow}0.$$
This is even a short exact sequence of commutative $D$-complex Lie groups, which should be denoted $$0 {\longrightarrow}{{\rm Per}}_{S} G {{\lhook\joinrel\longrightarrow}}{{\rm Lie}}_{S} \bG \xrightarrow{\exp_{G/S}} \bG^{{\rm an}}{\longrightarrow}0.$$ This shows, in particular, that when $s$ varies in $S^{{\rm an}}$, the dimension of the complex sub-vector space of ${{\rm Lie\,}}G_{s}$ generated by its period lattice ${{\rm Per}\,}G_{s}$ is locally constant (in the analytic topology). Consequently, if $S$ (hence $S^{{\rm an}}$) is connected and if, for some $s_{0} \in S$, $G_{s_{0}}$ satisfies condition $\mathbf{LP}$ ([[*cf.* ]{}]{}Section \[Morag\]), then $G_{s}$ satisfies $\mathbf{LP}$ for every $s$ in $S$, and the structure of $\bG$ as a $D$-group scheme over $S$ is uniquely determined by its structure of a group scheme. Similarly, if $\bG_{1}$ and $\bG_{2}$ are two commutative $D$-group schemes over $S$, and if $\bG_{1}$ has connected fibers satisfying $\mathbf{LP},$ then any morphism of group schemes from $G_{1}$ to $G_{2}$ is a morphism of $D$-group schemes from $\bG_{1}$ to $\bG_{2}$.
These remarks will apply to the $D$-group schemes associated to abelian schemes and to their extension by multiplicative groups considered in Sections \[ED\] and \[ExtD\] *infra*. (See also [@BertrandPillay10], Lemma 3.4, for similar unicity statements in a more “differential algebraic" formulation.)
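In the simplest case these objects can be made completely explicit. The following sketch (a standard special case, spelled out here for illustration) describes the exponential sequence for $\bG = \bG_{m,S}$:

```latex
% Standard special case: G = G_{m,S} with its canonical D-structure.
% Lie_S G_m is the trivial line bundle O_S, exp_{G_m/S} is z -> e^z
% fiberwise, and the period lattice is the constant local system
% 2\pi i \Z:
0 \longrightarrow 2\pi i\,\Z_{S^{\rm an}}
  \lhook\joinrel\longrightarrow {\mathcal O}_{S^{\rm an}}
  \xrightarrow{\ \exp\ } \G_{m,S}^{\rm an} \longrightarrow 0 .
```

Dividing by $2\pi i$ identifies this lattice with the constant local system $\Z$; note also that the lattice $2\pi i\,\Z$ spans the one-dimensional Lie algebra ${{\rm Lie\,}}\G_{m} = \C$ over $\C$.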
Associating its local system of periods ${{\rm Per}}_{S}G$ to a $D$-group scheme $\bG$ is a functorial construction (in $S$ and $\bG$). Applied to extensions, it defines a morphism of $\Z$-modules, for any two commutative $D$-group schemes $\bG_{1}$ and $\bG_{2}$ with connected fibers over $S$: $$\label{Perext}
{{\rm Per}}^1_{S}: {{\rm Ext}}^1_{{\text{c$D$-gp}}/S}(\bG_{1},\bG_{2}) {\longrightarrow}{{\rm Ext}}^1_{\text{Ab-Sheaves}/S^{{\rm an}}}({{\rm Per}}_{S}G_{1},{{\rm Per}}_{S}G_{2})
\simeq H^1(S^{{\rm an}},({{\rm Per}}_{S}G_{1})^\vee \otimes{{\rm Per}}_{S}G_{2}).$$
Moduli spaces of vector bundles with connections as $D$-schemes {#ModuliD}
---------------------------------------------------------------
If the $S$-scheme $X$ underlying some $D$-scheme $(X,{{\mathcal F}})$ as above is projective over $S,$ then, locally in the étale topology of $S$, $X$ is “constant" over $S$ (namely, when $k$ is algebraically closed, of the form $X_{0}\times_{k} S$, after replacing $S$ by some étale neighborhood of any given point of $S$). This follows from the representability of the Isom-functors in the projective case, together with the formal integrability of ${{\mathcal F}}$ and Artin’s algebraization theorem (compare with [@Buium86], II.1, and [@Gillet02], Section 3).
This property is a refinement, which makes sense in pure algebraic geometry, of the local analytic triviality of projective $D$-schemes when $k= \C$. It strongly limits the possible constructions of smooth projective $D$-schemes.
It is remarkable that, in contrast, highly “nonconstant” smooth *quasi-projective* $D$-schemes arise naturally. Indeed the construction of the moduli spaces ${\mathbf{MIC}}_{N}(M,o)$ of vector bundles with connection recalled in paragraph \[MIC\] above, applied to smooth projective families of pointed projective varieties parameterized by $S$, provides quasi-projective $D$-schemes over $S$.
Namely, if $M$ is a smooth, projective $S$-scheme with geometrically connected fibers, and if $o$ denotes a section of $M$ over $S$, then Simpson’s techniques apply to this relative situation. They lead to the construction of a flat, quasi-projective $S$-scheme[^25] ${\mathbf{MIC}}_{N}(M/S, o)$, the fiber of which over some point $s\in S({\overline}{k})$ may be identified with the moduli space ${\mathbf{MIC}}_{N}(M_{s}, o(s))$. Formally, for any $S$-scheme $\Sigma,$ ${\mathbf{MIC}}_{N}(M/S, o)(\Sigma)$ classifies vector bundles of rank $N$ over $M_{\Sigma}:=M\times_{S}\Sigma,$ rigidified along $o_{\Sigma},$ and equipped with an integrable connection relative to $\Sigma.$
The $S$-scheme ${\mathbf{MIC}}_{N}(M/S, o)$ admits a canonical structure of $D$-scheme over $S$, which reflects its so-called crystalline nature. For general $M$ and $N$, this scheme may actually not be smooth over $S$, and properly speaking it is not covered by the above definition of $D$-schemes (which should be replaced by a suitable definition in terms of the infinitesimal site and stratifications associated to $X/k$). However, in the sequel, we shall be mainly concerned with the situation where $N=1,$ in which case ${\mathbf{MIC}}_{1}(M/S, o)$ is a *smooth*, quasi-projective, group scheme over $S$, and we allow ourselves to neglect this issue of regularity.
When $k=\C,$ the $D$-scheme structure of ${\mathbf{MIC}}_{N}(M/S, o)$ may be described as follows. When $s$ varies in the complex manifold $S^{{\rm an}},$ the family of fundamental groups $$\Gamma_{s} := \pi_{1}(M^{{\rm an}}_{s}, o(s))$$ defines a local system (that is, a locally constant sheaf) of groups on $S^{{\rm an}}.$ Over any simply connected open subset $\Omega$ in $S^{{\rm an}}$, it may be trivialized: for any pair of points $(s_{0},s_{1})$ in $\Omega,$ we get a canonical isomorphism $$\gamma_{s_{1},s_{0}}: \Gamma_{s_{0}} {\stackrel{\sim}{\longrightarrow}}\Gamma_{s_{1}},$$ which clearly induces an isomorphism of representation spaces: $$\begin{array}{crcl}
\Phi^{\text{Rep}}_{s_{1},s_{0}}:& {\mathbf{Rep}}_{N}(\Gamma_{s_{0}}) & {\stackrel{\sim}{\longrightarrow}}& {\mathbf{Rep}}_{N}(\Gamma_{s_{1}}) \\
& \rho& {\longmapsto}& \rho \circ \gamma_{s_{1},s_{0}}^{-1}\;\; .
\end{array}$$ Moreover the monodromy isomorphisms (\[aniso\]) $${{\rm mon}}_{o(s)}: {\mathbf{MIC}}_{N}(M/S,o)_{s} = {\mathbf{MIC}}_{N}(M_{s},o(s)) {\stackrel{\sim}{\longrightarrow}}{\mathbf{Rep}}_{N}(\Gamma_{s})$$ and their inverses depend analytically on $s$, in the sense that, if $s_{0}$ denotes a base point in $\Omega,$ the bijection of sets $$\label{monOm}
\begin{array}{crcl}
\Psi_{s_{0}}:& \Omega \times {\mathbf{Rep}}_{N}(\Gamma_{s_{0}}) & {\stackrel{\sim}{\longrightarrow}}& {\mathbf{MIC}}_{N}(M/S,o)_{\Omega} \\
& (s, \rho) & {\longmapsto}& {{\rm mon}}^{-1}_{s}(\Phi^{\text{Rep}}_{s, s_{0}}(\rho))
\end{array}$$ is an isomorphism of $\C$-analytic spaces over $\Omega.$
The $D$-scheme structure over $S$ of $X:={\mathbf{MIC}}_{N}(M/S,o)$ is compatible with the “analytic trivialization” (\[monOm\]). Assume indeed that ${\mathbf{MIC}}_{N}(M/S,o)$ is smooth over $S$ (for instance, suppose that $N=1$); then the subvector bundle ${{\mathcal F}}$ of $T_{X/\C}$ which defines this structure becomes “horizontal” via the above isomorphism: $$\mbox{for any $(s,\rho) \in \Omega \times {\mathbf{Rep}}_{N}(\Gamma_{s_{0}})$, } {{\mathcal F}}_{\Psi_{s_{0}}(s,\rho)} = D\Psi_{s_{0}}(s,\rho)(T_{s}\Omega \oplus 0).$$
It is quite remarkable that the *analytic* sub-vector bundle ${{\mathcal F}}$ of $T_{X/\C}$ defined through this formula in terms of the local analytic trivializations (\[monOm\]) of $X$ over $S$ is an *algebraic* sub-vector bundle of $T_{X/\C}.$
This is due to Grothendieck and to Mazur and Messing ([@MazurMessing74]) when $N=1$ (see also [@Buium92]), and to Simpson ([@Simpson94II], Section 8) in general. Basically their proof consists in considering the avatar in formal geometry (over the formal completion $\widehat{S}_{s_{0}}$ of $S$ at $s_{0}$) of the local analytic trivialization of ${\mathbf{MIC}}_{N}(M/S,o)$ over $\Omega$ induced by (\[monOm\]): $$\label{monOmbis}
\Psi^{\text{MIC}}_{s_{0}}:= \Psi_{s_{0}}\circ (Id_{\Omega}\times {{\rm mon}}_{o(s_{0})}) : \Omega \times {\mathbf{MIC}}_{N}(M/S,o)_{s_{0}} {\stackrel{\sim}{\longrightarrow}}{\mathbf{MIC}}_{N}(M/S,o)_{\Omega}.$$ It turns out that the formal analogue of (\[monOmbis\]) over $\widehat{S}_{s_{0}}$ may be directly constructed in (formal) algebraic geometry, with no recourse to analytic techniques, over any base field $k$ of characteristic zero.
The existence of the local analytic trivializations $\Psi^{\text{MIC}}_{s_{0}}$ is indeed a direct consequence of the following basic observation: if $(E,\nabla)$ is an analytic vector bundle with integrable connection over some connected analytic submanifold $Y$ of some analytic manifold $X$, then $(E,\nabla)$ uniquely extends, as a vector bundle with integrable connection, over any sufficiently small open connected neighbourhood of $Y$ in $X$. This property admits a natural avatar in formal geometry, valid over any base field of characteristic zero, which implies the existence of a formal analogue of $\Psi^{\text{MIC}}_{s_{0}}$. This construction, with $s_{0}$ varying in $S,$ endows ${\mathbf{MIC}}_{N}(M/S,o)$ with a structure of $D$-scheme over $S.$
Universal vector extensions as $D$-group schemes {#ED}
------------------------------------------------
[^26] The above discussion may be specialized to the case $N=1$. Then ${\mathbf{MIC}}_{1}(M/S,o)$ is a smooth, quasi-projective group scheme over $S$ — its group structure is induced by the tensor product of rigidified line bundles with connections — and its neutral component ${\mathbf{MIC}}_{1}(M/S,o)^0$ may be identified with the universal vector extension $E({{\rm Pic}}_{0}(M/S))$ of the connected relative Picard variety ${{\rm Pic}}_{0}(M/S)$ of $M$ over $S$. Moreover, the structure of a $D$-scheme over $S$ on ${\mathbf{MIC}}_{1}(M/S,o)$ is compatible with its structure of a group scheme.
Let us introduce the relative Albanese variety of $M$ over $S$, namely the abelian scheme over $S$ defined as $${{\mathcal A}}:= \widehat{{{\rm Pic}}_{0}(M/S)},$$ and the relative Albanese morphism $$\alpha_{o}: M {\longrightarrow}{{\mathcal A}}$$ attached to the section $o$. It induces an isomorphism of group schemes over $S$ (see for instance [@BK09], Appendix B): $$\alpha_{o}^\ast : {\mathbf{MIC}}_{1}({{\mathcal A}}/S,0_{{{\mathcal A}}})^0 {\stackrel{\sim}{\longrightarrow}}{\mathbf{MIC}}_{1}(M/S, o)^0,$$ compatible with their structures of $D$-schemes. Together with the identification of group schemes over $S$ $${\mathbf{MIC}}_{1}({{\mathcal A}}/S,0_{{{\mathcal A}}})^0 = {\mathbf{MIC}}_{1}({{\mathcal A}}/S,0_{{{\mathcal A}}}) {\stackrel{\sim}{\longrightarrow}}E(\widehat{{{\mathcal A}}}),$$ this shows (i) that, to study ${\mathbf{MIC}}_{1}(M/S, o)^0$, we may consider the case where $M$ is some abelian scheme over $S;$ and (ii) that the universal vector extension $E(\widehat{{{\mathcal A}}})$ — hence by duality the universal vector extension of any abelian scheme over $S$ — is endowed with a natural structure of a $D$-group scheme, which we shall denote ${{\mathbf E}}(\widehat{{{\mathcal A}}})$.
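For the reader's convenience, we recall the classical description of the universal vector extension (a standard fact, *cf.* [@MazurMessing74]; the notation below follows the fiberwise identifications used in this text):

```latex
% Universal vector extension of an abelian scheme B/S of relative
% dimension g (Mazur-Messing): an extension of B by the vector group
% of invariant differentials on the dual abelian scheme, so that E(B)
% has relative dimension 2g and its Lie algebra realizes relative
% de Rham homology:
0 \longrightarrow \omega_{\widehat{B}/S}
  \longrightarrow E(B) \longrightarrow B \longrightarrow 0,
\qquad
{\rm Lie}_{S}\, E(B) \;\simeq\;
{\mathcal H}_{1\,{\rm dR}}(B/S) := {\mathcal H}^{1}_{\rm dR}(B/S)^{\vee} .
```
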
The analytic description of the $D$-structure on the moduli spaces ${\mathbf{MIC}}_{N}(M/S, o)$ boils down in the present situation to the following description of the $D$-group scheme ${{\mathbf E}}({{\mathcal B}})$ defined by the universal vector extension $E({{\mathcal B}})$ attached to some abelian scheme ${{\mathcal B}}$ (see also [@MazurMessing74], 4.4).
Assume that $k=\C,$ and consider an abelian scheme over $S$, of relative dimension $g$, $$\pi: {{\mathcal B}}{\longrightarrow}S.$$ As in Section \[Dan\], we may consider the analytic description of the complex Lie group ${{\mathcal B}}^{{\rm an}}$ over $S^{{\rm an}}$ as a quotient of ${{\rm Lie}}_{S}{{\mathcal B}}$ by its local system of periods: $$0 {\longrightarrow}{{\rm Per}}_{S} {{\mathcal B}}{{\lhook\joinrel\longrightarrow}}{{\rm Lie}}_{S} {{\mathcal B}}\xrightarrow{\exp_{{{\mathcal B}}/S}}
{{\mathcal B}}^{{\rm an}}{\longrightarrow}0.$$
This local system ${{\rm Per}}_{S}{{\mathcal B}}$ is locally free of rank $2g,$ and may be identified with the local system of fundamental groups, with fiber at $s\in S^{{\rm an}}$: $$\pi_{1}({{\mathcal B}}_{s}, 0_{{{\mathcal B}}_{s}}) \simeq H_{1}({{\mathcal B}}^{{\rm an}}_{s}, \Z).$$ In the sequel, we shall denote it ${{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}})$. In turn, the dual local system $${{\mathcal H}}^1_{{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}}) := {{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}})^\vee$$ may be identified with $R^1\pi^{{\rm an}}_{\ast} \Z_{{{\mathcal B}}^{{\rm an}}}.$
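To fix ideas, here is the fiberwise picture for $g=1$ (an illustrative sketch; the period functions $\omega_{1}, \omega_{2}$ are local choices introduced here, not notation from the text):

```latex
% g = 1: on a small simply connected open subset U of S^an one may
% choose holomorphic period functions \omega_1, \omega_2 : U -> \C
% such that, for every s in U,
{\rm Lie}\,{\mathcal B}_{s} \simeq \C,
\qquad
{\rm Per}\,{\mathcal B}_{s} = \Z\,\omega_{1}(s) \oplus \Z\,\omega_{2}(s),
\qquad
{\mathcal B}_{s}^{\rm an} \simeq
  \C \big/ \bigl(\Z\,\omega_{1}(s) \oplus \Z\,\omega_{2}(s)\bigr).
```

The sections $s \mapsto \omega_{1}(s)$ and $s \mapsto \omega_{2}(s)$ then give a local basis of the rank-$2$ local system ${{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}})$.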
As discussed in paragraph \[ExamHom\], for any $s\in S^{{\rm an}},$ we have a canonical isomorphism $$J_{{{\mathcal B}}_{s}}: {{\rm Lie\,}}E({{\mathcal B}}_{s}) {\stackrel{\sim}{\longrightarrow}}H_{1{{\rm dR}}}({{\mathcal B}}_{s}/\C) \simeq H_{1}({{\mathcal B}}^{{\rm an}}_{s}, \C) \simeq H_{1}({{\mathcal B}}^{{\rm an}}_{s}, \Z) \otimes_{\Z}\C,$$ which sends ${{\rm Per}\,}{{\mathcal B}}_{s}$ isomorphically onto $H_{1}({{\mathcal B}}^{{\rm an}}_{s}, \Z).$ These isomorphisms depend analytically on $s\in S^{{\rm an}},$ and define isomorphisms $J_{{{\mathcal B}}}$ of analytic vector bundles and local systems over $S^{{\rm an}}$, which fit into a commutative diagram: $$\begin{CD}
{{\rm Per}}_{S} E({{\mathcal B}}) @>{J_{{{\mathcal B}}}}>\sim> {{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}}) \\
@VVV @VVV \\
{{\rm Lie}}_{S}E({{\mathcal B}}) @>{J_{{{\mathcal B}}}}>\sim> {{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}})\otimes_{\Z}\C,
\end{CD}$$ where the vertical maps are the obvious injections. They induce an isomorphism of complex Lie groups over $S^{{\rm an}}$: $$J_{{{\mathcal B}}}^\times : E({{\mathcal B}})^{{\rm an}}{\stackrel{\sim}{\longrightarrow}}{{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}}) \otimes_{\Z}\G^{{\rm an}}_{m\C}$$ which makes the following diagram commutative: $$\label{extaniso}
\begin{CD}
0 @>>> {{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}}) @>{J_{{{\mathcal B}}}^{-1}}>> {{\rm Lie}}_{S}E({{\mathcal B}}) @>{\exp_{E({{\mathcal B}})/S}}>> E({{\mathcal B}})^{{\rm an}}@>>>0 \\
@. @VV{=}V @VV{J_{{{\mathcal B}}}}V @VV{J_{{{\mathcal B}}}^\times}V @. \\
0 @>>> {{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}}) @>{. \otimes_{\Z}1_{\C}}>>{{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}})\otimes_{\Z} \C @>{Id_{{{\mathcal H}}_{1{{\rm B}}}}\otimes_{\Z}{{\mathbf e}}}>> {{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}}) \otimes_{\Z}\G^{{\rm an}}_{m\C} @>>>0.
\end{CD}$$ (Recall that ${{\mathbf e}}:= \exp (2\pi i .)$.)
In (\[extaniso\]), both lines are short exact sequences of commutative complex Lie groups over $S^{{\rm an}}$, and the vertical arrows are isomorphisms. These isomorphisms are actually compatible with the $D$-structures in the analytic category: the connection on ${{\rm Lie}}_{S}{{\mathbf E}}({{\mathcal B}})$ is the dual of the Gauss-Manin connection on ${{\mathcal H}}^1_{{{\rm dR}}}({{\mathcal B}}/S)$, and is mapped by $J_{{{\mathcal B}}}$ to the connection on ${{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}})\otimes_{\Z}\C$ which makes horizontal the sections of the local system ${{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}})$; the local analytic trivializations of $E({{\mathcal B}})^{{\rm an}}$ induced by the $D$-structure of ${{\mathbf E}}({{\mathcal B}})$ become, under the isomorphism $J_{{{\mathcal B}}}^\times$, the local trivializations of ${{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}}) \otimes_{\Z}\G^{{\rm an}}_{m\C}$ induced by local trivializations of ${{\mathcal H}}_{1{{\rm B}}}({{\mathcal B}}^{{\rm an}}/S^{{\rm an}})$.
Extensions of abelian schemes by $\G_{m}$ and $D$-group schemes {#ExtD}
---------------------------------------------------------------
The construction of the algebraic groups $L^\times$ and $E(L^\times)$ attached to some line bundle $L$ algebraically equivalent to zero on some abelian variety $A$ discussed in Section \[kappadRB\] extends to a relative situation.
Consider for instance an abelian scheme ${{\mathcal B}}$ over $S$ as in the previous section. If ${{\mathcal L}}$ is a line bundle over ${{\mathcal B}},$ equipped with a rigidification along the zero section of ${{\mathcal B}}$ $$\epsilon : {{\mathcal O}}_{S} {\stackrel{\sim}{\longrightarrow}}0_{{{\mathcal B}}}^\ast {{\mathcal L}},$$ and algebraically equivalent to zero on the fibers of ${{\mathcal B}}$ — in other words, if $({{\mathcal L}}, \epsilon)$ defines a section ${{\mathcal P}}$ over $S$ of the dual abelian scheme $\widehat{{{\mathcal B}}}$ — then the $\G_{m}$-torsor $\pi_{{{\mathcal L}}} : {{\mathcal L}}^\times {\longrightarrow}{{\mathcal B}}$, deduced from the total space of ${{\mathcal L}}$ by deleting its zero section, admits a unique structure of a commutative group scheme over $S$ which makes the diagram $$\label{extcL}
0 {\longrightarrow}\G_{m S} \stackrel{\epsilon}{{\longrightarrow}} {{\mathcal L}}^\times \stackrel{\pi_{{{\mathcal L}}}}{{\longrightarrow}} {{\mathcal B}}{\longrightarrow}0$$ an extension of smooth commutative group schemes over $S$. By pulling back this extension along the morphism $$p_{{{\mathcal B}}}: E({{\mathcal B}}) {\longrightarrow}{{\mathcal B}},$$ we define a smooth commutative group scheme $$E({{\mathcal L}}^\times) := {{\mathcal L}}^\times \times_{{{\mathcal B}}} E({{\mathcal B}})$$ which fits into a short exact sequence of group schemes over $S$: $$\label{extEcL}
0 {\longrightarrow}\G_{m S} \stackrel{\epsilon'}{{\longrightarrow}} E({{\mathcal L}}^\times) \stackrel{\tilde{\pi}_{{{\mathcal L}}}}{{\longrightarrow}} E({{\mathcal B}}) {\longrightarrow}0.$$
In the sequel we shall use that $E({{\mathcal L}}^\times)$ may be canonically equipped with a $D$-structure, so that it becomes a commutative $D$-group scheme ${{\mathbf E}}({{\mathcal L}}^\times)$ over $S$ and the extension of commutative group schemes (\[extEcL\]) becomes an extension of commutative $D$-group schemes: $$\label{DextEcL}
0 {\longrightarrow}\bG_{m S} \stackrel{\epsilon'}{{\longrightarrow}} {{\mathbf E}}({{\mathcal L}}^\times) \stackrel{\tilde{\pi}_{{{\mathcal L}}}}{{\longrightarrow}} {{\mathbf E}}({{\mathcal B}}) {\longrightarrow}0.$$
This construction is alluded to in [@Brylinski83] (2.2.2.1), appears in a “differential algebraic context” in [@BertrandPillay10] Lemma 3.4 (i-ii), and in a “geometric context” in [@AndreattaBarbieri-Viale05] (see also [@AndreattaBertapelle11]). The construction of the $D$-structure on ${{\mathbf E}}({{\mathcal L}}^\times)$ and of the extension (\[DextEcL\]) may be understood as follows in terms of moduli spaces of vector bundles with integrable connections.
The construction of the relative moduli spaces ${\mathbf{MIC}}_{N}(M/S, o)$ and of their $D$-structure discussed in Section \[ModuliD\] directly extends to the moduli spaces ${\mathbf{MIC}}_{N}(M/S, o,o')$ of vector bundles equipped with a relative integrable connection rigidified along two sections $o$ and $o'$ of $M$ over $S$. Besides, as explained in Section \[ED\], ${{\mathbf E}}({{\mathcal B}})$ may be identified with the $D$-group scheme ${\mathbf{MIC}}_{1}(\widehat{{{\mathcal B}}}/S, 0_{\widehat{{{\mathcal B}}}})$. The discussion of Section \[VarMic\] may be extended to the relative case, and allows one to identify $E({{\mathcal L}}^\times)$ with ${\mathbf{MIC}}_{1}(\widehat{{{\mathcal B}}}/S, 0_{\widehat{{{\mathcal B}}}}, {{\mathcal P}})$, in a way compatible with their respective structures of $\G_{m}$-torsors over $E({{\mathcal B}})$ and ${\mathbf{MIC}}_{1}(\widehat{{{\mathcal B}}}/S, 0_{\widehat{{{\mathcal B}}}})$. The canonical $D$-structure on $E({{\mathcal L}}^\times)$ is the $D$-structure deduced from the one on ${\mathbf{MIC}}_{1}(\widehat{{{\mathcal B}}}/S, 0_{\widehat{{{\mathcal B}}}}, {{\mathcal P}})$ through this identification.
A conjecture {#sec:Conj}
============
In this final part, we consider the following geometric data: a smooth projective connected curve $C$ over ${{\overline {\mathbb Q}}}$, and an abelian scheme over $C$, $\pi: {{\mathcal A}}{\longrightarrow}C.$
As before, we denote by $E({{\mathcal A}})$ the universal vector extension of this abelian scheme. It is a smooth connected commutative group scheme over $C,$ endowed with a canonical structure of a $D$-group scheme. If necessary, we shall use the notation ${{\mathbf E}}({{\mathcal A}})$ to denote $E({{\mathcal A}})$ considered as a $D$-group scheme over $C$, to distinguish it from the “plain” group scheme $E({{\mathcal A}})$ over $C$.
As usual, we denote by $\widehat{{{\mathcal A}}}$ the abelian scheme over $C$ dual to ${{\mathcal A}}$.
We shall make the following simplifying assumption: $$\label{Hodgepositive}
\mbox{\emph{the vector bundle $\E_{{{\mathcal A}}} := ({{\rm Lie}}_{C} {{\mathcal A}})^\vee$ is ample.}}$$ Recall that, in general, $\E_{{{\mathcal A}}}$ is only semipositive. Condition (\[Hodgepositive\]) implies the vanishing of the $\overline{{{\overline {\mathbb Q}}}(C)}/{{\overline {\mathbb Q}}}$-trace of the geometric generic fiber ${{\mathcal A}}_{\overline{{{\overline {\mathbb Q}}}(C)}}$ of ${{\mathcal A}}$, and ensures that the extensions of formal $D$-groups (\[forC\]) and local systems (\[extGamma\]) considered below have no nontrivial automorphisms (hence have their middle term defined, up to unique isomorphism, by their extension class).
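For instance, when ${{\mathcal A}}$ has relative dimension $g = 1$, condition (\[Hodgepositive\]) takes a concrete form (an illustrative reading, using the usual identification of $\E_{{{\mathcal A}}}$ with the Hodge line bundle):

```latex
% g = 1: E_A = (Lie_C A)^dual is the Hodge line bundle
% \omega_{A/C} = 0_A^* \Omega^1_{A/C}, and ampleness of a line bundle
% on the projective curve C just means positive degree.
% (Aside, not claimed in the text: the degree is always >= 0, with
% strict positivity expected exactly in the non-isotrivial case.)
\E_{{\mathcal A}} \;=\; \omega_{{\mathcal A}/C},
\qquad
(\ref{Hodgepositive}) \iff \deg_{C}\, \omega_{{\mathcal A}/C} > 0 .
```
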
A construction {#Cons}
--------------
Suppose that we are given the following datum:
\(i) *a section ${{\mathcal P}}$ over $C$ of the dual abelian scheme* $\widehat{{{\mathcal A}}}.$
By the very definition of $\widehat{{{\mathcal A}}}$, it defines
\(ii) *a line bundle ${{\mathcal L}}$ over ${{\mathcal A}}$, equipped with a rigidification $\epsilon : {{\mathcal O}}_{C} {\stackrel{\sim}{\longrightarrow}}0_{{{\mathcal A}}}^\ast {{\mathcal L}}$ along the zero section, algebraically equivalent to zero on the fibers of $\pi: {{\mathcal A}}{\longrightarrow}C.$*
As recalled above, the $\G_{m}$-torsor ${{\mathcal L}}^\times$ over ${{\mathcal A}}$ defines in a unique way
\(iii) *an extension of smooth commutative group schemes over $C$,* $$0 {\longrightarrow}\G_{m, C} \stackrel{\epsilon}{{\longrightarrow}} {{\mathcal L}}^\times {\longrightarrow}{{\mathcal A}}{\longrightarrow}0.$$
Finally, through the construction described in Section \[ExtD\], we obtain:
\(iv) *an extension of commutative $D$-group schemes over $C$,* $$0 {\longrightarrow}{{\mathbf{G}_{m}}}_{C} {\longrightarrow}{{\mathbf E}}({{\mathcal L}}^\times){\longrightarrow}{{\mathbf E}}({{\mathcal A}}) {\longrightarrow}0.$$
These successive operations are easily seen to establish a bijective correspondence between the four kinds of data (i)–(iv) above, and to be additive:
The above construction defines isomorphisms of $\Z$-modules: *$${\widehat{{{\mathcal A}}}(C) {\stackrel{\sim}{\longrightarrow}}{{\rm Ext}}^1_{{{\rm{c-gp}}}/C}({{\mathcal A}}, \G_{m C}) {\stackrel{\sim}{\longrightarrow}}}{{\rm Ext}}_{{\text{c$D$-gp}}}^1({{\mathbf E}}({{\mathcal A}}),{{\mathbf{G}_{m}}}_{C}).$$*
This would actually hold in the general situation considered in Section \[ExtD\], without any further assumption on the base scheme $S$.
${{\rm Lie}}_{C}^1$ and ${{\rm Per}}^1_{C_{\C}^{{\rm an}}}$ {#subsec:LiePer1}
-----------------------------------------------------------
Recall that the dual of the module with integrable connection ${{\rm Lie}}_{C} {{\mathbf E}}({{\mathcal A}})$ over $C$ may be identified with the relative de Rham cohomology of ${{\mathcal A}}$ over $C$ equipped with the Gauss-Manin connection $({{\mathcal H}}^1_{{{\rm dR}}}({{\mathcal A}}/C), \nabla_{GM})$, and the local system of periods ${{\rm Per}}_{C_{\C}}E({{\mathcal A}})_{\C}$ over $C^{{\rm an}}_{\C}$ with the local system defined by the relative first Betti homology of ${{\mathcal A}}^{{\rm an}}_{\C}$ over $C_{\C}^{{\rm an}}$; the dual of the latter local system will be denoted ${{\mathcal H}}^1_{{{\rm B}}}({{\mathcal A}}_{\C}^{{\rm an}}/C^{{\rm an}}_{\C}).$
Besides, the module with integrable connection ${{\rm Lie}}_{C} \bG_{m,C}$ over $C$ may be identified with the trivial module with integrable connection $({{\mathcal O}}_{C}, d)$, and the local system of periods ${{\rm Per}}_{C_{\C}} \G_{m,C_{\C}}$ over $C^{{\rm an}}_{\C}$ with the constant local system $\Z_{C^{{\rm an}}_{\C}}.$
Consequently the maps ${{\rm Lie}}^1_{S}$ and ${{\rm Per}}^1_{S}$ defined on extension classes of commutative $D$-group schemes in paragraph \[subsubsec:Ext\] and Section \[Dan\] take here the following form: $${{\rm Lie}}_{C}^1 : {{\rm Ext}}^1_{{\text{c$D$-gp}}/C}({{\mathbf E}}({{\mathcal A}}),{{\mathbf{G}_{m}}}_{C}) {\longrightarrow}H^1_{{{\rm dR}}}(C, ({{\mathcal H}}^1_{{{\rm dR}}}({{\mathcal A}}/C), \nabla_{GM}))$$ and $${{\rm Per}}^1_{C_{\C}^{{\rm an}}} : {{\rm Ext}}^1_{{\text{c$D$-gp}}/C_{\C}}({{\mathbf E}}({{\mathcal A}})_{\C},{{\mathbf{G}_{m}}}_{C_{\C}}) {\longrightarrow}H^1(C^{{\rm an}}_{\C}, {{\mathcal H}}^1_{{{\rm B}}}({{\mathcal A}}_{\C}^{{\rm an}}/C^{{\rm an}}_{\C})).$$
Observe that, after tensoring with $\C$, the range spaces of these two maps become canonically isomorphic. Indeed we have “elementary” isomorphisms defined by the base change from ${{\overline {\mathbb Q}}}$ to $\C$ $$\label{elem1}
H^1_{{{\rm dR}}}(C, ({{\mathcal H}}^1_{{{\rm dR}}}({{\mathcal A}}/C), \nabla_{GM})) \otimes_{{{\overline {\mathbb Q}}}} \C {\stackrel{\sim}{\longrightarrow}}H^1_{{{\rm dR}}}(C_{\C}, ({{\mathcal H}}^1_{{{\rm dR}}}({{\mathcal A}}_{\C}/C_{\C}), \nabla_{GM}))$$ and by extension of coefficients from $\Z$ to $\C$ $$\label{elem2}
H^1(C^{{\rm an}}_{\C}, {{\mathcal H}}^1_{{{\rm B}}}({{\mathcal A}}_{\C}^{{\rm an}}/C^{{\rm an}}_{\C}))\otimes_{\Z}\C {\stackrel{\sim}{\longrightarrow}}H^1(C^{{\rm an}}_{\C}, {{\mathcal H}}^1_{{{\rm B}}}({{\mathcal A}}_{\C}^{{\rm an}}/C^{{\rm an}}_{\C})_{\C}),$$ and the complex vector spaces in the right-hand sides of (\[elem1\]) and (\[elem2\]) may be identified by means of the comparison isomorphisms between Betti and algebraic de Rham cohomology (with coefficients) discussed in paragraph \[deRhamcoeff\].
If ${{\mathcal E}}$ is an element of ${{\rm Ext}}^1_{{\text{c$D$-gp}}}({{\mathbf E}}({{\mathcal A}}),{{\mathbf{G}_{m}}}_{C})$, we shall denote by ${{\mathcal E}}_{\C}$ its “complexification” in the group $ {{\rm Ext}}^1_{{\text{c$D$-gp}}}({{\mathbf E}}({{\mathcal A}})_{\C},{{\mathbf{G}_{m}}}_{C_{\C}})$ (in the sense of paragraph \[kk’\]).
The following lemma is proved in the same way as Lemma \[compChern\], which compared the first Chern classes in de Rham and Betti cohomology (see also the discussion in paragraph \[final\] *infra*).
\[compdRB1\] For any extension class ${{\mathcal E}}$ in *${{\rm Ext}}^1_{{\text{c$D$-gp}}}({{\mathbf E}}({{\mathcal A}}),{{\mathbf{G}_{m}}}_{C})$*, the equality $$\label{eqcompdRB1}
({{\rm Lie}}_{C}^1 {{\mathcal E}})\otimes_{{{\overline {\mathbb Q}}}} 1_{\C} = 2 \pi i ({{\rm Per}}^1_{C_{\C}^{{\rm an}}} {{\mathcal E}}_{\C}) \otimes_{\Z} 1_{\C}$$ holds in $$H^1_{{{\rm dR}}}(C, ({{\mathcal H}}^1_{{{\rm dR}}}({{\mathcal A}}/C), \nabla_{GM})) \otimes_{{{\overline {\mathbb Q}}}}\C \simeq H^1(C^{{\rm an}}_{\C}, {{\mathcal H}}^1_{{{\rm B}}}({{\mathcal A}}_{\C}^{{\rm an}}/C^{{\rm an}}_{\C}))\otimes_{\Z}\C.$$
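The factor $2 \pi i$ in (\[eqcompdRB1\]) is the familiar discrepancy between de Rham and Betti normalizations. Already for $\G_{m}$, the algebraic de Rham generator $dz/z$ pairs as follows with the positively oriented unit circle, which generates $H_{1}(\C^{\times}, \Z)$:

```latex
% The elementary period computation underlying the factor 2\pi i:
\int_{\gamma} \frac{dz}{z}
  \;=\; \int_{0}^{1} \frac{d\,e^{2\pi i t}}{e^{2\pi i t}}
  \;=\; \int_{0}^{1} 2\pi i \, dt
  \;=\; 2\pi i ,
\qquad \gamma(t) = e^{2\pi i t}, \quad t \in [0,1].
```
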
A conjecture {#subsec:Conj}
------------
### {#section-2}
We finally arrive at the formulation of the conjecture which constitutes the aim of this article.
\[Main\] Any pair of classes of extensions $(\alpha, \beta)$ with $\alpha$ in $H^1_{{{\rm dR}}}(C, ({{\mathcal H}}^1_{{{\rm dR}}}({{\mathcal A}}/C), \nabla_{GM}))$ and $\beta$ in $H^1(C^{{\rm an}}_{\C}, {{\mathcal H}}^1_{{{\rm B}}}({{\mathcal A}}_{\C}^{{\rm an}}/C^{{\rm an}}_{\C}))$ which satisfies the compatibility relation $$\label{eqcompdRB1bis}
\alpha\otimes_{{{\overline {\mathbb Q}}}} 1_{\C} = 2 \pi i \, \beta \otimes_{\Z} 1_{\C}$$ in $$H^1_{{{\rm dR}}}(C, ({{\mathcal H}}^1_{{{\rm dR}}}({{\mathcal A}}/C), \nabla_{GM})) \otimes_{{{\overline {\mathbb Q}}}}\C \simeq H^1(C^{{\rm an}}_{\C}, {{\mathcal H}}^1_{{{\rm B}}}({{\mathcal A}}_{\C}^{{\rm an}}/C^{{\rm an}}_{\C}))\otimes_{\Z}\C$$ is of the form $({{\rm Lie}}_{C}^1\,{{\mathcal E}}, {{\rm Per}}^1_{C_{\C}^{{\rm an}}} {{\mathcal E}}_{\C})$ for some class ${{\mathcal E}}$ in *${{\rm Ext}}^1_{{\text{c$D$-gp}}}({{\mathbf E}}({{\mathcal A}}),{{\mathbf{G}_{m}}}_{C})$* and hence is obtained from some section ${{\mathcal P}}$ of the dual abelian scheme $\widehat{{{\mathcal A}}}$ over $C$.
The class ${{\mathcal E}}$ and the section ${{\mathcal P}}$, if they exist, are uniquely determined by these conditions.
By using the Leray-Serre spectral sequence to analyze the group $H^2_{\rm Gr}({{\mathcal A}})$ attached to ${{\mathcal A}}$ (seen as a smooth projective variety over ${{\overline {\mathbb Q}}}$) by means of the fibering $\pi : {{\mathcal A}}{\longrightarrow}C,$ and by using a relative generalization (over $C$) of Theorem \[GPCA\], we may prove the following:
\[MainGPCA\] With the above notation, Conjecture \[Main\] holds if and only if the smooth projective variety ${{\mathcal A}}$ over ${{\overline {\mathbb Q}}}$ satisfies $GPC^1({{\mathcal A}}).$
### {#section-3}
Consider a smooth projective connected surface $S$ over ${{\overline {\mathbb Q}}}$, fibered over $C$ by a morphism $f : S {\longrightarrow}C$. Assume for simplicity that $f$ is a smooth morphism (all fibers of $f$ are therefore smooth projective curves) and admits a section $o$. Then we may introduce the relative Jacobian $${{\mathcal J}}:= {{\rm Jac}}(S/C)$$ of $S$ over $C$. It is an abelian scheme over $C$. Using the section $o$, we may define a relative Jacobian embedding $$j_{o}: S {{\lhook\joinrel\longrightarrow}}{{\mathcal J}}.$$ (It is a closed embedding, over $C$, which maps $o$ to the zero section $0_{{{\mathcal J}}}$ of ${{\mathcal J}}$ over $C$.) Pulling back by $j_{o}$ establishes a bijection between line bundles ${{\mathcal L}}$ over ${{\mathcal J}}$, defining as above sections over $C$ of the dual abelian scheme $\widehat{{{\mathcal J}}}$[^27], and line bundles ${{\mathcal M}}$ over $S$, rigidified along $o$ and of degree zero on the fibers of $f$.
With this notation, we have the following variant[^28] of Proposition \[MainGPCA\]:
\[MainGPCS\] The validity of $GPC^1(S)$ is equivalent to the validity of Conjecture \[Main\] for ${{\mathcal A}}= {{\mathcal J}}$.
Conjecture \[Main\] may be extended to possibly degenerating families of abelian varieties over $C$ (say, with semi-abelian bad fibers). This generalized version may be applied to the relative Jacobian of any smooth projective surface fibered over $C$ (say, with semi-stable fibers) and would imply the validity of $GPC^1$ for any smooth projective surface and, actually, for any smooth projective variety over ${{\overline {\mathbb Q}}}$. This approach to $GPC^1$ through fibrations of surfaces over curves and associated families of Jacobian varieties is very much in the spirit of the classical works of Picard, Poincaré, and Lefschetz which constituted our starting point in Section \[subsec:algline\].
### {#final}
To avoid technicalities, I prefer not to discuss this in detail, and would instead stress the fact that Conjecture \[Main\] may be rephrased as an algebraization criterion concerning formal line bundles, satisfying suitable “differential algebraic” and “analytic” conditions, in the spirit of Theorems \[SL1\] and \[SL2\] *à la* Schneider-Lang, as expected in Section \[subsec:analogy\].
Indeed, consider a pair of classes $(\alpha, \beta)$ as in Conjecture \[Main\].
The class $\alpha$ lies in $$H^1_{{{\rm dR}}}(C, ({{\mathcal H}}^1_{{{\rm dR}}}({{\mathcal A}}/C), \nabla_{GM})) \simeq {{\rm Ext}}^1_{{\text{mic}}/C}({{\rm Lie}}_{C}{{\mathbf E}}({{\mathcal A}}),{{\rm Lie}}_{C}{{\mathbf{G}_{m}}}_{S}),$$ and defines an extension of vector bundles with (integrable) connections over $C$, defined over ${{\overline {\mathbb Q}}}$: $$0 {\longrightarrow}{{\rm Lie}}_{C}{{\mathbf{G}_{m}}}_{C} {\longrightarrow}(M,\nabla) {\longrightarrow}{{\rm Lie}}_{C}{{\mathbf E}}({{\mathcal A}}) {\longrightarrow}0.$$ It may be interpreted as an extension of “formal commutative $D$-group schemes over $C$”: $$\label{forC}
0 {\longrightarrow}\widehat{{{\mathbf{G}_{m}}}}_{C} {\longrightarrow}\mathbf{G}_{\text{for}} {\longrightarrow}\widehat{{{\mathbf E}}({{\mathcal A}})} {\longrightarrow}0,$$ where $\widehat{{{\mathbf{G}_{m}}}}_{C}$ (resp. $\widehat{{{\mathbf E}}({{\mathcal A}})}$) denotes the completion of the $D$-group scheme ${{\mathbf{G}_{m}}}_{C}$ (resp. ${{\mathbf E}}({{\mathcal A}})$) over $C$ along its unit (resp. zero) section. (Here we use that the base field ${{\overline {\mathbb Q}}}$ has characteristic zero, so that we have formal exponential maps at our disposal.)
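The formal exponential maps invoked here are elementary to write down (a standard fact, recalled for illustration): over a $\Q$-algebra, the exponential series makes sense and intertwines the additive and multiplicative formal groups:

```latex
% Formal exponential over a Q-algebra: the denominators n! are the
% reason characteristic zero is needed. T -> exp(T) - 1 defines an
% isomorphism of formal groups \widehat{G}_a ~ \widehat{G}_m.
\exp(T) \;=\; \sum_{n \ge 0} \frac{T^{n}}{n!} \;\in\; \Q[[T]],
\qquad
\exp(T_{1} + T_{2}) \;=\; \exp(T_{1})\,\exp(T_{2})
\quad \text{in } \Q[[T_{1}, T_{2}]].
```
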
Observe that, by forgetting the $D$-structure, from (\[forC\]) we deduce an extension of formal groups over $C$, $$0 {\longrightarrow}\widehat{\G_{m}}_{C} {\longrightarrow}{G}_{\text{for}} {\longrightarrow}\widehat{E({{\mathcal A}})} {\longrightarrow}0,$$ which in turn defines a $\G_{m}$-torsor or, equivalently, a line bundle ${{\mathcal N}}^{\text{for}}$ on the formal completion $\widehat{E({{\mathcal A}})}$.
The class $\beta$ lies in $$H^1(C^{{\rm an}}_{\C}, {{\mathcal H}}^1_{{{\rm B}}}({{\mathcal A}}_{\C}^{{\rm an}}/C^{{\rm an}}_{\C}))
\simeq
{{\rm Ext}}^1_{\text{Ab-Sheaves}}({{\rm Per}}_{C_{\C}}{{\mathcal A}}_{\C}, \Z_{C^{{\rm an}}_{\C}})$$ and defines an extension of local systems of free $\Z$-modules of finite rank over $C^{{\rm an}}_{\C}$: $$\label{extGamma}
0 {\longrightarrow}\Z_{C^{{\rm an}}_{\C}} {\longrightarrow}\Gamma {\longrightarrow}{{\rm Per}}_{C_{\C}}{{\mathcal A}}_{\C} {\longrightarrow}0.$$ After tensoring with the multiplicative group $\G_{m\C}^{{\rm an}}$, we deduce from (\[extGamma\]) an extension of “commutative $D$-complex Lie groups” over $C_{\C}^{{\rm an}}$: $$\label{extGammaLie}
0 {\longrightarrow}{\bG}^{{\rm an}}_{m,C_{\C}} {\longrightarrow}\Gamma \otimes \bG^{{\rm an}}_{m,C_{\C}} {\longrightarrow}{{\mathbf E}}({{\mathcal A}})^{{\rm an}}_{\C} {\longrightarrow}0.$$ This construction is easily seen to establish a one-to-one correspondence between extensions of local systems (\[extGamma\]) and extensions in the analytic category of ${{\mathbf E}}({{\mathcal A}})^{{\rm an}}_{\C}$ by ${\bG}^{{\rm an}}_{m,C_{\C}}$. When $\beta$ is the image by ${{\rm Per}}^1_{C_{\C}}$ of some extension class $[{{\mathcal E}}_{\C}]$, the extension (\[extGammaLie\]) is nothing but the analytification ${{\mathcal E}}^{{\rm an}}_{\C}$ of ${{\mathcal E}}_{\C}$.
Here again the extension (\[extGammaLie\]) defines some analytic line bundle ${{\mathcal N}}^{{\rm an}}$ over $E({{\mathcal A}})_{\C}^{{\rm an}}$, by forgetting the $D$-structure and part of the group structure on $\Gamma \otimes \G^{{\rm an}}_{m,C_{\C}}$.
The equality (\[eqcompdRB1bis\]) $$\alpha\otimes_{{{\overline {\mathbb Q}}}} 1_{\C} = 2 \pi i \, \beta \otimes_{\Z} 1_{\C}$$ expresses the fact that the extension of “commutative formal analytic $D$-groups” over $C^{{\rm an}}_{\C}$ deduced from (\[extGammaLie\]) by completion along the zero sections coincides with the analytification of the “commutative formal $D$-groups” over $C_{\C}$ deduced from (\[forC\]) by extending the base field from ${{\overline {\mathbb Q}}}$ to $\C.$
Finally Conjecture \[Main\] may be rephrased as asserting the algebraicity of any pair $({{\mathcal N}}^{\text{for}}, {{\mathcal N}}^{{\rm an}})$, consisting of a formal line bundle ${{\mathcal N}}^{\text{for}}$ on the formal completion $\widehat{E({{\mathcal A}})}$ of $E({{\mathcal A}})$ along its zero section and of some analytic line bundle ${{\mathcal N}}^{{\rm an}}$ over $E({{\mathcal A}})^{{\rm an}}_{\C}$ such that the associated $\G_{m}$-torsors ${{\mathcal N}}^{\text{for} \times}$ and ${{\mathcal N}}^{{{\rm an}}\times}$ may be endowed with suitably compatible structures of $D$-group schemes over $C$ and $C^{{\rm an}}_{\C}$ (in the respective formal and analytic categories).
Y. André. . , 2004.
F. Andreatta and L. Barbieri-Viale. Crystalline realizations of 1-motives. , 331(1):111–172, 2005.
F. Andreatta and A. Bertapelle. Universal extension crystals of 1-motives and applications. , 215(8):1919–1944, 2011.
A. Andreotti. Théorèmes de dépendance algébrique sur les espaces complexes pseudo-concaves. , 91:1–38, 1963.
Anonymous. Correspondence. , 78:898, 1956.
A. Baker and G. W[ü]{}stholz. , volume 9 of [ *New Mathematical Monographs*]{}. Cambridge University Press, Cambridge, 2007.
P. Berthelot, L. Breen, and W. Messing. , volume 930 of [*Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 1982.
D. Bertrand. Endomorphismes de groupes algébriques; applications arithmétiques. In [*Diophantine approximations and transcendental numbers ([L]{}uminy, 1982)*]{}, volume 31 of [*Progr. Math.*]{}, pages 1–45. Birkhäuser Boston, Boston, MA, 1983.
D. Bertrand and A. Pillay. A [L]{}indemann-[W]{}eierstrass theorem for semi-abelian varieties over function fields. , 23(2):491–533, 2010.
C. Birkenhake and H. Lange. , volume 302 of [*Grundlehren der Mathematischen Wissenschaften*]{}. Springer-Verlag, Berlin, second edition, 2004.
F. Bogomolov and M. L. McQuillan. Rational curves on foliated varieties. Preprint I.H.E.S., 2001.
E. Bombieri. , 10:267–287, 1970.
J.-B. Bost. Algebraic leaves of algebraic foliations over number fields. , 93:161–221, 2001.
J.-B. Bost. In [*[A. Adolphson et al. (ed.), Geometric aspects of Dwork theory]{}*]{}, volume II, pages 371–418. Walter de Gruyter, Berlin, 2004.
J.-B. Bost. Evaluation maps, slopes, and algebraicity criteria. In [*Proceedings of the [I]{}nternational [C]{}ongress of [M]{}athematicians, [M]{}adrid 2006*]{}, volume II, pages 537–562. European Mathematical Society, 2006.
J.-B. Bost and A. Chambert-Loir. Analytic curves in algebraic varieties over number fields. In [*Algebra, arithmetic, and geometry: in honor of [Y]{}u. [I]{}. [M]{}anin. [V]{}ol. [I]{}*]{}, volume 269 of [*Progr. Math.*]{}, pages 69–124. Birkhäuser, Boston, MA, 2009.
J.-B. Bost and K. K[ü]{}nnemann. Hermitian vector bundles and extension groups on arithmetic schemes [II]{}. [T]{}he arithmetic [A]{}tiyah extension. , (327):361–424 (2010), 2009.
E. Bouscaren, editor. , volume 1696 of [*Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 1998.
J.-L. Brylinski. “1-motifs” et formes automorphes (théorie arithmétique des domaines de [S]{}iegel). In [*Conference on automorphic theory ([D]{}ijon, 1981)*]{}, volume 15 of [*Publ. Math. Univ. Paris VII*]{}, pages 43–106. Univ. Paris VII, Paris, 1983.
A. Buium. , volume 1226 of [*Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 1986.
A. Buium. , volume 1506 of [*Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 1992.
A. Buium. Intersections in jet spaces and a conjecture of [S]{}. [L]{}ang. , 136(3):557–567, 1992.
A. Buium. Effective bound for the geometric [L]{}ang conjecture. , 71(2):475–499, 1993.
A. Buium. Geometry of differential polynomial functions. [I]{}. [A]{}lgebraic groups. , 115(6):1385–1444, 1993.
A. Buium. . Actualités Mathématiques. Hermann, Paris, 1994.
A. Buium and J. F. Voloch. Integral points of abelian varieties over function fields of characteristic zero. , 297(2):303–307, 1993.
H. Cartan and J.-P. Serre. Un théorème de finitude concernant les variétés analytiques compactes. , 237:128–130, 1953.
A. Chambert-Loir. Théorèmes d’algébricité en géométrie diophantienne. [S]{}éminaire [B]{}ourbaki, [V]{}ol. 2000/2001, [E]{}xposé 886. , 282:175–209, 2002.
W.-L. Chow. On compact complex analytic varieties. , 71:893–914, 1949.
W.-L. Chow. Formal functions on homogeneous spaces. , 86(1):115–130, 1986.
D. V. Chudnovsky and G. V. Chudnovsky. Applications of [P]{}adé approximations to the [G]{}rothendieck conjecture on linear differential equations. In [*Number theory, Semin. New York 1983-84*]{}, volume 1135 of [ *Lectures Notes in Mathematics*]{}, pages 52–100. Springer, Berlin, 1985.
D. V. Chudnovsky and G. V. Chudnovsky. Padé approximations and [D]{}iophantine geometry. , 82(8):2212–2216, 1985.
R. F. Coleman. The universal vectorial bi-extension and [$p$]{}-adic heights. , 103(3):631–650, 1991.
F. Conforto. Sopra le trasformazioni in sè della varietà di [J]{}acobi relativa ad una curva di genere effettivo diverso dal genere virtuale, in ispecie nel caso di genere effettivo nullo. , 27:273–291, 1948.
F. Conforto. Sulla nozione di corpi equivalenti e di corpi coincidenti nella teoria delle funzioni quasi abeliane. , 18:292–310, 1949.
P. Deligne. Théorie de [H]{}odge. [II]{}. , (40):5–57, 1971.
J.-P. Demailly. Formules de [J]{}ensen en plusieurs variables et applications arithmétiques. , 110(1):75–102, 1982.
C. Ehresmann. Les connexions infinitésimales dans un espace fibré différentiable. In [*Centre Belge Rech. Math., Colloque de Topologie, Bruxelles, Juin 1950*]{}, pages 29–55. Masson, Paris, 1951.
G. Faltings. Algebraisation of some formal vector bundles. , 110(3):501–514, 1979.
G. Faltings. Some theorems about formal functions. , 16(3):721–737, 1980.
G. Faltings. Formale [G]{}eometrie und homogene [R]{}äume. , 64(1):123–165, 1981.
C. Gasbarri. , 346(1):199–243, 2010.
H. Gillet. Differential algebra—a scheme theory approach. In [*Differential algebra and related topics ([N]{}ewark, [NJ]{}, 2000)*]{}, pages 95–123. World Sci. Publ., River Edge, NJ, 2002.
A. Grothendieck. Éléments de géométrie algébrique. [III]{}. Étude cohomologique des faisceaux cohérents. [I]{}. , (11):167, 1961.
A. Grothendieck. . Secrétariat mathématique, Paris, 1962.
A. Grothendieck. On the de [R]{}ham cohomology of algebraic varieties. , (29):95–103, 1966.
A. Grothendieck. . S[é]{}minaire de G[é]{}om[é]{}trie Alg[é]{}brique du Bois-Marie, SGA 2, 1962, Advanced Studies in Pure Mathematics, Vol. 2. North-Holland Publishing Co., Amsterdam, 1968.
R. C. Gunning. . The Wadsworth & Brooks/Cole Mathematics Series. Wadsworth & Brooks/Cole Advanced Books & Software, Monterey, CA, 1990.
R. Hartshorne. Cohomological dimension of algebraic varieties. , 88:403–450, 1968.
R. Hartshorne. , volume 156 of [ *Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 1970.
R. Hartshorne. On the [D]{}e [R]{}ham cohomology of algebraic varieties. , (45):5–99, 1975.
M. Herblot. Algebraic points on meromorphic curves. arXiv:1204.6336 \[math. NT\], 2012.
H. Hironaka and H. Matsumura. Formal functions and formal embeddings. , 20:52–82, 1968.
W. V. D. Hodge. . Cambridge University Press, Cambridge, England, 1941.
E. Hrushovski. The [M]{}ordell-[L]{}ang conjecture for function fields. , 9(3):667–690, 1996.
E. Hrushovski and B. Zilber. , 9(1):1–56, 1996.
L. Illusie. Crystalline cohomology. In [*Motives ([S]{}eattle, [WA]{}, 1991)*]{}, volume 55 of [*Proc. Sympos. Pure Math.*]{}, pages 43–70. Amer. Math. Soc., Providence, RI, 1994.
L. Illusie. Grothendieck’s existence theorem in formal geometry. In [*Fundamental algebraic geometry*]{}, volume 123 of [ *Mathematical Surveys and Monographs*]{}, pages 179–233. Amer. Math. Soc., Providence, RI, 2005.
U. Jannsen. , volume 1400 of [ *Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 1990.
S. Kebekus, L. Solá Conde, and M. Toma. , 16(1):65–81, 2007.
K. Kodaira and D. C. Spencer. Divisor class groups on algebraic varieties. , 39:872–877, 1953.
K. Kodaira and D. C. Spencer. Groups of complex line bundles over compact [K]{}ähler varieties. , 39:868–872, 1953.
J. J. Kohn, P. A. Griffiths, H. Goldschmidt, E. Bombieri, B. Cenkl, P. Garabedian, and L. Nirenberg. Donald [C]{}. [S]{}pencer (1912–2001). , 51(1):17–29, 2004.
P. Kowalski and A. Pillay. Quantifier elimination for algebraic [$D$]{}-groups. , 358(1):167–181 (electronic), 2006.
S. Lang. , 1:313–318, 1962.
S. Lang. , 3:183–191, 1965.
S. Lang. , 5:363–370, 1966.
S. Lang. Addison-Wesley Series in Mathematics. [Addison-Wesley Publishing Company]{}, 1966.
J. Le Potier. Fibrés de [H]{}iggs et systèmes locaux. [S]{}[é]{}minaire [B]{}ourbaki, [V]{}ol. 1990/91, [E]{}xposé 737. , (201-203):221–268, 1991.
B. Malgrange. Differential algebraic groups. In [*Algebraic approach to differential equations*]{}, pages 292–312. World Sci. Publ., Hackensack, NJ, 2010.
Ju. I. Manin. Algebraic curves over fields with differentiation. , 22:737–756, 1958.
Ju. I. Manin. The [H]{}asse-[W]{}itt matrix of an algebraic curve. , 25:153–172, 1961.
Ju. I. Manin. Rational points on algebraic curves over function fields. , 27:1395–1440, 1963.
D. Marker. Manin kernels. In [*Connections between model theory and algebraic and analytic geometry*]{}, volume 6 of [*Quad. Mat.*]{}, pages 1–21. Dept. Math., Seconda Univ. Napoli, Caserta, 2000.
B. Mazur and W. Messing. . Lecture Notes in Mathematics, Vol. 370. Springer-Verlag, Berlin, 1974.
W. Messing. In [*Symposia Mathematica, Vol. XI (Algebra commut., Geometria, Convegni 1971/1972, Roma INDAM)*]{}, pages 359–372. Academic Press, London–New York, 1973.
D. Mumford. , volume 221 of [*Grundlehren der Mathematischen Wissenschaften*]{}. Springer-Verlag, Berlin, 1976.
T. Oda. The first de [R]{}ham cohomology group and [D]{}ieudonné modules. , 2:63–135, 1969.
A. Pillay. Model theory and [D]{}iophantine geometry. , 34(4):405–422, 1997.
A. Pillay. Some foundational questions concerning differential algebraic groups. , 179(1):179–200, 1997.
A. Pillay. Algebraic [$D$]{}-groups and differential [G]{}alois theory. , 216(2):343–360, 2004.
H. Poincaré. Sur les fonctions abéliennes. , 26:43–98, 1902.
V. Puiseux. Recherches sur les fonctions alg[é]{}briques. , 15:365–480, 1850.
V. Puiseux. Nouvelles recherches sur les fonctions alg[é]{}briques. , 16:228–240, 1851.
Mich[è]{}le Raynaud. . Société Mathématique de France, Paris, 1975. Bull. Soc. Math. France, M[é]{}m. No. 41, Suppl[é]{}ment au Bull. Soc. Math. France, Tome 103.
R. Remmert. Meromorphe [F]{}unktionen in kompakten komplexen [R]{}äumen. , 132:277–288, 1956.
B. Riemann. Theorie der [A]{}bel’schen [F]{}unctionen. , 54:115–155, 1857.
M. Rosenlicht. Extensions of vector groups by abelian varieties. , 80:685–714, 1958.
Th. Schneider. Zur [T]{}heorie der [A]{}belschen [F]{}unktionen und [I]{}ntegrale. , 183:110–128, 1941.
Th. Schneider. . Springer-Verlag, Berlin, 1957.
J.-P. Serre. Fonctions automorphes: quelques majorations dans le cas où ${X}/{G}$ est compact. , 6, 1953–1954. Exposé 2.
J.-P. Serre. Géométrie algébrique et géométrie analytique. , 6:1–42, 1955–1956.
J.-P. Serre. . Publications de l’institut de mathématique de l’université de Nancago, VII. Hermann, Paris, 1959.
F. Severi. , volume 20 of [*Pontificiae Academiae Scientiarum Scripta Varia*]{}. Pontificia Academia Scientiarum, Vatican City, second augmented edition, 1961.
I.R. Shafarevich. , 1977.
C. L. Siegel. Meromorphe [F]{}unktionen auf kompakten analytischen [M]{}annigfaltigkeiten. , 1955:71–77, 1955.
C. T. Simpson. Moduli of representations of the fundamental group of a smooth projective variety. [I]{}. , (79):47–129, 1994.
C. T. Simpson. Moduli of representations of the fundamental group of a smooth projective variety. [II]{}. , (80):5–79 (1995), 1994.
W. Thimm. Über meromorphe [A]{}bbildungen von komplexen [M]{}annigfaltigkeiten. , 128:1–48, 1954.
M. Waldschmidt. , volume 69-70 of [*Astérisque*]{}. Société Mathématique de France, Paris, 1979. With appendices by D. Bertrand and J.-P. Serre.
G. W[ü]{}stholz. , 78:381–391, 1984.
O. Zariski. Theory and applications of holomorphic functions on algebraic varieties over arbitrary ground fields. , 1951(5):90, 1951.
O. Zariski. , volume 61 of [*Ergebnisse der Mathematik und ihrer Grenzgebiete*]{}. Springer-Verlag, New York, supplemented edition, 1971. With appendices by S. S. Abhyankar, J. Lipman, and D. Mumford.
[^1]: We refer the reader to [@Buium92], [@Buium94], and [@Pillay97BAMS], [@Bouscaren98], [@Marker2000] for more systematic presentations, surveys, and additional references.
[^2]: ANR-09-BLAN-0047.
[^3]: that is, given on the Zariski-open set $U_{\alpha}\cap U_{\beta}$ by the quotient of two (nonvanishing over $U_{\alpha}\cap U_{\beta}$) homogeneous polynomials of the same degree on $\C^{N+1}.$
[^4]: Conversely, to recover Kodaira-Spencer’s version from the Lefschetz-Hodge’s, one needs to know that any topologically trivial analytic line bundle over $X$ is algebraic: this follows from the algebraicity of the Albanese variety and of the Albanese morphism of $X$, and from the algebraicity of analytic line bundles over complex abelian varieties. But for the algebraicity of the Albanese morphism, itself a consequence of Chow’s theorem ([[*cf.* ]{}]{}2.3.1 *infra*), these results are actually consequences of Hodge theory and of Lefschetz’s work on complex abelian varieties.
[^5]: Curiously enough, Siegel points out the relation of Chow’s paper with Poincaré’s article, but does not seem aware that Chow’s Theorem may be derived from (\[degtrleqdim\]).
[^6]: In a more mundane vein, I would simply add that an especially negative assessment by Lefschetz of the approach of Kodaira-Spencer [@KodairaSpencer53II] turns out to be well documented (see for instance [@SpencerVita04], p. 21).
[^7]: The precise definition of the map $\alpha \mapsto \alpha^{{{\rm an}}}_{\C}$ actually involves the specific sign conventions used in homological algebra and sheaf cohomology. The “standard” convention used in [@Deligne71] indeed introduces a minus sign in the above compatibility relation: $c_{1, {{\rm dR}}}(L)_{C}^{{{\rm an}}} = - 2\pi i\, c_{1,{{\rm top}}}(L_{\C}^{{{\rm an}}}).$
In the sequel, we shall generally neglect these delicate problems of signs involved in various “canonical” isomorphisms and their compatibility — although the important sign issue encountered in Section \[PrelAb\] (see notably (\[sign\]) and (\[altphi\])) would plead for a more careful treatment, on the model of [@BerthelotBreenMessing82], Section V.1.
[^8]: To “algebraize” an analytic connection $\nabla^{{\rm an}}$ over $E^{{\rm an}}$ by means of GAGA Comparison Theorem, identify (algebraic or analytic) connections with (algebraic or analytic) splittings of the Atiyah extension of $E$, $0\rightarrow \Omega^1_{M}\otimes E \rightarrow J_{M}^1E \rightarrow E \rightarrow 0$, defined by the vector bundle $J_{M}^1E$ of 1-jets of $E$ over $M$.
[^9]: This occurrence of commutative algebraic groups over $\C$ that are analytically, but not algebraically, isomorphic was first pointed out by Conforto; see [@Conforto48], [@Conforto49II], and [@Severi61], Appendice.
[^10]: In other words, for every $t \in \C$, $f(t)=(F_{0}(t):\cdots:F_{N}(t)).$
[^11]: In other words, it is deduced by extension of scalars from $K$ to $\C$ from a formal germ in the formal completion $\widehat{X}_{f(z)}$ of $X$ at the $K$-rational point $f(z)$.
[^12]: that is, when the “irregularity” $h^{1,0} (X) = h^{0,1} (X)$ of $X$ is positive.
[^13]: This conjecture is mentioned briefly in [@Grothendieck66] (note (10) p.102) and with more details in [@Lang66b] (Historical Note of Chapter IV). We refer the reader to [@AndreMotives04], Section 7.5 and Chapitre 23 for a “modern” presentation and for variants and generalizations.
[^14]: Notably the original Grothendieck Period Conjecture for a given smooth projective variety $X$ over ${{\overline {\mathbb Q}}}$ should imply the conjunction of Conjectures $GPC^k(X^n)$ for all positive integers $k$ and $n$.
[^15]: Where ${{\rm Ext}}^1_{{\rm{c-gp}}/k}$ and ${{\rm Ext}}^1_{{{\mathcal O}}_{A}-\text{mod}}$ stand for “group of 1-extensions” in the category of commutative algebraic groups over $k$, and of sheaves of ${{\mathcal O}}_{A}$-modules, respectively.
[^16]: Recall that $\sigma^{\geq 1} \Omega^\bullet_{{\hat{A}}/k}$ denotes the “stupid” truncation $0\rightarrow \Omega^1_{{\hat{A}}/k} \rightarrow \Omega^2_{{\hat{A}}/k}\rightarrow \cdots$ of $\Omega^\bullet_{{\hat{A}}/k}$.
[^17]: Both the above isomorphism $\iota_{E(A)}$ at the level of $k$-points and this infinitesimal version are special instances of a canonical isomorphism $\iota_{E(A)}$ of fpqc $k$-sheaves; [[*cf.* ]{}]{}[@MazurMessing74], [@BK09].
[^18]: Namely the elements sent to their opposite by the automorphism of ${\operatorname{Hom}}_{dRB}(\Z(0), H^1_{{{\rm dRB}}}(A)\otimes H^1_{{{\rm dRB}}}(A)\otimes \Z(1))$ defined by “switching” the two copies of $H^1_{{{\rm dRB}}}(A)$.
[^19]: This section could be skipped at first reading. It has been included since Proposition \[KummerTransc\] constitutes an application of the theorem of Schneider-Lang close in spirit to the ones in the previous section, and for comparison with Conjecture \[Main\] *infra*.
[^20]: quoted in [@Illusie94], Section 4.1: *“a crystal possesses two characteristic properties: rigidity, and the ability to grow in an appropriate neighborhood. There are crystals of every kind of substance: crystals of soda, of sulfur, of modules, of rings, of relative schemes, etc.”*
[^21]: In other words, its sheaf of regular sections is closed under Lie bracket.
[^22]: Such bases exist: simply write $k(S)$ as a finite degree extension of $k(X_{1},\ldots,X_{s})$, and lift the standard basis $(\partial/\partial X_{1},\ldots,\partial/\partial X_{s})$.
[^23]: As usual, by this we mean a short exact sequence of fppf sheaves over $S$. Since we work over a base field $k$ of characteristic zero, this is equivalent to the following “geometric” condition, expressed in terms of some algebraic closure ${\overline}{k}$ of $k$: for any point $s \in S({\overline}{k}),$ the diagram $$0 {\longrightarrow}G_{2s}({\overline}{k}) \stackrel{i_{s}}{{\longrightarrow}} G_{s}({\overline}{k}) \stackrel{p_{s}}{{\longrightarrow}} G_{1s}({\overline}{k}) {\longrightarrow}0$$ is a short exact sequence of abelian groups.
[^24]: By a “complex Lie group over a complex analytic manifold $M$”, we mean a group object in the category of complex analytic manifolds “smooth” (in the “algebrogeometric” sense, that is “submersive”) over $M$.
[^25]: In [@Simpson94II], this $S$-scheme is denoted ${\mathbf R}_{\rm DR}(M/S,o,N)$.
[^26]: The content of Sections \[ED\] and \[ExtD\] is thoroughly discussed, with a slightly different perspective, in [@BertrandPillay10], Part 3 and Appendix, which constitutes the main reference for these two sections.
[^27]: that is, line bundles rigidified along ${{\mathcal J}}$, and algebraically equivalent to zero in the fibers of ${{\mathcal J}}$ over $C$.
[^28]: This variant is actually simpler than Proposition \[MainGPCA\]: its proof does not require Theorem \[GPCA\] and its relative generalization.
---
abstract: 'We report transport measurements of composite Fermions at filling factor $\nu=3/2$ in AlAs quantum wells as a function of strain and temperature. In this system the composite Fermions possess a valley degree of freedom and show piezoresistance qualitatively very similar to electrons. The temperature dependence of the resistance ($R$) of composite Fermions shows a metallic behavior ($dR/dT > 0$) for small values of valley polarization but turns insulating ($dR/dT < 0$) as they are driven to full valley polarization. The results highlight the importance of discrete degrees of freedom in the transport properties of composite Fermions and the similarity between composite Fermions and electrons.'
address: 'Department of Electrical Engineering, Princeton University, Princeton, NJ 08544'
author:
- 'T. Gokmen'
- Medini Padmanabhan
- 'M. Shayegan'
title: Temperature dependence of piezoresistance of composite Fermions with a valley degree of freedom
---
Since the discovery of the fractional quantum Hall effect (FQHE) [@TsuiPRL82], a great deal of research has been devoted to understanding the ground state of a two-dimensional electron system (2DES) at high magnetic fields. Although Laughlin’s original wave function successfully explained the first observed FQHE state at filling factor $\nu=1/3$ [@LaughlinPRL83], it is the composite Fermion theory [@JainPRL89; @KalmeyerPRB92; @HalperinPRB93; @CFbook] that has unified the origin of nearly all the fractional states. Composite Fermions (CFs) are formed by the attachment of an even number of magnetic flux quanta to each electron. At exact half-fillings the attached flux cancels out the external magnetic field and the CFs feel zero *effective* magnetic field. They are therefore expected to have Fermi liquid properties and, in particular, form a Fermi sea [@HalperinPRB93; @CFbook]; this has indeed been verified in numerous experiments [@WillettPRL93; @KangPRL93; @GoldmanPRL94].
Although the flux attachment cancels the external magnetic field at half-fillings, any spatial inhomogeneity in the density of electrons (because of the random impurity potential) results in a random, non-zero effective magnetic field that is seen by CFs. Such a random field is expected to suppress the weak localization effect and give rise to a metallic ground state for CFs in a low-disorder 2DES [@KalmeyerPRB92]. With increasing disorder, the CF system can be driven through a metal-insulator transition (MIT) [@KalmeyerPRB92]. Indeed, for CFs at $\nu=1/2$ a metallic temperature dependence at high densities and a disorder-induced MIT (via lowering the density) were experimentally demonstrated [@LiangSSC1997].
Here we report piezoresistance measurements for CFs at $\nu=3/2$ in an AlAs quantum well 2DES. In this system, the CFs possess a valley degree of freedom [@BishopPRL07], and their valley occupation can be controlled via the application of in-plane strain. Our piezoresistance traces at $\nu=3/2$ show an increase in the resistance of CFs and a clear “kink” as the CFs make a two-valley to single-valley transition. This is qualitatively similar to the piezoresistance of electrons at zero magnetic field. The temperature dependence of the piezoresistance reveals that, as for their electron counterparts [@GunawanNature], increasing valley polarization changes the sign of the temperature dependence of the resistance of CFs, signaling the importance of the discrete degrees of freedom in the transport properties of CFs.
We performed experiments on a 2DES confined to a 15 nm thick layer of AlAs, and modulation-doped with Si. Our sample was grown by molecular beam epitaxy on a (001) GaAs substrate. The electrons in this sample occupy two in-plane valleys with elliptical Fermi contours as shown in Fig. 1(b) [@ShayeganPSS2006], each centered at an X point of the Brillouin zone, and with an anisotropic effective mass (longitudinal mass $m_{l}=1.05$ and transverse mass $m_{t}=0.205$, in units of free electron mass). We refer to these valleys by the orientation of their major axis, \[100\] and \[010\]. To vary the occupations of the two valleys we glue our samples to a piezoelectric actuator (piezo), and apply voltage bias to the piezo to stretch the sample in one direction and compress it in the perpendicular direction [@BishopPRL07; @GunawanNature; @ShayeganPSS2006; @GunawanPRL2006; @ShkolnikovAPL2004]. This results in a symmetry breaking strain $\epsilon =\epsilon
_{[100]} - \epsilon _{[010]}$, where $\epsilon_{[100]}$ and $\epsilon_{[010]}$ are the strain values along the \[100\] and \[010\] directions. For $\epsilon > 0$ electrons are transferred from the \[100\] valley to the \[010\] valley and vice-versa for $\epsilon < 0$; in either case the total density remains fixed with strain. The resulting valley splitting energy is given by $E_V = \epsilon E_2$ where $E_2$ is the deformation potential which in AlAs has a band value of 5.8 eV [@ShayeganPSS2006]. We use a metal-foil strain gauge glued to the opposite face of the piezo to measure the applied strain [@ShayeganPSS2006]. The lithographically defined Hall-bar mesa is aligned along the \[110\] direction to pass current at $45^{\circ}$ with respect to the major axes of the valleys so that the antisymmetric piezoresistance due to mass anisotropy [@ShkolnikovAPL2004] is minimized. We used a top gate to vary the electron density ($n$). All measurements were done in a dilution refrigerator with a base temperature ($T$) of 20 mK and an 18 T superconducting magnet.
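The strain-to-energy conversion used above ($E_V = \epsilon E_2$) is simple enough to sketch numerically. The helper below is a hypothetical illustration, not code from the paper; the only input is the AlAs band deformation potential $E_2 = 5.8$ eV quoted in the text, and the example strain of $10^{-4}$ is an assumed typical value.

```python
# Hypothetical numeric sketch (not the authors' analysis code): converts a
# symmetry-breaking strain epsilon into the valley-splitting energy
# E_V = epsilon * E2, with the AlAs band deformation potential E2 = 5.8 eV.

E2_eV = 5.8  # band deformation potential of AlAs, in eV (value quoted in the text)

def valley_splitting_meV(epsilon):
    """Valley splitting E_V (in meV) for a dimensionless strain epsilon."""
    return epsilon * E2_eV * 1e3  # eV -> meV

# Example: an applied strain of 1e-4 gives a splitting of 0.58 meV.
print(valley_splitting_meV(1e-4))  # ≈ 0.58
```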
In Fig. 1(a) we show resistance ($R$) vs. magnetic field ($B$) data at $n=5.47 \times 10^{11}$ cm$^{-2}$ for $\epsilon = 0$ (balanced valleys). In addition to the integer quantum Hall states at low $B$, well-developed FQHE states around $\nu=3/2$ such as $\nu=$ 5/3, 4/3, 8/5 and 7/5 (and also in the second Landau level at $\nu=8/3$ and 7/3) can be seen. After determining the density and the position of $\nu=3/2$ from magnetoresistance data, we take piezoresistance traces at $B=0$ and at $\nu=3/2$ as shown in Figs. 1(d) and (e). Each trace in these figures is normalized to the value of resistance at $\epsilon = 0$ and shifted by 0.3 units vertically for clarity. In the remainder of the paper we first discuss the piezoresistance data taken at $B=0$ and then come back to $\nu=3/2$. Finally, we present data on the temperature dependence of the piezoresistance.
The Fermi contour of the electrons at zero strain, and the $B=0$ piezoresistance data for a range of densities are shown in Figs. 1(b) and (d), respectively. The piezoresistance data show two noteworthy features: First, the resistance increases as strain is swept away from zero. Second, at high values of strain the resistance shows a kink following which it changes very slowly. (Traces taken at high densities do not show the kink because of our limited strain range.) The kink positions in the piezoresistance traces mark the onset of full valley polarization of electrons, as documented previously [@GunawanNature]. The increase in resistance is because of the transfer of electrons between the valleys and can be understood reasonably well by incorporating the anisotropic effective mass of electrons [@DordaPRB1978] and the role of screening and scattering. For the current configuration shown in Fig. 1(b) the prediction of a simple Drude model, which adds the conductivities of the two valleys with an effective mass anisotropy ratio of $r=m_l/m_t=5.12$ and assumes a fixed scattering time, is shown in Fig. 1(d) with a dashed line. This curve is only adjusted to match the kink position and the resistance minimum of the $n=3.93 \times 10^{11}$ cm$^{-2}$ data. This model predicts the ratio of the resistances from saturation to balance to be $R_e^{[110]}=(r+1)^2/4r=1.83$. We point out that there might be additional contributions from screening and scattering effects. Screening becomes less effective with increasing valley polarization and can cause an extra increase in the resistance at larger valley polarizations [@Screening; @GunawanNature]. In contrast to screening, the inter-valley scattering is more pronounced when two valleys are occupied and results in a larger resistance around balance. 
Since the experimentally measured values of $R_e^{[110]}$ range from 1.8 to 2.1 and the data show a faster rise in resistance with strain compared to the simple Drude model, we conclude that the piezoresistance at $B=0$ mainly comes from the mass anisotropy and the loss of screening.
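As a sanity check on the quoted Drude estimate, the two-valley calculation can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: each elliptical valley contributes a conductivity tensor proportional to diag($1/m_l$, $1/m_t$) in its own principal axes; the tensors are summed, inverted to a resistivity tensor, and evaluated along \[110\] (at $45^{\circ}$ to both valley axes). The scattering time and $e^2$ factors are set to 1, since only the balance-to-saturation ratio matters.

```python
import numpy as np

# Hypothetical sketch of the two-valley Drude model with a fixed scattering time.
m_l, m_t = 1.05, 0.205  # AlAs longitudinal/transverse masses (units of m_e)

def rho_110(n_100, n_010):
    """Longitudinal resistivity along [110] for valley densities n_100, n_010."""
    # valley [100] has its heavy mass (m_l) along x; valley [010] along y
    sigma = np.diag([n_100 / m_l + n_010 / m_t,
                     n_100 / m_t + n_010 / m_l])
    rho = np.linalg.inv(sigma)
    c = np.cos(np.pi / 4)
    R = np.array([[c, -c], [c, c]])  # rotate axes by 45 deg into the [110] frame
    return (R @ rho @ R.T)[0, 0]

ratio = rho_110(0.0, 1.0) / rho_110(0.5, 0.5)  # fully polarized vs balanced
r = m_l / m_t
print(ratio, (r + 1) ** 2 / (4 * r))  # both ≈ 1.83, as quoted in the text
```

The numerical ratio reproduces the closed form $R_e^{[110]} = (r+1)^2/4r \approx 1.83$ for $r = 5.12$.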
![(Color online) (a) Magnetoresistance ($R$ vs. $B_{\perp}$) trace at zero strain (balanced valleys). (b) Schematic showing the Fermi contours of the electrons in two valleys and the strain induced inter-valley electron transfer. The current ($I$) is applied along \[110\]. (c) Energy level diagram at $\nu=3/2$ for balanced valleys. (d) and (e) Piezoresistance traces at different densities for electrons and CFs, respectively. The values of density are given in units of $10^{11}$ cm$^{-2}$, and the positions of the “kinks” are marked by vertical lines.](Fig1.eps)
The energy level diagram at $\nu=3/2$ for balanced valleys and the piezoresistance of CFs are shown in Figs. 1(c) and (e), respectively. At high densities, the piezoresistance of CFs exhibits features qualitatively similar to the piezoresistance of electrons: the resistance increases as strain is swept away from zero and then saturates at high strain values, signaling the full valley polarization of CFs. However, there are several differences. The resistance ratio $R_{CF}^{[110]}$ for CF piezoresistance from balance to saturation is smaller than $R_e^{[110]}$, and the kink occurs at smaller strain values for CFs compared to electrons. Furthermore, at low densities, we observe a dramatic rise in the piezoresistance at higher strains beyond the kink. We address these points in the following three paragraphs.

By making an analogy to electrons, the kink position in the $\nu=3/2$ piezoresistance traces can be associated with the full valley polarization of CFs, and the strain value at the kink position gives the valley splitting energy that is equal to the Fermi energy of CFs. Since the CF Fermi sea is a direct manifestation of the Coulomb interaction, the valley splitting energy needed to valley polarize the CFs is determined by the Coulomb energy, which is quantified by the magnetic length [@BishopPRL07; @PadmanabhanPRB09]. The kink positions in the $\nu=3/2$ piezoresistance traces in Fig. 1(e) are indeed in agreement with the results of previous studies of CF valley polarization energies, determined from coincidence measurements of CF Landau levels [@BishopPRL07; @PadmanabhanPRB09].
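To put rough numbers on this Coulomb scale, one can estimate the field and magnetic length at $\nu=3/2$ for the density of Fig. 1(a). The back-of-the-envelope sketch below is not from the paper: the AlAs dielectric constant $\epsilon_r \approx 10$ is an assumed round value, so the resulting Coulomb energy $e^2/(4\pi\epsilon_0\epsilon_r l_B)$ is only an order-of-magnitude scale, of which the actual CF valley-polarization energy is a small fraction.

```python
import math

# Hypothetical estimate: B = n*h/(nu*e) at filling nu, l_B = sqrt(hbar/(e*B)),
# and the Coulomb scale E_C = e^2 / (4*pi*eps0*eps_r*l_B). eps_r ~ 10 for AlAs
# is an assumed value, not a number taken from the paper.
h, hbar, e, eps0 = 6.62607e-34, 1.05457e-34, 1.60218e-19, 8.85419e-12
eps_r = 10.0  # assumed AlAs relative dielectric constant

def coulomb_scale_meV(n_cm2, nu=1.5):
    n = n_cm2 * 1e4                    # cm^-2 -> m^-2
    B = n * h / (nu * e)               # perpendicular field at filling nu
    l_B = math.sqrt(hbar / (e * B))    # magnetic length
    E_C = e ** 2 / (4 * math.pi * eps0 * eps_r * l_B)
    return B, E_C / e * 1e3            # field in tesla, energy in meV

B, E_C = coulomb_scale_meV(5.47e11)
print(round(B, 1), round(E_C, 1))  # ≈ 15.1 T and ≈ 21.8 meV
```

The field of about 15 T is consistent with the 18 T magnet mentioned in the experimental section.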
The experimentally measured value of the resistance ratio $R_{CF}^{[110]}$ for CFs from balance to saturation is 1.4 for the highest density, and drops almost to unity for the lowest densities. We note that the experimentally measured value of $R_{CF}^{[110]}$ is smaller than $R_e^{[110]}$. Similar to $R_e^{[110]}$, we expect that $R_{CF}^{[110]}$ would be affected by the effective mass anisotropy of CFs and the screening/scattering effects. Our piezoresistance measurements along the \[100\] direction indeed show that CFs inherit the transport anisotropy of electrons at zero field, suggesting an anisotropy in the CF effective mass [@GokmenUnpublished]. However, we emphasize that the observed transport anisotropy of CFs along \[100\] is smaller than that of electrons, and is therefore qualitatively consistent with the observation of $R_{CF}^{[110]} < R_e^{[110]}$.
Another feature of the piezoresistance traces at $\nu=3/2$ for low densities is the significant increase in resistance for high strains. The reason for this increase is the coincidence of the *electron* Landau levels. As can be surmised from Fig. 1(c), for sufficiently large strains, when the valley splitting energy is equal to the electron cyclotron energy, the lowest electron Landau level of one valley coincides with the second Landau level of the other valley. Beyond this coincidence, the electrons are fully valley polarized. Note that the increase in resistance at $\nu=3/2$ occurs roughly at the same strain value where the $B=0$ piezoresistance traces show the kink and signal full electron valley polarization. We add that at much higher strains, well past the coincidence, the piezoresistance at $\nu=3/2$ saturates again, consistent with the $B=0$ data.
Now we present the temperature dependence of the CF piezoresistance at $\nu=3/2$. In Figs. 2(b) and 2(c) we show piezoresistance traces for CFs at two densities. The data in Fig. 2(b) reveal that, at high densities and for small strains so that the valley polarization is small, CFs exhibit a metallic behavior ($dR/dT > 0$). With increasing strain, however, the resistance turns insulating ($dR/dT < 0$) as CFs become valley polarized. This observation demonstrates the importance of the discrete degrees of freedom for the temperature dependence of resistance of CFs. At lower densities (Fig. 2(c)) the metallic behavior around zero strain disappears, and the CFs act insulating in the full strain range, including at $\epsilon=0$ where they are valley degenerate.
Before presenting more details of our data, we briefly discuss the temperature dependence of the resistance of 2D *electrons* and the related MIT. The scaling theory predicts an insulating phase for a non-interacting 2DES with arbitrarily weak disorder, thanks to the weak localization of electrons [@AbrahamsPRL1979]. However, experiments in high quality Si 2DESs [@KravchenkoPRB1994] showed that, at high densities, the 2DES exhibits a metallic temperature dependence, and that the system can be driven to an insulating phase by lowering the 2DES density (increasing the disorder). Similar behavior has been reported for several other 2DESs [@AbrahamsRevModPhys01]. Although there is no consensus whether or not there exists a true metallic ground state (in the limit of zero temperature) for a low-disorder 2DES, it is generally believed that the observed metallic behavior is due to the interplay of disorder, interaction and finite temperature effects. Additional measurements have also shown that the spin polarization of the 2DES plays a critical role in the apparent MIT: The system’s behavior changes from metallic to insulating with increasing spin polarization [@Screening]. A recent study [@GunawanNature] revealed that, for AlAs 2DESs, not only spin but also valley polarization is an important parameter for the apparent MIT, namely, the 2DES exhibits an insulating behavior when both valley and spin polarizations pass beyond some threshold value.
We illustrate this behavior for our AlAs 2DES sample in Fig. 2(a) where we show piezoresistance traces for electrons at a very high parallel magnetic field such that the 2D electrons are fully spin polarized ($B_{\perp}=0$ in Fig. 2(a) trace). Both qualitatively and quantitatively our results are consistent with the results of Ref. [@GunawanNature]. At zero and small parallel fields, where the 2DES is not spin-polarized, it shows a metallic behavior in the entire strain range (data not shown). But when the 2DES is fully spin polarized via the application of a large parallel magnetic field, increasing the valley polarization changes the temperature dependence of resistance from metallic to insulating (Fig. 2(a)).
Our $\nu=3/2$ data in Fig. 2(b) show that the CFs qualitatively behave like the electrons. Note that because of the large g-factor, the Zeeman energy in our sample is comparable to and even larger than the cyclotron energy, as illustrated by the energy level diagram in Fig. 1(c). Therefore, the CFs at $\nu=3/2$ are fully spin polarized while we control their valley occupation via the application of strain. The electron data of Fig. 2(a) and CF data of Fig. 2(b) imply that, when the spin degree of freedom is frozen, (high density) electrons and CFs both show a metallic behavior when they have a valley degree of freedom. However, both systems show insulating behavior when the valley polarization is above some threshold value.
We observe the metallic behavior in the absence of strain and the valley polarization driven MIT for CFs in the high density range ($n>3.93 \times 10^{11}$ cm$^{-2}$). As the density is lowered, the metallic temperature dependence disappears and CFs exhibit an insulating behavior regardless of their valley polarization (Fig. 2(c)). This is consistent with the increasing disorder in the system at lower densities, and similar to the disorder-induced MIT of CFs demonstrated by Ref. [@LiangSSC1997]. We emphasize, however, that the CFs seem to be more sensitive to disorder than electrons: In the absence of strain, the density below which the *electrons* exhibit an insulating behavior is $\sim 1 \times 10^{11}$ cm$^{-2}$, while for the $\nu=3/2$ CFs it is $\sim 4 \times 10^{11}$ cm$^{-2}$.
Our piezoresistance results demonstrate that CFs and electrons show qualitatively very similar behaviors, while differing quantitatively in several ways. First, the valley splitting energy required to completely valley polarize the CFs is smaller than that of the electrons, and this difference is reasonably well understood [@BishopPRL07; @PadmanabhanPRB09]. Second, the piezoresistance ratio $R_{CF}^{[110]}$ from balance to saturation for CFs is smaller than the corresponding ratio $R_e^{[110]}$ for electrons and has a stronger temperature dependence. Third, the temperature dependence of the resistance, and the observation of an insulating behavior with increasing valley polarization or decreasing density, are qualitatively similar for CFs and electrons.
We thank the NSF for support. Part of this work was performed at the NHMFL, Tallahassee, which is also supported by the NSF. We thank E. Palm, T. Murphy, J. Park, and G. Jones for assistance.
[00]{}
D.C. Tsui, H.L. Stormer, and A.C. Gossard, Phys. Rev. Lett. [**48**]{}, 1559 (1982).
R.B. Laughlin, Phys. Rev. Lett. [**50**]{}, 1395 (1983).
J.K. Jain, Phys. Rev. Lett. [**63**]{}, 199 (1989).
V. Kalmeyer and S.C. Zhang, Phys. Rev. B [**46**]{}, 9889 (1992).
B.I. Halperin, P.A. Lee, and N. Read, Phys. Rev. B [**47**]{}, 7312 (1993).
*Composite Fermions*, Jainendra K. Jain, Cambridge University Press, 2007.
R.L. Willett, R.R. Ruel, K.W. West, and L.N. Pfeiffer, Phys. Rev. Lett. [**71**]{}, 3846 (1993).
W. Kang, H.L. Stormer, L.N. Pfeiffer, K.W. Baldwin, and K.W. West, Phys. Rev. Lett. [**71**]{}, 3850 (1993).
V.J. Goldman, B. Su, and J.K. Jain, Phys. Rev. Lett. [**72**]{}, 2065 (1994).
C.T. Liang, J.E.F. Frost, M.Y. Simmons, D.A. Ritchie, and M. Pepper, Sol. Stat. Comm. [**102**]{}, 327 (1997).
N.C. Bishop, M. Padmanabhan, K. Vakili, Y.P. Shkolnikov, E.P. De Poortere, and M. Shayegan, Phys. Rev. Lett. [**98**]{}, 266404 (2007).
O. Gunawan, T. Gokmen, K. Vakili, M. Padmanabhan, E.P. De Poortere, and M. Shayegan, Nature Physics [**3**]{}, 388 (2007).
M. Shayegan, E.P. De Poortere, O. Gunawan, Y.P. Shkolnikov, E. Tutuc, and K. Vakili, Phys. Stat. Sol. (b) [**243**]{}, 3629 (2006).
O. Gunawan, Y.P. Shkolnikov, K. Vakili, T. Gokmen, E.P. De Poortere, and M. Shayegan, Phys. Rev. Lett. [**97**]{}, 186404 (2006).
Y.P. Shkolnikov, K. Vakili, E.P. De Poortere, and M. Shayegan, Appl. Phys. Lett. [**85**]{}, 3766 (2004).
G. Dorda, I. Eisele, and H. Gesch, Phys. Rev. B [**17**]{}, 1785 (1978).
This is analogous to the case of spin in many 2DESs where the resistance increases by a factor of up to 5 when the spin polarization changes from zero to one and screening is diminished \[S. Das Sarma and E.H. Hwang, Phys. Rev. B [**72**]{}, 205303 (2005)\]. For experimental work, see: \[D. Simonian *et al.*, Phys. Rev. Lett. [**79**]{}, 2304 (1997); T. Okamoto *et al.*, Phys. Rev. Lett. [**82**]{}, 3875 (1999); E. Tutuc *et al.*, Phys. Rev. Lett. [**86**]{}, 2858 (2001)\].
M. Padmanabhan, T. Gokmen, and M. Shayegan, Phys. Rev. B [**80**]{}, 035423 (2009).
T. Gokmen, M. Padmanabhan, and M. Shayegan, Nature Physics [**6**]{}, 621 (2010).
E. Abrahams, P.W. Anderson, D.C. Licciardello, and T.V. Ramakrishnan, Phys. Rev. Lett. [**42**]{}, 673 (1979).
S.V. Kravchenko, G.V. Kravchenko, J.E. Furneaux, V.M. Pudalov, and M. D’Iorio, Phys. Rev. B [**50**]{}, 8039 (1994).
E. Abrahams, S.V. Kravchenko, and M.P. Sarachik, Rev. Mod. Phys. [**73**]{}, 251 (2001).
---
abstract: |
We use symmetry considerations in order to predict new magnetoelectric fluorides. In addition to these magnetoelectric properties, we discuss among these fluorides the ones susceptible to present multiferroic properties. We emphasize that several materials present ferromagnetic properties. This ferromagnetism should enhance the interplay between magnetic and dielectric properties in these materials.
address: |
Solid State Chemistry Laboratory, Zernike Institute for Advanced Materials,\
University of Groningen, Nijenborg 4, 9747 AG Groningen, The Netherlands
author:
- 'G. Nénert\*, and T. T. M. Palstra'
title: Prediction for new magnetoelectric fluorides
---
Introduction
============
In recent years, there has been a renewed interest in the coexistence and interplay of magnetism and electrical polarization [@Fiebig; @Eerenstein; @Maxim]. This interest has concentrated on multiferroics and magnetoelectric materials. In multiferroics, a spontaneous polarization coexists with long range magnetic order. In magnetoelectrics (we consider here only the linear effect), the polarization is induced by a magnetic field in a magnetically ordered phase [@ITC]. In the Landau theory framework, multiferroics which are not magnetoelectric present at least a coupling of the type P$^{2}M^{2}$ (P: polarization, M: total magnetization), while linear magnetoelectrics are characterized by terms like PM$^{2}$ or LMP (L: antiferromagnetic order parameter) [@Toledano]. Terms like P$^{2}M^{2}$ are of higher degree than PM$^{2}$ or LMP terms. Consequently, we expect a stronger interplay between dielectric and magnetic properties in linear magnetoelectrics than in simple multiferroics (e.g. YMnO$_{3}$ [@Agung]). More complicated coupling terms, e.g. those involving magnetization gradients, can also characterize the magnetoelectric effect [@Harris]; such terms are outside the scope of the present contribution. In the search for materials presenting a strong coupling of magnetism and polarization, the most promising ones are multiferroics presenting linear magnetoelectric properties. Since such materials are scarce, it is of interest in its own right to look for new magnetoelectric materials.
Recent efforts have concentrated on two main ideas: magnetic frustration and the breaking of the inversion center by antiferromagnetic ordering. These approaches were generated by the ideas of Katsura [@Katsura] and Sergienko [@Sergienko] on one side, and of Mostovoy [@Mostovoy] on the other: for non-collinear magnets, they described a possible mechanism for magnetoelectricity and for polarization induced by antiferromagnetic ordering, respectively. The mechanism proposed by Katsura *et al.* does not involve the Dzyaloshinskii-Moriya (DM) interaction, contrary to typical magnetoelectric compounds such as Cr$_{2}$O$_{3}$ [@Cr2O3]. Most of the recent research on multiferroics concerns centrosymmetric oxides [@nature]. These materials present a breaking of the symmetry giving rise to a spontaneous polarization which may or may not be reversible by application of a magnetic field. The idea of using symmetry analysis to predict magnetoelectric compounds is not new: the first reported magnetoelectric compound, Cr$_{2}$O$_{3}$, was predicted to be magnetoelectric prior to any experimental evidence [@Dzialoshinskii]. It is the same philosophy that we aim to follow here.
In this article, we present a symmetry analysis of selected materials which, based on symmetry arguments, should all present magnetoelectricity. We made a literature survey of various magnetically ordered compounds for which neutron data were available, and carried out a systematic symmetry analysis of all the studied compounds (about 50 materials). We present here only our investigation of selected fluorides. This choice is motivated by two reasons. The first is that, since magnetoelectric/multiferroic materials are scarce, there is a need to look for materials other than oxides. The second is that polarization cannot exist in conducting materials; the high charge transfer in the fluorides thus makes them good candidates for experimental investigations.
Several fluorides were reported to crystallize in a polar structure. Consequently, in addition to magnetoelectric properties, several fluorides are potentially ferroelectric. When this is the case, we discuss this possibility in the light of known ferroelectrics related to the material under investigation. All the compounds discussed in this article have been the subject of detailed crystallographic and magnetic studies by means of neutron diffraction. We present below the results of our search for new magnetoelectric fluorides.
Study of $\alpha$-KCrF$_{4}$
============================
$\alpha$-KCrF$_{4}$ is the first of the selected fluorides with possible magnetoelectric properties that we present. The crystal structure of $\alpha$-KCrF$_{4}$ is orthorhombic (space group $Pnma$ (n$^{\circ}$62), *a* = 15.76 Å, *b* = 7.43 Å, *c* = 18.38 Å). It consists of infinite columns of CrF$_{6}$ octahedra sharing edges along the *b* axis (see Fig. \[KCrF4-1\]) [@kissel].
![Crystal structure of KCrF$_{4}$ projected along the *b* (left) and *c* (right) axes. We show the Cr$^{3+}$ sites in their octahedral environment. The white atoms are the K$^{+}$ atoms. The different grey scales represent the three inequivalent Cr$^{3+}$ sites.[]{data-label="KCrF4-1"}](KCrF4-1 "fig:"){width="8cm"}\
This compound presents a high degree of magnetic frustration among the fluorides discussed here. It orders antiferromagnetically only below T$_{N}$ = 4 K, with a quasi-1D behavior. We present in Fig. \[KCrF4-2\] a representation of its magnetic structure as determined from neutron scattering [@lacorre1].
![Magnetic structure of KCrF$_{4}$ in the (*a*,*c*) plane. Arrows indicate the magnetic moments on the chromium atoms with a quasi-120$^{\circ}$ configuration.[]{data-label="KCrF4-2"}](KCrF4-2 "fig:"){width="8cm"}\
There are three inequivalent Cr$^{3+}$ ions per unit cell, each occupying the Wyckoff position 8d. Consequently, we have eight different magnetic sites, all carrying one spin S$_{j}$. We can define the following eight magnetic vectors (one ferromagnetic and seven antiferromagnetic ones):
$$\begin{aligned}
\label{1}
&\overrightarrow{M}=\overrightarrow{S_{1}}+\overrightarrow{S_{2}}+\overrightarrow{S_{3}}+\overrightarrow{S_{4}}+\overrightarrow{S_{5}}+\overrightarrow{S_{6}}+\overrightarrow{S_{7}}+\overrightarrow{S_{8}},\\
&\overrightarrow{L_{1}}=\overrightarrow{S_{1}}-\overrightarrow{S_{2}}+\overrightarrow{S_{3}}-\overrightarrow{S_{4}}+\overrightarrow{S_{5}}-\overrightarrow{S_{6}}+\overrightarrow{S_{7}}-\overrightarrow{S_{8}},\\
&\overrightarrow{L_{2}}=\overrightarrow{S_{1}}+\overrightarrow{S_{2}}-\overrightarrow{S_{3}}-\overrightarrow{S_{4}}+\overrightarrow{S_{5}}+\overrightarrow{S_{6}}-\overrightarrow{S_{7}}-\overrightarrow{S_{8}},\\
&\overrightarrow{L_{3}}=\overrightarrow{S_{1}}-\overrightarrow{S_{2}}-\overrightarrow{S_{3}}+\overrightarrow{S_{4}}+\overrightarrow{S_{5}}-\overrightarrow{S_{6}}-\overrightarrow{S_{7}}+\overrightarrow{S_{8}},\\
&\overrightarrow{L_{4}}=\overrightarrow{S_{1}}+\overrightarrow{S_{2}}+\overrightarrow{S_{3}}+\overrightarrow{S_{4}}-\overrightarrow{S_{5}}-\overrightarrow{S_{6}}-\overrightarrow{S_{7}}-\overrightarrow{S_{8}},\\
&\overrightarrow{L_{5}}=\overrightarrow{S_{1}}-\overrightarrow{S_{2}}+\overrightarrow{S_{3}}-\overrightarrow{S_{4}}-\overrightarrow{S_{5}}+\overrightarrow{S_{6}}-\overrightarrow{S_{7}}+\overrightarrow{S_{8}},\\
&\overrightarrow{L_{6}}=\overrightarrow{S_{1}}+\overrightarrow{S_{2}}-\overrightarrow{S_{3}}-\overrightarrow{S_{4}}-\overrightarrow{S_{5}}-\overrightarrow{S_{6}}+\overrightarrow{S_{7}}+\overrightarrow{S_{8}},\\
&\overrightarrow{L_{7}}=\overrightarrow{S_{1}}-\overrightarrow{S_{2}}-\overrightarrow{S_{3}}+\overrightarrow{S_{4}}-\overrightarrow{S_{5}}+\overrightarrow{S_{6}}+\overrightarrow{S_{7}}-\overrightarrow{S_{8}}\end{aligned}$$
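The eight vectors defined above are simply the eight $\pm 1$ sign patterns applied to the spins $\overrightarrow{S_{1}}\ldots\overrightarrow{S_{8}}$. As a quick sanity check (a Python sketch added for illustration, not part of the original analysis), one can verify that these patterns are mutually orthogonal, so the transformation from the spins to $(\overrightarrow{M},\overrightarrow{L_{1}}\ldots\overrightarrow{L_{7}})$ is invertible:

```python
# Sign patterns of the eight magnetic vectors defined in the text,
# one coefficient per spin S1..S8.
patterns = {
    "M":  [+1, +1, +1, +1, +1, +1, +1, +1],
    "L1": [+1, -1, +1, -1, +1, -1, +1, -1],
    "L2": [+1, +1, -1, -1, +1, +1, -1, -1],
    "L3": [+1, -1, -1, +1, +1, -1, -1, +1],
    "L4": [+1, +1, +1, +1, -1, -1, -1, -1],
    "L5": [+1, -1, +1, -1, -1, +1, -1, +1],
    "L6": [+1, +1, -1, -1, -1, -1, +1, +1],
    "L7": [+1, -1, -1, +1, -1, +1, +1, -1],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

names = list(patterns)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert dot(patterns[a], patterns[b]) == 0  # mutually orthogonal

print("all", len(names), "sign patterns are mutually orthogonal")
```

The product of any two distinct patterns is again one of the antiferromagnetic patterns, which is why every off-diagonal dot product vanishes.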
Lacorre and collaborators have also investigated the transformation properties of the different components of the magnetic vectors. We reproduce in Table \[KCrF4-IRs\] the results of their derivations [@lacorre1].
[|[c]{}|[c]{}|]{} IR & Magnetic components\
$\Gamma_{1}$ & L$_{1x}$, L$_{2y}$, L$_{3z}$\
$\Gamma_{2}$ & M$_{x}$, L$_{3y}$, L$_{2z}$\
$\Gamma_{3}$ & L$_{2x}$, L$_{1y}$, M$_{z}$\
$\Gamma_{4}$ & L$_{3x}$, M$_{y}$, L$_{1z}$\
$\Gamma_{5}$ & L$_{5x}$, L$_{6y}$, L$_{7z}$\
$\Gamma_{6}$ & L$_{4x}$, L$_{7y}$, L$_{6z}$\
$\Gamma_{7}$ & L$_{6x}$, L$_{5y}$, L$_{4z}$\
$\Gamma_{8}$ & L$_{7x}$, L$_{4y}$, L$_{5z}$\
\
As stated above, we need to look for the possible LMP terms allowed by symmetry; these terms are the signature of the linear magnetoelectric effect. For this, we need to know the transformation properties of the polarization components, and it is sufficient to examine how the different polarization components transform under the generators of the space group. In Table \[Polarization-KCrF4\], we present the transformation properties of the polarization components in the space group $Pnma$.
[|[c]{}|[c]{}|[c]{}|[c]{}|]{} & 2$_{1x}$ & 2$_{1z}$ & $\overline{1}$\
P$_{x}$ & 1 & -1 & -1\
P$_{y}$ & -1 & -1 & -1\
P$_{z}$ & -1 & 1 & -1\
\
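The signs in the table above can be reproduced with a short sketch (an illustration added here, not part of the original derivation): a polarization component $P_{i}$ transforms as a polar vector, so it picks up the diagonal sign of the rotation part of each generator.

```python
# Rotation parts of the Pnma generators acting on a polar vector.
# Two-fold rotation about axis k: e_i -> +e_i if i == k, else -e_i.
def twofold(axis):
    return [[(1 if i == axis else -1) if i == j else 0 for j in range(3)]
            for i in range(3)]

def inversion():
    return [[-1 if i == j else 0 for j in range(3)] for i in range(3)]

def signs(matrix):
    # sign picked up by each polarization component P_x, P_y, P_z
    return [matrix[i][i] for i in range(3)]

x, y, z = 0, 1, 2
assert signs(twofold(x)) == [1, -1, -1]    # column 2_1x of the table
assert signs(twofold(z)) == [-1, -1, 1]    # column 2_1z
assert signs(inversion()) == [-1, -1, -1]  # column for inversion
print("polarization sign table reproduced")
```

The screw translations of 2$_{1x}$ and 2$_{1z}$ are irrelevant here, since a homogeneous polarization only feels the point-group (rotation) part of each operation.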
According to Tables \[KCrF4-IRs\] and \[Polarization-KCrF4\], we can determine the allowed LMP terms, which may give rise to an induced polarization under a magnetic field. We know that below T$_{N}$ the magnetic structure is described by the irreducible representation $\Gamma_{6}$. It is experimentally observed that L$_{4x}$$>$L$_{6z}$ and L$_{7y}\simeq$0 [@lacorre1]. Taking into account these experimental results, we find that the most relevant magnetoelectric terms are L$_{4x}$P$_{y}$M$_{z}$ and L$_{4x}$P$_{z}$M$_{y}$. Consequently, an induced polarization may appear along P$_{y}$ (P$_{z}$) if one applies a magnetic field along z (y). Since this compound is centrosymmetric, it cannot present a multiferroic character.
Study of KMnFeF$_{6}$
=====================
The fluoride KMnFeF$_{6}$ presents a partial ordering of the Mn and Fe atoms, giving rise to an enlargement of the unit cell compared to the usual tetragonal tungsten bronze type [@lacorre2]. The family of tetragonal tungsten bronzes and related structures has been extensively investigated for its ferroelectric properties [@ferro]. This compound crystallizes in the space group $Pba2$ (n$^{\circ}$32); the Mn and Fe ions order on the 8c Wyckoff position of the structure and statistically occupy the 4b Wyckoff position. The compound is magnetically frustrated due to the presence of triangular cycles of antiferromagnetic interactions. All the Mn and Fe cations have an octahedral environment of fluorine atoms, and Mn and Fe ions alternate along the *c* axis. The magnetic structure is presented in Fig. \[K2MnFeF6\] [@lacorre2]. Although the ferroelectric properties have not, to our knowledge, been investigated, this compound is likely to present a multiferroic character below T$_{C}$: since many materials of this family are ferroelectric, it is likely that this compound presents such a property.
![Magnetic structure of KMnFeF$_{6}$ in the (*a*,*b*) plane. Arrows indicate the magnetic moments on the iron atoms (mostly along the *a* axis) from [@lacorre2].[]{data-label="K2MnFeF6"}](K2MnFeF6 "fig:"){width="8cm"}\
Although magnetically frustrated, the compound KMnFeF$_{6}$ orders ferrimagnetically below T$_{C}$ = 148 K with a ratio $\frac{\Theta}{T_{C}}$=3. The magnetic unit cell is identical to the chemical one, and thus $\overrightarrow{k}$ = $\overrightarrow{0}$. The symmetry analysis by Bertaut’s method gives the results presented in Table \[table1\] [@lacorre2; @Bertaut].
[|[c]{}|[c]{}|[c]{}|[c]{}|[c]{}|]{} Modes & x & y & z & Magnetic space groups\
$\Gamma_{1}$ & G$_{x}$ & A$_{y}$ & C$_{z}$ & $Pba2$\
$\Gamma_{2}$ & C$_{x}$ & F$_{y}$ & G$_{z}$ & $Pba'2'$\
$\Gamma_{3}$ & A$_{x}$ & G$_{y}$ & F$_{z}$ & $Pb'a'2$\
$\Gamma_{4}$ & F$_{x}$ & C$_{y}$ & A$_{z}$ & $Pb'a2'$\
\
The neutron data show that the best model for the magnetic structure is given by the $\Gamma_{4}$ mode. The corresponding magnetic space group is thus $Pb'a2'$, which has the magnetic point group m’m2’. According to Ref. 4, a linear magnetoelectric effect is allowed, with the following non-zero terms (after transformation of the coordinate system):
$$\centering \label{2} \mathbf{[\alpha_{ij}]} = \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & \alpha_{23} \\
0 & \alpha_{32} & 0 \\
\end{array} \right)$$
We recall that KMnFeF$_{6}$ presents a polar structure and is likely to be ferroelectric. Consequently, KMnFeF$_{6}$ would be a multiferroic material presenting a strong interplay between magnetism and polarization below T$_{C}$ = 148 K. Moreover, we notice that it would be one of the scarce ferrimagnetic compounds presenting such properties. The application of a magnetic field below T$_{C}$ along the *c* axis (the direction of spontaneous polarization) should create a polarization along the *b* axis (term $\alpha_{23}$) and vice versa (term $\alpha_{32}$). Thus it should be possible to switch the polarization direction by applying a magnetic field, which is of high interest for technological applications. Another remarkable feature is that this compound orders at 148 K, much higher than currently known compounds [@nature].
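To make the action of this tensor concrete, here is a small sketch of the linear response $P_{i}=\sum_{j}\alpha_{ij}H_{j}$ for the m’m2’ tensor above; the values of $\alpha_{23}$ and $\alpha_{32}$ are hypothetical placeholders, not measured quantities.

```python
# Placeholder magnetoelectric coefficients (arbitrary units, illustration only).
a23, a32 = 1.0, 0.5
alpha = [[0.0, 0.0, 0.0],
         [0.0, 0.0, a23],
         [0.0, a32, 0.0]]

def induced_polarization(H):
    """Linear magnetoelectric response P_i = sum_j alpha_ij H_j."""
    return [sum(alpha[i][j] * H[j] for j in range(3)) for i in range(3)]

P_from_Hc = induced_polarization([0.0, 0.0, 1.0])  # field along c
P_from_Hb = induced_polarization([0.0, 1.0, 0.0])  # field along b
print(P_from_Hc)  # only the b component (index 1) is non-zero
print(P_from_Hb)  # only the c component (index 2) is non-zero
```

A field along *c* induces a polarization along *b* through $\alpha_{23}$, and a field along *b* induces a polarization along *c* through $\alpha_{32}$, matching the discussion above.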
Study of 2 members of the Ba$_{6}$M$_{n}$F$_{12+2n}$ family
===========================================================
In the previous fluorides, the magnetic frustration appeared in corner-sharing octahedra, which leads to a single type of interaction. P. Lacorre and coworkers have also investigated compounds like Ba$_{2}$Ni$_{3}$F$_{10}$ (n = 9) and Ba$_{2}$Ni$_{7}$F$_{18}$ (n = 21), which are members of the Ba$_{6}$M$_{n}$F$_{12+2n}$ family [@lacorre3; @lacorre4]. In this family with M = Ni, there are not only corner-sharing but also edge-sharing octahedra, and both types of interaction exist in the Ba$_{2}$Ni$_{3}$F$_{10}$ and Ba$_{2}$Ni$_{7}$F$_{18}$ compounds. These compounds have been investigated by means of powder neutron diffraction at room and low temperatures.
We start with the Ba$_{2}$Ni$_{3}$F$_{10}$ material. This compound crystallizes in the space group $C2/m$ (n$^{\circ}$12), containing three different Ni$^{2+}$ sites per unit cell: two Ni ions occupy the Wyckoff position 4i and the third occupies the Wyckoff position 4h. Below T$_{N}$ = 50 K, an antiferromagnetic ordering develops, characterized by the magnetic wave-vector $\overrightarrow{k}$=(0,0,1/2). The (hkl) magnetic reflections do not satisfy the C-centering of the chemical cell but instead correspond to a primitive lattice. P. Lacorre and collaborators have shown that the magnetic space group is $P2/m'$, with the magnetic moments lying in the *ac* plane. Consequently, the magnetic point group of this compound below T$_{N}$ is 2/m’. According to Ref. 4, a linear magnetoelectric effect is allowed, with the following expression:
$$\centering \label{2} \mathbf{[\alpha_{ij}]} = \left(
\begin{array}{ccc}
\alpha_{11} & 0 & \alpha_{13} \\
0 & \alpha_{22} & 0 \\
\alpha_{31} & 0 & \alpha_{33} \\
\end{array} \right)$$
Consequently, an induced polarization can be observed along the three crystallographic directions under an applied magnetic field. This material is not multiferroic since its structure is centrosymmetric, and it remains centrosymmetric in the magnetically ordered phase. Consequently, no spontaneous polarization can develop below or above T$_{N}$.
The other member of this family of interest is the n = 21 compound. Ba$_{2}$Ni$_{7}$F$_{18}$ crystallizes in the polar space group $P1$ (n$^{\circ}$1), containing four inequivalent sets of Ni$^{2+}$ ions, each occupying the general Wyckoff position 1a. Of all the fluorides treated here, it is the second that orders ferrimagnetically, below T$_{C}$ = 36 K. Due to the low symmetry of the crystal, we have to deal with magnetic components along the three crystallographic directions. Unlike the fluorides already discussed, this compound presents no magnetic frustration, in the sense that there is no competition between next-nearest neighbors. Below T$_{C}$, all the new magnetic reflections can be indexed in the same cell as the chemical one; consequently, the star of the magnetic wave-vector has only one arm. The irreducible representations associated with the space group $P1$ with $\overrightarrow{k}$=$\overrightarrow{0}$ are given in Table \[P11\].
[|[c]{}|[c]{}|]{} & h$_{1}$\
$\Gamma_{1}$ & 1\
\
According to Table \[P11\], there is only one possibility for the magnetic space group, which is $P1$. Referring to Ref. 4, a linear magnetoelectric effect is allowed with non-zero components:
$$\centering \label{2} \mathbf{[\alpha_{ij}]} = \left(
\begin{array}{ccc}
\alpha_{11} & \alpha_{12} & \alpha_{13} \\
\alpha_{21} & \alpha_{22} & \alpha_{23} \\
\alpha_{31} & \alpha_{32} & \alpha_{33} \\
\end{array} \right)$$
Consequently, Ba$_{2}$Ni$_{7}$F$_{18}$ is a potential multiferroic material (polar structure and ferrimagnetic below T$_{C}$ = 36 K). Moreover, irrespective of the direction of an applied magnetic field, the polarization parallel to the magnetic field will increase due to the magnetoelectric effect below T$_{C}$.
Study of CsCoF$_{4}$
====================
CsCoF$_{4}$ is the last compound among the fluorides that we investigate in the light of a possible magnetoelectric effect. This compound crystallizes in the non-polar space group $I\overline{4}c2$ (n$^{\circ}$120) with two different Co$^{3+}$ Wyckoff positions in the unit cell: 4d and 16i. The antiferromagnetic order occurring below T$_{N}$ = 54 K is characterized by a magnetic wave-vector $\overrightarrow{k}$ = $\overrightarrow{0}$ [@lacorre5]. This structure is also magnetically frustrated due to the presence of ferromagnetic interactions within an antiferromagnetic plane, as described in Fig. \[CsCoF4\].
![Magnetic structure of CsCoF$_{4}$ in the (*a*,*b*) plane. Plus and Minus signs indicate the magnetic moments along the *c* axis (up or down).[]{data-label="CsCoF4"}](CsCoF4 "fig:"){width="8cm"}\
Based on geometrical considerations and comparison with the magnetic structure of compounds of the same family (namely LiCoF$_{4}$), the authors proposed some constraints on the orientation of the magnetic moments. From these considerations, they found that the magnetic space group of CsCoF$_{4}$ is $I\overline{4}'$, corresponding to the magnetic point group $\overline{4}'$. Comparing this magnetic point group with the ones listed in Ref. 4, we observe that a linear magnetoelectric effect is possible along several directions:
$$\centering \label{2} \mathbf{[\alpha_{ij}]} = \left(
\begin{array}{ccc}
\alpha_{11} & \alpha_{12} & 0 \\
-\alpha_{12} & \alpha_{11} & 0 \\
0 & 0 & \alpha_{33} \\
\end{array} \right)$$
Discussion
==========
In the previous sections we have investigated the magnetic symmetry of various fluorides. We discuss here the common mechanisms which may give rise to the magnetoelectric effect in the studied fluorides and compare them to other known magnetoelectric fluorides such as BaMnF$_{4}$ [@BaMnF4]. Prior to this, we should stress that there is an upper bound for the magnetoelectric effect [@Brown], defined as:
$$\begin{aligned}
\centering \label{2}
\alpha_{ij}^{2}\leq\varepsilon_{0}\mu_{0}\varepsilon_{ii}\mu_{jj}\end{aligned}$$
Here $\varepsilon_{0}$ and $\varepsilon_{ii}$ are respectively the permittivity of free space and the relative permittivity of the considered material, while $\mu_{0}$ and $\mu_{jj}$ are respectively the permeability of free space and the relative permeability. As a consequence of Eq. (\[2\]), the magnetoelectric effect will remain small compared to unity except possibly in ferroelectric and ferromagnetic materials. Thus multiferroic magnetoelectric materials with ferromagnetic order are the most interesting. Among the various compounds that we have investigated, KMnFeF$_{6}$ and Ba$_{2}$Ni$_{7}$F$_{18}$ are likely to be good representatives of such materials.
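To make the consequence of this bound concrete, here is a small numeric sketch using its commonly quoted dimensionless form, $\alpha_{ij}/\sqrt{\varepsilon_{0}\mu_{0}}\leq\sqrt{\varepsilon_{ii}\mu_{jj}}$; the material parameters below are hypothetical, chosen only for illustration.

```python
import math

def alpha_upper_bound(eps_rel, mu_rel):
    """Dimensionless upper bound on the magnetoelectric coefficient,
    alpha_ij / sqrt(eps0 * mu0) <= sqrt(eps_ii * mu_jj)."""
    return math.sqrt(eps_rel * mu_rel)

# Hypothetical values: an ordinary antiferromagnet vs. a material that is
# simultaneously ferroelectric (large eps) and ferromagnetic (large mu).
print(alpha_upper_bound(10, 1))
print(alpha_upper_bound(1e3, 1e2))
```

A large bound requires a large permittivity *and* a large permeability at the same time, which is why ferroelectric ferromagnets (or ferrimagnets) are singled out in the text.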
Various mechanisms may contribute to the magnetoelectric effect; the older literature identifies about four different mechanisms [@alcantara]. Within molecular field theory, the magnetoelectric susceptibility can be obtained from a Hamiltonian of the form
$$\begin{aligned}
\centering \label{3} H=H_{0}+V\end{aligned}$$
where the Hamiltonian H$_{0}$ describes the spin system in the presence of a magnetic field and the perturbation V is linear in the electric field.
$$\begin{aligned}
\centering \label{4}
H_{0}=\frac{1}{2}\sum_{ij}J_{ij}\mathbf{S}_{i}\cdot\mathbf{S}_{j}-D\sum_{i}\left(S_{i}^{z}\right)^{2}-\mu
\mathbf{H}\cdot\sum_{i}\mathbf{S}_{i}\end{aligned}$$
V represents the changes of the various tensors due to the presence of an electric field: single-ion anisotropy, g-factor, symmetric exchange and Dzyaloshinskii-Moriya interactions. For the detailed expression of V, we refer the reader to the literature (see Ref. 25). It has been shown for GdAlO$_{3}$ that, in the presence of an electric field, the changes of the g-factor dominate over the changes of the other tensors at low temperature, but no longer above 1.2 K (T$_{N}$=3.78 K) [@alcantara]. It is thus difficult to determine which parameter gives the most important contribution, since this depends not only on the compound but also on the temperature. Changes of the anisotropy energy, the exchange and the g value due to the electric field have been proposed as the origin of the magnetoelectric effect in Cr$_{2}$O$_{3}$ [@Cr2O3]. These various mechanisms may play a role in the magnetoelectric effect in the fluorides presented in the previous sections.
The main difference between the various fluorides that we present is the presence or absence of inversion symmetry. The Dzyaloshinskii-Moriya (DM) interaction (antisymmetric exchange) is allowed only when the inversion symmetry is broken at the ligand ion mediating the exchange [@DM]. Therefore, when the crystal structure has inversion symmetry, an external electric field **E** can induce the DM interaction; the induced DM vector can be written as **D**$_{ij}\propto$**E**$\times$**e**$_{ij}$, with **e**$_{ij}$ the unit vector connecting the two sites i and j. This is the mechanism which has been proposed to explain the magnetoelectric effect in ZnCr$_{2}$Se$_{4}$ [@ZnCr2Se4]. One may expect that the contribution of the DM mechanism to the magnetoelectric effect is higher for compounds which allow a spontaneous DM interaction (i.e. not induced by the electric field **E**). In this perspective, we expect that the almost collinear magnetic structure of CsCoF$_{4}$ will give rise to a negligible DM contribution to the predicted magnetoelectric effect. We note that DM interactions do not systematically result in a magnetoelectric contribution, as in the case of $\alpha$-Fe$_{2}$O$_{3}$ [@DM] or CoF$_{2}$ [@CoF2], which are piezomagnetic materials.
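The geometric content of the relation **D**$_{ij}\propto$**E**$\times$**e**$_{ij}$ can be sketched directly (the bond direction and field values below are hypothetical): the induced DM vector vanishes whenever the field is parallel to the bond.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

e_bond = [1.0, 0.0, 0.0]                  # unit vector from site i to site j
D_perp = cross([0.0, 0.0, 1.0], e_bond)   # E perpendicular to the bond
D_para = cross([1.0, 0.0, 0.0], e_bond)   # E parallel to the bond
print(D_perp)  # non-zero: a DM interaction is induced
print(D_para)  # zero vector: no induced DM interaction
```

This also shows that the induced **D**$_{ij}$ is always perpendicular to both **E** and the bond, constraining which spin canting patterns the field can couple to.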
The fluorides presented in this contribution have various symmetry properties, and their crystal structures can be classified in two types: polar and non-polar. In the first category, we count KMnFeF$_{6}$ and Ba$_{2}$Ni$_{7}$F$_{18}$; in the other, KCrF$_{4}$, Ba$_{2}$Ni$_{3}$F$_{10}$ and CsCoF$_{4}$. As a consequence of their polar structure, KMnFeF$_{6}$ and Ba$_{2}$Ni$_{7}$F$_{18}$ are potentially multiferroic and thus ferroelectric at room temperature. If their ferroelectric properties can be confirmed experimentally, the mechanism for this ferroelectricity remains to be investigated. Several multiferroic fluorides have been investigated theoretically and experimentally [@BaMnF4; @fluorides]. It has been shown that the charge transfer towards empty d-orbitals responsible for ferroelectricity in materials such as BaTiO$_{3}$ is not active in BaMF$_{4}$ (M = Mn, Fe, Co, and Ni); the ferroelectric instability in the multiferroic barium fluorides arises solely from size effects [@fluorides]. In the light of the occupied d-orbitals in KMnFeF$_{6}$ and Ba$_{2}$Ni$_{7}$F$_{18}$, the cooperative displacements of K$^{+}$ and Ba$^{2+}$, respectively, may be responsible for the ferroelectric instability. Obviously, this hypothesis remains to be confirmed experimentally.
Conclusion
==========
In conclusion, we have shown from symmetry analysis that several fluorides are likely to be magnetoelectric. Several of them may present a multiferroic character coupled to a polarization induced by the application of a magnetic field. Most of them present magnetic frustration. The possible magnetoelectrics presented here are among the scarce ferrimagnetic systems; this ferrimagnetism may enhance the interplay between polarization and magnetism in the multiferroic cases. The mechanisms for the potential ferroelectricity and magnetoelectric effect remain to be investigated. We expect that this work will stimulate experimental investigations of the dielectric properties of the fluorides reported above.
ACKNOWLEDGEMENTS {#acknowledgements .unnumbered}
================
This work was supported by the Dutch National Science Foundation NWO through the breedtestrategieprogramma of the Materials Science Center, MSC$^{+}$.
References {#references .unnumbered}
==========
[99]{} Corresponding author: Gwilherm Nénert, gwilherm.nenert@cea.fr; new address: CEA-Grenoble DRFMC/SPSMS/MDN, 17 rue des martyrs, 38054 Grenoble Cedex, France\
M. Fiebig, J. Phys. D: Appl. Phys. **38**, R123 (2005)\[Fiebig\]\
W. Eerenstein, N. D. Mathur and J. F. Scott, Nature **442**, 759 (2006)\[Eerenstein\]\
S.-W. Cheong and M. Mostovoy, Nature Materials **6**, 13 (2007)\[Maxim\]\
International Tables for Crystallography, Vol. D, Physical Properties of Crystals, edited by A. Authier, Kluwer Academic Publishers, 2003\[ITC\]\
J.-C. Tolédano and P. Tolédano, The Landau Theory of Phase Transitions, World Scientific Publishing, 1987\[Toledano\]\
A. A. Nugroho, N. Bellido, U. Adem, G. Nénert, Ch. Simon, M. O. Tjia, M. Mostovoy, and T. T. M. Palstra, Phys. Rev. B **75**, 174435 (2007)\[Agung\]\
A. B. Harris, unpublished, cond-mat/0610241\[Harris\]\
H. Katsura, N. Nagaosa and A. V. Balatsky, Phys. Rev. Lett. **95**, 057205 (2005)\[Katsura\]\
I. A. Sergienko and E. Dagotto, Phys. Rev. B **73**, 094434 (2006)\[Sergienko\]\
M. Mostovoy, Phys. Rev. Lett. **96**, 067601 (2006)\[Mostovoy\]\
G. T. Rado, Phys. Rev. **128**, 2546 (1962); G. T. Rado, Phys. Rev. Lett. **6**, 6009 (1961); M. Date, J. Kanamori and M. Tachiki, J. Phys. Soc. Japan **16**, 2589 (1961)\[Cr2O3\]\
G. R. Blake *et al.*, Phys. Rev. B **71**, 214402 (2005); L. C. Chapon *et al.*, Phys. Rev. Lett. **93**, 177402; N. Aliouane *et al.*, Phys. Rev. B **73**, 20102 (2006); T. Goto *et al.*, Phys. Rev. B **72**, 220403 (2005); T. Kimura, T. Goto, H. Shintani, K. Ishizaka, T. Arima, and Y. Tokura, Nature (London) **426**, 55 (2003); N. Hur, S. Park, P. A. Sharma, J. S. Ahn, S. Guha, and S.-W. Cheong, Nature (London) **429**, 392 (2004); I. A. Sergienko, C. Sen and E. Dagotto, Phys. Rev. Lett. **97**, 227204 (2006)\[nature\]\
S. Goshen, D. Mukamel, H. Shaked and S. Shtrikman, J. Appl. Phys. **40**, 1590 (1969); S. Goshen, D. Mukamel, H. Shaked and S. Shtrikman, Phys. Rev. B **12**, 4679 (1970)\[Goshen\]\
I. E. Dzialoshinskii, Sov. Phys. JETP **10**, 628 (1960)\[Dzialoshinskii\]\
D. Kissel and R. Hoppe, Z. Naturforsch. Teil B: Anorg. Chem., Org. Chem. **42**, 135 (1987)\[kissel\]\
P. Lacorre, M. Leblanc, J. Pannetier and G. Ferey, J. Magn. Magn. Mat. **94**, 337 (1991); P. Lacorre, M. Leblanc, J. Pannetier and G. Ferey, J. Magn. Magn. Mat. **66**, 219 (1987)\[lacorre1\]\
P. Lacorre, J. Pannetier and G. Ferey, J. Magn. Magn. Mat. **94**, 331 (1991)\[lacorre2\]\
M. C. Foster, G. R. Brown, R. M. Nielson and S. C. Abrahams, J. Appl. Cryst. **30**, 495 (1997); G. C. Miles, M. C. Stennett, D. Pickthall, C. A. Kirk, I. M. Reaney and A. R. West, Powder Diffraction **20**, 43 (2005); L. E. Cross and R. R. Neurgaonkar, J. Mat. Science **27**, 2589 (1992)\[ferro\]\
E. F. Bertaut, Acta Cryst. A **24**, 217 (1968)\[Bertaut\]\
P. Lacorre, J. Pannetier and G. Ferey, J. Magn. Magn. Mat. **66**, 213 (1987)\[lacorre3\]\
J. Renaudin, G. Ferey, A. Kozak, M. Samouel and P. Lacorre, Solid State Commun. **65**, 185 (1988)\[lacorre4\]\
P. Lacorre, J. Pannetier, T. Fleischer, R. Hoppe and G. Ferey, J. Solid State Chem. **93**, 37 (1991)\[lacorre5\]\
D. L. Fox, D. R. Tilley, J. F. Scott and H. J. Guggenheim, Phys. Rev. B **21**, 2926 (1980)\[BaMnF4\]\
W. F. Brown, R. M. Hornreich and S. Shtrikman, Phys. Rev. **168**, 574 (1968)\[Brown\]\
O. F. de Alcantara Bonfim and G. A. Gehring, Advances in Physics **29**, 731 (1980)\[alcantara\]\
T. Moriya, Phys. Rev. **120**, 91 (1960); I. Dzyaloshinskii, J. Phys. Chem. Solids **4**, 241 (1958)\[DM\]\
K. Shiratori and E. Kita, J. Phys. Soc. Japan **48**, 1443 (1980)\[ZnCr2Se4\]\
A. S. Borovik-Romanov, Sov. Phys. JETP **9**, 1390 (1959); A. S. Borovik-Romanov, Sov. Phys. JETP **11**, 786 (1960)\[CoF2\]\
C. Ederer and N. A. Spaldin, Phys. Rev. B **74**, 020401(R) (2006); M. Yoshimura and M. Hidaka, J. Phys. Soc. Japan **74**, 1181 (2005); C. Ederer and N. A. Spaldin, Phys. Rev. B **74**, 024102 (2006)\[fluorides\]
---
address: |
Astronomy Department, University of California\
Berkeley, California 94720, USA
author:
- LEO BLITZ
title: |
GLOBAL STAR FORMATION FROM ${\bf z = 5 \times 10^{-8}}$ to ${\bf z =
20}$
---
[[*Keywords*]{}: Stars: formation; ISM: general; Galaxies: evolution]{}
Introduction
============
The field of star formation exploded with the advent of millimeter-wave and infrared detectors in the 1970s. Prior to that it was a field with a few lonely but brilliant workers such as George Herbig and Adriaan Blaauw who managed to identify young stars and regions of star formation from their optically determined properties alone. Both realized that the regions of most recent star formation were always associated with dark dust clouds, and understood that the earliest stages of star formation would only be probed by penetrating the veil of dust obscuration. Since that time, the field of star formation has expanded to include not just the nearest accessible regions, but the farthest reaches of the Universe as well. Using what we’ve learned about local star formation, reasonable speculations and simulations have now been attempted to guess at what the first stars in the Universe might have been like (Abel, Bryan & Norman 2002) [@ref:abel02].
The field of star formation remains a rich area of research with many unsolved problems and thus continues to attract a coterie of young inventive scientists. In this article I give a personal view of where some parts of the field are headed, especially those areas that touch on star formation in galaxies. Clearly, in such a short space, I can cover only a few topics, and even those, rather cursorily.
What Do We Really Want to Know?
===============================
I begin by making the outrageous claim that the problem of low-mass, single star formation is essentially solved, due in large part to the work of Frank Shu, Richard Larson and a number of others. This is not to say that there aren’t still questions that are worth asking, but that the most interesting questions are more in the realm of planet formation than star formation. The remaining star formation issues are questions more of detail, rather than questions of a fundamental nature about how stars form. I illustrate this point with reference to what I believe is the most well known image in the scientific literature on the subject of star formation, which is shown here as Figure \[fig:shufig\] (Shu, Adams & Lizano 1987) [@ref:shu87]. (Jeff Hester’s beautiful image of the elephant trunk structures in the Eagle Nebula is more well known, but is reproduced primarily in the popular press).
This sketch represents the four stages of star formation which are now generally accepted as how low-mass, single stars form. Stars begin as gravitationally unstable condensations in cold, dense molecular clouds, forming a prestellar core observable in the near infrared as material continues to rain in on it. The higher angular momentum material forms a disk, and the system develops a bipolar outflow and jet which removes the angular momentum from the system, while initially disrupting and clearing out the infalling material. The star becomes visible as a T-Tauri star, and the disk ultimately becomes the raw material from which planets form. Magnetic fields play a central role in the dynamics, and add computational complexity, but almost surely determine the onset of collapse and the bipolar outflows. Diverse observations have a good theoretical underpinning, and little work is now done without either explicitly or implicitly invoking this picture.
On the other end of the distance scale, deep observations with the HST, Keck and SCUBA, have made it possible to determine the star formation history of the Universe, which is shown in Figure \[fig:madau\] (Steidel et al. 1999) [@ref:steidel99]. This plot, widely known as the Madau plot (Madau et al. 1996) [@ref:madau96], has been modified by others (e.g. Rowan-Robinson 2002 [@ref:rr02]), but its main features are well established: at early times there is a constant, or nearly constant star formation rate per comoving Mpc until about $z$ = 2, at which time the star formation rate steadily falls by about an order of magnitude until the present epoch. One of the great challenges is to apply what we know about the details of star formation in the nearest regions to fill in the missing pieces needed to obtain Figure \[fig:madau\]. The goal is to obtain not only the correct shape, but the correct amplitude of the Madau function.
Some of the Missing Pieces
--------------------------
### High Mass Star Formation
The first problem is that the star formation rate in the Universe is determined from the light of the most massive stars, but most of what we know about star formation applies only to low mass stars. Getting an evolutionary picture of high mass star formation remains difficult observationally because of the rapid destruction of their surroundings by high mass protostars. Without a good set of observations it is difficult to make progress in the theory. For example, only a small number of candidate high mass prestellar cores have been identified, and little has been written about the relationship of these cores to their surrounding molecular clouds. Nevertheless, considerable progress should be possible in the near term from observations with a new generation of millimeter-wave interferometers: CARMA and ALMA, the Spitzer Space Telescope, and SOFIA.
### The Formation of Stars in Clusters
If the problem of how individual low-mass stars form is essentially solved, and if high-mass star formation is next, we only get to the first rung of the ladder that ends at the Madau plot. Stars do not typically form as isolated objects, but rather in clusters, and little is known about clustered star formation. For example, do massive clumps form massive star clusters? What determines the star formation efficiency of a particular cluster forming clump? Are the stars that form in a cluster different from those that form in looser aggregates? Are the prestellar cores that form star clusters in the gas clumps the same as those identified with single star formation?
The study of clustered star formation is in such a primitive state that even some of the most basic questions have not yet been addressed. For example, it would seem that high mass stars are the last to form in clusters (lest they dissociate the gas from which the accompanying low mass stars form), and they appear to form in the cluster centers. But how can this be the case, since high mass protostars should have the shortest dynamical times and, if formed in the centers of the clumps, should form first because the density of the gas is highest there? Although there have been several guesses at a solution (e.g. Stahler, Palla & Ho 2000 [@ref:stahler2000]; Bonnell, Bate & Zinnecker 1998 [@ref:bonnell1998]), no explanation seems compelling yet.
### Universality of the IMF
The calibration of the vertical scale in Figure \[fig:madau\] assumes that the IMF is invariant at all epochs and in all galaxies. But how universal is the IMF? To predict how it might or might not vary in other galaxies, and at other epochs, we need to know what physical or stochastic processes determine it. Very little is known about how the initial mass function is produced, though new work by Shu, Li, & Allen (2004) [@ref:shu04] promises some progress on that subject.
### The Formation of Stars in Galaxies
It may be a long time before it is possible to understand enough about the details of star formation to predict how star formation proceeds on galactic scales in GMCs. Nevertheless, it may be possible to circumvent this issue by learning how GMCs form and then determining how star formation proceeds [*on average*]{} in these GMCs. With this approach one would need to know how the physical conditions in GMCs differ in various galaxy types, in different locations within a galaxy, and with changes in metallicity. While this may seem like a daunting task, improvements in instrumentation now make it possible to survey entire galaxies at high enough resolution to make significant progress. Some early results are discussed below.
The last step in getting to the Madau plot is then extrapolating what we know about global star formation in normal galaxies to the star formation in starbursts and AGN. In other words, why do particular galaxies become starbursts, and how much star formation comes from a particular galaxy or merger? This step is important because a significant fraction of the light of galaxies comes from starbursts, and the fraction of starbursts seems to change with $z$.
### Initial Conditions, Initial Conditions, Initial Conditions
What ties all of these points together is that to make the step from single star formation to the Madau plot, it is necessary not only to learn about the physical processes involved, which requires a combination of theory and observation, but to understand the initial conditions that give rise to variations in each step. For example, even if the process of isolated single star formation is essentially solved, we really have no idea how the initial conditions, the star forming cores, are produced. Furthermore, we don’t know whether the IMF reflects the mass spectrum of prestellar cores, as suggested by Motte, Andre & Neri (1998) [@ref:motte98] and Testi & Sargent (1998) [@ref:testi98], or the process of star formation itself (Shu, Li, & Allen 2004 [@ref:shu04]). The beautiful work by Alves, Lada & Lada (2001) [@ref:alves01] suggests that the initial configuration for star formation may be better represented by a Bonnor-Ebert sphere than by a singular isothermal sphere (Shu 1977) [@ref:shu77]. Does this make a significant difference in the star that is produced? What are the initial conditions in a GMC that produce the difference between relatively isolated star formation (as in Taurus) and clustered star formation (as in Orion)? What are the initial conditions that give rise to the number and distribution of GMCs in a galaxy?
My own view is that we cannot know too much about typical initial conditions and how they vary. Therefore, there cannot be too much emphasis on trying to determine what the initial conditions are for forming individual low mass and high mass stars, for forming stars in clusters (why, for example, do some become globular clusters?), and for forming GMCs in the disks and centers of normal and starburst galaxies.
A Few Relevant Results
======================
Beyond Single Star Formation
----------------------------
One of the first attempts to study the formation of star clusters observationally was made by Elizabeth Lada (Lada 1992) [@ref:lada92], who made the first survey of dense gas in an entire GMC (Orion B) using the molecular tracer CS. She found that the embedded stars are found primarily in clusters, and that the clusters form only in the densest condensations: those identified by their CS emission. Subsequently, Phelps & Lada (1997) [@ref:phelps97] made another advance with their near IR imaging of some of the $^{13}$CO clumps in the Rosette Molecular Cloud. They were able to identify 7 embedded clusters associated with the centers of 7 massive clumps of molecular gas identified previously by Williams, Blitz & Stark (1995) [@ref:williams95]. These 7 clusters were all associated with far IR IRAS sources, and 5 were previously unknown. Thus what appeared to be single point sources in the IRAS data turned out to be embedded star clusters. Figure \[fig:ros.ps\] shows a plot of the clumps in the Rosette vs. the gravitational boundedness of the clumps, plotted as $M_{grav}/M_{lum}$, where $M_{grav} = RV^2/G$, $M_{lum} = X\int\int\int T_A \,dv\,dx\,dy$, and $X$ is the usual CO-to-H$_2$ conversion factor.
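As a sketch of how this boundedness ratio is evaluated in practice, the snippet below computes $M_{grav}$ and $M_{lum}$ for a single hypothetical clump; the clump parameters and the conversion factor $X$ are assumed values, not numbers from the Rosette catalogue.

```python
# Boundedness diagnostic M_grav / M_lum for one illustrative clump.
G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def virial_mass(R_pc, dV_kms):
    """M_grav = R V^2 / G, with the clump radius in pc and linewidth in km/s."""
    return R_pc * dV_kms**2 / G

def luminous_mass(L_co, X=4.6):
    """M_lum = X * (integrated CO luminosity in K km/s pc^2); X is assumed."""
    return X * L_co

M_grav = virial_mass(R_pc=2.0, dV_kms=2.0)  # ~1.9e3 M_sun
M_lum = luminous_mass(L_co=500.0)           # 2.3e3 M_sun
ratio = M_grav / M_lum                      # near 1 => roughly self-gravitating
```

A ratio near unity marks a gravitationally bound clump; the most massive, bound clumps are the ones hosting the Phelps & Lada clusters.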
The clusters identified by Phelps & Lada [@ref:phelps97] are identified with the most massive, gravitationally bound clumps (Williams et al., 1995) [@ref:williams95]. But what is it about the star-forming clumps that produces a great many star-forming cores simultaneously? In other words, what is it that is communicated through a clump in a crossing time to let all parts know that they must produce stars simultaneously? What determines how many stars form within a given clump? Do the clumps even have embedded cores that are distinct, recognizable entities? The Phelps & Lada work also provides an efficient way to find embedded clusters, and, [*en passant*]{}, demonstrates that the clumps are real, long-lived entities, rather than ephemeral turbulent structures, as some authors have suggested (otherwise the star clusters would not have had enough time to form in them).
Star Formation on Galactic Scales
---------------------------------
Understanding clustered star formation will likely solve the problem of how star formation takes place within an individual molecular cloud. How then do we extrapolate to larger scales, to the scale of an entire galaxy? A reasonable question to ask is whether we need to know all of the details of the star formation process to address star formation on galactic scales. That is, since we know that star formation takes place only in molecular clouds, and that the star formation efficiency in molecular clouds tends to be small ($\sim 5\%$), with relatively little variation in normal galaxies, perhaps the question of how stars form in galaxies reduces to a question of how the molecular clouds themselves form? That is, if we can understand how the ISM turns molecular gas into GMCs, and we can understand how the different conditions within GMCs translate into different star formation efficiencies and perhaps even IMFs, then it should be possible to determine the global star formation rate from just the gas content and other physical conditions within the galaxies.
To this end, it is useful to have complete surveys of individual GMCs in entire galaxies, not just unresolved images of the molecular gas. This has become possible only in the last few years, and for only a few galaxies; only two such maps have been published. The first was of the LMC, which has been nearly completely mapped by Mizuno et al. (2001) [@ref:mizuno01] using the 4m Nanten telescope (see Figure \[fig:lmc\]). More recently, Engargiola et al. (2003) [@ref:engargiola03] have used the BIMA array to make a 759-field mosaic of M33 at 15$''$ resolution ($\sim 50$ pc – see Figure \[fig:m33h1\]). Both of these images indicate the difficulty in surveying galaxies for individual GMCs: the surface filling fraction of GMCs in galactic disks is small (see Figure \[fig:lmc\]), and the resolution needed to determine the cloud properties is high, requiring either large amounts of telescope time for Local Group objects, or high sensitivity interferometric mosaics for galaxies farther away. For both the LMC and M33, followup observations at higher resolution were needed to resolve the molecular clouds in each case.
Other galaxies that have been fully mapped to date but not yet published include the SMC (Mizuno et al.), IC10 (Leroy et al.), and M31 (Muller and Guelin); these maps were all presented at this YLU conference. Although there has been a herculean effort to map the molecular gas in M31, the resolution (90 pc) appears to be too low to resolve clouds blended in the beam in many directions (Muller, this conference). Followup interferometric observations will be needed to obtain the properties of the GMCs. The central region of M64 has also been mapped at high enough resolution to measure the molecular cloud properties in the nuclear region where the surface filling fraction of molecular gas approaches unity (Rosolowsky & Blitz 2005) [@ref:rosolowsky05]. The disk has not been observed at comparably high resolution.
The image of M33 seen in Figure \[fig:m33h1\] shows something quite striking and new: essentially all of the individual GMCs lie on filaments of HI. Note, though, that the filaments show little variation in surface density with radius, but that the GMCs become very sparse at radii more than about 12$'$ from the center. Averaged over annuli, the atomic gas surface density is nearly constant with radius, falling by only a factor of two over 7 kpc, but the molecular gas surface density is exponential with a scale length of 1.4 kpc. Because there is a great deal of HI where there is no CO, the H$_2$ must have formed from the HI, rather than the converse. But why do the GMCs become so sparse beyond about 3 kpc?
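The contrast between the two radial profiles quoted above can be sketched numerically; the central normalizations are taken equal purely for illustration, and only the quoted fall-off rates come from the text.

```python
import numpy as np

# HI nearly constant (factor of 2 over 7 kpc); H2 exponential, 1.4 kpc scale length.
def sigma_HI(r_kpc, sigma0=8.0):
    return sigma0 * 2.0 ** (-r_kpc / 7.0)   # falls by 2x over 7 kpc

def sigma_H2(r_kpc, sigma0=8.0):
    return sigma0 * np.exp(-r_kpc / 1.4)    # scale length 1.4 kpc

r = np.array([0.0, 3.0, 6.0])               # kpc
ratio = sigma_H2(r) / sigma_HI(r)
# the molecular fraction collapses with radius even though the HI barely declines
```

The ratio drops by almost an order of magnitude by 3 kpc, which is the quantitative content of the puzzle posed at the end of the paragraph.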
The close association of the molecular clouds with the filaments implies a maximum lifetime for the GMCs of $\sim$ 20 Myr, based on the mean velocity difference between the CO and HI along the same line of sight. A significantly longer lifetime would cause a spatial separation between the atomic and molecular gas. It thus appears that the filaments are a necessary, but not sufficient, condition for the formation of molecular clouds. What, then, produces the radial abundance gradient of molecular gas, and thus the radial variation of the star formation rate?
One possibility is that the filaments are really the boundaries of ‘holes’, large regions relatively devoid of HI, caused by supernova explosions in a previous generation of OB associations. However, the large holes in Figure \[fig:m33h1\] are not associated with catalogued OB associations (Deul & van der Hulst 1987) [@ref:deul87]. In any event, energies of $\sim$10$^{53}$ ergs are needed to evacuate the large holes, implying that 100 or more O stars would have been formed in each, leaving bright stellar clusters and diffuse x-ray emission at the centers of the empty regions, which are not observed.
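The energetics argument above is one line of arithmetic, assuming the canonical $\sim$10$^{51}$ erg of mechanical energy released per supernova (a standard value, not stated in the text):

```python
# Back-of-the-envelope count of supernovae (hence O stars) needed per hole.
E_hole = 1e53    # erg, energy needed to evacuate a large HI hole (from the text)
E_per_SN = 1e51  # erg of mechanical energy per supernova (assumed canonical value)

n_O_stars = E_hole / E_per_SN  # of order 100 O stars per hole
```

The absence of the corresponding bright clusters and diffuse x-ray emission is what argues against the hole interpretation.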
Could it be that the radial variation is due to a change in the ratio of CO/H$_2$, the so-called “X” factor produced by the known abundance gradient in M33? This possibility was investigated by Rosolowsky et al. (2003) [@ref:rosolowsky03] who showed that if $X$ is determined by equating the luminous CO mass with the virial mass of resolved clouds in M33, $X$ shows no variation with metallicity or radius. This can be seen from Figure \[fig:m33x\].
Wong & Blitz (2002) [@ref:wong02] have proposed that the fraction of molecular gas at a particular radius in a galaxy is the result of interstellar pressure, based on interferometric observations of six nearby spiral galaxies. Blitz & Rosolowsky (2004) [@ref:blitz04] showed that pressure modulated molecular cloud formation implies that the radius in a galaxy where the atomic/molecular surface density is unity should occur at a constant [*stellar*]{} surface density. An investigation of 30 galaxies showed this constancy to be good to within 50%. Thus it seems reasonable to conclude that hydrostatic pressure plays a significant role in the formation of molecular clouds.
But if hydrostatic pressure is the main culprit in forming GMCs, how do the GMCs vary from galaxy to galaxy where interstellar pressure might be quite varied? With current telescopes, we have data only for GMCs in the Local Group galaxies, and the published data are only available for the Milky Way, the LMC and M33. If we examine the cumulative mass distribution of GMCs for each galaxy (but separating the inner Milky Way from the outer Milky Way), we see that there are significant differences from one galaxy to another (Figure \[fig:mspec\]). In this figure, the mass distribution is normalized to the most massive cloud observed, and the distribution for M33 is significantly steeper than that of the other galaxies. The mass function is independent of resolution, and the differences in slope are significant.
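A minimal sketch of such a comparison is given below: two hypothetical power-law cloud samples with different slopes, each normalized to its most massive cloud. The slope values are illustrative only, not the measured ones.

```python
import numpy as np

# Steeper vs. shallower cloud mass spectra, compared via the fraction of
# clouds above a fixed mass (a proxy for the cumulative distribution N(>M)).
rng = np.random.default_rng(0)

def sample_masses(gamma, n=500, m_min=1e4):
    # inverse-transform sampling from dN/dM ~ M^(-gamma), valid for gamma > 1
    u = rng.random(n)
    return m_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))

m33_like = sample_masses(gamma=2.6)  # steeper spectrum, as reported for M33
mw_like = sample_masses(gamma=1.8)   # shallower, as in the other galaxies

def normalized_cumulative(masses):
    # masses sorted descending and normalized to the most massive cloud,
    # paired with the cumulative count N(>M)
    m = np.sort(masses)[::-1] / masses.max()
    return m, np.arange(1, len(m) + 1)
```

A steeper spectrum puts proportionally less mass in the largest clouds, which is exactly the difference visible between M33 and the other galaxies in the figure.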
We may also ask whether the clouds show differences, for example, in the size-linewidth relation observed for clouds in the Milky Way. Figure \[fig:rdv\] shows a plot of hundreds of clouds in the Milky Way, M33, and the LMC, with a line of slope 1/2 superimposed on the data. Evidently, the clouds in these galaxies obey the same size-linewidth relation with no zero-point offset: $\Delta V \propto R^{1/2}$.
This plot suggests that if all of the clouds in these galaxies are self-gravitating, the surface density of the clouds is constant, with a relative scatter given by the scatter in Figure \[fig:rdv\]. That is, since $\Delta V \propto R^{1/2}$ and $M \propto R(\Delta V)^2/G$, then $M/R^2 = \mathrm{const}$. But the mean internal pressure of GMCs can be written $P_{int} = \alpha (\pi/2) G {\Sigma_g}^2$, where $\Sigma_g$ is the gas surface density of the clouds and $\alpha$ is a constant near unity that depends on the cloud geometry. Thus, the GMCs that compose Figure \[fig:rdv\] have the same mean internal pressure, regardless of size, regardless of the galaxy they are in, and regardless of the external pressure.
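The algebra in this paragraph is easy to verify numerically: with $\Delta V = cR^{1/2}$ and $M = R(\Delta V)^2/G$, the surface density $\Sigma = M/\pi R^2 = c^2/\pi G$ drops out independent of $R$. The coefficient $c$ below is an assumed value.

```python
import numpy as np

# Constant cloud surface density from the size-linewidth relation.
G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def surface_density(R_pc, c=0.7):
    dV = c * np.sqrt(R_pc)        # size-linewidth relation, km/s
    M = R_pc * dV**2 / G          # virial mass, M_sun
    return M / (np.pi * R_pc**2)  # M_sun / pc^2

sigmas = [surface_density(R) for R in (1.0, 10.0, 100.0)]
# identical for every R, hence identical mean internal pressure
# P_int = alpha * (pi/2) * G * Sigma^2
```

Because $\Sigma$ is the same for all $R$, $P_{int}$ is too, which is the conclusion drawn in the text.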
This gives us a way of understanding how the IMF might indeed be constant from galaxy to galaxy, at least for galaxies similar to those in Figure \[fig:rdv\]. That is, if the mean internal pressure of all GMCs in the disk of a galaxy is the same, then the range of pressures within a GMC might also be the same, and the star-forming cores might therefore also be quite similar. It is important to keep in mind, however, that even if true it might apply only in the disks of galaxies. In the bulge regions, the hydrostatic pressure of the gas is likely to be two to three orders of magnitude higher than that in the disk (e.g. Spergel & Blitz 1992 [@ref:spergel92]). In these regions, the external pressure can significantly exceed the mean internal pressure of a few $\times 10^5$ cm$^{-3}$ K of the clouds in the disk. In the bulge regions, the GMCs must be different from those in the disk, and may well give rise to stars with a different IMF.
Studying global star formation is only in its infancy, and new instruments coming on line and being developed should provide the sensitive, high resolution data needed to get from single star formation to the Madau plot. Equally important is for those who work on local star formation to interact regularly with those working on global star formation, as has happened at this conference.
Acknowledgments {#acknowledgments .unnumbered}
===============
I’d like to thank Charlie Lada who commented on an early version of this manuscript and Steve Stahler for a useful discussion. Erik Rosolowsky prepared a number of the figures in this paper.
References {#references .unnumbered}
==========
[99]{}
Abel, T., Bryan, G. L., & Norman, M. L. 2002, Science, 295, 93
Alves, J. F., Lada, C. J., & Lada, E. A. 2001, Nature, 409, 159
Blitz, L., & Rosolowsky, E. 2004, 612, L29
Bonnell, I. A., Bate, M. R., & Zinnecker, H. 1998, 298, 93
Deul, E. R., & van der Hulst, J. M. 1987, 67, 509
Engargiola, G., Plambeck, R. L., Rosolowsky, E., & Blitz, L. 2003, 149, 343
Lada, E. A. 1992, 393, L25
Madau, P., Ferguson, H. C., Dickinson, M. E., Giavalisco, M., Steidel, C. C., & Fruchter, A. 1996, 283, 1388
Motte, F., Andre, P., & Neri, R. 1998, 336, 150
Mizuno, N., et al. 2001, 53, 971
Phelps, R. L., & Lada, E. A. 1997, 477, 176
Rosolowsky, E., Engargiola, G., Plambeck, R., & Blitz, L. 2003, 599, 258
Rosolowsky, E., & Blitz, L. 2005, in press
Rowan-Robinson, M. 2001, 549, 745
Shu, F. H. 1977, 214, 488
Shu, F. H., Adams, F. C., & Lizano, S. 1987, 25, 23
Shu, F. H., Li, Z., & Allen, A. 2004, 601, 930
Spergel, D. N., & Blitz, L. 1992, Nature, 357, 665
Stahler, S. W., Palla, F., & Ho, P. T. P. 2000, Protostars and Planets IV, 327
Steidel, C. C., Adelberger, K. L., Giavalisco, M., Dickinson, M., & Pettini, M. 1999, 519, 1
Testi, L., & Sargent, A. I. 1998, 508, L91
Williams, J. P., Blitz, L., & Stark, A. A. 1995, 451, 252
Wong, T., & Blitz, L. 2002, 569, 157
---
abstract: 'We consider the fluctuations of the free energy of positive temperature directed polymers in thin rectangles $(N,N^{\alpha})$, $\alpha <3/14$. For general weight distributions with finite fourth moment we prove that the distribution of these fluctuations converges as $N$ goes to infinity to the GUE Tracy-Widom distribution.'
address:
- 'A. Auffinger, University of Chicago, 5734 S. University Avenue, Chicago, IL 60637, USA'
- 'J. Baik, Department of Mathematics, University of Michigan, 530 Church Street, Ann Arbor, MI 48109, USA'
- 'I. Corwin, Microsoft Research, New England, 1 Memorial Drive, Cambridge, MA 02142, USA'
author:
- Antonio Auffinger
- Jinho Baik
- Ivan Corwin
title: Universality for directed polymers in thin rectangles
---
Introduction
============
We prove a statement of KPZ universality for directed polymers in random media in thin rectangles $(N,N^{\alpha})$ for $\alpha<3/14$. We assume *general* weight distributions with at least a finite fourth moment. Our main result is the positive temperature analogue of Corollary 1.1 of [@JinhoToufic], Theorem 1 of [@BM], and the result of [@Suidan], where a similar universality for last passage percolation was proved.
The model we consider is defined as follows.
\[def:1\] Let $W_{ij}$, $i,j \in \mathbb N$, be a family of i.i.d. random variables with ${\mathbb{E}}[W_{11}]=0$, ${\mathbb{E}}[ (W_{11})^2] =1$ and ${\mathbb{E}}[(W_{11})^4] < \infty$. For each $j,k \in \mathbb N$ we define $S^j(k) = \sum_{i=1}^{k} W_{ij}$ with the convention $S^j(0)=0$. The partition function for the discrete directed random polymer from $(1,1)$ to $(N,n)$ at inverse temperature $\beta>0$ is defined as $$\label{def:discretePolymer}
Z_{N,n}(\beta) =
\sum_{1=i_0\leq i_1 \leq \ldots \leq i_n=N}
\exp \bigg(\beta \sum_{j=1}^n ( S^j(i_j)-S^j(i_{j-1}-1) ) \bigg).$$
The hypotheses that ${\mathbb{E}}[W_{11}]=0$, ${\mathbb{E}}[(W_{11})^2] =1$ are not restrictive. Indeed, a non-zero mean only changes $Z_{N,n}(\beta)$ by a (deterministic) multiplicative constant while a different second moment is just a rescaling of $\beta$. Also note that one can rewrite $Z_{N,n}(\beta)$ as the sum over all up/right lattice paths $\pi$ from $(1,1)$ to $(N,n)$ of the Boltzmann weights $\exp(\beta \sum_{(i,j)\in \pi} W_{ij})$.
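For small $N$ and $n$, the two descriptions of $Z_{N,n}(\beta)$ — the sum over break points $1=i_0\le i_1\le\ldots\le i_n=N$ and the sum of Boltzmann weights over up/right lattice paths — can be checked against each other by brute force. The weights below are fixed illustrative numbers rather than random $W_{ij}$.

```python
import itertools, math

N, n, beta = 3, 2, 0.5
# deterministic illustrative weights W_{ij}, i = column, j = row
W = {(i, j): 0.1 * i - 0.2 * j for i in range(1, N + 1) for j in range(1, n + 1)}

def S(j, k):
    # partial sums S^j(k) = sum_{i<=k} W_{ij}, with S^j(0) = 0
    return sum(W[(i, j)] for i in range(1, k + 1))

# form (i): sum over break points 1 = i_0 <= i_1 <= ... <= i_n = N
Z1 = 0.0
for bp in itertools.product(range(1, N + 1), repeat=n - 1):
    i = (1,) + bp + (N,)
    if all(i[k] <= i[k + 1] for k in range(n)):
        Z1 += math.exp(beta * sum(S(j, i[j]) - S(j, i[j - 1] - 1)
                                  for j in range(1, n + 1)))

# form (ii): sum over up/right lattice paths from (1,1) to (N,n)
def paths(i, j):
    if (i, j) == (N, n):
        yield [(i, j)]
    else:
        if i < N:
            for p in paths(i + 1, j):
                yield [(i, j)] + p
        if j < n:
            for p in paths(i, j + 1):
                yield [(i, j)] + p

Z2 = sum(math.exp(beta * sum(W[c] for c in p)) for p in paths(1, 1))
assert abs(Z1 - Z2) < 1e-9
```

The two forms agree because each choice of break points corresponds to exactly one up/right path: row $j$ covers columns $i_{j-1}$ through $i_j$, and the path moves up at column $i_j$.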
The main result of the paper is the following:
\[mainthm\] For all $\alpha \in (0,\frac{3}{14})$ and $\beta>0$, if $n = \lfloor N^{\alpha}\rfloor$ then $$\lim_{N\rightarrow \infty} {\ensuremath{\mathbb{P}}}\bigg( \frac{\log Z_{N,n}(\beta) - 2\beta N^{\frac12+\frac{\alpha}2}}{\beta N^{\frac12-\frac{\alpha}6} } \leq r\bigg) = F_{GUE}(r),$$ for each fixed $r\in {\ensuremath{\mathbb{R}}}$, where $F_{GUE}$ is the GUE Tracy-Widom distribution function [@TW].
\[rem:KMT\] If we assume that the $W_{ij}$ have all moments finite, the above result holds for all $\alpha<\frac37$. See Remark \[rem:KMT2\] below.
The thinness assumption, $n=\lfloor N^{\alpha}\rfloor$ with $\alpha\in (0, \frac{3}{14})$, is only a technical condition. It is anticipated that a universality result should also hold for thicker rectangles (for all $0<\alpha\le 1$), after an appropriate modification of the centering and scaling of $\log Z_{N,n}(\beta)$. The modification should depend on the distribution of weights; however, there is presently no prediction for the exact dependence for the full rectangle case when $n=N$[^1]. For $\alpha<1$, at least the leading order term of the centering was already known. Based on the earlier works of [@GW; @Sep2] and estimates similar to some of those in the proof of Proposition \[compareprop\], Moreno Flores [@Moreno Theorem (8.8)] showed a law of large numbers for the free energy by proving that with probability one: $$\label{eq:LLN}
\lim_{N\rightarrow \infty} \frac{1}{N^{\frac{1}{2}+\frac{\alpha}{2}}}\log Z_{N,n}(\beta) = 2 \beta.$$
We prove Theorem \[mainthm\] by employing the Skorohod embedding theorem to couple the discrete directed polymer to the semi-discrete directed polymer of O’Connell-Yor [@OY]. The asymptotics of the free energy of the semi-discrete polymer are then evaluated via a steepest descent analysis of the Fredholm determinant formula of [@BorCor Theorem 5.2.10]. This coupling approach is an adaptation of the previous works on directed last passage percolation by [@JinhoToufic] and [@BM], which correspond to the case when $\beta=\infty$. We note that the same thinness condition $\alpha<3/14$ was assumed in these works under the finite fourth moment condition. Let $L_{N,n} = \lim_{\beta\to\infty} \beta^{-1} \log Z_{N,n}(\beta)$ denote the last passage percolation time. Then it was shown that $$\label{eq:LPPuniv}
\lim_{N\rightarrow \infty} {\ensuremath{\mathbb{P}}}\bigg( \frac{L_{N,n} - 2 \sqrt{Nn}}{\sqrt{N}n^{-1/6} } \leq r\bigg) = F_{GUE}(r).$$ This is consistent with our main result if we formally take the limit $\beta\to \infty$ and replace $\beta^{-1} \log Z_{N,n}(\beta)$ by $L_{N,n}$.
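The formal $\beta\to\infty$ relation between $\beta^{-1}\log Z_{N,n}(\beta)$ and $L_{N,n}$ can be illustrated numerically on a small grid. The sketch below (our names, not from the paper) uses the elementary bounds $L_{N,n} \le \beta^{-1}\log Z_{N,n}(\beta) \le L_{N,n} + \beta^{-1}\log(\#\text{paths})$, where the number of up/right paths is $\binom{N+n-2}{n-1}$:

```python
import math
import random

def last_passage(W):
    """L_{N,n}: maximal up/right path weight (the beta -> infinity limit)."""
    N, n = len(W), len(W[0])
    L = [[0.0] * n for _ in range(N)]
    for i in range(N):
        for j in range(n):
            prev = [L[i - 1][j]] if i > 0 else []
            if j > 0:
                prev.append(L[i][j - 1])
            L[i][j] = (max(prev) if prev else 0.0) + W[i][j]
    return L[N - 1][n - 1]

def log_partition(W, beta):
    """log Z_{N,n}(beta), computed in the log domain to tolerate large beta."""
    N, n = len(W), len(W[0])
    F = [[0.0] * n for _ in range(N)]
    for i in range(N):
        for j in range(n):
            prev = [F[i - 1][j]] if i > 0 else []
            if j > 0:
                prev.append(F[i][j - 1])
            if prev:
                m = max(prev)
                base = m + math.log(sum(math.exp(p - m) for p in prev))
            else:
                base = 0.0
            F[i][j] = base + beta * W[i][j]
    return F[N - 1][n - 1]

random.seed(1)
W = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(6)]  # N=6, n=4
beta = 50.0
free_energy = log_partition(W, beta) / beta
lpp = last_passage(W)
slack = math.log(math.comb(8, 3)) / beta   # #paths = C(N+n-2, n-1) = C(8,3)
```

Already at $\beta=50$ the rescaled free energy sits within the entropy slack of the last passage time.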
When we assume that the first $p$ moments are finite for $p>2$, it was shown in [@BM] that (\[eq:LPPuniv\]) holds for $\alpha<\frac{6}{7}(\frac12- \frac1{p})$. This is obtained by using the Komlós-Major-Tusnády theorem instead of the Skorohod embedding theorem in the coupling argument. Note that the upper bound is $\frac37$ when $p=\infty$, which is in agreement with the result in this paper stated in Remark \[rem:KMT\] above. We remark that there is a very different proof of (\[eq:LPPuniv\]) by [@Suidan] assuming that $p=3$. This proof is based on a concentration inequality of [@Chatterjee]. The condition on $\alpha$ is the same as $\alpha<\frac{6}{7}(\frac12- \frac1{p})$ with $p=3$. It is an interesting question whether this approach can also be adapted to the polymer case.
The semi-discrete directed random polymer of O’Connell-Yor [@OY] is defined as follows.
Fix $t>0$ and $n\in {\ensuremath{\mathbb{N}}}$. To each ordered set $0=t_0 < t_1 < \cdots <t_{n-1} < t_n=t$, we associate an up/right semi-discrete path $\phi=\{(x, i): t_{i-1}\le x\le t_i\} $ in $[0,t] \times \{1, \ldots, n\}$. Given a family of independent standard one dimensional Brownian motions $B^i$, $1\leq i \leq n$, we define the energy of an up/right path $\phi$ as $$E(\phi) = B^1(t_1) + (B^2(t_2)-B^2(t_1)) + \cdots + (B^n(t_n)-B^n(t_{n-1})).$$ The partition function for the semi-discrete directed random polymer at inverse temperature $\beta>0$ is then defined as $$\label{def:semidiscrete}
{\ensuremath{\mathcal{Z}^{n}_{t}(\beta)}} = \int\limits_{0 =t_0< t_1 < \cdots <t_{n-1} < t_n=t} \exp \big(\beta E(\phi)\big) \; d\phi$$ where $d\phi$ denotes the Lebesgue measure over the variables $t_1,\ldots, t_{n-1}$.
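A Riemann discretization of the simplex integral above makes the definition concrete. The sketch below (our names, and only an approximation on a finite grid, not the continuum object) computes the same discretized sum both by a level-by-level recursion and by brute-force enumeration of the grid jump times:

```python
import itertools
import numpy as np

def semidiscrete_Z_dp(B, beta, dt):
    """Riemann-sum approximation of the O'Connell-Yor partition function,
    with jump times restricted to a grid of spacing dt.  B[j, k] holds
    B^j(k * dt); the recursion integrates out one jump time per level."""
    n, M = B.shape[0], B.shape[1] - 1
    G = np.exp(beta * B[0])                     # level 1, ending at time k*dt
    for j in range(1, n):
        acc, H = 0.0, np.empty(M + 1)
        for k in range(M + 1):
            acc += dt * G[k] * np.exp(-beta * B[j, k])
            H[k] = acc * np.exp(beta * B[j, k])
        G = H
    return G[M]

def semidiscrete_Z_brute(B, beta, dt):
    """Same Riemann sum, by direct enumeration of the grid jump times."""
    n, M = B.shape[0], B.shape[1] - 1
    total = 0.0
    for ks in itertools.combinations_with_replacement(range(M + 1), n - 1):
        idx = (0,) + ks + (M,)
        E = sum(B[j, idx[j + 1]] - B[j, idx[j]] for j in range(n))
        total += dt ** (n - 1) * np.exp(beta * E)
    return total

rng = np.random.default_rng(3)
dt, M, n = 0.25, 8, 3
incs = rng.normal(0.0, np.sqrt(dt), size=(n, M))
B = np.concatenate([np.zeros((n, 1)), np.cumsum(incs, axis=1)], axis=1)
gap = abs(semidiscrete_Z_dp(B, 0.9, dt) - semidiscrete_Z_brute(B, 0.9, dt))
```

The recursion is the $n$-level analogue of the transfer recursion for the discrete polymer, with a factor $dt$ per integrated jump time.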
Theorem \[mainthm\] is a consequence of the following two results which we prove in Section \[sec:proofs\]. The first is a coupling result of $Z_{N,n}(\beta)$ and ${\ensuremath{\mathcal{Z}^{n}_{N}(\beta)}}$. Here we do not assume anything on the relationship between $n$ and $N$.
\[compareprop\] Given $\delta>0$, there exist positive constants $C=C(\delta, \beta)$ and $c=c(\beta)$ such that $${\ensuremath{\mathbb{P}}}\bigg( \big |\log Z_{N,n}(\beta) - \log {\ensuremath{\mathcal{Z}^{n}_{N}(\beta)}} \big|>a \bigg)
\leq \frac{C n N^{\frac14+\delta}}{a} + CNne^{-c(a-\log(n!))^2/n^2} $$ for all $n,N\in {\ensuremath{\mathbb{N}}}$ and $a>0$.
\[rem:KMT2\] If we assume that the $W_{ij}$ have all moments finite, then we can couple the discrete and semi-discrete polymers using the Komlós-Major-Tusnády theorem and replace the term $N^{\frac14+\delta}$ by $N^{\delta}$. For further details, see Remark \[rem:KMTproof\]. It is easy to check that this improvement is enough to prove Theorem \[mainthm\] for $\alpha<\frac37$.
The second result is on the asymptotics for ${\ensuremath{\mathcal{Z}^{n}_{N}(\beta)}}$.
\[asyprop\] For each fixed $\alpha\in (0,1)$ and $\beta>0$, $$\lim_{t\to \infty} {\ensuremath{\mathbb{P}}}\bigg(\frac{\log {\ensuremath{\mathcal{Z}^{\lfloor t^{\alpha}\rfloor}_{\beta^2 t}(1)}} - 2\beta t^{1-\kappa}}{\beta t^{\mu}}\leq r\bigg)= F_{\rm{GUE}}(r)$$ for every fixed $r\in{\ensuremath{\mathbb{R}}}$, where $\kappa = \frac{1-\alpha}{2}$ and $\mu = \frac{3-\alpha}{6}$.
Note that for $\alpha\in (0,1)$ we have that $\kappa \in (0,1/2)$ and $\mu\in (1/3,1/2)$. For $\alpha=1$ an analog of the above result with $\mu=1/3$ is proved in [@BorCor Theorem 5.2.12] and [@BorCorFer Theorem 2.1]. In this case the form of the law of large numbers term which must be subtracted is more involved. Our proof of Proposition \[asyprop\] is an adaptation of the analysis of [@BorCor; @BorCorFer]. On the other hand, for $\alpha=0$, if the number of rows is kept fixed at $n$ then the semi-discrete polymer free energy converges (after $\mu=1/2$ scaling) to a semi-discrete last passage percolation time. It is known that this last passage time is exactly distributed as the largest eigenvalue of an $n\times n$ GUE random matrix (see [@Bary; @GTW]; see also Theorem 1.1 of [@OCon]). Thus, when $\alpha=0$, if we take $t$ to infinity first and then let $n$ tend to infinity, we also obtain the $F_{\rm{GUE}}$ distribution as the limiting fluctuation.
We can now prove Theorem \[mainthm\].
We apply Proposition \[asyprop\] with $t=N$ and note that Brownian scaling implies that ${\ensuremath{\mathcal{Z}^{n}_{t}(\beta)}}$ is equal in law to $\beta^{-2(n-1)}{\ensuremath{\mathcal{Z}^{n}_{\beta^2 t}(1)}}$. This reduces the proof of Theorem \[mainthm\] to the claim that $$\frac{\log Z_{N,n}(\beta) - \log {\ensuremath{\mathcal{Z}^{n}_{N}(\beta)}} - 2(N^{\alpha}-1)\log\beta}
{N^{\frac12-\frac{\alpha}6}} \;\quad \text {converges to zero in probability.}$$ We may neglect the term $2(N^{\alpha}-1)\log\beta$ due to the fact that $$\label{alphacomp}
N^{\alpha}\ll N^{\frac12-\frac{\alpha}6}\quad \textrm{for all } \alpha<3/7$$ as $N$ goes to infinity.
For any fixed $\epsilon>0$, we apply Proposition \[compareprop\] with $a=\epsilon N^{\frac12-\frac{\alpha}6}$ and $n=\lfloor N^{\alpha}\rfloor$ to see that for all $\delta>0$ there exist positive constants $c$ and $C$ such that $${\ensuremath{\mathbb{P}}}\bigg( \left|\frac{\log Z_{N,n}(\beta) - \log {\ensuremath{\mathcal{Z}^{n}_{N}(\beta)}}}{N^{\frac12- \frac{\alpha}{6}}}\right| > {\epsilon}\bigg) \leq \frac{C}{\epsilon} N^{\frac76\alpha-\frac14+\delta}+ CN^{1+\alpha}e^{-\eta(N)},$$ where $\eta(N) = cN^{-2\alpha} (a-\log(N^{\alpha}!))^2$. By Stirling’s approximation, $\log(N^\alpha !) \approx N^{\alpha}\log N^{\alpha} - N^{\alpha}$. Using (\[alphacomp\]) again, we find that $a\gg \log(N^{\alpha}!)$ for $N$ large, and hence $\eta(N) \geq c' N^{1-\frac{7}{3} \alpha}$ for some other constant $c'$ depending on $c$ and $\epsilon$.
As $N$ goes to infinity, the second term converges to zero if $\alpha<\frac37$ and the first term converges to zero if $\alpha< \frac{3}{14}$ by choosing $\delta$ to be small enough (but fixed in $N$). This completes the proof of Theorem \[mainthm\].
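The two elementary asymptotic facts used above, Stirling's estimate for $\log(N^\alpha!)$ and the domination $a \gg \log(N^\alpha!)$, can be checked numerically (illustrative only; the parameter choices below are ours):

```python
import math

# Stirling: log(m!) = m log m - m + O(log m); check the relative error.
m = 10 ** 4
stirling_err = abs(math.lgamma(m + 1) - (m * math.log(m) - m)) / math.lgamma(m + 1)

# For alpha < 3/7 the threshold a = N^{1/2 - alpha/6} dwarfs log(N^alpha !):
alpha, N = 0.2, 10 ** 8
n_rows = N ** alpha                      # about 39.8 "rows"
a = N ** (0.5 - alpha / 6.0)             # of order several thousand
log_fact = math.lgamma(n_rows + 1.0)     # log(N^alpha !), of order a hundred
ratio = a / log_fact
```

Here `math.lgamma(m + 1)` evaluates $\log(m!)$ exactly, so the check does not rely on the approximation being verified.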
### Acknowledgments {#acknowledgments .unnumbered}
We would like to thank Moreno Flores for bringing his work [@Moreno] to our attention. The work of Jinho Baik was supported in part by NSF grant DMS1068646. The work of Ivan Corwin was funded by Microsoft Research through the Schramm Memorial Fellowship.
Proofs {#sec:proofs}
======
We prove Proposition \[compareprop\] and Proposition \[asyprop\].
Proof of Proposition \[compareprop\]
------------------------------------
We compare the discrete polymer of general weight distribution (satisfying the conditions of Definition \[def:1\]) with the semi-discrete polymer by coupling them using the Skorohod embedding theorem. For much of the proof it will be convenient to rescale our processes so as to be functions in $C([0,1], \mathbb R^n)$, the space of continuous functions from $[0,1]$ to $\mathbb R^n$. We will denote such rescaled processes with an overbar to distinguish them from the unscaled versions. For two such functions $f=(f^1,\ldots,f^n)$ and $g=(g^1, \ldots, g^n)$ in $C([0,1],\mathbb R^n)$ define the metric $$\label{metric}
d(f,g)=\sum_{j=1}^{n} ||f^j - g^j||_{\infty},$$ where $$||f^j - g^j||_{\infty} = \sup_{t \in [0,1]} |f^j(t) - g^j(t)|.$$ Set $$\label{def:F}
F_N(f)= \log \bigg[ \sum_{\substack{\frac{1}{N}=t_0 \leq t_1 \leq \cdots \leq t_{n-1}\leq t_n = 1 \\ t_i \in \frac1{N}{\ensuremath{\mathbb{Z}}}}}
\exp \bigg( \beta \sum_{j=1}^{n} (f^j(t_j)-f^j(t_{j-1}-\tfrac{1}{N}))\bigg) \bigg]$$ for $N\in \mathbb N$. A basic property of $F_N$ is the following:
\[lem:Lip\] The function $F_N:C([0,1],\mathbb R^n) \rightarrow {\ensuremath{\mathbb{R}}}$ defined in (\[def:F\]) is Lipschitz continuous with Lipschitz constant less than or equal to $2\beta$ with respect to the metric (\[metric\]).
Set $A(f)= e^{F_N(f)}$. The triangle inequality implies that $A(f) \leq A(g)e^{2\beta d(f,g)}$. Therefore, assuming without loss of generality that $\frac{A(f)}{A(g)} \geq 1$, we have $|F_N(f)-F_N(g)| = \left|\log \frac{A(f)}{A(g)}\right| \leq 2\beta d(f,g)$.
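The Lipschitz bound can be verified by brute force for tiny $n$ and $N$ (our function names; grid values stand in for the continuous functions, which is exact for their piecewise linear interpolants):

```python
import itertools
import math
import random

def F_N(vals, beta):
    """F_N(f) from the defining sum: vals[j][k] = f^j(k/N) for k = 0..N;
    the indices run over grid times 1/N = t_0 <= t_1 <= ... <= t_n = 1."""
    n, N = len(vals), len(vals[0]) - 1
    total = 0.0
    for mid in itertools.combinations_with_replacement(range(1, N + 1), n - 1):
        t = (1,) + mid + (N,)
        e = sum(vals[j][t[j + 1]] - vals[j][t[j] - 1] for j in range(n))
        total += math.exp(beta * e)
    return math.log(total)

def dist(f, g):
    """d(f, g): sum over components of the sup-distance on the grid
    (exact for piecewise linear interpolants of the grid values)."""
    return sum(max(abs(a - b) for a, b in zip(fj, gj)) for fj, gj in zip(f, g))

random.seed(2)
beta, n, N = 0.8, 3, 5
f = [[random.gauss(0, 1) for _ in range(N + 1)] for _ in range(n)]
g = [[random.gauss(0, 1) for _ in range(N + 1)] for _ in range(n)]
lhs = abs(F_N(f, beta) - F_N(g, beta))
rhs = 2.0 * beta * dist(f, g)
```

Each function $f^j$ enters a given exponent at most twice (once at $t_j$ and once at $t_{j-1}-\frac1N$), which is exactly where the constant $2\beta$ comes from.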
The next lemma gives us a coupling between the random walks $S^j(k)$ in Definition \[def:1\] and the Brownian motions $B^j(t)$. This is the Skorohod embedding theorem (whose proof can essentially be found in [@Durrett Theorem 8.6.1]) applied to each walk $S^{j}(\cdot)$.
\[Skorohod\] For each $j$, there exist i.i.d. stopping times $\tau^{j}_k$, $k=1,2,\ldots$, (which are measurable with respect to the Brownian filtration of $B^j$) such that ${\mathbb{E}}[\tau^{j}_k] = 1$, ${\mathbb{E}}[(\tau^{j}_k)^2]~<~\infty$, and $$B^j(\tau_1^j+\cdots + \tau_k^j) \stackrel{d}{=} S^j(k), \qquad k=1,2,\ldots.$$
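A crude, time-discretized illustration of this embedding for $\pm 1$ steps (our sketch only; a true Skorohod embedding stops the Brownian motion exactly, while here the exit is located only up to the simulation step size):

```python
import numpy as np

def embed_pm1_steps(n_steps=400, dt=4e-3, seed=7):
    """Discretized Skorohod embedding of a +/-1 random walk step: run the
    Brownian motion until it has moved by 1 from its last stopping point.
    The stopped increments are (up to discretization overshoot) +/-1, and
    the stopping times tau_k satisfy E[tau_k] = E[step^2] = 1."""
    rng = np.random.default_rng(seed)
    taus, steps = [], []
    for _ in range(n_steps):
        b, t = 0.0, 0.0
        while abs(b) < 1.0:
            b += rng.normal(0.0, np.sqrt(dt))
            t += dt
        taus.append(t)
        steps.append(b)
    return np.array(taus), np.array(steps)

taus, steps = embed_pm1_steps()
mean_tau = taus.mean()
```

The empirical mean of the stopping times is close to $1$, matching ${\mathbb{E}}[\tau^j_k]=1$ in the lemma, and every stopped increment sits just beyond $\pm 1$ by at most one discretization step.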
Define $$\label{def:interpolation}
\bar S_N^j(t) = B^j(\tau_1^j+\cdots + \tau_{\lfloor Nt \rfloor}^j)
+ \big(Nt -\lfloor Nt \rfloor \big)
\big(B^j(\tau_1^j+\cdots + \tau_{\lceil Nt \rceil}^j) - B^j(\tau_1^j+\cdots + \tau_{\lfloor Nt \rfloor}^j)\big)$$ to be the piecewise linear interpolation of the Brownian motion sampled at times $\tau^1_1+\cdots+\tau^j_k$ for $1\leq k \leq N$, rescaled so as to be a function in $C([0,1],\mathbb R)$.
Consider the continuous vector valued function $\bar S_N(t) := (\bar{S}_N^1(t), \ldots, \bar{S}_N^n(t))\in C([0,1],\mathbb R^n)$. From Lemma \[Skorohod\], $\bar S_N^j(\frac{k}{N}) \stackrel{d}{=} S^j(k)$ (jointly over $1\le k\le N$ and $1\le j\le n$). Therefore, from the definition of $Z_{N,n}(\beta)$ together with (\[def:F\]) and (\[def:interpolation\]), one finds that $$\label{eq:distributioneq}
\log Z_{N,n}(\beta) \stackrel{d}{=} F_N\big(\bar S_N\big).$$
We also define the rescaling of the Brownian motions by $$\bar B_N(t) = (B^1(Nt), \ldots, B^n(Nt)).$$ The proof of Proposition \[compareprop\] is now obtained by establishing probabilistic bounds on $F_N\big(\bar S_N\big)-F_N\big(\bar B_N\big)$ and $F_N\big(\bar B_N\big)-\log {\ensuremath{\mathcal{Z}^{n}_{N}(\beta)}} $. The relevant estimates are obtained in Lemmas \[firstmainlemma\] and \[lem:estimate\], respectively.
\[firstmainlemma\] For any given $\delta>0$, there exists a positive constant $C$ such that $$\label{eq:bound}
{\ensuremath{\mathbb{P}}}\bigg(\left|F_N(\bar{S}_N)-F_N(\bar{B}_N)\right| \geq a \bigg) \leq \frac{C\beta n}{a}N^{\frac14+\delta}$$ for all $n, N\in \mathbb N$ and $a > 0$.
From Lemma \[lem:Lip\], the Markov inequality, and the union bound $$\label{eq:eq9}
\begin{split}
{\ensuremath{\mathbb{P}}}\bigg(\left|F_N(\bar{S}_N)-F_N(\bar{B}_N)\right| \geq a \bigg) &\leq {\ensuremath{\mathbb{P}}}\bigg(d(\bar S_N, \bar B_N) \geq \frac{a }{2 \beta } \bigg)
\leq \frac{2\beta n}{a} {\mathbb{E}}\| \bar{S}_N^1-\bar{B}^1_N\|_{\infty}.
\end{split}$$
Define the event $$A_N(u,\lambda) := \bigg\{ \sup_{1\leq i \leq N} \left|\sum_{\ell=1}^i \tau_\ell^1 -i\right|>u N^{1-\lambda}\bigg\}$$ for $u>0$ and $\lambda\in (0,1)$. From Doob’s inequality ([@Durrett Theorem 5.4.2 and Example 5.4.1]) and the definition of $\tau_l^1$ we have $$\label{eq:doob}
{\ensuremath{\mathbb{P}}}\big( A_N(u,\lambda) \big) \leq \frac{\text{Var} \; (\tau^1_1)}{N^{1-2\lambda}u^2}.$$ We now fix $\rho>0$ and write $$\label{eq:eq1}
\begin{split}
{\mathbb{E}}\| \bar{S}_N^1-\bar{B}^1_N\|_{\infty}
&= N^\rho \int_{0}^{\infty} {\ensuremath{\mathbb{P}}}\bigg(\| \bar{S}_N^1-\bar{B}^1_N\|_{\infty} > uN^{\rho}\bigg) \; du \\
&\le N^{\rho} + N^\rho \int_{1}^{\infty} {\ensuremath{\mathbb{P}}}\bigg(\| \bar{S}_N^1-\bar{B}^1_N\|_{\infty} > u N^{\rho} \bigg) \; du.
\end{split}$$ In order to estimate the last integral, we fix $\lambda\in (0,1)$ and divide the sample space into $A_N(u,\lambda)$ and $A_N(u,\lambda)^c$. From (\[eq:doob\]) we find that $$\label{eq:eq2}
N^\rho \int_{1}^{\infty}
{\ensuremath{\mathbb{P}}}\bigg(\| \bar{S}_N^1-\bar{B}^1_N \|_{\infty} > uN^{\rho}, \; A_N(u,\lambda)\bigg) \; du
\le \text{Var} \; (\tau^1_1) N^{\rho- 1+2\lambda} .$$
It remains to estimate $$\label{eq:eq1-1-1}
\begin{split}
N^\rho \int_{1}^{\infty}{\ensuremath{\mathbb{P}}}\bigg(\| \bar{S}_N^1-\bar{B}^1_N \|_{\infty} > u N^{\rho}, \; A_N(u,\lambda)^c \bigg) \; d u.
\end{split}$$ For this purpose, we note the following result of P. Lévy on the modulus of continuity of a Brownian motion. (See the proof of [@Durrett Theorem 8.4.2].)
\[lem:levy\] There are positive constants $K_1, K_2$ such that $$\label{scaledlevy}
{\ensuremath{\mathbb{P}}}\bigg( \sup_{\substack{s,t \in [0,1] \\ |t-s|\le r}} |B^1(s)-B^{1}(t)| > x \bigg)
\leq \frac{K_1}{r} e^{-K_2 \frac{x^2}{r}}$$ for all $r\in (0,1)$ and $x>0$.
By and the triangle inequality, $$\label{eq:eq15}
\begin{split}
&{\ensuremath{\mathbb{P}}}\bigg( \|\bar S^1_N(t) - \bar B^1_N(t)\|_{\infty} > u N^{\rho}, \; A(u,\lambda)^c \bigg) \\
& \leq {\ensuremath{\mathbb{P}}}\bigg( \sup_{t\in[0,1]} \left|B^1(\tau_1^1 + \cdots + \tau_{\lfloor Nt \rfloor}^1) - B^1(Nt)\right| > \frac12 uN^{\rho},
\; A(u,\lambda)^c \bigg) \\
&+ {\ensuremath{\mathbb{P}}}\bigg( \sup_{t\in[0,1]} \left|B^1(\tau_1^1 + \cdots + \tau_{\lceil Nt \rceil}^1) - B^1(\tau_1^1 + \cdots + \tau_{\lfloor Nt \rfloor}^1) \right| > \frac12 uN^{\rho}, \; A(u,\lambda)^c \bigg).
\end{split}$$ On $A(u,\lambda)^c$ we have $|\tau_1^1 + \cdots + \tau_{\lfloor Nt \rfloor}^1 - \lfloor Nt \rfloor |\le uN^{1-\lambda}$ for all $t\in [0,1]$. Since $\lambda\in (0,1)$, we have $uN^{1-\lambda}\ge N^{1-\lambda}\ge 1$ for all $u\ge 1$ and $N\ge 1$. Hence on $A(u,\lambda)^c$ we have for all $t \in [0,1]$ and $N\geq 1$ $$|\tau_1^1 + \ldots + \tau_{\lfloor Nt \rfloor}^1 -Nt|\le 2uN^{1-\lambda}.$$
Thus, the first term in the right side of is bounded above by $$\label{eq:eq16}
{\ensuremath{\mathbb{P}}}\bigg( \sup_{\substack{s,t \in [0,N+uN^{1-\lambda}] \\ |t-s|\le 2uN^{1-\lambda} }} |B^1(t) - B^1(s)| > \frac12 uN^{\rho} \bigg). $$
Similarly, the second term in the right side of is bounded above by $$\label{eq:eq16-1}
{\ensuremath{\mathbb{P}}}\bigg( \sup_{\substack{s,t \in [0,N+uN^{1-\lambda}] \\ |t-s|\le 3uN^{1-\lambda} }} |B^1(t) - B^1(s)| > \frac12 uN^{\rho} \bigg).$$ We must consider $s,t\in [0,N+uN^{1-\lambda}]$ because on the event $A(u,\lambda)^c$, $\tau_1^1 + \ldots + \tau_{\lceil Nt \rceil}^1\leq~N+uN^{1-\lambda}$.
We wish to estimate the above probabilities using Lemma \[lem:levy\]. To do that we must rescale time by the factor $N+uN^{1-\lambda}$. Thus define $\tilde{t} = (N+uN^{1-\lambda})^{-1} t$. Then $B^{1}(t) = (N+uN^{1-\lambda})^{1/2} \tilde{B}^{1}(\tilde{t})$ where $\tilde{B}^{1}$ is another standard Brownian motion. This rescaling reduces (\[eq:eq16\]) to the left-hand side of (\[scaledlevy\]) with $$r= \frac{2 u N^{1-\lambda}}{N+uN^{1-\lambda}}, \qquad x=\frac{uN^{\rho}}{2 \sqrt{N+uN^{1-\lambda}}}.$$ Applying Lemma \[lem:levy\] shows that $$\eqref{eq:eq16} \leq \frac{K_1}{2}\left(\frac{N^{\lambda}}{u} + 1\right) e^{-\frac{K_2}{8} u N^{2\rho - 1 +\lambda}}.$$ Likewise one sees that $$\eqref{eq:eq16-1} \leq \frac{K_1}{3}\left(\frac{N^{\lambda}}{u} + 1\right) e^{-\frac{K_2}{12} u N^{2\rho - 1 +\lambda}}.$$
Integrating the sum of these bounds over the interval $u\in (1, \infty)$ and multiplying by $N^{\rho}$, we find that for some other positive constants $K_3$ and $K_4$, $$\label{eq:eq17}
N^\rho \int_{1}^{\infty}{\ensuremath{\mathbb{P}}}\bigg(\| \bar{S}_N^1-\bar{B}^1_N \|_{\infty} > u N^{\rho}, \; A_N(u,\lambda)^c \bigg) \; d u
\le K_3 N^{1-\rho} e^{-K_4 N^{2\rho-1+\lambda}}$$ for all $N\ge 1$.
Combining (\[eq:eq1\]) with (\[eq:eq2\]) and (\[eq:eq17\]), we conclude that $$\label{eq:eq18}
{\mathbb{E}}\| \bar S_N^1-\bar{B}^1_N\|_{\infty} \leq
N^\rho+ \text{Var} \; (\tau^1_1) N^{\rho- 1+2\lambda}
+ K_3 N^{1-\rho} e^{-K_4 N^{2\rho-1+\lambda}}.$$ Recall that $\text{Var} \; (\tau^1_1)<\infty$ by Lemma \[Skorohod\]. Now choosing $\lambda= \frac12$ and $\rho=\frac14+\delta$ with $\delta>0$, Lemma \[firstmainlemma\] follows from (\[eq:eq9\]) and (\[eq:eq18\]).
\[rem:KMTproof\] If we assume finite exponential moments for the weight distributions then we can better approximate the random walk by a Brownian motion using the dyadic approximation of Komlós-Major-Tusnády. Indeed, using [@greg Theorem 7.1.1], one sees that in this case for any $\delta >0$ there exists a constant $C>0$ such that $${\mathbb{E}}\| \bar S_N^1-\bar{B}^1_N\|_{\infty} \leq C N^\delta.$$ Then from (\[eq:eq9\]) the term $N^{\frac{1}{4} + \delta}$ in Lemma \[firstmainlemma\] is improved to $N^{\delta}$. This in turn implies the same change in the first term of the bound in Proposition \[compareprop\]. It is easy to check that with this improved bound the proof of Theorem \[mainthm\] goes through under the weaker assumption that $\alpha<\frac{3}{7}$.
We now compare the partition function of the semi-discrete polymer with $F_N(\bar B_N )$.
\[lem:estimate\] There are positive constants $K_1$ and $K_2$ such that $${\ensuremath{\mathbb{P}}}\bigg( \left|F_N(\bar B_N) - \log {\ensuremath{\mathcal{Z}^{n}_{N}(\beta)}}\right| >a \bigg) \leq K_1nN e^{-\frac{K_2(a-\log (n!))^2}{4\beta^2n^2}}$$ for all $n,N\in {\ensuremath{\mathbb{N}}}$ and $a>0$.
The proof follows the same strategy as that of Lemma \[firstmainlemma\]. We will use Lemma \[lem:levy\]. For $u >0$, we define the event $$S(u) = \bigg \{ \sup_{\substack{s,t \in [0,N] \\ 0\leq |t-s|<1 \\ 1 \leq j \leq n }}
|B^j(t) - B^j(s)| > u \bigg\}.$$ By Lemma \[lem:levy\], Brownian scaling, and the union bound, $$\label{eq:bogus}
{\ensuremath{\mathbb{P}}}(S(u)) \leq K_1nN e^{-K_2 u^2}.$$
Let us now observe that we can write $$\label{ineq:32}
\begin{split}
{\ensuremath{\mathcal{Z}^{n}_{N}(\beta)}} &= \sum_{\substack{1=t_0\leq t_1 \leq \cdots \leq t_{n-1}\leq t_n =N\\ t_i \in {\ensuremath{\mathbb{Z}}}}}
A(t_0, \ldots, t_n)
\end{split}$$ where $$A(t_0, \ldots, t_n) :=\int_{D(t_0,\ldots, t_n)} e^{\beta E(\phi)} \; d\phi.$$ In order to define the domain $D(t_0,\ldots, t_n)$ over which $\phi$ is integrated, recall that $\phi$ can be specified by an ordered set $0=s_0<s_1<\cdots< s_{n-1}<s_n=N$. Then $D(t_0,\ldots, t_n)$ is the set of $\phi$ whose associated ordered set also satisfies $s_{i}\in [t_i -1, t_i)$ for $1\leq i \leq n-1$.
On the event $S(u)^c$, we have $$e^{ \beta \sum_{j=1}^n (B^j(t_{j}) - B^{j}(t_{j-1}-1))- 2\beta nu }
\leq e^{\beta E(\phi)} \leq e^{ \beta \sum_{j=1}^n (B^j(t_{j}) - B^{j}(t_{j-1}-1))+ 2\beta nu}.$$
Integrating the above inequality over $\phi\in D(t_0,\ldots, t_n)$ and noting that the Lebesgue measure of $D(t_0,\ldots, t_n)$ lies in $[(n!)^{-1},1]$, we find that on $S(u)^c$,
$$\label{eq:eq20}
(n!)^{-1} e^{ \beta \sum_{j=1}^n (B^j(t_{j}) - B^{j}(t_{j-1}-1))- 2\beta nu }
\leq A(t_0, \ldots, t_n) \leq e^{ \beta \sum_{j=1}^n (B^j(t_{j}) - B^{j}(t_{j-1}-1))+ 2\beta nu}.$$
Hence on $S(u)^c$, $$F_N(\bar B_N) - 2\beta nu -\log (n!)\leq \log {\ensuremath{\mathcal{Z}^{n}_{N}(\beta)}} \leq F_N(\bar B_N)+ 2\beta nu .$$ Thus from (\[eq:bogus\]) we find that $${\ensuremath{\mathbb{P}}}\bigg( \left|F_N(\bar B_N) - \log {\ensuremath{\mathcal{Z}^{n}_{N}(\beta)}} \right| >2\beta n u +\log(n!) \bigg) \leq
{\ensuremath{\mathbb{P}}}(S(u)) \leq K_1nN e^{-K_2 u^2}$$ for all $u>0$ and $n, N\in{\ensuremath{\mathbb{N}}}$. Setting $a=2\beta n u +\log(n!)$, we obtain the claimed result of this lemma.
The proof of Proposition \[compareprop\] follows immediately by combining the above lemmas with (\[eq:distributioneq\]).
Proof of Proposition \[asyprop\]
--------------------------------
This proof is based on the proof of Theorem 4.1.46 of [@BorCor] which establishes the GUE limiting distribution for the semi-discrete polymer with a constant ratio between $n$ and $t$. Both that result and the present result rely upon Theorem 5.2.10 in [@BorCor] which (specializing that result to the case of $a_1=\cdots=a_N=0$) yields:
\[OConFredDet\] Fix $n\geq 1$ and $\tau \geq 0$, and $0<\delta<1$. Then, $${\mathbb{E}}\left[e^{-u {\ensuremath{\mathcal{Z}^{n}_{\tau}(1)}}}\right] = \det(I+ K_{u})$$ where $K_u: L^2(C_0)\to L^2(C_0)$ is the operator defined by the kernel $$\label{Kudef}
K_{u}(v,v') = \frac{1}{2\pi \iota}\int_{-\iota \infty + \delta}^{\iota \infty +\delta}ds \Gamma(-s)\Gamma(1+s) \frac{\Gamma(v)^n}{\Gamma(s+v)^n} \frac{ u^s e^{v\tau s+ \tau s^2/2}}{v+s-v'}$$ and $C_{0}$ is a positively oriented contour containing $0$ and such that $|v-v'|<\delta$ for all $v,v'\in C_{0}$.
The above result for the Laplace transform leads to the desired asymptotic probability distribution by virtue of the following result which is given as Lemma 4.1.38 of [@BorCor].
\[problemma1\] Consider a one-parameter family of functions $\{f_t\}_{t\geq 0}$ mapping ${\ensuremath{\mathbb{R}}}\to [0,1]$ such that for each $t$, $f_t(x)$ is strictly decreasing in $x$ with a limit of $1$ at $x=-\infty$ and $0$ at $x=\infty$, and for each $\delta>0$, on ${\ensuremath{\mathbb{R}}}\setminus[-\delta,\delta]$ $f_t$ converges uniformly to ${\bf 1}(x\leq 0)$. Define the $r$-shift of $f_t$ as $f^r_t(x) = f_t(x-r)$. Suppose that a one-parameter family of random variables $X_t$ satisfies, for each $r\in {\ensuremath{\mathbb{R}}}$, $$\lim_{t\to \infty} {\mathbb{E}}[f^r_t(X_t)] = p(r)$$ for a continuous cumulative distribution function $p(r)$. Then $X_t$ converges weakly in distribution to a random variable $X$ which is distributed according to ${\ensuremath{\mathbb{P}}}(X\leq r) = p(r)$.
Let $\mu>0$ and $\kappa>0$ be defined as in the hypothesis of Proposition \[asyprop\]. Consider the functions $f_t(x) = e^{-e^{t^{\mu}x}}$. Observe that this family of functions meets the criteria of Lemma \[problemma1\]. By Lemma \[problemma1\], if for each $r\in {\ensuremath{\mathbb{R}}}$ we can prove that $$\lim_{t\to \infty} {\mathbb{E}}\left[f_t^r\bigg(\frac{\log {\ensuremath{\mathcal{Z}^{\lfloor t^{\alpha}\rfloor}_{\beta^2 t}(1)}} - 2\beta t^{1-\kappa}}{t^{\mu}}\bigg) \right] = F_{\rm{GUE}}(\beta^{-1} r),$$ then it will follow that $$\lim_{t\to \infty} {\ensuremath{\mathbb{P}}}\bigg(\frac{\log {\ensuremath{\mathcal{Z}^{\lfloor t^{\alpha}\rfloor}_{\beta^2 t}(1)}} - 2\beta t^{1-\kappa}}{t^{\mu}}\le r \bigg) = F_{\rm{GUE}}(\beta^{-1} r).$$
Observe that if we define $$\label{udef}
u=e^{-2\beta t^{1-\kappa} - rt^{\mu}},$$ then $$f_t^r\bigg(\frac{\log {\ensuremath{\mathcal{Z}^{\lfloor t^{\alpha}\rfloor}_{\beta^2 t}(1)}} - 2\beta t^{1-\kappa}}{t^{\mu}}\bigg)
=e^{-u{\ensuremath{\mathcal{Z}^{\lfloor t^{\alpha}\rfloor}_{\beta^2 t}(1)}}}.$$ Hence, in view of Theorem \[OConFredDet\], our proof now reduces to proving that for $u$ as in (\[udef\]), $n=\lfloor t^{\alpha}\rfloor$ and $\tau = \beta^2 t$, $$\label{prlimit}
\lim_{t\to \infty} \det(I+ K_{u}) = F_{\rm{GUE}}(\beta^{-1} r)$$ for each fixed $r\in{\ensuremath{\mathbb{R}}}$.
We prove (\[prlimit\]) using a steepest descent analysis. This analysis follows exactly the approach employed in the proof of Theorem 4.1.46 in [@BorCor]. As such, we only include the critical point analysis here. The necessary contour manipulations and tail bounds can readily be found in [@BorCor] and easily adapted. In what follows we replace $n=\lfloor t^{\alpha}\rfloor$ by $n=t^{\alpha}$ to make the notation simpler. It is straightforward to check that the error coming from this change is negligible.
Let us consider the kernel $K_u(v, v')$. We scale the kernel by setting $v=t^{-\kappa}\tilde{v}$ and $v'=t^{-\kappa}\tilde v'$ and define $\tilde{K}_u(\tilde v,\tilde v') = K_u(v, v') t^{-\kappa}$. Then $\det(I+ K_{u})= \det(I+ \tilde{K}_{u})$. By changing the variables as $s=t^{-\kappa}(\tilde{\zeta}-\tilde{v})$ and $\tilde{v}=t^{\kappa}v$ in the formula of the kernel, we have $$\label{zetaKeqn}
\tilde{K}_{u}(\tilde v,\tilde v') = \frac{1}{2\pi \iota}\int \frac{\pi t^{-\kappa}}{\sin (\pi t^{-\kappa} (\tilde v-\tilde \zeta))}\exp\left\{t^{\alpha} \left(G(\tilde v)-G(\tilde \zeta)\right)+ r t^{\frac{\alpha}{3}}(\tilde v-\tilde \zeta)\right\}\frac{d\tilde \zeta}{\tilde \zeta-\tilde v'}$$ where $$G( z) = \log \Gamma(t^{-\kappa} z) - \beta^2 z^2/2 + 2\beta z$$ and we have also used the identity $\Gamma(-s)\Gamma(1+s) = \pi / \sin(-\pi s)$. Here the contour for $\tilde\zeta$ in the integral may be taken as $\tilde v + \tilde \delta +\iota {\ensuremath{\mathbb{R}}}$ for any fixed $\tilde\delta>0$ and the operator $\tilde{K}_u$ may be defined on $L^2(\tilde{C}_0)$ where $\tilde{C}_0$ is a positively oriented contour containing $0$ and such that $|\tilde v-\tilde v'|<\tilde \delta$ for all $\tilde v,\tilde v'\in \tilde{C}_{0}$. The problem is now primed for a steepest descent analysis of the integral defining the kernel above.
The idea of steepest descent is to find critical points for the argument of the function in the exponential, and then to deform contours so as to go close to the critical point. The contours should be engineered such that away from the critical point, the real part of the function in the exponential decays and hence as $t$ gets large, has negligible contribution. This then justifies localizing and rescaling the integration around the critical point. The order of the first non-zero derivative (here third order) determines the rescaling in $t$ which in turn corresponds with the scale of the fluctuations in the problem we are solving. It is exactly this third order nature that accounts for the emergence of Airy functions and hence the Tracy Widom (GUE) distribution.
The digamma function is defined as $\Psi(z)=[\log \Gamma]'(z)$; together with its first two derivatives, it has the following expansions for $|z|$ small: $$\Psi(z) = \frac{-1}{z} + O(1),\qquad \Psi'(z) = \frac{1}{z^2} + O(1), \qquad \Psi''(z) = -\frac{2}{z^3} + O(1).$$ Let us then record the first three derivatives of $G$ along with their large $t$ expansions (afforded by the above expansions) $$\begin{aligned}
G'(\tilde v) =& \Psi(t^{-\kappa} \tilde v)t^{-\kappa} - \beta^2 \tilde v + 2\beta& = -\frac{1}{\tilde v} - \beta^2 \tilde v +2\beta + O(t^{-\kappa})\\
G''(\tilde v) = & \Psi'(t^{-\kappa} \tilde v)t^{-2\kappa}-\beta^2 &= \frac{1}{\tilde v^2} - \beta^2 + O(t^{-2\kappa})\\
G'''(\tilde v) = & \Psi''(t^{-\kappa} \tilde v)t^{-3\kappa} &= -\frac{2}{\tilde v^3} + O(t^{-3\kappa}).\end{aligned}$$
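These small-argument expansions are easy to confirm numerically (an illustrative check with SciPy's special functions; the paper of course uses the exact asymptotics):

```python
from scipy.special import digamma, polygamma

z = 1e-4
# digamma(z) = -1/z - gamma + O(z): the residual approaches Euler's constant
gamma_est = -(digamma(z) + 1.0 / z)
# polygamma(1, z) = 1/z^2 + O(1) and polygamma(2, z) = -2/z^3 + O(1)
curv = polygamma(1, z) * z ** 2      # should be close to 1
third = polygamma(2, z) * z ** 3     # should be close to -2
```

The $O(1)$ corrections are visible in the residuals: for $\Psi$ the constant term is $-\gamma \approx -0.5772$, Euler's constant.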
The critical point is the value of $\tilde v=\tilde v_c$ at which $G'(\tilde v_c) =0$. However, it suffices to choose $\tilde v_c$ such that $$-\frac{1}{\tilde v_c} - \beta^2 \tilde v_c +2\beta =0$$ since then $G'(\tilde v_c) = O(t^{-\kappa})$. Solving the above gives $\tilde v_c = \beta^{-1}$. This choice of $\tilde v_c$ implies that $$\begin{aligned}
G'(\tilde v_c) &=& O(t^{-\kappa})\\
G''(\tilde v_c) &= & O(t^{-2\kappa})\\
G'''(\tilde v_c) &= & -2\beta^3 + O(t^{-3\kappa}).\end{aligned}$$ Note that by taking $\tilde\delta$ small enough, it is possible to deform the contour of (\[zetaKeqn\]) to pass through the critical point $\tilde \zeta= \beta^{-1}$.
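The algebra behind the choice $\tilde v_c = \beta^{-1}$ can be checked directly (illustrative only): the leading-order first and second derivatives vanish there, while the third tends to $-2\beta^3$.

```python
def limiting_derivatives(beta):
    """Leading-order G', G'', G''' evaluated at v = 1/beta
    (i.e. the t -> infinity limits of the expansions above)."""
    v = 1.0 / beta
    g1 = -1.0 / v - beta ** 2 * v + 2.0 * beta   # leading order of G'
    g2 = 1.0 / v ** 2 - beta ** 2                # leading order of G''
    g3 = -2.0 / v ** 3                           # leading order of G'''
    return g1, g2, g3

residuals = [limiting_derivatives(b)[:2] for b in (0.5, 1.0, 2.0)]
thirds = [limiting_derivatives(b)[2] + 2.0 * b ** 3 for b in (0.5, 1.0, 2.0)]
```

The simultaneous vanishing of the first two derivatives is what produces the cubic behavior in the exponent, and hence the Airy-type limit.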
Therefore we may make the final change of variables to expand around $\tilde v_c$ by setting $\tilde v = \tilde v_c + t^{-\frac{\alpha}{3}} \hat{v}$. From Taylor approximation and the above bounds on the derivatives of $G$ we find that $$G(\tilde{v}) = G(\tilde v_c) -\frac{\beta^3}{3} t^{-\alpha} (\hat{v})^3 + lot$$ where $lot$ denotes lower order terms in $t$. We also set $\tilde \zeta= \tilde v_c + t^{-\frac{\alpha}{3}} \hat{\zeta}$. We may conclude from the above argument that (note that the constants $G(\tilde v_c)$ cancel) $$\label{Gexp}
t^{\alpha} \left(G(\tilde v)-G(\tilde \zeta)\right) = -\frac{\beta^3}{3}(\hat{v})^3 + \frac{\beta^3}{3}(\hat{\zeta})^3 + o(1).$$ Now we can turn to the remaining parts of the kernel. Note that $$r t^{\frac{\alpha}{3}}(\tilde v-\tilde \zeta) = r (\hat{v}-\hat{\zeta}).$$ As for the other two terms in $K_u$ we have $$t^{-\frac{\alpha}{3}} \frac{\pi t^{-\kappa}}{\sin (\pi t^{-\kappa} (\tilde v-\tilde \zeta))} \to \frac{1}{\hat{v}-\hat{\zeta}}, \qquad \frac{d\tilde \zeta}{\tilde \zeta-\tilde v'} =\frac{d\hat{\zeta}}{\hat{\zeta}-\hat{v'}}$$ where the $ t^{-\frac{\alpha}{3}}$ comes from the Jacobian associated with the change of variables from $\tilde{v}$ to $\hat{v}$.
Putting together the pieces we have that (modulo the necessary tail estimates and uniformity which can be obtained as in the proof of Theorem 4.1.46 in [@BorCor]) $$\lim_{t\to \infty} \det(I+K_u) = \det(I+\hat{K}_r)$$ where $$\hat{K}_{r}: L^2(C_v)\to L^2(C_v)$$ for $C_{v}$ a contour given by rays from the origin at angles $\pm 2\pi/3$ oriented to have increasing imaginary part. The operator $\hat{K}_r$ is defined in terms of its integral kernel $$\label{hatKrdef}
\hat{K}_{r}(\hat{v},\hat{v'}) = \frac{1}{2\pi \iota} \int_{C_{\zeta}} \frac{e^{ - \frac{\beta^3}{3} \hat{v}^3 +r\hat{v}}}{e^{ - \frac{\beta^3}{3} \hat{\zeta}^3 +r\hat{\zeta}}} \frac{d\hat{\zeta}}{(\hat{v}-\hat{\zeta})(\hat{\zeta}-\hat{v'})},$$ where $C_{\zeta}$ is a contour given by rays from $d$ (for some $d>0$) at angles $\pm \pi/3$ oriented to have increasing imaginary part. But it is known that (see Proof of Theorem 4.1.46 in [@BorCor]) $$\det(I+\hat{K}_r) = F_{\rm{GUE}}(\beta^{-1} r).$$ This proves (\[prlimit\]) and hence completes the proof of the proposition.
[alpha]{} J. Baik, T. Suidan. A GUE central limit theorem and universality of directed first and last passage site percolation. , [**6**]{}:325–338, 2005.
Y. Baryshnikov. GUEs and queues. , [**119**]{}:256–274, 2001.
T. Bodineau, J. B. Martin. A universality property for last-passage percolation paths close to the axis. , [**10**]{}: 105–112, 2005.
A. Borodin, I. Corwin. Macdonald processes. arXiv:1111.4408.
A. Borodin, I. Corwin, P. L. Ferrari. Free energy fluctuations for directed polymers in random media in $1+1$ dimension. arXiv:1204.1024.
S. Chatterjee. A simple invariance theorem. arXiv:math/0508213v1.
I. Corwin, N. O’Connell, T. Seppäläinen, N. Zygouras. Tropical combinatorics and Whittaker functions. arXiv:1110.3489.
R. Durrett. Probability: Theory and Examples. 4th edition, [*Cambridge U. Press*]{}, 2010.
P. Glynn, W. Whitt. Departures from many queues in series. , 4:546–572, 1991.
J. Gravner, C.A. Tracy and H. Widom. Limit theorems for height fluctuations in a class of discrete space and time growth models. , [**102**]{}:1085–1132, 2001.
G. Lawler, V. Limic. Random Walk: A Modern Introduction. , 2010.
G. Moreno Flores. Modèles de polymères dirigés en milieux aléatoires. Doctoral thesis, 2010.
J. Moriarty, N. O’Connell. On the free energy of a directed polymer in a Brownian environment. , [**13**]{}:251–266, 2007.
N. O’Connell. Directed polymers and the quantum Toda lattice. , [**40**]{}:437–458, 2012.
N. O’Connell, M. Yor. Brownian analogues of Burke’s theorem. , [**96**]{}:285–304, 2001.
T. Seppäläinen. A scaling limit for queues in series. , 855–872, 1997.
T. Seppäläinen. Scaling for a one-dimensional directed polymer with boundary conditions. , [**40**]{}:19–73, 2012.
T. Suidan. A remark on a theorem of Chatterjee and last passage percolation. , [**39**]{}:8977–8981, 2006.
C.A. Tracy and H. Widom. Level-spacing distributions and the Airy kernel. , [**159**]{}:151–174, 1994.
[^1]: There is presently only one known [*exactly solvable*]{} positive temperature discrete polymer. It has weights which are distributed as the logarithm of inverse Gamma random variables. Introduced by Seppäläinen [@Sep], this polymer has an explicit product invariant measure which allowed him to compute the law of large numbers for the free energy as well as a tight upper bound on the free energy fluctuation scaling exponent for this model. Due to the tropical RSK correspondence [@COSZ], this polymer also fits into the hierarchy of solvable models studied in [@BorCor]. Though it has not yet been done, this should enable a rigorous proof of the GUE Tracy-Widom limit theorem for any size rectangle as long as both $n$ and $N$ go to infinity. Due to the work of [@OCon], another model which fits into the hierarchy of [@BorCor] is the semi-discrete directed polymer model, introduced by O’Connell-Yor [@OY] (see discussion below Proposition \[asyprop\]).
|
---
abstract: 'We report on the universality of height fluctuations at the crossing point of two interacting $1+1$-dimensional Kardar-Parisi-Zhang interfaces with curved and flat initial conditions. We introduce a control parameter $p$ as the probability for the initially flat geometry to be chosen and compute the phase diagram as a function of $p$. We find that the distribution of the fluctuations converges to the Gaussian orthogonal ensemble Tracy-Widom (TW) distribution for $p<0.5$, and to the Gaussian unitary ensemble TW distribution for $p>0.5$. For $p=0.5$ where the two geometries are equally weighted, the behavior is governed by an emergent Gaussian statistics in the universality class of Brownian motion. We propose a phenomenological theory to explain our findings and discuss possible applications in nonequilibrium transport and traffic flow.'
author:
- Abbas Ali Saberi
- 'Hor Dashti-N.'
- Joachim Krug
bibliography:
- 'refs.bib'
title: ' Competing Universalities in Kardar$-$Parisi$-$Zhang Growth Models '
---
Scale invariant fluctuations play a central role in the emergence of universal properties in complex random systems interconnecting various areas of physics, mathematics and statistical mechanics. Whereas the concept of universality classes is well established in the theory of equilibrium phase transitions [@Binder1981], our understanding of systems driven out of equilibrium is much less complete [@Henkel2008]. The Kardar$-$Parisi$-$Zhang (KPZ) equation [@KPZ1986] governing the evolution of the surface height $h(\textbf{x},t)$, $$\label{Eq1}
\partial_t h(\textbf{x},t)=\nu\nabla^2 h+\frac{\lambda}{2}(\nabla h)^2+\eta(\textbf{x},t),$$ is a prototypical model for describing nonequilibrium growing interfaces with a wide range of theoretical and experimental applications [@Krug1991; @Stanley1995; @HHZ1995; @Krug1997]. The first term in (\[Eq1\]) represents relaxation of the interface caused by the surface tension $\nu$, the second describes the nonlinear growth locally normal to the surface, and the last term is uncorrelated Gaussian white noise in space and time with zero average $\langle\eta(\textbf{x},t)\rangle=0$ and covariance $\langle\eta(\textbf{x},t)\eta(\textbf{x}',t')\rangle=2D\delta^d(\textbf{x}-\textbf{x}')\delta(t-t')$, representing the stochastic nature of the growth process. One recovers the Edwards$-$Wilkinson equation for $\lambda=0$.
The universality class of randomly growing interfaces is usually characterized by the scaling exponents defined by Family$-$Vicsek scaling [@Family1985] i.e., $w^2(t,l)\sim t^{2\beta}
f(l/t^{\beta/\alpha})$, in terms of the second moment $w^2(t,l)$ of the height fluctuations at a measurement scale $l$ at time $t$, where $f(x)\rightarrow$ const as $x\rightarrow\infty$ and $f(x)\sim
x^{2\alpha}$ as $x\rightarrow 0$. Thus $w^2$ grows with time like $t^{2\beta}$ until it saturates to $l^{2\alpha}$ when $t\sim l^{\alpha/\beta}$. The universality class is characterized by the exponents $\alpha$ and $\beta$ (the roughness and the growth exponents, respectively), whose exact values for the KPZ equation are known only in 1+1 dimensions (1+1 D) as $\alpha=1/2$ and $\beta=1/3$.
In a series of pioneering works, it has been shown that the universality in various growth models belonging to the KPZ class holds beyond the second moment [@Krug1992; @Krug2010RMT; @Takeuchi2011]. Unexpectedly, the height fluctuations of the 1+1 D single-step model (SSM) [@Meakin1986SSM] grown from a point seed were found to be governed [@Johansson2000] by the Tracy-Widom (TW) distribution of the Gaussian unitary random matrix ensemble (GUE) [@Mehta2004]. Thereafter, it was reported [@prahofer2000statistical; @prahofer2000universal] that the radial 1+1 D polynuclear growth (PNG) model also follows the TW GUE distribution, and in addition, the Gaussian orthogonal ensemble (GOE) determines the universality of the 1+1 D KPZ growth models on a flat substrate [@prahofer2000universal]. Recently, exact solutions of the 1+1 D KPZ equation have confirmed the TW GUE distribution for the height fluctuations on the curved (wedge-like) [@sasamoto2010one; @amir2011probability] and the TW GOE distribution on the flat geometries [@calabrese2011exact]. The key question of interest in this Letter is how these two GOE and GUE universalities compete when two different 1+1 D KPZ growth models adopting the flat and curved geometries meet each other at a single common point (Fig. \[fig:fw\]).
![\[fig:fw\] (color online) Schematic of the crossing flat$-$wedge geometry with a single common site in the middle. ](fig1.pdf){width="0.8\columnwidth"}
The SSM is a solid-on-solid growth model in the KPZ class in which at each time step on a 1 D (flat or wedge-like) lattice of size $L$, one site $-L/2 \leq j < L/2$ is randomly chosen, and if it is a local minimum the height $h(j)$ is increased by $2$. The initial conditions at $t=0$ are $h^f_0(j) = [1-(-1)^j]/2$ and $h^w_0(j)=|j|$ for the flat and wedge geometries, respectively. This definition guarantees that at each step, the height difference between two neighboring sites is $\pm 1$. The SSM is the growth model representation [@Rost1981] of the totally asymmetric simple exclusion process (TASEP) in 1 D, a paradigmatic model for driven transport of a single conserved quantity [@Krug2010RMT].
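As an illustration, the single-step rule can be sketched in a few lines of Python (the lattice size, number of trials, and random seed below are arbitrary choices for this example, not the values used in the Letter); the sketch verifies that the $\pm 1$ constraint on neighboring height differences is preserved by the dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

def ssm_trial(h):
    """One deposition attempt of the single-step model (SSM):
    pick a random site; if it is a local minimum (both neighbours
    higher), raise its height by 2.  Periodic boundaries."""
    L = len(h)
    j = rng.integers(L)
    if h[j] < h[(j - 1) % L] and h[j] < h[(j + 1) % L]:
        h[j] += 2

# flat initial condition h_0(j) = [1 - (-1)^j]/2, i.e. 0, 1, 0, 1, ...
L = 64
h = np.arange(L) % 2

for _ in range(10 * L):          # 10 time units = 10 trials per lattice site
    ssm_trial(h)

# the single-step constraint |h(j+1) - h(j)| = 1 survives the dynamics
assert np.all(np.abs(np.diff(np.append(h, h[0]))) == 1)
```

Growth at a local minimum turns it into a local maximum, which is why the constraint is conserved step by step.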
![\[fig:fw\_w2\] (color online) Main: Second moment of the height fluctuations at the crossing point of the flat-wedge geometry as a function of time for several $p$ from bottom to top. The dashed line shows the scaling prediction $w^2\sim t^{2\beta}$ for the 1+1 D KPZ equation with growth exponent $\beta=1/3$. All curves are shifted by a constant for ease of comparison. Inset: The crossover from 1+1 D KPZ scaling at earlier times to Brownian motion (BM) statistics in the long-time limit for $p=0.5$. In order to clearly observe the crossover to the BM regime, the simulations for $p=0.5$ were carried out up to time $t=10^6$. ](fig2.pdf){width="0.8\columnwidth"}
Here we consider growth on two crossing flat-wedge substrates subject to the same growth rules but with one exception at the origin $\textbf{x}=\textbf{0}$, where the two geometries meet. The origin is the only site with four nearest neighbors, the heights of which have to exceed the height at $\textbf{0}$ by one for growth to take place. This Letter studies the statistics of the fluctuations of the height $h(\textbf{0},t)$ at the crossing point at time $t$. Here time is defined in terms of the number of deposition trials per lattice site, either successful or not. The initial conditions are set as mentioned above for each geometry so that $h^f_0(\textbf{0})=h^w_0(\textbf{0})=0$. Periodic boundary conditions are applied along both geometries. At each time step, one of the two crossing geometries is chosen: the flat one with probability $p$ (the only parameter in our study) and the wedge with probability $1-p$. A site $j$ is then randomly chosen for the growth process. In the TASEP representation this corresponds to two single-lane exclusion processes which meet at an intersection. The growth rule at the origin implies that the particles on the two lanes are forced to cross the intersection simultaneously. TASEP-like traffic flow models with intersections have been studied before, but with different crossing rules and without considering the current fluctuations at the intersection [@Nagatani1993; @Ishibashi1996; @Foulaadvand2004; @Foulaadvand2007; @Belbasi2008; @Embley2009; @Hilhorst2012; @Raguin2013].
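The crossing rule can be made concrete with a hypothetical Python sketch of one growth trial in the flat-wedge geometry (lattice size, $p$, trial count, and seed are illustrative choices, not the simulation parameters of the Letter); the two lattices share the origin, which grows only when all four of its neighbours exceed its height by one:

```python
import numpy as np

rng = np.random.default_rng(1)

L, p = 64, 0.3
j_axis = np.arange(-L // 2, L // 2)
h_f = j_axis % 2            # flat:  h_0(j) = [1 - (-1)^j]/2
h_w = np.abs(j_axis)        # wedge: h_0(j) = |j|
i0 = L // 2                 # array index of the origin j = 0

def trial():
    h = h_f if rng.random() < p else h_w       # choose a geometry
    j = rng.integers(L)                        # choose a site
    if j == i0:
        # the origin has four neighbours; all must exceed it by one
        nbrs = (h_f[i0 - 1], h_f[i0 + 1], h_w[i0 - 1], h_w[i0 + 1])
        if all(n == h_f[i0] + 1 for n in nbrs):
            h_f[i0] += 2
            h_w[i0] = h_f[i0]                  # keep the shared site in sync
    elif h[j] < h[(j - 1) % L] and h[j] < h[(j + 1) % L]:
        h[j] += 2

for _ in range(50 * L):
    trial()

assert h_f[i0] == h_w[i0]                      # single common site
assert np.all(np.abs(np.diff(h_f)) == 1)       # SSM constraint on both lattices
assert np.all(np.abs(np.diff(h_w)) == 1)
```

The four-neighbour condition at `i0` is what forces the two lanes of the TASEP picture to cross the intersection simultaneously.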
Let us first examine the Family$-$Vicsek scaling for the second moment of the height fluctuations at the origin i.e., $w^2(t)=\langle
h^2(\textbf{0},t)\rangle-\langle h(\textbf{0},t)\rangle^2$, for different values of $p$. As Fig. \[fig:fw\_w2\] demonstrates, all curves for $p\ne0.5$ follow the scaling law $w^2\sim t^{2\beta}$ with the growth exponent $\beta=1/3$ predicted for the $1+1$ D KPZ equation. A remarkable observation is that for $p=0.5$, when both geometries are picked with equal probability, the variance of the height at early times behaves as in the KPZ class, but later it crosses over to the universality class of Brownian motion (BM), i.e., $w^2\sim t$, with Gaussian statistics (see below).
![\[fig:v\_inf(p)\] (color online) $v_\infty$ (main panel) and $\Gamma_n$ (inset) for the flat$-$wedge geometry as a function of $p$. ](fig3.pdf){width="0.8\columnwidth"}
Until now our analysis has revealed two interesting facts: First, the point with $p=0.5$ acts as a distinguished fixed point with characteristic Gaussian statistics in the universality class of Brownian motion, and, second, for $p\ne 0.5$ the statistics of the height fluctuations at the crossing point—despite the existence of four nearest neighbors—is compatible with that of the 1+1 D KPZ equation, whose long-time statistics converges to the TW GUE/GOE distribution depending on the narrow-wedge/flat initial condition. One might naively expect that for $p>0.5$, for which the flat geometry is chosen with higher probability, the height fluctuations would converge to the GOE statistics, and for $p<0.5$, where the wedge geometry is more likely to be picked, they should be compatible with the GUE distribution. As we will show in the following, our results unveil exactly the opposite behavior. The local height of a 1+1 D KPZ interface is asymptotically given by the following relation [@Takeuchi2011], $$\label{eq:h}
h = v_\infty t + s_\lambda (\Gamma t)^{1/3} \chi,$$ where $s_\lambda=\operatorname{sgn}(\lambda)$ is the sign of the nonlinear parameter $\lambda$ in the KPZ Eq. (\[Eq1\]), $ v_\infty $ and $ \Gamma $ are non-universal parameters and $\chi$ is a stochastic variable with a universal TW distribution depending on the flat/wedge growth geometry. We estimate the parameter $v_\infty$ by extrapolating ${\langle h \rangle}/t $ versus $t^{-2/3}$, as an intercept in a linear regression in the $ t \to \infty $ limit, i.e., ${\langle h \rangle}/t = v_\infty +
s_\lambda \Gamma^{1/3} {\langle \chi \rangle} t^{-2/3}$ [@Krug1990]. We carried out extensive simulations to generate height profiles of the SSM on the flat$-$wedge geometry of linear size $L=2^{13}$ up to time $t=2\times 10^{4}$ for several values of $p$: $0.2$, $0.3$, $0.4$, $0.45$, $0.48$, $0.49$, $0.5$, $0.51$, $0.52$, $0.55$, $0.6$, $0.7$, $0.8$. For each dataset, an ensemble of $7\times10^5$ independent realizations was generated.
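The extrapolation just described amounts to a degree-one least-squares fit of $\langle h\rangle/t$ against $t^{-2/3}$, whose intercept is $v_\infty$. A self-contained check on synthetic data (the values of $v_\infty$ and the correction amplitude below are made up for the demonstration):

```python
import numpy as np

# synthetic <h>(t) obeying the asymptotic form <h> = v_inf t + c t^{1/3},
# with illustrative (not measured) parameters
t = np.logspace(2, 4, 50)
v_inf_true, c = 0.3, -0.8
h_mean = v_inf_true * t + c * t ** (1.0 / 3.0)

# <h>/t = v_inf + c t^{-2/3}: a straight line in x = t^{-2/3}
slope, intercept = np.polyfit(t ** (-2.0 / 3.0), h_mean / t, 1)

assert abs(intercept - v_inf_true) < 1e-8      # intercept recovers v_inf
assert abs(slope - c) < 1e-8                   # slope recovers the correction
```

On real (noisy) data the fit would of course carry statistical errors, but the intercept still estimates $v_\infty$ in the $t\to\infty$ limit.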
As shown in Fig. \[fig:v\_inf(p)\], we numerically find a simple relation for $v_\infty$ as a function of the parameter $p$, $$\label{eq:min}
v_\infty(p)=\min(p, 1-p).$$ Contrary to the naive expectation, this implies that the substrate with the *smaller* growth probability dominates the coupled process. To see why this is so, recall that the asymptotic growth rate of a single 1+1 D SSM interface with periodic boundary conditions is given by $v_\infty = \frac{\gamma}{2} (1-u^2)$, where $\gamma$ is the rate of deposition attempts and $u \in [-1,1]$ is the surface slope [@Krug1992; @Krug2010RMT]. Because the growth rate is maximal at $u=0$, an SSM interface can lower its growth rate by developing a nonzero slope, but it cannot increase its growth rate beyond $\gamma/2$ [@Wolf1990; @Krug1997]. In the present setting $\gamma = 2p$ for the flat geometry and $\gamma = 2(1-p)$ for the wedge geometry, respectively. To accommodate a common growth rate at the origin, for $p < 0.5$ the flat interface grows at maximal speed $v_\infty = p$ whereas the wedge interface maintains a nonzero tilt $u = \sqrt{\frac{1-2p}{1-p}}$. For $p > 0.5$ the roles of the two substrates are interchanged and the initially flat interface becomes wedge-shaped (Fig. \[fig:ht\]).
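As a quick numerical consistency check of this argument (a verification added here, not part of the original analysis): for $p<0.5$ the tilted wedge, with $\gamma/2 = 1-p$ and $u^2=(1-2p)/(1-p)$, grows at exactly the flat interface's maximal speed $p$, reproducing $v_\infty(p)=\min(p,1-p)$:

```python
# v_inf = (gamma/2)(1 - u^2) with gamma/2 = 1 - p and u^2 = (1-2p)/(1-p):
# (1-p)(1 - (1-2p)/(1-p)) = (1-p) - (1-2p) = p, the flat interface's speed.
for p in (0.1, 0.2, 0.3, 0.4, 0.45):
    u2 = (1 - 2 * p) / (1 - p)
    v_wedge = (1 - p) * (1 - u2)
    assert abs(v_wedge - p) < 1e-12     # matches v_flat = p = min(p, 1-p)
```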
![\[fig:ht\] (color online) Snapshots for the time evolution of the height profiles on the flat$-$wedge geometry for $t=0$ (left column), $t=200$ (second column), and $t=2000$ (right column) for $p=0.3$ (first row), $p=0.5$ (second row) and $p=0.7$ (third row) corresponding to the GOE, Gaussian (BM) and GUE universality classes, respectively. ](fig6.pdf){width="1\columnwidth"}
We next show that the dominance of the slower geometry extends also to the height fluctuations at the origin. In order to estimate the parameter $\Gamma$ in Eq. (\[eq:h\]) we define $g_n \equiv {\langle h^n \rangle}_c/s^n_\lambda t^{n/3} = \Gamma^{n/3} {\langle \chi^n \rangle}_c$, where $ {\langle \chi^n \rangle}_c $ denotes the $n$th cumulant of the random variable $\chi$. We write $\Gamma_n = [g_n/{\langle \chi^n \rangle}_c]^{3/n}$ for the value of $\Gamma$ estimated from the $n$th cumulant. All estimates have to give rise to the same value assuming that the cumulants of $\chi$ are those of the corresponding TW GOE or GUE distributions. To find the possible TW distributions, we use two dimensionless $\Gamma$-independent measures, i.e., the skewness $S=g_3/g_2^{3/2}$ and the kurtosis $K=g_4/g_2^2$, and compare them with those of the TW distributions. Figure \[fig:fw\_SK\] represents the most remarkable finding of our study: For $p<0.5$ the statistics of the height fluctuations of the crossing point in the wedge$-$flat geometry is determined by the TW GOE distribution, and, for $p>0.5$ it is governed by the TW GUE distribution. Therefore we adopt the corresponding cumulants of the TW distributions into the above relations to extract $\Gamma_n$. We find that all $\Gamma_n$ follow the same simple relation with $p$ as we found for $v_\infty(p)$, i.e., $\Gamma(p)=\min(p, 1-p)$—see the inset of Fig. \[fig:v\_inf(p)\]. The relation $\Gamma = v_\infty$ is a known property of the SSM [@Krug1992].
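The two $\Gamma$-independent measures can be estimated from sample cumulants; a minimal numpy sketch (the Gaussian draw below stands in for measured height data: for Gaussian/BM statistics both measures vanish, whereas the TW distributions have nonzero skewness, roughly $S\approx 0.29$ for GOE and $S\approx 0.22$ for GUE):

```python
import numpy as np

def skew_kurt(x):
    """Skewness S = k3 / k2^{3/2} and kurtosis K = k4 / k2^2 built
    from sample cumulants k_n; both drop the scale factor Gamma."""
    m = x - x.mean()
    k2 = np.mean(m ** 2)
    k3 = np.mean(m ** 3)
    k4 = np.mean(m ** 4) - 3 * k2 ** 2          # fourth cumulant
    return k3 / k2 ** 1.5, k4 / k2 ** 2

rng = np.random.default_rng(2)
S, K = skew_kurt(rng.normal(size=200_000))      # Gaussian surrogate data
assert abs(S) < 0.05 and abs(K) < 0.05          # both vanish for BM statistics
```

Applied to the measured $h(\textbf{0},t)$ ensembles instead of the surrogate data, the same estimator distinguishes the Gaussian, TW GOE, and TW GUE regimes.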
![\[fig:fw\_SK\] (color online) Skewness (main panel) and kurtosis (inset) for the flat$-$wedge geometry as a function of $p$. ](fig4.pdf){width="0.8\columnwidth"}
Now we can directly check for universality by comparing the height fluctuation distribution with the analytic TW predictions. For this, we define a new variable $q=(h-v_\infty t)/[s_\lambda (\Gamma t)^{1/3}]$, and plot the rescaled distribution functions $P(q)$ for several values of $p$. Figure \[fig:fw\_P\_q\] shows excellent agreement with the corresponding TW distributions for $p\ne 0.5$. The figure also shows the distribution function of the height fluctuations for $p=0.5$, which is in perfect agreement with the Gaussian distribution.
![\[fig:fw\_P\_q\] (color online) Rescaled distribution functions of the height fluctuations for the crossing point of the flat$-$wedge geometry for several values of $p$ (symbols), compared with the TW GOE distribution for $p<0.5$, TW GUE distribution for $p>0.5$, and Gaussian distribution for $p=0.5$ (solid lines). ](fig5.pdf){width="0.9\columnwidth"}
The fact that the fluctuations at the crossing point are determined by the slowly growing interface can be most easily understood in the TASEP representation. The growth rule at the origin implies that a particle on the fast lane has to wait for a particle on the slow lane to appear before it can cross the intersection. Therefore the statistics of the crossing events is determined by the slower lane, and follows TW-GOE (TW-GUE) statistics for $p < 0.5$ ($p > 0.5$), respectively. Whereas the dynamics on the slow lane is asymptotically unaffected by the intersection, the particles on the fast lane effectively experience a blockage, which leads to the buildup of a density discontinuity across the origin. In the interface representation this implies the formation of a wedge (Fig. \[fig:ht\]).
The physics of inhomogeneous growth processes [@Wolf1990] and exclusion processes with a blockage [@Janowsky1994; @Basu2014; @Schmidt2015] is also key to understanding the emergent Gaussian statistics that we observe at $p=0.5$. Consider first a single, initially flat SSM interface where deposition attempts occur at unit rate at all sites except a single defect site with deposition rate $r$. This corresponds to a TASEP with a single slow ($r < 1$) or fast ($r > 1$) bond. Recent work has established that the defect induces a macroscopic inhomogeneity for any $r < 1$, whereas it is asymptotically irrelevant when $r > 1$ [@Basu2014; @Schmidt2015]. We have numerically studied the height fluctuations at the defect site, finding TW-GOE statistics for $ r > 1$ but Gaussian BM statistics for $r < 1$. The latter behavior can be rationalized within the directed polymer (DP) representation of the process, where the defect site extends to a defect line in space-time which pins the polymer when $r< 1$ [@HHZ1995; @Krug1997; @Basu2014; @Tang1993]. In the pinned phase the energy of the polymer, which translates into the height of the SSM surface, is the sum of uncorrelated contributions accumulated along the one-dimensional defect line, which satisfies a central limit theorem and therefore displays Gaussian statistics.
The crossing geometry at $p=0.5$ is similar to the SSM with a defect site, in the sense that deposition occurs at the same rate at all sites except for the origin, where it is enhanced by a factor of $r = 2$. By analogy with the 1+1 D SSM, one might anticipate the existence of a critical value $r_c$, such that the fluctuations display Gaussian BM statistics for $r < r_c$ and KPZ TW statistics for $r > r_c$. However, our simulations of a crossing flat-flat geometry with a variable deposition probability $r$ at the crossing point indicate that the critical point, which is at $r_c=1$ for the single-lane problem, is shifted to $r_c\rightarrow\infty$, making BM statistics the dominant behavior in the long-time limit for any $r$. This may reflect the dynamic nature of the defect: Even when $r$ is very large, a TASEP particle attempting to cross the intersection still has to wait for a particle on the second lane to arrive, which happens at unit rate irrespective of $r$. In marked contrast to the 1+1 D SSM, however, we observe BM statistics in the absence of a macroscopically tilted, wedge-like surface profile. To clarify the origin of this behavior, a DP representation of the crossing growth geometry would be needed.
To conclude, we have considered 1+1 D KPZ growth models on a weighted flat$-$curved geometry and analyzed the statistics of the height fluctuations at the crossing point. We found a rich and unexpectedly non-trivial phase diagram comprising, in addition to the known TW GUE/GOE phases, an emergent Gaussian BM phase at $p=\frac{1}{2}$. It is important to note that the dominance of the more slowly growing geometry in the SSM is linked to the fact that the coefficient $\lambda$ of the KPZ nonlinearity is negative in this case [@Krug1992; @Wolf1990]. When $\lambda > 0$, the argument based on the slope-dependence of the asymptotic growth rate $v_\infty$ predicts that the faster geometry determines the behavior, which implies that the phase diagram is reflected around the point $p=\frac{1}{2}$. We have indeed verified that simulations of the restricted-solid-on-solid (RSOS) model, which also has $\lambda < 0$, lead to the same phase diagram.
At the critical point $p=\frac{1}{2}$, the TASEP representation of the model relates to previous work on exclusion processes with intersections [@Foulaadvand2007; @Embley2009; @Raguin2013], with the seemingly innocuous modification that particles are forced to cross the intersection in a correlated manner. Our results suggest that this makes the transport across the intersections much more efficient, in that macroscopic density discontinuities do not appear, while a signature of the intersection is retained in the form of anomalously large, BM-type current fluctuations. Importantly, the correlated hopping of particles moving along perpendicular directions is a fundamental feature of any particle representation of higher-dimensional growth processes, which is enforced by the integrability condition on the height field [@Krug1991; @Odor2009]. As such, by introducing a single site with a two-dimensional growth environment into an otherwise one-dimensional setting, the model may provide an inroad for progress towards an understanding of the elusive 2+1 D KPZ problem [@HH2012].
*Acknowledgment.* We thank Andreas Schadschneider for useful discussions. A.A.S. would like to acknowledge support from the Alexander von Humboldt Foundation and partial financial support from the research council of the University of Tehran. J.K. was supported by the German Excellence Initiative through the UoC Forum *Classical and Quantum Dynamics of Interacting Particle Systems.*
|
---
author:
- 'E.M. Xilouris'
- 'I.E. Papadakis'
date: 'Received 30 January 2002 / Accepted 20 March 2002'
title: 'A morphological comparison between the central region in AGN and normal galaxies using HST data[^1]'
---
Introduction
============
An open question in current AGN research is how the central black hole is being fed with matter. The interstellar medium of the host galaxy seems to be a good fuel source since gas and dust are found in large quantities inside galaxies. What is not clear, though, is how matter is transported from galactic scales (kiloparsec scales) to the small scales (parsec scales) near the nucleus.
One mechanism that has been proposed for the fueling of the active nucleus is the existence of galactic bars (Shlosman 1990). Many statistical studies have been performed in order to examine this possibility. Among others, Xanthopoulos (1996) analyzed optical (V, R, I) observations of a sample of 27 AGN and concluded that only half of them are barred. Observations of AGN in the near-infrared (NIR) revealed that the fraction of galaxies that host bars is larger than previously thought (e.g. Mulchaey [et al. ]{}1997; Márquez [et al. ]{}1999). This is mainly because dust and star formation effects can mask the bar structures and thus make them invisible at optical wavelengths while these effects are not very important in the NIR wavelengths. Making use of NIR data, Mulchaey & Regan (1997) concluded that the incidence of bars in Seyfert and normal galaxies is similar, suggesting that Seyfert nuclei do not occur preferentially in barred galaxies.
Another mechanism that has been proposed for the fueling of AGN involves interactions between the host galaxy and companion galaxies. It is argued that a dynamical instability, caused by the tidal field of the companion during a merger, may drive a large fraction of the gas into the inner regions of the galaxy (Hernquist 1989). Statistical studies have been made to examine the environments of Seyfert and normal galaxies and the percentage of companions around them. The results have been inconclusive and rather ambiguous. For example, Fuentes–Williams & Stocke (1988) and Bushouse (1986) concluded that there is no detectable difference in the environments of Seyfert and normal galaxies. Dahari (1984) and Rafanelli et al. (1995), on the other hand, found that Seyfert galaxies have an excess of companions relative to normal galaxies.
Consequently, the observations so far have shown that bars and interactions are not responsible for fueling the AGN in all cases. However, irrespective of the mechanism that transports material close to the central source, there should be enough potential fuel (in the form of interstellar dust and gas) in the circumnuclear region of Seyfert galaxies. Indeed, recent HST observations have shown that the central regions in Seyfert galaxies are rich in gas and dust. Malkan [et al. ]{}(1998) found fine-scale structures of various morphological types in the central regions of Seyfert 1 and 2 galaxies. Regan & Mulchaey (1999) and Martini & Pogge (1999), examined the central morphology of Seyfert galaxies with the use of color maps. They found significant structure, with gas and dust organized mainly in nuclear spiral dust lanes on scales of a few hundred parsec. These spiral dust lanes could be the channels by which the material from the host galaxy is transported into the central engine. Recently, Pogge & Martini (2002) presented archival HST images of the nuclear regions of Seyfert galaxies from the CfA Redshift Survey sample. They found that essentially all the Seyfert galaxies in their sample have circumnuclear structures which are connected with the large-scale bars and spiral arms in the host galaxies and could be related to the fueling of AGN by matter inflow from the host galaxy disks.
The studies mentioned above investigated the circumnuclear morphology mainly in AGNs. In the present work we compare the morphology of the central regions in AGN and non-AGN galaxies in order to search for signatures of AGN fueling through differences that may exist. This comparison is necessary in order to understand the main reason that makes a galaxy host an active nucleus. There are two obvious candidates: the first is the presence of a supermassive black hole in the center of the galaxy, and the second is the presence of matter near the central region which could provide the central black hole with the necessary fuel. Since there is now sufficient observational evidence which suggests that the majority (if not all) of the galaxies contain a central black hole (e.g. Kormendy & Gebhardt 2001), it is important to investigate whether the availability of fueling material is the main factor that determines the presence of nuclear activity.
We make use of HST observations of a group of nearby AGN and “normal” (i.e. non-AGN) galaxies which have similar distributions of distance, morphological classification, as well as inclination. Using the “ellipse fitting” technique, we uncover their central, overall morphology. Any localised excesses or deficits of emission (indicative of the presence of significant amounts of gas/dust) should show up as deviations from the smooth isophotes. By measuring the average amplitude of these deviations we can investigate, in a quantitative way, whether the central regions of AGNs are significantly more “irregular” than those in normal galaxies.
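The deviation measurement can be illustrated with a toy example (pure numpy, not the actual ellipse-fitting pipeline, and with a synthetic image in place of HST data): a smooth axisymmetric "galaxy" gives nearly constant intensity along a circular isophote, while a localized "dust lane" raises the fractional scatter about the azimuthal mean.

```python
import numpy as np

n = 201
y, x = np.mgrid[:n, :n] - n // 2
r = np.hypot(x, y)

smooth = np.exp(-r / 60.0)                       # smooth exponential disk
# same disk with a localized off-centre deficit ("dust lane")
dusty = smooth * (1 - 0.5 * np.exp(-((x - 20) ** 2 + y ** 2) / 50.0))

def azimuthal_scatter(img, radius, width=1.0):
    """Fractional rms of the intensity in a thin annulus: a simple
    stand-in for the residuals about a fitted elliptical isophote."""
    ring = np.abs(r - radius) < width
    return img[ring].std() / img[ring].mean()

# the irregular image shows a much larger deviation amplitude
assert azimuthal_scatter(dusty, 20) > 3 * azimuthal_scatter(smooth, 20)
```

In practice the smooth model is a full ellipse fit (variable center, ellipticity, and position angle) rather than a circle, but the irregularity statistic is of the same kind.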
The paper is organized as follows. In Sect. 2 we give information on the sample of the galaxies, and in Sect. 3 we describe the method that we follow in order to uncover the nuclear structure in the galaxies. In Sect. 4 we present our results. A discussion follows in Sect. 5, while a summary of our work is presented in Sect. 6.
The sample
==========
For the purposes of this study we chose objects from the Palomar optical spectroscopic survey of nearby galaxies (Ho [et al. ]{}1995). These authors surveyed a nearly complete sample of 486 bright ($B_{T}\leq 12.5$ mag), northern ($\delta > 0^{\circ}$) galaxies using the Palomar 5m telescope and derived a catalogue of emission-line nuclei, including a comprehensive list of nearby AGNs (Ho [et al. ]{}1997a). The selection criteria of the survey ensure that the sample is a fair representation of the local galaxy population. Furthermore, the proximity of the objects enables fairly good spatial resolution to be achieved, which is crucial for the objectives of the present work.
Ho [et al. ]{}(1997a) have classified the galaxies in their sample into various subclasses of emission-line nuclei: H II nuclei, Seyfert nuclei, LINERs and “transition” objects (i.e. composite LINER/H II nuclei). Although a significant fraction of LINERs or transition objects could be genuine AGNs (e.g. Ho 2001 and references therein) we considered only the classical Seyfert 1s and 2s as representatives of the local AGN population. As representatives of the non-AGN population we considered only the H II galaxies and the galaxies that show no emission lines in their spectra. In total, there are 52 Seyfert nuclei and 263 non-AGN galaxies in the Palomar sample.
From this list of 315 objects we chose the galaxies that fulfilled the following criteria: (1) inclination smaller than $70^{\circ}$, (2) they were observed with the WFPC2 instrument onboard the HST (before the end of 1999) and their central region was mapped with the Planetary Camera (PC), and (3) their central region was not overexposed. The small inclination criterion was imposed because we are interested in studying the innermost region of the galaxies which, in inclined systems, can be obscured due to projection and/or obscuration effects. Most of the HST observations of the Palomar sample have been performed with the WFPC2 instrument, hence the choice to study the galaxies which are observed with this instrument. Finally, the requirement of PC observations of the central region was enforced in order to maximize the available spatial resolution (the pixel size of this camera is $0.0455\arcsec$, while the field of view is $36\arcsec \times 36\arcsec$).
There are 58 galaxies (23 active and 35 non-active) that meet our criteria. Most of them were observed with the F555W and F606W filters (13 and 34, respectively), 10 objects were observed with the F547M filter, and 1 with the F569W filter. The difference between the effective wavelength of the various filters is small and should not influence our results. Table 1 lists the observational details (i.e. filter, exposure times and ID program number of the original observing program) of the 58 galaxies.
------- -------- ---------- -------------
NAME Filter Exposure Proposal ID
(NGC) (sec)
1058 F606W 80 5446
1068 F547M 300 5479
1358 F606W 500 5479
1667 F606W 500 5479
2273 F606W 500 5479
2300 F555W 350 6099
2639 F606W 500 5479
2655 F547M 300 5419
2748 F606W 400 6359
2775 F606W 400 6359
2903 F555W 400 5211
2964 F606W 400 6359
3031 F547M 100 5433
3227 F547M 160 7403
3310 F606W 500 5479
3344 F606W 80 5446
3504 F606W 500 5479
3516 F547M 70 6416
3810 F606W 80 5446
3982 F606W 500 5479
4062 F606W 80 5446
4102 F606W 400 6359
4138 F547M 200 6837
4152 F606W 500 5479
4168 F547M 230 6837
4212 F606W 80 5446
4245 F606W 80 5446
4365 F555W 900 5920
4371 F606W 80 5446
4378 F606W 80 5446
4379 F555W 160 5999
4380 F606W 80 5446
4382 F555W 700 7468
4405 F606W 80 5446
4406 F555W 500 5454
4414 F606W 80 8400
4473 F555W 600 6099
4477 F606W 80 5446
4478 F555W 400 6587
4501 F547M 230 6837
4536 F555W 300 5375
4567 F606W 80 5446
4578 F606W 80 5446
4612 F606W 80 5446
4621 F555W 140 5512
4639 F547M 230 5381
4649 F555W 1100 6286
4660 F555W 230 5512
4694 F606W 500 5479
4698 F606W 400 6359
4800 F606W 80 5446
4900 F606W 80 5446
5033 F547M 230 5381
5194 F555W 600 5777
5273 F606W 400 8597
6217 F606W 500 5479
7479 F569W 600 6266
7743 F606W 500 5479
------- -------- ---------- -------------
: Observational information of the galaxies
------- --------- ----- ------- --------- ------- ------------
NAME Type $T$ D $M_{B}$ $i$ Scale
(NGC) (Mpc) (deg) (pc/pixel)
1058 A(S2) 5 9.10 -18.25 21 2.00
1068 A(S1.8) 3 14.40 -21.32 32 3.17
1358 A(S2) 0 53.60 -20.95 38 11.80
1667 A(S2) 5 61.20 -21.52 40 13.46
2273 A(S2) 0 28.40 -20.25 41 6.25
2300 NA -2 31.00 -20.69 44 6.82
2639 A(S1.9) 1 42.60 -20.96 54 9.37
2655 A(S2) 0 24.40 -21.12 34 5.37
2748 NA 4 23.80 -20.29 70 5.25
2775 NA 2 17.00 -20.34 40 3.74
2903 NA 4 6.30 -19.89 63 1.39
2964 NA 4 21.90 -20.11 58 4.82
3031 A(S1.5) 2 1.40 -18.34 60 0.31
3227 A(S1.5) 1 20.60 -20.39 48 4.53
3310 NA 4 18.70 -20.41 40 4.11
3344 NA 4 6.10 -18.43 24 1.34
3504 NA 2 26.50 -20.61 40 5.83
3516 A(S1.2) -2 38.90 -20.81 40 8.56
3810 NA 5 16.90 -20.19 46 3.72
3982 A(S1.9) 3 17.00 -19.47 30 3.74
4062 NA 5 9.70 -18.65 67 2.13
4102 NA 3 17.00 -19.54 56 3.74
4138 A(S1.9) -1 17.00 -19.05 50 3.74
4152 NA 5 34.50 -20.34 40 7.59
4168 A(S1.9) -5 16.80 -19.07 .. 3.70
4212 NA 4 16.80 -19.78 53 3.70
4245 NA 0 9.70 -17.92 41 2.13
4365 NA -5 16.80 -20.64 .. 3.70
4371 NA -1 16.80 -19.51 57 3.70
4378 A(S2) 1 35.10 -20.51 21 7.72
4379 NA -2 16.80 -18.60 32 3.70
4380 NA 3 16.80 -19.06 58 3.70
4382 NA -1 16.80 -21.14 40 3.70
4405 NA 0 31.50 -19.63 51 6.93
4406 NA -5 16.80 -21.39 .. 3.70
4414 NA 5 9.70 -19.31 57 2.13
4473 NA -5 16.80 -20.10 .. 3.70
4477 A(S2) -2 16.80 -19.83 24 3.70
4478 NA -5 16.80 -18.92 .. 3.70
4501 A(S2) 3 16.80 -21.27 59 3.70
4536 NA 4 13.30 -20.04 67 2.93
4567 NA 4 16.80 -19.34 48 3.70
4578 NA -2 16.80 -18.96 43 3.70
4612 NA -2 16.80 -18.75 38 3.70
4621 NA -5 16.80 -20.60 .. 3.70
4639 A(S1) 4 16.80 -19.28 48 3.70
4649 NA -5 16.80 -21.43 .. 3.70
4660 NA -5 16.80 -19.06 .. 3.70
4694 NA -2 16.80 -19.08 63 3.70
4698 A(S2) 2 16.80 -19.89 53 3.70
4800 NA 3 15.20 -18.78 43 3.34
4900 NA 5 17.30 -19.10 21 3.81
5033 A(S1.5) 5 18.70 -21.15 64 4.11
5194 A(S2) 4 7.70 -20.76 53 1.69
5273 A(S1.5) -2 21.30 -19.26 24 4.69
6217 NA 4 23.90 -20.23 34 5.26
7479 A(S1.9) 5 32.40 -21.33 41 7.13
7743 A(S2) -1 24.40 -19.78 32 5.37
------- --------- ----- ------- --------- ------- ------------
: General and photometric properties of the galaxies
In Table 2 (columns 3, 4, 5 and 6) we list the global and photometric properties of the 58 galaxies, i.e. the numerical Hubble type index ($T$), their distance, the absolute $B$ band magnitude ($M_{B}$), corrected for intrinsic and galactic absorption, and the inclination ($i$) (the data listed in this table were taken from Ho [et al. ]{}, 1997a). The parameter $T$ ranges from $-5$ to 5, with 25 galaxies being early-type ($T\le0$) and 33 being late-type ($T>0$). Column 2 in Table 2 lists the galaxy “activity type”: “A” and “NA” stand for AGN and non-AGN, according to the classification of Ho [et al. ]{}(1997a); for AGNs the Seyfert type is also given. Finally, the last column in the same table lists the projected scale, in parsecs per PC pixel, at the distance of the galaxy.
Although the original sample of Ho [et al. ]{}(1995) is an almost complete sample of nearby galaxies (see the discussion in Ho [et al. ]{}, 1997b), this is not true for the present sample. Since the main selection criterion is the availability of WFPC2 observations, it is important to examine whether sample biases and selection effects are introduced in this way.
Table 3 lists the median global properties of the objects in our sample and Fig. \[f1\] shows the distribution of morphological type, distance, absolute magnitude and inclination for the AGN and non-AGN groups of galaxies (filled and open histograms, respectively). The average properties of the galaxies are similar for both groups (Table 3). The exception is the absolute magnitude, with the AGNs being brighter than the non-AGNs by $\sim 0.7$ mag on average.
Application of the Kolmogorov–Smirnov (K–S) test (Press [et al. ]{}1992) confirms that the distributions plotted in Fig. \[f1\] are similar, apart from the distributions of $M_{B}$. The probabilities that the sample distributions of $T$, distance and inclination are drawn from the same parent population are $78\%$, $15\%$, and $37\%$, respectively (hereafter, when we compare distributions or compute correlation coefficients, we consider as “statistically significant” only those differences or correlations whose probability of arising by chance is less than $10\%$). In the case of the $M_{B}$ distributions, the K–S test gives a probability of only $5\%$. We conclude that the distributions of distance, morphological classification and inclination of the non-AGN sample match those of the AGN sample. The distributions of galaxy luminosity differ between the two samples; however, as we discuss in Sect. 4, this does not affect our conclusions.
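For concreteness, the two-sample K–S comparison can be sketched in a few lines. The statistic and its asymptotic significance follow the standard recipe of Press [et al. ]{}(1992), which the text cites; the input magnitudes below are invented for illustration and are not the sample’s $M_{B}$ values.

```python
import math

def ks_2sample(x, y):
    """Two-sample Kolmogorov-Smirnov test after Press et al. (1992).

    Returns (D, p): D is the maximum distance between the two empirical
    CDFs; p is the asymptotic probability of a D this large arising by
    chance if both samples share one parent population."""
    x, y = sorted(x), sorted(y)
    n1, n2 = len(x), len(y)
    i = j = 0
    d = 0.0
    while i < n1 and j < n2:
        d1, d2 = x[i], y[j]
        if d1 <= d2:
            i += 1
        if d2 <= d1:
            j += 1
        d = max(d, abs(i / n1 - j / n2))
    # Asymptotic significance function Q_KS (alternating series).
    ne = n1 * n2 / (n1 + n2)
    lam = (math.sqrt(ne) + 0.12 + 0.11 / math.sqrt(ne)) * d
    fac, total, termbf = 2.0, 0.0, 0.0
    for k in range(1, 101):
        term = fac * math.exp(-2.0 * (lam * k) ** 2)
        total += term
        if abs(term) <= 0.001 * termbf or abs(term) <= 1e-8 * abs(total):
            return d, total
        fac = -fac
        termbf = abs(term)
    return d, 1.0  # series did not converge: distributions are similar

# Illustrative (not the paper's) M_B values for two groups:
print(ks_2sample([-20.9, -20.5, -21.2, -19.9], [-19.6, -19.9, -18.8, -19.2]))
```

A small returned $p$ flags distributions unlikely to share one parent population, matching the $10\%$ significance convention adopted above.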
------- ----- ------- --------- ------- ------------
Group $T$ D $M_{B}$ $i$ Scale
(Mpc) (deg) (pc/pixel)
A 1 17.8 $-20.5$ 40 3.74
NA 2 16.8 $-19.8$ 44 3.70
------- ----- ------- --------- ------- ------------
: Median Properties of the active and non-active group of galaxies
The data and method of analysis
===============================
All the WFPC2 images that we used had already been processed by the standard STScI reduction pipeline (Biretta [et al. ]{}2000). Since we have chosen images with no saturation effects, the only additional processing step required is the removal of cosmic rays, for which we used the [*filter/cosmic*]{} task of the MIDAS astronomical package. Any residual cosmic ray events, as well as bright foreground stars, were removed by hand. For galaxies with more than one image, we chose to study the one with the largest exposure time that did not cause saturation effects in the central region. If there were two or more images with the same integration time, we analyzed each of them (as explained below) and then combined the resulting “variance” values.
Our main aim is to study, in a quantitative way, the irregularities in the morphology of the central region of the galaxies. Our methodology consists of two steps. In the first step, we use the ellipse fitting technique to recover the axisymmetric isophotes around the centers of the galaxies. If there are localized regions with excess emission (for example, star clusters or H II regions) or deficits (caused by dust absorption), they will show up as deviations from the smooth isophotes. Based on this idea, in the second step, we compute the scatter of the pixel values around their mean (i.e. their variance) and use this value as a measure of the amplitude of the central structures. Although it is hard to estimate the significance of the derived amplitudes for each individual galaxy, the comparison of the distributions of the amplitudes for various groups of galaxies can provide useful information. For example, one would expect that AGNs, hosting a black hole and a large amount of gas in their nuclear region, should show larger amplitude structure (and thus larger variance), on average, than normal galaxies. We describe the two steps in more detail below.
As mentioned above, first, we perform ellipse fitting to the isophotes of the galaxy image. This choice is motivated by the fact that the isophotes of galaxies, especially elliptical (E) and lenticular (S0) as well as the bulge of spiral galaxies, are not far from ellipses. This technique has been widely used in the past by various authors, mainly as a method of retrieving embedded galaxy structures that are hidden by the large-scale distribution of light of the main body of the galaxy. Descriptions of ellipse fitting techniques and their applications to the surface photometry of galaxies can be found in Kent (1983), Jedrzejewski (1987), Bender & Möllenhoff (1987), Wozniak et al. (1995) and Milvang–Jensen & Jørgensen (1999). Using the [*fit/ell3*]{} task of MIDAS, which is based on the formulas of Bender & Möllenhoff (1987), we fit the isophotes of the galaxy image and construct an artificial image from the fitted ellipses.
As an example, in Fig. \[f2\] we show the central region of NGC 5273 (left panel) together with the “artificial” image from the fitted ellipses (middle panel). In order to recover the morphological features in the innermost part of the galaxy, we then divide the galaxy image by the artificial image from the fitted ellipses. In doing so, we normalise the pixel values of the central regions in all galaxies to unity, so that the amplitude of the structures in one galaxy can be compared with the amplitude in other galaxies, irrespective of their brightness or exposure time. The resulting image for NGC 5273, which we call the “structure” image, is also shown in Fig. \[f2\] (right panel). If there were no structure in the central region, we would expect a smooth image with all the pixels having values around $\sim 1$. In contrast, the image in the right panel of Fig. \[f2\] shows positive and negative deviations from the smooth isophotes, indicative of localised structures. As an example of a galaxy with no deviations from the underlying galactic isophotes, in Fig. \[f3\] we show the HST image of NGC 4612, together with the image from the fitted ellipses and the resulting image after dividing the HST image by the image from the elliptical isophotes (left, middle and right panels, respectively). The structure image is almost completely smooth, with an average value of $1.00 \pm 0.05$ and no obvious deviations from the fitted isophotes.
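The idea of dividing the galaxy image by a smooth model can be illustrated numerically. In the sketch below, the smooth axisymmetric light is modelled with a simple radial-median profile, a cruder stand-in for the MIDAS [*fit/ell3*]{} ellipse fit used in the paper; all numbers are invented. Dividing by the smooth model exposes a synthetic emission knot, while the smooth component divides down to $\sim 1$.

```python
import numpy as np

# Numpy-only illustration of the "structure" image.  The smooth model
# here is a radial median profile, not a true ellipse fit; the bulge
# profile, knot amplitude and positions are invented.
n = 101
yy, xx = np.mgrid[0:n, 0:n] - n // 2
r = np.hypot(xx, yy)

galaxy = 1000.0 * np.exp(-r / 12.0)                   # smooth "bulge"
knot = 80.0 * np.exp(-((xx - 10) ** 2 + (yy + 6) ** 2) / 8.0)
image = galaxy + knot                                 # bulge + H II-like knot

# Median surface brightness in 1-pixel radial bins -> smooth model.
bins = r.astype(int)
profile = np.array([np.median(image[bins == k]) for k in range(bins.max() + 1)])
model = profile[bins]

structure = image / model                             # "structure" image
# The knot stands out as a residual well above 1, while the smooth
# component divides down to ~1 everywhere else.
print(structure[n // 2 - 6, n // 2 + 10], structure[n // 2, n // 2])
```

The median in each radial bin is robust against the few pixels covered by the knot, so the knot survives the division as a localised positive residual, just as dust lanes would survive as negative ones.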
In the second step, we choose two regions around the center of each galaxy, with sizes of 100 pc and 1 kpc. The regions are defined as orthogonal boxes centered on the galaxy, with their largest side parallel to the major axis of the galaxy. The ratio of the small-to-large side of the box is taken as the cosine of the inclination angle of the galaxy. First, we measure the mean of the pixel values in each region and then their variance, using the [*statistics/image*]{} task of MIDAS. Two mechanisms contribute to the variance that we measure: the photon noise process, and the presence of features/irregularities in the region. Since we are interested in measuring the amplitude of the deviations that are caused by the galaxy micro-structures, we have to correct the estimated variance for the contribution of the photon noise statistics. Let us denote by $\sigma^{2}_{R}$, $\sigma^{2}_{PN}$, and $\sigma^{2}_{S}$ the total variance of a region, say $R$, the variance that is introduced by the photon noise process, and the variance that is due to micro-structures, respectively. Since the photon noise variations and the variations due to morphological irregularities contribute to $\sigma^{2}_{R}$ independently, $\sigma^{2}_{R}=\sigma^{2}_{PN}+\sigma^{2}_{S}$, or $\sigma^{2}_{S}=\sigma^{2}_{R}-\sigma^{2}_{PN}$. Therefore, we have to subtract the variance that is caused by the photon noise statistics from the total variance of each region in order to compute the variance that is due to real galactic structures. This is an important correction, since the flux (and hence the photon noise statistics) differs from galaxy to galaxy due to differences in exposure time and magnitude.
The contribution of each pixel to the total variance in each region is $(x_{i}-\bar{x})^{2}/N$, where $x_{i}$ is the pixel value, $\bar{x}$ is the mean value of all the pixels in the region and $N$ is the total number of pixels. The photon noise contribution for that pixel is $(x_{i}g+RON^{2})/N$, where $g$ and $RON$ are the gain and read-out noise of WFPC2 ($7e^-/DN$ and $5e^-$), respectively. Therefore, the contribution of the photon noise statistics to the total variance is $\sigma^{2}_{PN}=\sum_{i}(x_{i}g+RON^{2})/N$, where the summation is over the $N$ pixels of the region. In order to estimate this, we constructed an “error image”, i.e. we multiplied the original image by $g$, added the $RON^2$ value, and then divided by $(m_{i}g)^2$ in order to take into account the fact that we have also divided the original image by the fitted ellipses ($m_{i}$ is the value of the elliptical isophote at each pixel $x_{i}$). The mean value of the regions in the “error” image, which have the same dimensions as the respective regions in the structure image, is a good approximation of $\sigma^{2}_{PN}$.
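The correction $\sigma^{2}_{S}=\sigma^{2}_{R}-\sigma^{2}_{PN}$ can be checked on synthetic data. The sketch below uses the WFPC2 gain and read-out noise quoted in the text; the model image and the structure pattern are purely illustrative.

```python
import numpy as np

# Minimal numerical sketch (all numbers invented) of the correction
# sigma^2_S = sigma^2_R - sigma^2_PN.  GAIN and RON are the WFPC2
# values quoted in the text.
GAIN, RON = 7.0, 5.0                     # e-/DN, e-

rng = np.random.default_rng(42)
yy, xx = np.mgrid[0:50, 0:50]

model = np.full((50, 50), 200.0)         # m_i: smooth elliptical model (DN)

# True multiplicative micro-structure: an emission knot and a dust dip.
truth = (1.0
         + 0.3 * np.exp(-((xx - 15) ** 2 + (yy - 20) ** 2) / 20.0)
         - 0.2 * np.exp(-((xx - 35) ** 2 + (yy - 30) ** 2) / 30.0))

# Observed image x_i in DN: Poisson photon noise plus Gaussian read-out
# noise, both generated in electrons and converted back to DN.
electrons = rng.poisson(model * truth * GAIN) + rng.normal(0.0, RON, (50, 50))
observed = electrons / GAIN

ratio = observed / model                 # the "structure" image
sigma2_R = ratio.var()                   # total variance of the region

# Photon-noise term (x_i g + RON^2) / (m_i g)^2 averaged over the
# region -- the mean of the "error image" of the text.
sigma2_PN = np.mean((observed * GAIN + RON ** 2) / (model * GAIN) ** 2)

sigma2_S = sigma2_R - sigma2_PN          # variance due to real structure
print(f"sigma^2_S = {sigma2_S:.5f} (true structure variance {truth.var():.5f})")
```

After subtracting the photon-noise term, the recovered $\sigma^{2}_{S}$ matches the variance of the injected structure pattern, which is the point of the correction.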
Results
=======
The estimated values of $\sigma^{2}_{S}$ for the inner 100 pc and 1 kpc regions of all the galaxies in our sample are listed in Table 4 (columns 2 and 3, respectively). In effect, the variance that we estimate gives a measure of the average amplitude of the deviations from the smooth isophotes in each galaxy. The average $\sigma^{2}_{S}$ in the innermost 100 pc and 1 kpc regions is $0.06\pm 0.02$ and $0.025\pm 0.006$, respectively (note that for the nearest galaxies, the 1 kpc region is larger than the field of view of the PC camera; for that reason we could not estimate their $\sigma^{2}_{S1kpc}$). Consequently, the average amplitudes of the localised structures in the innermost 100 pc and 1 kpc regions are $\sqrt{\sigma^{2}_{S}}\times100\% \sim 25\%$ and $\sim 16\%$ of the underlying galaxy emission, respectively. The fact that the average structure amplitude [*decreases*]{} with distance from the center (i.e. $\sigma^{2}_{S1kpc} < \sigma^{2}_{S100pc}$) implies that most of the structure is concentrated at the centers of the galaxies. As a result, consideration of a larger region will tend to dilute the signal, i.e. decrease $\sigma^{2}_{S}$.
Before comparing the structure amplitudes for the various groups of galaxies, we have to examine whether our results are biased by any observational or global characteristics of the galaxies. First of all, if the central region isophotes are not elliptical, the residuals that we detect could be the result of the failure of the elliptical isophotes to fit the underlying starlight distribution properly. An indication that the “ellipse fitting” method works successfully in suppressing the underlying galaxy distribution and revealing real structures in the central regions of the galaxies is given by the fact that most of the dust/emission structures that we detect are clearly visible in the original images as well. In order to investigate this possibility further, we plotted the fitted ellipses superimposed on the galactic isophotes for all the galaxies in our sample. For galaxies with $T\le0$, the agreement between the overall shape of the isophotes and the fitted ellipses is very good. In many cases the isophotes are not smooth and small-scale departures from the elliptical shape are apparent, indicative of the localised structures that we want to study. For the $T>0$ galaxies, in some cases, we do observe systematic deviations of the isophotes from ellipses at large distances from the galactic centers. These are caused mainly by the presence of spiral arms, while in a few cases the presence of an inclined disk causes the isophotes to become more elongated than the fitted ellipses. However, at small radii, the isophotes are well approximated by ellipses, with any deviations being localised and suggestive of small-scale structures.
As a final test of how well the ellipse fitting method detects the underlying morphological signatures, we compared our “structure” images with images obtained using other techniques. In Fig. \[maps\], we plot the “structure” images of the Seyfert 2 galaxies NGC 5033, NGC 5273 and NGC 3982 (top left, top right and bottom left, respectively; note that the structure image of NGC 5273, also presented in Fig. 2, is now flipped and scaled to different brightness levels so that it can be compared with the respective color map image). The central micro-structure in these galaxies has been studied by Martini & Pogge (1999) with the use of ($V-H$) color maps. These maps are shown in Fig. \[maps\] as well. In the same figure, we also plot the “structure” image of NGC 3516 (a Seyfert 1 galaxy; bottom right panel) together with another structure image which was constructed with the use of the “Richardson–Lucy” (R–L) image restoration technique (Pogge & Martini, 2002). In all panels, the left-hand side images are those taken from the literature (downloaded in electronic form from the respective journal site) and the right-hand side images are the “structure” images created with the ellipse fitting method presented in this work. The brightness levels and the image scales were adjusted so that they are roughly comparable with the ones taken from the literature.
Comparison of the “structure” images with the color maps in Fig. \[maps\] shows clearly that the ellipse fitting technique successfully reveals most of the features that appear in the color maps (in some cases, the features appear even more enhanced in the “structure” images). There are no additional features that could be identified as ellipse fitting residuals. It is also evident that the bright cores seen in all the galaxies in the color maps are fitted quite well with ellipses. As a result, they do not appear, and do not cause any artifacts, in the “structure” images. This is true even in the case of NGC 3516. In order to avoid the overexposed bright nucleus which appears in the image of Pogge & Martini (2002), we used an image with a shorter exposure, taken with a different filter. The nucleus is successfully removed, and the same features appear in both images. We conclude that the “ellipse fitting” method works successfully in suppressing the underlying smooth galaxy distribution and revealing real structures in the central regions of the galaxies. However, it is hard to judge whether the $\sigma^{2}_{S}$ values (mainly of the $T>0$ galaxies) are indicative of the real structure amplitudes [*only*]{}, or whether the amplitude of any fitting method residuals contributes significantly as well. For this reason, the $\sigma^{2}_{S}$ values should be considered as a rough estimate of the galactic micro-structure amplitudes.
Furthermore, Fig. \[f4\] (upper panel) shows a plot of $\sigma^{2}_{S}$ as a function of exposure time, for both the 100 pc and 1 kpc regions. Although we have normalised the structure images to the underlying galaxy isophotes, so that the estimation of $\sigma^{2}_{S}$ is not affected by differences in the brightness of the galaxies, there is still the possibility that, if the signal to noise ratio is small in some cases (due to short exposure times), we may not be able to estimate the amplitude of the central structures accurately. As Fig. \[f4\] shows, this is not the case: there is no correlation between $\sigma^{2}_{S100pc}$ or $\sigma^{2}_{S1kpc}$ and exposure time. This result is verified when we use Kendall’s $\tau$ nonparametric statistic (Press [et al. ]{}1992) to investigate, quantitatively, whether there is a significant correlation between the two variables. We find $\tau=0.05$ and $\tau=-0.15$ for the \[$\sigma^{2}_{S100pc}$, exposure\] and \[$\sigma^{2}_{S1kpc}$, exposure\] variables, respectively. The probability that we would have obtained these values by chance, if the two variables were uncorrelated, is $\sim 15\%$ in both cases.
On the other hand, both $\sigma^{2}_{S100pc}$ and $\sigma^{2}_{S1kpc}$ are correlated with the inclination and distance of the galaxies. Looking at Fig. \[f4\] (second panel from top), we can see a positive correlation between $\sigma^{2}_{S}$ and inclination: as the inclination increases, so does $\sigma^{2}_{S}$. The $\sigma^{2}_{S}$ vs distance plot (second panel from bottom in Fig. \[f4\]) shows that $\sigma^{2}_{S}$ is also correlated with distance; in this case an anti-correlation is observed: as the distance decreases, $\sigma^{2}_{S}$ increases. Computation of Kendall’s $\tau$ yielded 0.28, 0.24, $-0.16$ and $-0.23$ for the $\sigma^{2}_{S100pc, S1kpc}$ vs inclination and the $\sigma^{2}_{S100pc, S1kpc}$ vs distance correlations, respectively. The probability that these values would appear by chance, if the variables \[$\sigma^{2}_{S}$, inclination\] and \[$\sigma^{2}_{S}$, distance\] were uncorrelated, is $0.2\%$ and $8\%$ (in the case of the $\sigma^{2}_{S100pc}$ plots) and $2\%$ and $3\%$ (in the case of the $\sigma^{2}_{S1kpc}$ plots), respectively. The dependence of $\sigma^{2}_{S}$ on distance is easy to interpret. The median distance of all the galaxies is 16.7 Mpc; however, the nearest galaxy lies at only 1.4 Mpc, while the most distant is located at 61.2 Mpc. Any small-scale structures will be smoothed out in the more distant galaxies, hence the increase of $\sigma^{2}_{S}$ with decreasing distance. The dependence of the variance on the inclination is rather unexpected: one would expect small-scale structure to be diminished in inclined systems, while the opposite effect is observed. Visual inspection of the respective structure images shows that the increase of $\sigma^{2}_{S}$ with increasing inclination is caused by obscuration effects due to dust, which become more pronounced in inclined systems.
Finally, we also examined whether $\sigma^{2}_{S}$ depends on the absolute $B$ band magnitude of the galaxies (Fig. \[f4\], lower panel). As expected, there is no correlation between the two variables: $\tau=-0.07$ and $-0.15$ for the \[$\sigma^{2}_{S100pc, 1kpc}$, $M_{B}$\] variables, with probabilities of arising by chance, if the variables were uncorrelated, of $41\%$ and $15\%$ respectively.
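The Kendall’s $\tau$ statistic used throughout this section can be sketched as follows, in the spirit of the Press [et al. ]{}(1992) routine cited in the text; tie corrections are omitted and the example pairs are invented, not the measured values.

```python
import math

def kendall_tau(x, y):
    """Kendall's tau-a with its large-sample two-sided significance.

    Returns (tau, p): p is the probability of a |tau| this large
    arising by chance for uncorrelated variables (no tie correction)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    tau = (concordant - discordant) / (0.5 * n * (n - 1))
    var = 2.0 * (2 * n + 5) / (9.0 * n * (n - 1))   # Var(tau) under H0
    p = math.erfc(abs(tau) / math.sqrt(2.0 * var))
    return tau, p

# Illustrative [sigma^2_S, inclination] pairs (invented values):
print(kendall_tau([0.003, 0.021, 0.048, 0.009, 0.137], [21, 41, 63, 32, 70]))
```

Being rank-based, $\tau$ is insensitive to the strongly non-Gaussian spread of the $\sigma^{2}_{S}$ values, which is why a nonparametric statistic is the appropriate choice here.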
------- ------------------ ------------------
NAME $\sigma^{2}_{S}$ $\sigma^{2}_{S}$
(NGC) 100 pc 1 kpc
1058 0.00300 ..
1068 0.00810 0.03900
1358 0.00920 0.00200
1667 0.01300 0.02400
2273 0.02260 0.02900
2300 0.00008 -0.00570
2639 0.00510 0.00320
2655 0.01500 0.01700
2748 0.13700 0.00060
2775 0.00070 0.01700
2903 1.16400 ..
2964 0.14900 0.08100
3031 -0.00440 ..
3227 0.04800 0.03900
3310 0.04200 0.16400
3344 0.00130 ..
3504 0.18600 0.13200
3516 0.06200 ..
3810 0.00420 0.01100
3982 0.00600 0.03100
4062 0.09340 ..
4102 0.18900 0.03500
4138 0.02000 0.04000
4152 0.05200 0.01400
4168 0.00250 0.00510
4212 0.11200 ..
4245 -0.00120 ..
4365 -0.00008 -0.00050
4371 -0.00120 -0.01200
4378 0.00080 0.00160
4379 -0.00020 -0.01460
4380 0.00450 ..
4382 0.00026 -0.00036
4405 0.00130 0.00900
4406 0.00090 -0.00090
4414 0.05200 ..
4473 -0.00006 -0.00060
4477 0.00310 -0.00630
4478 0.00180 0.00070
4501 0.00930 0.02900
4536 0.19600 ..
4567 0.04100 0.09000
4578 -0.00070 -0.02280
4612 0.00100 0.00400
4621 0.00017 -0.01370
4639 -0.00400 -0.05000
4649 -0.00006 -0.00020
4660 -0.00040 -0.01400
4694 0.08700 0.01100
4698 0.00080 -0.00300
4800 0.00090 0.01000
4900 0.91000 ..
5033 0.00140 0.06900
5194 0.05900 ..
5273 0.01780 0.00300
6217 0.02500 0.02700
7479 0.00260 0.02300
7743 0.01420 0.00492
------- ------------------ ------------------
: The microstructure variance of the central regions of the galaxies
Active vs non-Active galaxies
-----------------------------
In Fig. \[f5\] we plot the distribution of $\sigma^{2}_{S100pc}$ and $\sigma^{2}_{S1kpc}$ for the AGN and non-AGN galaxies in our sample (filled and open histograms, respectively). The two distributions appear to be similar, although the non-AGN galaxies show an excess of larger $\sigma^{2}_{S100pc}$ values. This effect is more pronounced in the plots of the cumulative distribution functions (CDF) of $\sigma^{2}_{S}$ (shown in the lower panel of Fig. \[f5\]). The $\sigma^{2}_{S100pc}$ CDF of the non-AGN galaxies shows clearly an extended tail towards larger values. The K–S test shows that the differences between the two distributions are not statistically significant. The probability that they are drawn from the same population is $11\%$ and $12\%$ for the $\sigma^{2}_{S100pc}$ and $\sigma^{2}_{S1kpc}$ distributions respectively. We note that the dependence of $\sigma^{2}_{S}$ on distance and inclination cannot influence our results since the distributions of inclination and distance are statistically similar for the active and non-active groups of galaxies.
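The empirical cumulative distribution functions compared above can be built directly from the measured values; a minimal sketch, with invented inputs rather than the measured $\sigma^{2}_{S}$ values:

```python
import numpy as np

# Minimal empirical-CDF helper of the kind used for the CDF plots
# discussed above (input values are invented).
def ecdf(values):
    """Return sorted values x and cumulative fractions F(x_k) = k/n."""
    x = np.sort(np.asarray(values, dtype=float))
    f = np.arange(1, x.size + 1) / x.size
    return x, f

x, f = ecdf([0.015, 0.003, 0.048, 0.009])
print(x, f)   # a step plot of f against x gives the CDF curve
```

An extended tail towards large $\sigma^{2}_{S}$, of the kind described for the non-AGN galaxies, appears in such a plot as a CDF that approaches 1 more slowly.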
In order to examine whether $\sigma^{2}$ depends on the galaxy Hubble type and the galaxy activity class, in Fig. \[f6\] we plot the mean $\sigma^{2}_{S100pc}$ and $\sigma^{2}_{S1kpc}$ for the AGN and normal galaxies as a function of $T$. In the $\sigma^{2}_{S1kpc}$ vs $T$ plot we observe a systematic increase of the structure amplitude with the Hubble type, which is roughly similar for both the AGNs (filled circles) and non-AGNs (open diamonds). This trend is expected, for three reasons. At a distance of 1 kpc, as $T$ increases, we start to detect the increasing amplitude of the spiral arms (with respect to the underlying galactic/bulge component). Furthermore, early-type galaxies are less gas–rich than late-type galaxies (Young & Scoville 1991). Therefore, obscuration effects and/or bright H II regions will become more prominent (and hence the variance will increase) as $T$ increases. Finally, as we mentioned in Section 4, in late-type galaxies, the presence of an exponential disk can cause fitting residuals to appear, and hence increase the $\sigma^{2}_{S1kpc}$ values for these galaxies.
The $\sigma^{2}_{S100pc}$ vs $T$ plot shows a similar trend (the variance increases with increasing $T$); however, there are two important differences as well. Firstly, AGNs appear to have similar $\sigma^{2}_{S100pc}$ values, irrespective of their Hubble type. On the other hand, the early and late-type normal galaxies show a large difference in their $\sigma^{2}_{S100pc}$, with the late-type galaxies being much more irregular in their central regions than the early-type galaxies. We would like to emphasize that the differences in the variance of the early and late-type galaxies cannot be caused by differences in their distance or inclination, since the distributions of distance and inclination for the two groups of galaxies are statistically similar (the probability that they are drawn from the same parent population is $27\%$ and $12\%$, respectively), or by any fitting method residuals, since the overall shapes of the galactic isophotes at small radii in both early and late-type galaxies are well approximated by ellipses, as we mentioned in Section 4.
Active vs non-Active early and late-type galaxies
-------------------------------------------------
We investigated the comparison between the $\sigma^{2}$ distributions of the AGN and non-AGN groups of galaxies further, taking into account the differences between the early and late-type galaxies that Fig. \[f6\] revealed. In Fig. \[f7\], we plot the distributions of $\sigma^{2}$ for AGN and non-AGN considering not the whole samples but only late or early-type galaxies (upper and second from bottom panels, respectively). In the case of late-type galaxies, the distribution of $\sigma^{2}_{S}$ for normal galaxies extends to much larger values than the distribution for the AGNs. The opposite is true in the case of early-type galaxies: there, the AGN distribution of $\sigma^{2}_{S}$ shows a tail towards larger values, while the distribution of the normal galaxies is centered around $\sim 0$.
These results become more pronounced in the plots of the cumulative distribution functions. The CDF plot of $\sigma^{2}_{S100pc}$ for late-type AGN and non-AGN shows clearly that, on average, normal galaxies have $\sigma^{2}_{S}$ values larger than the AGNs (second from top panel in Fig. \[f7\]). On the other hand, the CDF of the early-type AGNs is shifted to larger values when compared to the CDF of the early-type non-AGN galaxies (bottom panel in Fig. \[f7\]). Application of the K–S test shows that the distributions shown in Fig. \[f7\] are significantly different. The probability that they are drawn from the same parent population is less than $0.1\%$ in the case of early-type galaxies (for both $\sigma^{2}_{S100pc}$ and $\sigma^{2}_{S1kpc}$), and $0.5\%$ in the case of the distributions of $\sigma^{2}_{S100pc}$ for late-type AGN and non-AGN. On the other hand, the $\sigma^{2}_{S1kpc}$ distributions for the same groups of galaxies are statistically similar, with a probability of being drawn from the same parent population of $37\%$.
The difference between the distributions of $\sigma^{2}_{S}$ of the non-AGN/AGN early and late-type galaxies cannot be caused by differences in the distributions of their morphological type or inclination. For example, application of the K–S test shows that the Hubble type index and inclination distributions of the early-type AGN and non-AGN galaxies are [*not*]{} significantly different: the probability of being drawn from the same parent population is $48\%$ and $31\%$, respectively. The only significant difference that we find is between the distributions of distance for the early-type AGN/non-AGN galaxies: the AGNs are, on average, more distant than the non-AGN early-type galaxies. However, this result implies that the difference in the variance between the two groups is actually [*larger*]{} than what is observed in Fig. \[f7\].
Discussion
==========
We have used archival WFPC2 HST images to study the morphology of 58 nearby galaxies in a quantitative way, i.e. by measuring the variance (that is to say, the average amplitude of the deviations from the smooth galactic isophotes) in two regions around the center of each galaxy, one with a size of 100 pc and the other with a size of 1 kpc. Our main results are as follows:
1\) Taken as a whole, the galaxies show considerable structure in their central regions. The amplitude of the nuclear features is roughly $\sim 25\%$ and $\sim 16\%$ of the underlying galactic emission in the inner regions of 100 pc and 1 kpc, respectively.
2\) When we consider the whole AGN and non-AGN group of galaxies, the distributions of the variances are statistically similar.
3\) The central structure tends to increase from early-type towards late-type galaxies.
4\) The $\sigma^{2}_{S1kpc}$ increases “smoothly” for both the AGN and non-AGN galaxies with increasing $T$ (see Fig. \[f6\]). Consequently, the spiral arm structure and large scale dust lane morphology (these two factors contribute most of the large scale structure that we observe) are similar in both groups of galaxies.
5\) However, the $\sigma^{2}_{S100pc}$ values show a large, discontinuous increase between the non-AGN late and early-type galaxies, while they remain roughly constant for active galaxies, of both early and late type.
Consistent with previous studies (e.g. Malkan [et al. ]{}1998, Regan & Mulchaey 1999, Martini & Pogge 1999), we find that all AGNs show evidence for significant central structure, irrespective of the host galaxy type. The deviations are caused by localised regions of excess emission or deficits, the latter probably caused by dust absorption. The difference in the morphology of the central region between AGN and non-AGN galaxies is particularly strong in the case of early-type galaxies. The “structure” image of [*all*]{} the early-type, non-AGN galaxies (except for NGC 4694, see below) looks like the structure image of NGC 4612 in Fig. \[f3\] (an early-type galaxy itself). No significant deviations of any kind from the fitted ellipses appear in the images of the other normal, early-type galaxies. On the contrary, the structure images of [*all*]{} the early-type, AGN galaxies show significant structures in their central regions. An example is shown in Fig. \[f2\] (NGC 5273 is an early-type, AGN galaxy). The central region (i.e. the innermost $100$ pc) shows significant deviations from the isophotes, with many bright and dark regions appearing in a rather “chaotic” pattern. Large-scale dark “lanes”, which extend into the innermost region, are also evident. The only exception among the early-type, non-AGN galaxies is NGC 4694, an H II galaxy according to Ho [et al. ]{}(1997a), which shows large amplitude structure in its central regions (in fact, its $\sigma^{2}_{S100pc}$ value is the largest among the early-type galaxies), similar to the structure seen in the central regions of AGNs. We suspect that this galaxy may have a misclassified Hubble type (indeed, it has a much more peculiar shape than typical $T=-2$ galaxies) or a misclassified nuclear activity type.
Due to the small sizes of the early-type AGN and non-AGN galaxy samples, the significant differences that we find in the morphology of their central regions should be considered with caution. For example, although we find no statistically significant difference in the distribution of host type for the two samples, approximately $44\%$ of the early-type non-AGNs are ellipticals ($T=-5$), while only $11\%$ of the early-type AGNs are ellipticals. This could explain in part the observed difference in the distributions of the $\sigma^{2}_{S}$ values. However, only one of the remaining nine non-AGN galaxies with $T>-5$ shows central micro-structure (NGC 4694), while all eight early-type AGN galaxies with $T>-5$ show significant structure in their central regions. If confirmed with the use of larger samples, this result is consistent with the hypothesis that the presence of an active nucleus in early-type galaxies is associated with the presence of material in them. It is possible that all early-type galaxies host a supermassive black hole, but only in those cases where a sufficient quantity of interstellar material has managed to reach the innermost region (and fuel the central engine) does the galaxy exhibit an active nucleus.
We find a different picture in the case of late-type galaxies. Almost all of them show large amplitude structure in their central regions. In fact, the average amplitude of the deviations in the 100 pc innermost region of the late-type non-AGN galaxies is [*larger*]{} than the amplitude in the AGN of the same morphological type (see Fig. \[f6\]). This is a puzzling result, which implies that the presence of significant structure, and hence of material in the inner region of these galaxies, does not result in the presence of an active nucleus in them.
One possibility is that the mass of the black hole in late-type normal galaxies is small and hence the luminosity of the nucleus is not large enough to be detected. We investigated this possibility by computing the mass of the putative black hole in the center of the galaxies according to the formula
$$M_{BH} = 0.78 \times 10^8\, M_{\sun} \left(\frac{L_{bulge}}{10^{10}\, L_{\sun}}\right)^{1.08}$$
given in Kormendy & Gebhardt (2001). This relation between the black hole mass ($M_{BH}$) and the $B$ band luminosity of the bulge component of the galaxy ($L_{bulge}$) has been established in recent years from stellar, ionized gas, and maser dynamics observations of many, mainly inactive or weakly active, galaxies. Using the values of the absolute bulge magnitude ($M_{bulge}$) for our sample of galaxies (taken from Ho [et al. ]{}1997a) we calculated the luminosity of the bulge ($L_{bulge} = 10^{0.4(4.79 -
M_{bulge})}$) and thus the black hole mass $M_{BH}$ for each galaxy. We found that the black hole mass for the AGNs in our sample ranges from $7.4
\times 10^6 M_{\sun}$ to $6.8 \times 10^8 M_{\sun}$ with a median value of $2.0 \times 10^8 M_{\sun}$. For the non-AGNs, assuming that they also host an “invisible” black hole, the mass range would be from $1.0 \times 10^7
M_{\sun}$ to $2.2 \times 10^9 M_{\sun}$ with a median value of $8.2 \times
10^7 M_{\sun}$. Hence, on average, the active galaxies host a black hole with a mass which is $\sim$ 2.5 times larger than the mass of the black hole in the non-AGN galaxies in our sample. However, there is also a considerable overlap between the black hole mass values that we compute for the AGN and non-AGNs. Therefore, we conclude that most of the late-type, non-AGN galaxies in our sample either do [*not*]{} host a supermassive black hole, or, for some reason, although there is enough material in the central region (i.e. within the innermost 100 pc), it cannot fuel the central engine.
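The bulge-luminosity-to-black-hole-mass conversion described above is straightforward to reproduce; the sketch below assumes only the two relations quoted in the text (with $M_{B,\sun}=4.79$), and the input magnitude is illustrative rather than a value from the sample.

```python
def black_hole_mass(M_bulge):
    """Black hole mass (solar masses) from the absolute B-band bulge
    magnitude, via the Kormendy & Gebhardt (2001) relation quoted above."""
    # Bulge luminosity in solar units
    L_bulge = 10.0 ** (0.4 * (4.79 - M_bulge))
    # M_BH = 0.78e8 * (L_bulge / 1e10)^1.08 solar masses
    return 0.78e8 * (L_bulge / 1.0e10) ** 1.08

# Illustrative bulge magnitude (not taken from the sample tables):
print("%.2e" % black_hole_mass(-20.0))
```

A brighter bulge (more negative $M_{bulge}$) yields a larger inferred black hole mass, consistent with the mass ranges quoted above.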
In order to investigate possible reasons that could prevent the fueling of the central engine in the galaxies that show the largest amplitude structure in their innermost 100 pc region, we compared the morphology of the circumnuclear structures that we observe in the late-type, AGN and non-AGN galaxies, in order to determine whether any systematic differences exist. We could not identify any clear patterns that appear exclusively in one of the two groups of galaxies. There are galaxies in both groups which show nuclear dust spiral formations, which sometimes appear to connect to larger scale dust lanes. In other cases, irrespective of the galaxy’s activity type, the distribution of the structures follows a chaotic pattern, with no clear, overall formation.
Therefore, the only significant difference that appears to exist between late-type, AGN and non-AGN galaxies is the amplitude of the nuclear structures. Perhaps an active nucleus does exist in the late-type, non-AGN galaxies in our sample but, if the larger amplitude structure that we find in these galaxies implies the existence of a larger amount of gas in their central region, then this material, apart from fueling the central black hole, may also obscure the central active nucleus (including the Narrow Line Region) from our sight. At the same time, the large amounts of gas could result in the formation of a large number of star forming regions which make these galaxies look like H II galaxies. As the central gas content decreases, the active nucleus will be revealed and, at the same time, the central black hole will have increased its mass, rendering the galaxy a “normal” AGN.
Summary
=======
Using archival [*Hubble Space Telescope*]{} images, we examined the central morphology of 58 galaxies (23 AGN and 35 non-AGN). Using the “ellipse fitting” technique, we “uncovered” hidden structures in the innermost parts of the galaxies. In order to compare, in a quantitative way, the structure seen in the samples of AGN and normal galaxies, we calculated its variance (a quantity that is proportional to the structure amplitude, normalised to the underlying galactic emission) and compared its distribution for different subgroups of the galaxies. We found that [*all*]{} AGNs show significant structure in their central 100 pc region. The amplitude of the structures is more or less independent of their Hubble type. When grouping the galaxies according to their Hubble type we found that, contrary to early-type AGNs, early-type non-AGN galaxies show no structure at all. This result is consistent with the hypothesis that all early-type galaxies host a supermassive black hole, but only in those cases where there is a significant amount of material in their central regions do they host an active nucleus. On the other hand, late-type galaxies show significant nuclear structures irrespective of whether they are AGNs or not. This implies that the presence of material in the inner region of these galaxies does not necessarily result in the presence of an active nucleus. Either not all late-type galaxies host a central black hole, or, for some reason, contrary to what happens in AGN, the significant amount of material on scales of tens of parsecs does not make it down to the scales of the central black hole. Another possibility is that the large amounts of gas and dust in them obscure the nucleus from our sight.
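The variance statistic described above can be sketched numerically. Since the exact definition of the "structure" image and of $\sigma^{2}_{S}$ is given in Section 2, the normalisation used below (ellipse-fit residuals divided by the fitted model, with the variance taken over the central pixels) is an assumed, simplified form rather than the paper's precise prescription.

```python
import numpy as np

def structure_variance(image, model, central_mask):
    """Variance of the ellipse-fit residuals, normalised to the underlying
    galactic emission, over a chosen central region (simplified sketch)."""
    structure = (image - model) / model
    return np.var(structure[central_mask])

# Toy example: a smooth galaxy profile with localised central excess emission
yy, xx = np.mgrid[-50:50, -50:50]
r = np.hypot(xx, yy)
model = np.exp(-r / 20.0)            # fitted smooth elliptical-isophote model
image = model.copy()
image[48:52, 48:52] *= 1.5           # central "micro-structure"
central = r < 10                     # innermost pixels (e.g. the 100 pc region)

print(structure_variance(image, model, central))
```

A featureless galaxy gives zero variance, while localised central emission or dust deficits raise $\sigma^{2}_{S}$, which is the basis of the AGN/non-AGN comparison above.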
Our results are based on the use of small samples, because the number of current HST WFPC2 images of nearby galaxies that satisfy the criteria listed in Section 2 is small. Obviously, larger samples are needed in order to confirm our results. We plan to repeat the analysis presented in this work when a larger number of observations of AGN and non-AGN galaxies becomes available, and to investigate in greater detail the differences/similarities in the morphology of the central region in these galaxies.
We wish to thank the referee (J. Mulchaey) for useful comments and suggestions on the improvement of this paper. This paper has also benefited a lot from discussions with N. Kylafis, J. Papamastorakis, V. Charmandaris and K. Xilouris.
Bender, R., Möllenhoff, C., 1987, A&A, 177, 71
Biretta, J.A., [et al. ]{}2000, WFPC2 Instrument Handbook, Version 5.0 (Baltimore, STScI)
Bushouse, H.A., 1986, AJ, 91, 255
Dahari, O., 1984, AJ, 89, 966
Fuentes–Williams, T., Stocke, J.T., 1988, AJ, 96, 1235
Hernquist, L., 1989, Nat, 340, 687
Ho, L.C., Filippenko, A.V., Sargent, W.L.W., 1995, ApJS, 98, 477
Ho, L.C., Filippenko, A.V., Sargent, W.L.W., 1997a, ApJS, 112, 315
Ho, L.C., Filippenko, A.V., Sargent, W.L.W., 1997b, ApJ, 487, 568
Ho, L.C., 2001, astro-ph/0110438
Jedrzejewski, R.I., 1987, MNRAS, 226, 747
Kent, S.M., 1983, ApJ, 266, 562
Kormendy, J., Gebhardt, K., 2001, astro-ph/0105230
Malkan, M.A., Gorjian, V., Tam, R., 1998, ApJS, 117, 25
Márquez, I., Durret, F., Gonzalez Delgado, R.M., et al., 1999, A&AS, 140, 1
Márquez, I., Durret, F., Masegosa, J., et al., 2000, A&A, 360, 431
Martini, P, Pogge, R.W., 1999, AJ, 118, 2646
Milvang-Jensen, B., Jørgensen, I., 1999, Baltic Astron., 8, 535
Mulchaey, J.S., Regan, M.W., Kundu, A, 1997, ApJS, 110, 299
Mulchaey, J.S., Regan, M.W., 1997, ApJ, 482, L135
Pogge, R., & Martini, P. 2002, ApJ, (in press) (astro-ph/0201185)
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 1992, Numerical Recipes. Cambridge Univ. Press, Cambridge
Rafanelli, P., Violato, M., Baruffolo, A., 1995, AJ, 109, 1546
Regan, M.W., Mulchaey, J.S., 1999, AJ, 117, 2676
Schmitt, H.R., 2001, AJ, 122, 2243
Shlosman, I., Begelman, M.C., Frank, J., 1990, Nat, 345, 679
Wozniak, H., Friedli, D., Martinet, L., et al., 1995, A&AS, 111, 115
Xanthopoulos, E., 1996, MNRAS, 280, 6
Young, J.S., Scoville, N.J., 1991, ARA&A, 29, 581
[^1]: Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under the NASA contract NAS5–26555.
---
abstract: 'Long-lived isotopes of plutonium were studied using two complementary techniques, high-resolution resonance ionisation spectroscopy (HR-RIS) and collinear laser spectroscopy (CLS). Isotope shifts have been measured on the $5f^67s^2\ ^7F_0 \rightarrow 5f^56d^27s\ (J=1)$ and $5f^67s^2\ ^7F_1 \rightarrow 5f^67s7p\ (J=2)$ atomic transitions using the HR-RIS method and the hyperfine factors have been extracted for the odd mass nuclei $^{239,241}$Pu. Collinear laser spectroscopy was performed on the $5f^67s\ ^8F_{1/2} \rightarrow J=1/2\; (27523.61\text{cm}^{-1})$ ionic transition with the hyperfine $A$ factors measured for $^{239}$Pu. Changes in mean-squared charge radii have been extracted and show a good agreement with previous non-optical methods, with an uncertainty improvement by approximately one order of magnitude. Plutonium represents the heaviest element studied to date using collinear laser spectroscopy.'
author:
- 'A. Voss'
- 'V. Sonnenschein'
- 'P. Campbell'
- 'B. Cheal'
- 'T. Kron'
- 'I.D. Moore'
- 'I. Pohjalainen'
- 'S. Raeder'
- 'N. Trautmann'
- 'K. Wendt'
bibliography:
- 'paper.bib'
title: 'High-Resolution Laser Spectroscopy of Long-Lived Plutonium Isotopes'
---
Introduction
============
Laser spectroscopy is an established technique at radioactive ion beam (RIB) facilities for the study of nuclear shape, size, moments and spins of short-lived radioactive nuclei [@Blaum2013; @Campbell2016]. Thus far, the heaviest isotopic chain for which nuclear moments and mean-squared charge radii have been extracted from on-line experiments is that of Ra ($Z=88$) [@Ahmad1988], above which lie the actinide elements covering a range from Ac ($Z=89$) to Lr ($Z=103$). Such elements are not available at on-line isotope separator facilities and can only be produced via fusion reactions in heavy-ion collisions, transfer reactions using radioactive targets, or, for the study of long-lived isotopes of transuranium elements, bred in sufficient quantities in nuclear reactors and safely transported to facilities equipped for the study of nuclear structure. The combination of low production cross-sections, the lack of stable isotopes and correspondingly only a limited number of reliably determined optical transitions available from literature adds to the challenge of performing laser spectroscopy on these heaviest elements. Due to the scarcity of ground state nuclear structure information in this region of the nuclear chart, efforts are under way to develop suitable techniques which provide the required sensitivity to efficiently make use of the limited quantity of isotopes which can be produced [@Ferrer2013; @Backe2015; @Ferrer2017].
Current studies at RIB facilities predominantly use two main techniques of optical spectroscopy. The first, collinear laser spectroscopy, has been applied in a number of variants to the majority of elements, possessing a high resolution which routinely provides measurements of optical frequency splittings to $1-10\text{MHz}$ precision, and a high sensitivity, with minimum fluxes of $\sim 100$ particles per second quoted for systems with hyperfine structure [@Bissell2007], even lower for even-even isotopes. The second, resonance ionisation spectroscopy (RIS), has been successfully applied directly within the ion source [@Fedosseev2012] allowing the study of even more exotic nuclei, with minimum half-lives approaching $\sim 1\text{ms}$ and with spectroscopic information extracted from fluxes of below 1 ion per second [@deWitte2007].
The disadvantage of the in-source RIS method arises from the effect of the different broadening mechanisms which limits the resolution to typically a few GHz. It remains a challenge to analyse lower resolution RIS spectra which exhibit either fully or partially overlapping hyperfine structures and to reliably assign systematic uncertainties to such measurements. Nevertheless, the complementarity of both spectroscopic methods has been demonstrated in recent studies of the nuclear structure of exotic Cu isotopes, where the lower resolution in-source RIS method was often used to greatly reduce the scanning range for high-resolution collinear laser spectroscopy [@Stone2008; @Cocolios2009; @Cocolios2010; @Flanagan2009; @Vingerhoets2011; @Koster2011].
In recent years, in-source spectroscopy has improved with the development of advanced cavity designs for pulsed lasers. These new resonators combine the features of high output powers required for the saturation of atomic transitions in the resonant ionisation process, as well as a reduction in the laser linewidth permitting a higher spectroscopic resolution for studies of isotope shifts and hyperfine structures. A ring design of Ti:Sapphire laser cavity developed at Mainz University resulted in output powers of up to $1\text{W}$ in single-mode operation, with a laser linewidth of below $50\text{MHz}$ [@deGroote2015]. Injection-locked systems on the basis of a similar ring cavity Ti:Sapphire laser have demonstrated linewidths of pulsed high-repetition-rate radiation of below $20\text{MHz}$, while maintaining an impressive output power in the range of a few Watts [@Kessler2008; @Sonnenschein2015].
In this work, a new programme of heavy element research at the <span style="font-variant:small-caps;">Igisol</span> facility in the Accelerator Laboratory of the University of Jyväskylä was initiated in collaboration with the Institut für Physik, Johannes-Gutenberg Universität, Mainz. Plutonium ($Z=94$) was chosen to be a suitable candidate having a number of long-lived isotopes, $^{238-244}$Pu, for which sufficiently large sample sizes (ng) have been supplied by the Institut für Kernchemie, Mainz, for studies both in Mainz and at Jyväskylä. Earlier optical emission studies of Pu include a measurement of the isotope shift for $^{239-240}$Pu with modest resolution in a number of atomic levels and transition lines [@ActinideTables], whereas for a larger set of isotopes, $^{238-242}$Pu, the shift was determined with a precision of approximately $100\text{MHz}$ in selected levels in neutral Pu [@LandoltBornstein]. Motivated by trace analysis applications [@Trautmann2004; @Raeder2012], resonance ionisation has been applied to quantify the plutonium amount in environmental samples. To enable isotope selectivity in these studies, resonance ionisation spectroscopy (RIS) was used in combination with a time-of-flight mass spectrometer to resolve the isotope shifts in $^{238-242,244}$Pu [@Gruning2001] to a precision of about $600\text{MHz}$, with later refinements using narrow linewidth continuous-wave (CW) lasers resulting in a precision of $15-30\text{MHz}$ for $^{239,240,242,244}$Pu [@Kunz2004].
The focus of the current article is the first comparison of the experimental techniques of collinear laser spectroscopy and high-resolution RIS in the actinide region. A measurement of optical isotope shifts in Pu has been performed on three transitions, two atomic transitions using an injection-locked pulsed laser system at Mainz and an ionic transition via collinear laser spectroscopy in Jyväskylä. The King plot method [@King1984] is used to extract changes in mean-squared charge radii in order to assess the accuracy of both techniques. Establishing a good agreement is of importance such that either method can be used in the future to explore the properties of heavier actinides or lighter refractory elements which have thus far been challenging to access.
The article is structured as follows: in section \[sec:exp\] details of the two techniques are presented. The results and data analysis for the two experiments are presented in section \[sec:data\], with the extraction of changes in mean-squared charge radii in section \[sec:king\]. Section \[sec:sys\_errors\] contains a discussion of the systematic errors assigned to both techniques. The final conclusions are drawn in section \[sec:conc\].
Experimental Technique {#sec:exp}
======================
As this work utilises two very different laser spectroscopic techniques, resonance ionisation spectroscopy using pulsed lasers and collinear laser spectroscopy using a continuous-wave laser, both techniques are described separately.
High-Resolution Resonance Ionisation Spectroscopy
-------------------------------------------------
High-resolution resonance ionisation spectroscopy (HR-RIS) was performed using a Ti:Sapphire laser system composed of one conventional $10\text{kHz}$ high repetition rate laser together with a dedicated injection-locked Ti:Sapphire laser [@Sonnenschein2015] which ensured a specifically narrow bandwidth. These were operated at the Mainz Atomic Beam Unit (<span style="font-variant:small-caps;">Mabu</span>) which comprises a quadrupole mass filter (QMF), as schematically depicted in Fig. \[fig:mabu\] [@Rossnagel2012]. A well-collimated atomic beam was formed by resistively heating a graphite oven to approximately $1300\text{K}$. In order to reduce the large Doppler broadening due to the thermal velocity distribution of the atomic ensemble expected in co-/counter-propagating laser irradiation, the laser beam from the injection-locked Ti:Sapphire laser was introduced in a perpendicular geometry to the effusing atomic beam and expanded to generate a uniform intensity distribution within the interaction volume. The ionisation laser was introduced counter-propagating to the atomic beam and focused to a spot size comparable to the oven dimension with an inner diameter of $2\text{mm}$. Following resonant ionisation, the ion beam was shaped and deflected by $90^\circ$ in a quadrupole deflector to enter the QMF (mass resolving power $M/\Delta M \sim 200$) and subsequently detected using a channeltron electron multiplier operating in single-ion counting mode. Due to the pulsed nature of the lasers, a time-gating method was employed to minimise the background from surface ions.
The ring cavity Ti:Sapphire laser was injection-locked to an external cavity diode laser (<span style="font-variant:small-caps;">Ecdl</span>) via a single-mode optical fibre providing $5-20\text{mW}$ seed input. The spectral linewidth of the Ti:Sapphire laser was analysed with a commercial scanning Fabry-Pérot interferometer (FPI) with a free spectral range (FSR) of $\sim300\text{MHz}$, resulting in a measured linewidth of $13.4(8)\text{MHz}$. This may be compared with the measured linewidth of the master (<span style="font-variant:small-caps;">Ecdl</span>) laser of $10.1(2)\text{MHz}$. The <span style="font-variant:small-caps;">Ecdl</span> was stabilised via a quadrature interferometer (<span style="font-variant:small-caps;">iScan</span>, <span style="font-variant:small-caps;">Tem Messtechnik</span>) for fast frequency control in combination with fringe-offset locking for long-term stability [@Fischbach2012; @*Hakimi2013]. By locking to a confocal FPI ($\text{FSR}=299.782(5)\text{MHz}$) and using a frequency-stabilised HeNe laser as reference, a frequency calibration of better than $1\text{MHz}$ could be attained. The frequency of the injection-locked laser was scanned in a stepwise manner by driving the <span style="font-variant:small-caps;">Ecdl</span> to fixed setpoints. Data acquisition of the ion signal took place whenever the <span style="font-variant:small-caps;">Ecdl</span> laser frequency was within a $\pm5\text{MHz}$ locking interval of the setpoint.
Spectroscopy on neutral Pu was performed on two different atomic transitions illustrated in Fig. \[fig:scheme\]. The first transition at $385.210\text{nm}$ proceeds from the atomic ground state ($J=0$) to an excited state at $25959.849\text{cm}^{-1}$ ($J=1$), whereas the second transition at $387.965\text{nm}$ proceeds from a thermally populated state at $2203.606\text{cm}^{-1}$ ($J=1$) to a level at $27929.161\text{cm}^{-1}$ ($J=2$). At an oven temperature of $\sim 1300\text{K}$ the metastable state is expected to have a 20% population with respect to the ground state. The wavelengths were obtained by single-pass frequency doubling of the laser radiation from the injection-locked Ti:Sapphire laser using a $\beta$-barium borate (BBO) non-linear crystal. The ionisation step for both transitions proceeded via auto-ionising states above the ionisation potential (IP) and was provided by an intra-cavity frequency doubled broadband Ti:Sapphire laser with a fundamental linewidth of approximately $4-5\text{GHz}$. Typical laser powers available for the ionisation step were up to $1\text{W}$. Lower laser powers of the order of $2-10\text{mW}$ were used for the spectroscopy step to minimise saturation broadening.
Collinear Laser Spectroscopy
----------------------------
Collinear laser spectroscopy was performed at the <span style="font-variant:small-caps;">Igisol</span> facility of the Accelerator Laboratory at the University of Jyväskylä. Samples containing Pu isotopes ($^{238-242,244}$Pu) were electrolytically deposited onto a tantalum substrate which was electrothermally heated inside a gas-cell filled with helium. The Pu atoms were selectively ionised via two-step resonant laser ionisation utilising two intra-cavity frequency doubled, broadband Ti:Sapphire lasers operating at a repetition rate of $10\text{kHz}$ and a linewidth of $\sim100\text{GHz}$. Further details concerning the gas-cell designed for such heavy element studies as well as the in-gas-cell resonant laser ionisation process have been published elsewhere [@Pohjalainen2016].
The ions were extracted from the gas-cell via gas flow, guided through a sextupole ion guide (SPIG) [@Karvonen2008] and accelerated to $30\text{keV}$ towards a mass separator with a typical resolving power $M/\Delta M \sim 350$. Following mass separation, a continuous ion beam of a single $A/q$ was injected into a gas-filled radio-frequency Paul trap (RFQ) for cooling and bunching [@Nieminen2001]. The use of an RFQ in conjunction with collinear laser spectroscopy was pioneered at the <span style="font-variant:small-caps;">Igisol</span> facility in order to suppress the laser-scattered background by gating the data acquisition with respect to the arrival of an ion bunch at the light collection region [@Nieminen2002]. The bunched ion beam was overlapped in a collinear geometry with a counter-propagating laser beam. A scanning voltage applied to the light collection region Doppler shifted the ions into resonance with the laser light and the resulting fluorescent photons were imaged and detected on a photomultiplier tube. A schematic diagram of the collinear laser spectroscopy beamline is given in Fig. \[fig:laserline\]. In this work, the tantalum filaments were heated to a temperature of approximately $1300-1500\text{K}$, depending on the abundance of the isotope of interest to be evaporated from the filament, such that a Pu$^{+}$ ion yield of approximately $30,000/\text{s}$ was detected on a set of microchannel plates at the end of the collinear laser spectroscopy beamline.
Laser spectroscopy was performed from the PuII ionic ground state on the $5f^67s\ ^8F_{1/2} \rightarrow J=1/2\; (27523.61\text{cm}^{-1})$ transition [@ActinideTables] at $363.324\text{nm}$. The laser light was generated by a <span style="font-variant:small-caps;">SpectraPhysics 380D</span> dye laser operating with Pyridine2 dye pumped by a <span style="font-variant:small-caps;">Coherent Verdi V5</span> diode-pumped solid-state (DPSS) laser at $532\text{nm}$. Long-term frequency stabilisation was achieved employing a “top-of-fringe” locking to an iodine absorption line with a $3\text{MHz}$ accuracy. Intra-cavity frequency doubling using a BBO crystal allowed the generation of up to $0.35\text{mW}$ UV light which was injected into the beamline through a Brewster window.
Data Analysis and Results {#sec:data}
=========================
Resonance Ionisation Spectroscopy
---------------------------------
The resonance ionisation spectra obtained using the two transitions on $^{238-242,244}$Pu atoms are shown in Fig. \[fig:RIS-spectra\]. A frequency jitter of the <span style="font-variant:small-caps;">Ecdl</span> of typically $5-10\text{MHz}$ and the nature of the data acquisition selection ($\pm5\text{MHz}$ around the setpoint for the fundamental light) led to small shifts of the resonance positions between the two scan directions. To account for this effect, the data from scans with increasing and decreasing frequency were summed for the analysis. The fit results for the hyperfine coefficients and isotope shifts with respect to the reference isotope $^{240}$Pu are given in Table \[tab:RIS-results\].
  Isotope      $I$     $\delta\nu^{240,A}_{385}$    $A'_{385}$     $B'_{385}$      $\delta\nu^{240,A}_{388}$   $A_{388}$    $A'_{388}$    $B_{388}$       $B'_{388}$
  ------------ ------- ---------------------------- -------------- --------------- --------------------------- ------------ ------------- --------------- ---------------
  $^{244}$Pu   $0$     $-$1955.2(55)\[80\]                                         $-$6288.1(60)\[80\]
  $^{242}$Pu   $0$     $-$969.4(54)\[80\]                                          $-$3099.3(90)\[80\]
  $^{241}$Pu   $5/2$   $-$379.5(117)\[80\]$^\ast$   $-$278.1(56)   $+$100.8(180)   $-$1228.3(120)\[80\]        $-$2.2(48)   $+$37.7(34)   $+$1058.6(82)   $-$175.4(143)
  $^{240}$Pu   $0$     0.0{ref}                                                    0.0{ref}
  $^{239}$Pu   $1/2$   $+$757.8(60)\[80\]           $+$402.3(32)                   $+$2438.7(120)\[80\]                     $-$62.6(33)
  $^{238}$Pu   $0$                                                                 $+$4126.7(130)\[80\]

  : Summary of the isotope shifts $\delta\nu^{240,A}$ and hyperfine coefficients (all in MHz) extracted from the HR-RIS measurements on the $385.210\text{nm}$ and $387.965\text{nm}$ atomic transitions. Primed quantities refer to the upper state of each transition. Statistical uncertainties are given in round brackets, systematic uncertainties in square brackets. The value marked with $^\ast$ was estimated using the field shift ratio between the two transitions (see text).[]{data-label="tab:RIS-results"}
The data were fitted with a Voigt lineshape whereby the Gaussian component was considerably smaller than the Lorentzian component. The oven temperature was adjusted to optimise the release of the isotope of interest, which in turn may provide a different Gaussian contribution to the linewidth of each isotope. Free parameters in the fit were the background, the lineshape (FWHM, Gaussian and Lorentzian contributions), the peak intensities and centroids, as well as the hyperfine parameters where applicable. For each isotope, one lineshape was assumed for all resonances. The FWHM of the resonances varied from $100-150\text{MHz}$, of which approximately $20\%$ was attributable to the Gaussian component. With respect to the hyperfine parameters, some additional constraints were included in the fitting procedure, as outlined in the following. An example fit of the $^{241}$Pu spectrum for the $387.965\text{nm}$ atomic transition is given in figure \[fig:RIS-fit\], using free intensities, to show the close correspondence of the fitted spectrum with regard to the experimental error bars.
In the metastable transition at $387.965\text{nm}$, the peaks for the $F=7/2 \rightarrow F'=7/2$ and $F=7/2 \rightarrow F'=9/2$ hyperfine transitions (see Figure \[fig:hfs\] for an expanded level scheme) overlap closely in the $^{241}$Pu hyperfine spectrum. The relative intensities for these peaks were therefore fixed to the theoretical value of the corresponding Racah coefficients in the fit. Since the hyperfine $A$ coefficient of the lower state is close to and consistent with zero, as determined from the $^{241}$Pu spectrum, it was kept fixed at zero for the evaluation of the collapsed hyperfine structure in $^{239}$Pu.
The atomic ground state exhibits no splitting since $J=0$. Additionally, $^{238}$Pu was not observed using the $385.210\text{nm}$ transition, which may indicate that, despite a higher thermal population in the atomic ground state, the ionisation scheme starting from the metastable state was more efficient. As the <span style="font-variant:small-caps;">Mabu</span> was optimised for high transmission, the obtained mass resolving power of the QMF was not sufficient to fully discriminate between the Pu isotopes and thus the more abundant $^{242}$Pu was also present in scans of the hyperfine structure of $^{241}$Pu. As the hyperfine component of lowest frequency overlaps with the resonance of $^{242}$Pu, the extraction of the isotope shift was more demanding. It was also noticed during the data analysis that the third, highest-frequency component of the $^{241}$Pu hyperfine spectrum was missed during the laser scans. By using the ratio of field shifts between the two transitions for the even isotopes, the hyperfine structure centroid of $^{241}$Pu for the ground state transition can be estimated as $$\delta \nu_{385}^{240,241} = \frac {F_{385}}{F_{388}} \times \delta \nu_{388}^{240,241}\;\text{,}$$ where $F_{385}/F_{388}=0.309(7)$ from a King plot. Included in the fitting of $^{241}$Pu were the contributions from the abundant $^{242}$Pu mass peak as well as a weaker $^{240}$Pu component, using the independently determined centroid positions.
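Numerically, the estimate above amounts to scaling the measured $387.965\text{nm}$ shift by the field shift ratio. A minimal check, with a simple quadrature propagation of the quoted uncertainties (the published uncertainty may differ slightly, e.g. due to correlations between the fitted quantities):

```python
import math

F_ratio, dF_ratio = 0.309, 0.007      # F_385 / F_388 from the King plot
dnu388, ddnu388 = -1228.3, 12.0       # delta nu_388^{240,241} (MHz)

dnu385 = F_ratio * dnu388
# Quadrature sum of the two independent uncertainty contributions
ddnu385 = math.hypot(F_ratio * ddnu388, dF_ratio * abs(dnu388))

print(f"delta nu_385(240,241) = {dnu385:.1f}({ddnu385:.1f}) MHz")
```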
Collinear Laser Spectroscopy
----------------------------
As the iodine absorption line at $\sim727\text{nm}$ used to stabilise the dye laser is untabulated, the laser frequency had to be calculated from the resonance spectra of $^{240}$Pu, for which the transition frequency $\nu_\text{tran}$ is known [@ActinideTables]. This was accomplished by fitting the raw data of fluorescent photons versus DAQ channel number. The resonance channel number $x$ and the non-relativistic equation for a frequency offset $\Delta\nu$ for counter-propagating laser beams,
$$\begin{aligned}
\Delta\nu &= \nu_\text{laser}\left(1+\frac{v}{c}\right)-\nu_\text{tran}\\
&= \nu_\text{laser}\left(1+\sqrt{\frac{2[eV_\text{RFQ}-(mx+b)]}{m_\text{ion}c^2}}\right)-\nu_\text{tran},\end{aligned}$$
where $V_\text{RFQ}$ is the bias voltage of the RFQ and $m$, $b$ are the slope and intercept calibration parameters of the scanning voltage, was used. For the reference isotope $^{240}$Pu, $\Delta\nu=0$ on resonance.
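The voltage-to-frequency conversion above can be sketched as follows; the beam energy ($30\text{keV}$) and the $^{240}$Pu$^{+}$ mass are representative values from this work, while the calibration slope and intercept ($m$, $b$) are placeholders absorbed into a single scan voltage.

```python
import math

C = 299_792_458.0          # speed of light (m/s)
U_EV = 931.494095e6        # atomic mass unit in eV/c^2

def doppler_offset(nu_laser, nu_tran, v_rfq, scan_v, m_ion_u):
    """Non-relativistic frequency offset (Hz) for a counter-propagating
    laser, following the equation above; scan_v plays the role of m*x + b."""
    beta = math.sqrt(2.0 * (v_rfq - scan_v) / (m_ion_u * U_EV))
    return nu_laser * (1.0 + beta) - nu_tran

# 240Pu+ at 30 keV probed with frequency-doubled light near 13754.625 cm^-1:
nu_laser = 2.0 * C * 100.0 * 13754.625          # cm^-1 -> Hz (doubled)
beta = math.sqrt(2.0 * 30.0e3 / (240.05326518 * U_EV))
print(f"beta = {beta:.3e}")                     # a few parts in 10^4 of c
```

At this beam energy the ions move at roughly $5\times10^{-4}\,c$, so the scanning voltage tunes the Doppler-shifted laser frequency over the hyperfine resonances without touching the laser lock.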
The collinear laser spectroscopic work was performed on singly-ionised species and thus a correction to the atomic masses $m_\text{atom}$ from [@Wang2012] has been included to account for the mass difference due to the missing electron. Here, $$m_\text{ion} = m_\text{atom} - m_\text{electron} + m_\text{IP},\label{eq:m_ion}$$ where $m_\text{electron}$ was taken from [@codata2014] and $m_\text{IP}$ represents the ionisation potential converted into mass units from [@Koehler1997]. The effect of the latter is negligible but has been included in equation \[eq:m\_ion\] for completeness. The calculated ionic masses used for evaluation of the optical spectra are tabulated in Table \[tab:ion\_masses\].
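As a consistency check of equation \[eq:m\_ion\], the ionic mass of $^{240}$Pu can be reconstructed from the atomic mass; the ionisation potential used below ($\approx 6.026\text{eV}$) is an assumed literature value, and its contribution is indeed far below the quoted mass uncertainty.

```python
U_EV = 931.494095e6              # atomic mass unit in eV/c^2
M_ELECTRON = 5.48579909e-4       # electron mass in u (CODATA)

m_atom = 240.05381375            # 240Pu atomic mass (u)
m_ip = 6.026 / U_EV              # assumed ionisation potential (~6.026 eV) in u

m_ion = m_atom - M_ELECTRON + m_ip
print(f"m_ion = {m_ion:.8f} u")  # reproduces the tabulated 240.05326518 u
```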
The corresponding wavenumber of the ring dye laser was determined to be $\bar{\lambda}_\text{laser} = 13754.625\text{cm}^{-1}$ from the weighted mean of all $^{240}$Pu scans, taking into account the frequency doubling. Any statistical errors related to the fits of the optical spectrum do not contribute at this scale. The fractional systematic uncertainty of the voltages, due to the readback of the RFQ and the scanning voltage of the light collection region, corresponds to $0.1\%$ [@Charlwood2009]; its effect on the wavenumber is $\sim0.0035\text{cm}^{-1}$. An uncertainty for the transition wavenumber $\bar{\lambda}_\text{transition}$ is not given in the literature [@ActinideTables]; however, it is assumed to be $0.01\text{cm}^{-1}$. As the uncertainty of $\bar{\lambda}_\text{laser}$ is dominated by that of $\bar{\lambda}_\text{transition}$, an overall systematic uncertainty of $0.01\text{cm}^{-1}$ for $\bar{\lambda}_\text{laser}$ is assumed.
  $A$   $m_\text{atom}$ (u)   $m_\text{ion}$ (u)
  ----- --------------------- --------------------
  244   244.06420526(557)     244.06365669(557)
  242   242.05874281(196)     242.05819424(196)
  240   240.05381375(192)     240.05326518(192)
  239   239.05216359(192)     239.05161502(192)

  : Comparison of the atomic masses from [@Wang2012] and the ionic masses used in this work.[]{data-label="tab:ion_masses"}
The optical fluorescence spectra of singly-charged $^{244,242,240,239}$Pu ions measured in this work are shown in Fig. \[fig:CLS-spectra\]. A purely Lorentzian lineshape was used in the data analysis reflecting a zero Gaussian contribution to the spectra. This can be expected when using cooled ion beams in which the energy spread is typically $<0.6 \text{eV}$ [@Campbell2002]. The fit parameters included the background, the FWHM of the resonance, the centroid and the intensity of the resonance for the $I=0$ isotopes. The FWHM of the resonances was $\sim30\text{MHz}$ for all isotopes. For isotopes with a non-zero nuclear spin, the hyperfine $A$ coefficients of the atomic ground and excited states were allowed to vary and the FWHM of all resonances was constrained to be common. An example of such a fit is shown in figure \[fig:CLS-fit\]. The hyperfine structure of $^{239}$Pu was fitted with free intensities with the relative intensities from the best fit parameters corresponding to those from weak field coupling estimates (“Racah intensities”). Due to the choice of optical transition, $J=1/2 \rightarrow J'=1/2$, there is no sensitivity to the electric quadrupole moments. The extracted parameters and isotope shifts with respect to $^{240}$Pu are summarised in Table \[tab:CLS-results\].
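For $^{239}$Pu ($I=1/2$) on a $J=1/2 \rightarrow J'=1/2$ transition, the hyperfine structure reduces to three allowed components whose positions follow from the two $A$ factors alone; a sketch using first-order magnetic-dipole shifts and the fitted values from Table \[tab:CLS-results\]:

```python
def hfs_shift(A, I, J, F):
    """First-order magnetic-dipole hyperfine shift (MHz) of a level |I,J,F>."""
    K = F * (F + 1) - I * (I + 1) - J * (J + 1)
    return 0.5 * A * K

I, J = 0.5, 0.5
A_lower, A_upper = +7445.5, -1421.0   # MHz, fitted 239Pu values

# Allowed components (F=0 -> F'=0 is dipole-forbidden)
for F, Fp in [(0, 1), (1, 0), (1, 1)]:
    pos = hfs_shift(A_upper, I, J, Fp) - hfs_shift(A_lower, I, J, F)
    print(f"F={F} -> F'={Fp}: {pos:+9.1f} MHz relative to the C.o.G.")
```

The component separations span several GHz, dominated by the large $A$ factor of the $^8F_{1/2}$ ground state, which is why the $^{239}$Pu spectrum is well resolved at the $\sim30\text{MHz}$ linewidth quoted above.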
![(Colour online) Optical fluorescence spectra for $^{244,242,240,239}$Pu$^{+}$ isotopes determined by collinear laser spectroscopy on the $^8F_{1/2} \rightarrow J=1/2$ ($363.324\text{nm}$) ionic transition. The centre of gravity (C.o.G.) in $^{239}$Pu is marked as a vertical dashed line.[]{data-label="fig:CLS-spectra"}](Pu-spectra-summed){width="\columnwidth"}
   Isotope      $I$     $\delta\nu^{240,A}$   $A_\text{lower}$   $A_\text{upper}$
  ------------ ------- --------------------- ------------------ ------------------
   $^{244}$Pu   $0$     $+$2160.6(48)\[25\]
   $^{242}$Pu   $0$     $+$1056.3(42)\[12\]
   $^{240}$Pu   $0$     0 (ref.)
   $^{239}$Pu   $1/2$   $-$872.7(55)\[9\]     $+$7445.5(32)      $-$1421.0(37)
: Summary of the extracted hyperfine $A$ parameters and isotope shifts $\delta\nu^{240,A}$ (all in MHz) for the collinear laser spectroscopy work on the $363\text{nm}$ ionic transition. Statistical uncertainties arising from hyperfine structure fits to the data are denoted by round brackets whereas systematic uncertainties (see section \[sec:sys\_errors\_CLS\]) due to the conversion from scanning voltages into frequencies are given in square brackets.[]{data-label="tab:CLS-results"}
Comparison of techniques using the King plot method {#sec:king}
===================================================
Information on the changes in mean-squared charge radii between nuclei with atomic masses $A$ and $A^{'}$ may be extracted from optical isotope shifts, $\delta\nu$, as $$\begin{aligned}
\delta\nu^{A',A} &= \nu^{A} - \nu^{A'}\\
&= \left(\frac 1 {m_{A'}} - \frac 1 {m_A}\right) M + F K(Z) \delta \langle r^2 \rangle^{A',A}. \label{eq:IS}\end{aligned}$$ Here, $M$ and $F$ are the transition-dependent atomic factors for the mass and field shift, respectively. $K(Z)$ is an element-dependent factor to correct for higher order (Seltzer) moments which contribute a few percent in heavier nuclei [@Seltzer1969; @Torbohm1985]. The atomic factors are to be determined either theoretically or empirically through the King plot technique [@Cheal2012; @King1984].
The King plot allows a direct determination of the atomic mass and field shift factors by examining optical isotope shifts either with respect to changes in mean-squared charge radii obtained from non-optical methods, $\delta \langle r^2 \rangle$, or via a transfer of known atomic factor information from one transition to another provided that each isotope pair has been studied using at least two different transitions. Multiplying Eq. (\[eq:IS\]) by a modification factor $\kappa$ $$\kappa^{A,A'} = \frac{m_A m_{A'}}{m_A-m_{A'}}\times\frac{m_{A_\text{ref}}-m_{A'_\text{ref}}}{m_{A_\text{ref}}m_{A'_\text{ref}}},$$ removes the dependence on the nuclear masses and introduces a standard reference pair $A_\text{ref}=244$ and $A'_\text{ref}=240$ for presentation purposes. The modified isotope shifts are then written as $$\kappa^{A,A'}\delta\nu_{i}^{A',A} = \frac{m_{244}-m_{240}}{m_{244}m_{240}} \times M_{i} + F_{i} K(Z) \kappa^{A,A'}\delta\langle r^2 \rangle^{A',A}\text{,}$$ where $i$ (and likewise $j$) denotes the transition. The modified isotope shifts of two optical transitions $i$ and $j$ may be plotted against each other and should yield a straight line with the atomic factor information contained in the gradient and intercept, $$\kappa^{A,A'}\delta\nu_{i}^{A',A} = \frac{F_i}{F_j} \kappa^{A,A'}\delta\nu_{j}^{A',A} + \frac{m_{244}-m_{240}}{m_{244}m_{240}} \times \left(M_i - \frac{F_i}{F_j}M_j\right).$$ A plot of the atomic isotope shifts determined by the HR-RIS method compared with the ionic shifts from collinear laser spectroscopy is shown in Fig. \[fig:KP-RIS\]. This serves as a consistency check of the measurements using the two techniques. The field shift ratio $F_\text{atomic}/F_\text{ionic}$ is $-0.799(39)$ for the ground state atomic $385.210\text{nm}$ transition and $-2.588(69)$ for the atomic metastable $387.965\text{nm}$ transition.
As expected for heavy elements, the $y$-axis intercepts related to the mass shifts $M$ are small, $-232(86)\text{MHz}$ and $-701(151)\text{MHz}$, respectively.
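As a numerical illustration of the modification factor, the following sketch (illustrative Python, not the analysis code of this work) evaluates $\kappa^{A,A'}$ and the modified ionic isotope shifts using the ionic masses of Table \[tab:ion\_masses\] and the $363\text{nm}$ isotope shifts of Table \[tab:CLS-results\]. By construction, $\kappa$ equals unity for the reference pair $(244, 240)$.

```python
# Ionic masses in u, from Table [tab:ion_masses]
IONIC_MASS_U = {239: 239.05161502, 240: 240.05326518,
                242: 242.05819424, 244: 244.06365669}

def kappa(A, Ap, A_ref=244, Ap_ref=240, m=IONIC_MASS_U):
    """Modification factor removing the nuclear-mass dependence,
    normalised to the reference pair (A_ref, Ap_ref)."""
    return (m[A] * m[Ap] / (m[A] - m[Ap])) \
        * ((m[A_ref] - m[Ap_ref]) / (m[A_ref] * m[Ap_ref]))

# Modified ionic isotope shifts kappa * delta-nu (MHz), 363 nm transition
dnu_363 = {244: 2160.6, 242: 1056.3, 239: -872.7}
modified_363 = {A: kappa(A, 240) * dnu for A, dnu in dnu_363.items()}
```

Note that $\kappa^{239,240}$ is negative, so the modified shift of $^{239}$Pu changes sign relative to the raw isotope shift.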
Absolute charge radii for $^{239,240,242}$Pu have been determined from $X$-ray studies of muonic atoms [@Zumbro1986]. Plotting the modified isotope shifts $\delta\nu^{240,A}$ against the changes in mean-squared charge radii $\delta\langle r^2 \rangle^{240,A}$ allows for a direct evaluation of the atomic factors (see Fig. \[fig:KP\] for all transitions in this work). A linear fit through the origin (assuming a negligible mass shift) was used to determine the effective field shift gradients $F_\text{eff} = F \times K(Z)$. The assumption of $M=0$ is based on experimental work in Th [@Sonnenschein2012] where the intercept was consistent with zero. Theoretical work carried out for Fr indicates that the absence of a mass shift contribution causes an uncertainty of $\sim1\%$ in the extracted $\delta\langle r^2 \rangle$ [@MartenssonPendrill2000]. The effective field shift factors for the HR-RIS transitions were determined to be $F_{\text{eff, }385\text{nm}}=-7.1(7)\text{GHz/fm}^2$ and $F_{\text{eff, }388\text{nm}}=-22.8(23)\text{GHz/fm}^2$ from linear fits to the King plot. The value for the effective field shift factor for the ionic collinear transition was extracted as $F_{\text{eff, }363\text{nm}}=+7.9(6)\text{GHz/fm}^2$. The negative $F$ factors reflect a decrease in $s$-electron density when promoting one electron from either the atomic ground state or metastable state to the excited state, as expected from the atomic configuration. In contrast, the positive $F$ factor in the transition in Pu$^+$ indicates an increase of the electron density at the nucleus, agreeing with the assumption of one $f$-electron being transferred to an orbital with lower angular momentum.
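The fit through the origin can be sketched as an unweighted least-squares slope (the $M=0$ assumption) using only the two muonic-atom data points for $A=239, 242$ and the ionic isotope shifts. This toy two-point fit (illustrative only; the actual analysis includes the uncertainties and proper weighting) approximately reproduces the published $F_{\text{eff, }363\text{nm}}=+7.9(6)\text{GHz/fm}^2$.

```python
# Ionic masses in u (Table [tab:ion_masses])
m = {239: 239.05161502, 240: 240.05326518, 242: 242.05819424, 244: 244.06365669}

def kappa(A, Ap):
    """King-plot modification factor, reference pair (244, 240)."""
    return (m[A] * m[Ap] / (m[A] - m[Ap])) * ((m[244] - m[240]) / (m[244] * m[240]))

dnu = {239: -872.7, 242: 1056.3}     # MHz, 363 nm shifts (Table [tab:CLS-results])
dr2 = {239: -0.120, 242: 0.125}      # fm^2, muonic X-ray values [Zumbro1986]

x = [kappa(A, 240) * dr2[A] for A in (239, 242)]   # modified radii
y = [kappa(A, 240) * dnu[A] for A in (239, 242)]   # modified shifts

# Fit y = F_eff * x through the origin: F_eff = sum(x*y) / sum(x*x)
F_eff_MHz = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
F_eff_GHz = F_eff_MHz / 1000.0
```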
The extracted changes in mean-squared charge radii are presented in Table \[tab:radii\]. Both spectroscopic techniques, collinear laser spectroscopy with fluorescence detection and high-resolution resonance ionisation spectroscopy utilising a narrow-linewidth injection-locked laser, provide values for $\delta\langle r^2 \rangle^{240,A}$ with similarly sized statistical uncertainties of $\sim5\times10^{-4}\text{fm}^2$. Systematic uncertainties arising from the $F_\text{eff}$ factors are of the order of $10\%$ of $\delta\langle r^2 \rangle^{240,A}$ and therefore dominate all other uncertainties. A graphical comparison is given in Fig. \[fig:radii\]. The extracted values for $\delta\langle r^2 \rangle^{240,A}$ from the optical isotope shifts in this work are consistent with those from muonic $X$-ray measurements [@Zumbro1986], however, have statistical uncertainties approximately one order of magnitude smaller. A comparison of $\delta\langle r^2 \rangle^{240,A}$ with respect to the average change in mean-squared charge radius per isotope is provided in the bottom panel of Fig. \[fig:radii\]. The relative change in relation to the average is defined as $$\Delta = \frac 1 N \left(\sum_i^N \delta \langle r^2 \rangle^{240,A}_i\right) - \delta\langle r^2 \rangle^{240,A}_j\;\text{,}$$ where the summation runs over all transitions $i$ studied in this work, $N$ reflects the number of transitions studying a particular isotope and $j$ refers to the transition of interest.
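For example, evaluating $\Delta$ for $\delta\langle r^2 \rangle^{240,242}$ with the three values of Table \[tab:radii\] (a minimal sketch; the transition labels are shorthand):

```python
# delta<r^2>(240,242) in fm^2 from the three optical transitions
dr2_242 = {"385nm": 0.1365, "388nm": 0.1359, "363nm": 0.1337}

avg = sum(dr2_242.values()) / len(dr2_242)          # per-isotope average
Delta = {t: avg - v for t, v in dr2_242.items()}    # deviation from average
```

When every transition has measured the isotope, the deviations sum to zero by construction, so $\Delta$ only visualises the scatter between transitions.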
A deviation in $\delta\langle r^2 \rangle^{240,A}$ values compared to [@Angeli2013] might partially be explained by different assumptions of the mass shift constant $M$. In this work, $M=0$ was used for all three investigated transitions. The arc discharge spectra of Pu [@Gerstenkorn1987] were evaluated in [@Angeli2013] against the muonic $X$-ray data from [@Zumbro1986] using $M=+391\text{GHz u}$ assuming an alkali-like transition. For comparison, $\delta\langle r^2 \rangle^{240,A}$ from [@Angeli2013] were plotted against $\Delta$ obtained from the three optical transitions to highlight the influence of the atomic factors on $\delta\langle r^2 \rangle$ as indicated in the middle panel of Fig. \[fig:radii\].
![(Colour online) King plots of the isotope shifts of all transitions in this work vs. changes in mean-squared charge radii evaluated from muonic $X$-ray measurements [@Zumbro1986]. The error bars for $\kappa\times\delta\nu^{240,A}$ lie within the data points. The best fits are shown as the dashed lines with the $68\%$ confidence bands included as the shaded areas.[]{data-label="fig:KP"}](KP){width="\columnwidth"}
  Transition             Spectrum   $F_\text{eff}$   $\delta\langle r^2 \rangle^{240,238}$   $\delta\langle r^2 \rangle^{240,239}$   $\delta\langle r^2 \rangle^{240,241}$   $\delta\langle r^2 \rangle^{240,242}$   $\delta\langle r^2 \rangle^{240,244}$
  ---------------------- ---------- ---------------- --------------------------------------- --------------------------------------- --------------------------------------- --------------------------------------- ---------------------------------------
  $385\text{nm}$         Pu I       $-$7.1(7)                                                $-$0.1067(9)\[11\]{105}                 $+$0.0538(14)\[11\]{52}                 $+$0.1365(8)\[11\]{134}                 $+$0.2754(8)\[11\]{271}
  $388\text{nm}$         Pu I       $-$22.8(22)      $-$0.1810(5)\[3\]{174}                  $-$0.1070(5)\[3\]{103}                  $+$0.0565(4)\[3\]{51}                   $+$0.1359(4)\[3\]{131}                  $+$0.2758(2)\[3\]{266}
  $363\text{nm}$         Pu II      $+$7.9(5)                                                $-$0.1105(7)\[1\]{69}                                                           $+$0.1337(5)\[1\]{84}                   $+$0.2735(6)\[3\]{173}
  Muonic [@Zumbro1986]                                                                       $-$0.120(66)                                                                    $+$0.125(68)
  [@Angeli2013]                                      $-$0.204(5)                             $-$0.122(3)                             $+$0.054(5)                             $+$0.151(5)                             $+$0.304(8)

  : Effective field shift factors $F_\text{eff}$ (in GHz/fm$^2$) and changes in mean-squared charge radii $\delta\langle r^2 \rangle^{240,A}$ (in fm$^2$) for the three optical transitions, compared with literature values. Statistical uncertainties are given in round brackets, experimental systematic uncertainties in square brackets and systematic uncertainties arising from $F_\text{eff}$ in curly brackets.[]{data-label="tab:radii"}
![(Colour online) (a) Changes in mean-squared charge radii extracted from optical isotope shifts in this work with respect to $^{240}$Pu and compared with literature values from [@Zumbro1986] and [@Angeli2013]. A small horizontal offset was included for display purposes. The error bars for this work lie within the data points. (b) The middle panel highlights the relative change in $\delta\langle r^2\rangle$ for the optical work from Table \[tab:radii\] with respect to the average change in mean-squared charge radius for each isotope as obtained from this work; see text for details. (c) In the lower panel, a zoom of the middle panel is provided without [@Angeli2013].[]{data-label="fig:radii"}](charge-radii){width="\columnwidth"}
Systematic Error Budget {#sec:sys_errors}
=======================
As the statistical uncertainties on the isotope shifts (and therefore also on the extracted changes in mean-squared charge radii) are comparably small for both the HR-RIS and CLS techniques, a further investigation of systematic uncertainties was undertaken. These are of an experimental nature and arise from the conversion of a measured variable into frequency changes. The frequency conversion mechanisms naturally differ between the two methods, but each can independently introduce uncertainties on the isotope shifts. These directly affect the extracted $\delta\langle r^2 \rangle$; slight changes in the King plots are also expected. For the arguments presented in this section, the previously mentioned values for the atomic factors will be used.
In addition, the influence of the atomic factors is briefly discussed, as these also directly affect the extraction of mean-squared charge radii.
Resonance Ionisation Spectroscopy {#sec:sys_errors_RIS}
---------------------------------
For the resonance ionisation studies, the master <span style="font-variant:small-caps;">Ecdl</span> laser was stabilised against long-term drifts via a scanning Fabry-Pérot interferometer to a frequency-stabilised HeNe laser. By sequentially changing the lockpoint of the master laser, the frequency of the injection-locked laser was scanned over a $10\text{GHz}$ frequency range in total. The change in frequency of the master/slave laser was then determined as $\Delta\nu = N \times \text{FSR}_\text{FPI} \times \lambda_\text{HeNe}/\lambda_\text{master}$, where $N$ is the number of FSRs scanned over, a maximum of $41$ for the $10\text{GHz}$ scan range. Combined with the uncertainty of the FSR, this results in a negligible effect of $0.1\text{MHz}$ on the frequency scale.
The scanning of the laser and the setpoint interval of $\pm5\text{MHz}$ (thus $\pm10\text{MHz}$ for frequency-doubled laser light) has significantly more influence. Due to the data acquisition mode of recording data whenever the master laser frequency had reached the set interval point, an offset in resonance centroids is observed between the two scanning directions of the master laser. This effect has been corrected for by summing the individual spectra together before analysis, however, bias effects may still be present. A systematic uncertainty of $\sim8\text{MHz}$ is therefore attributed to the isotope shifts.
When a slight asymmetry in the HR-RIS resonances is accounted for in the fitting process, by including an asymmetric Lorentzian contribution in the Voigt profile, the centroids of the hyperfine structures change by of the order of $0.5\text{MHz}$. In total, the isotope shifts are affected by less than $1\text{MHz}$, much smaller than the contribution from other uncertainties.
Only the frequency of the <span style="font-variant:small-caps;">Ecdl</span> laser was determined and recorded. In principle, the injection-locked laser should lase at a frequency very close to this value; however, effects such as a non-optimal lock, cavity mode-pulling and frequency chirps caused by the pump laser pulse may introduce inaccuracies [@Hannemann2007; @Hori2009]. The chirp effect, however, is a constant offset on the frequency axis and therefore identical for all isotopes. Any effect on the isotope shift is negligible.
Collinear Laser Spectroscopy {#sec:sys_errors_CLS}
----------------------------
Systematic uncertainties to the measurement of isotope shifts may be introduced from the conversion of scanning voltages to frequencies and may be evaluated using [@Mueller1983]
$$\begin{aligned}
\Delta_\text{sys}\left(\delta\nu^{240,A}\right) &=& \nu_\text{laser}\sqrt{\frac{e V_\text{RFQ}}{2 m_{240} c^2}} \left[\frac 1 2 \left(\frac{\delta V_\text{LCR}}{V_\text{RFQ}}+\frac{\delta m}{m_{240}}\right)\frac{\Delta V_\text{RFQ}}{V_\text{RFQ}}+\frac{\delta V_\text{LCR}}{V_\text{RFQ}}\frac{\Delta\delta V_\text{LCR}}{\delta V_\text{LCR}}+\frac{\Delta m_{240} + \Delta m_A}{m_{240}} \right] \\
&=& \nu_\text{laser}\sqrt{\frac{e V_\text{RFQ}}{2 m_{240} c^2}} \left[\frac 1 2 \left(\frac{\delta V_\text{LCR}}{V_\text{RFQ}}+\frac{\delta m}{m_{240}}\right)\times 10^{-3}+\frac{\delta V_\text{LCR}}{V_\text{RFQ}}\times 10^{-4}+\frac{\Delta m_{240} + \Delta m_A}{m_{240}} \right],\end{aligned}$$
with $V_\text{RFQ}$ being the bias voltage of the RFQ, $\delta V_\text{LCR}$ the difference in post-acceleration voltage of the light collection region for $A=240$ and $A=A'$ when on resonance, $\delta m = \left\vert m_A - m_{240} \right\vert$ with all masses being the ionic masses and their uncertainties $\Delta m$ according to Table \[tab:ion\_masses\]. According to [@Charlwood2009] and [@Campbell2002], $\Delta V_\text{RFQ}/V_\text{RFQ} = 10^{-3}$ and $\Delta\delta V_\text{LCR}/\delta V_\text{LCR} = 10^{-4}$, respectively. The bias of the RFQ is read out on a scan-by-scan basis via a $1:10^4$ resistor stack [@Campbell2002] and its weighted mean is used. In order to obtain $\delta V_\text{LCR} = \left \vert V_\text{LCR}^{240} - V_\text{LCR}^A \right\vert$, the (hyperfine) spectra were fitted as a function of post-acceleration voltage after calibration of the scanning power supply. The absolute errors on $\delta V_\text{LCR}$ arising from the linear calibration fit were typically $<0.1 \text{V}$ and therefore consistent with [@Campbell2002]. The calculated systematic uncertainties for the isotope shifts are consistent with zero for the reference isotope $^{240}$Pu and increase with increasing or decreasing neutron number, $\Delta N$. The systematic uncertainties obtained for the isotope shifts are transferred to the changes in mean-squared charge radii and have been included in Table \[tab:radii\].
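The error formula can be evaluated numerically as sketched below. The operating values $V_\text{RFQ}=30\,\text{kV}$ and $\delta V_\text{LCR}=500\,\text{V}$ are hypothetical placeholders (neither is quoted above), so the resulting numbers only illustrate the qualitative behaviour: $\Delta_\text{sys}$ is consistent with zero for the reference isotope and grows with the mass difference.

```python
import math

AMU_EV = 931.494095e6  # atomic mass unit in eV/c^2

def delta_sys(nu_laser, V_RFQ, dV_LCR, m_ref, m_A, Dm_ref, Dm_A,
              rel_V_RFQ=1e-3, rel_dV_LCR=1e-4):
    """Systematic isotope-shift uncertainty (same unit as nu_laser).
    Masses in u, voltages in V; rel_* are the relative readback
    uncertainties of the RFQ bias and of the scanning voltage."""
    beta_factor = math.sqrt(V_RFQ / (2.0 * m_ref * AMU_EV))
    dm = abs(m_A - m_ref)
    bracket = (0.5 * (dV_LCR / V_RFQ + dm / m_ref) * rel_V_RFQ
               + (dV_LCR / V_RFQ) * rel_dV_LCR
               + (Dm_ref + Dm_A) / m_ref)
    return nu_laser * beta_factor * bracket

# Doubled laser wavenumber (2 x 13754.625 cm^-1) converted to Hz
nu = 27509.25 * 2.99792458e10
# Hypothetical operating point: V_RFQ = 30 kV, dV_LCR = 500 V
s244 = delta_sys(nu, 30e3, 500.0, 240.05326518, 244.06365669, 192e-8, 557e-8)
s240 = delta_sys(nu, 30e3, 0.0, 240.05326518, 240.05326518, 192e-8, 192e-8)
```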
Influence of the atomic factors
-------------------------------
Atomic factors play an important role in the extraction of changes in mean-squared charge radii such that slight changes in $F$ and/or $M$ can influence $\delta\langle r^2 \rangle$ dramatically. The effective field shift factors as extracted from the slopes of the King plots in Fig. \[fig:KP\] possess an uncertainty of approximately $10\%$, stemming from the large uncertainty of the muonic $X$-ray data [@Zumbro1986]. As such, a systematic uncertainty of the order of $10\%$ is introduced on the values of $\delta\langle r^2 \rangle$ in Table \[tab:radii\].
In the extraction of $\delta\langle r^2 \rangle^{240,A}$, a zero mass shift contribution was assumed. This corresponds to the specific mass shift constant $S$ being of identical value but opposite in sign to the normal mass shift constant $N$. In the absence of alkali-like transitions and theoretical work on this complex system, no predictions for $S$ are available; $N$, however, may be calculated using $N = \nu m_e/m_u$ where $\nu$ corresponds to the frequency of the transition, $m_e$ to the mass of the electron and $m_u$ to the atomic mass unit. Incorporating $M=N$ and thus $S=0$ as a fixed intercept into the King plot (as done in [@Angeli2013]) has no influence on the extracted $F_\text{eff}$ values at the precision quoted in Table \[tab:radii\]. The difference in the $\delta\langle r^2 \rangle^{240,A}$ compared to the values presented in Table \[tab:radii\] is less than two percent.
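For the $363\text{nm}$ ionic transition ($27523.61\text{cm}^{-1}$), the normal mass shift constant evaluates to roughly $450\text{GHz u}$, as the following sketch shows (illustrative arithmetic only):

```python
C_CM_PER_S = 2.99792458e10   # speed of light in cm/s
ME_OVER_MU = 5.48579909e-4   # electron mass in atomic mass units

nu_hz = 27523.61 * C_CM_PER_S          # transition frequency in Hz
N_GHz_u = nu_hz * ME_OVER_MU / 1e9     # normal mass shift constant, GHz*u
```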
Inclusion of Systematic Uncertainties
-------------------------------------
As the experimental effects discussed in this section are correlated, the total systematic uncertainty attributed to the isotope shifts is taken as a direct sum of the individual values. In the case of HR-RIS this amounts to $\delta\nu_\text{sys}^\text{HR-RIS} = 8\text{MHz}$ and in the case of CLS to $\delta\nu_\text{sys}^\text{CLS} \leq 2.5\text{MHz}$. Ultimately, this yields an additional error of $\delta\langle r^2 \rangle _\text{sys,exp}^{\text{HR-RIS}, 385\text{nm}} = 0.0011\text{fm}^2$ and $\delta\langle r^2 \rangle _\text{sys,exp}^{\text{HR-RIS}, 388\text{nm}} = 0.0003\text{fm}^2$ for the high-resolution resonance ionisation spectroscopy measurements as a result of the experimental technique. Similarly, a systematic error of $\delta\langle r^2 \rangle _\text{sys,exp}^\text{CLS} \approx 0.0004\text{fm}^2$ is attributed to the results from the collinear laser spectroscopy investigations. Relative systematic uncertainties originating from the experimental methods are $<0.5\%$ for $\delta\langle r^2 \rangle^{\text{HR-RIS}, 385\text{nm}}$ and $\delta\langle r^2 \rangle^\text{CLS}$, whereas they are of the order of $2\%$ for $\delta\langle r^2 \rangle^{\text{HR-RIS}, 388\text{nm}}$. The difference emerges from the different $F_\text{eff}$ of the three transitions.
The relative uncertainties of the effective field shift factors are of the order of $10\%$, translating to systematic uncertainties $\delta\langle r^2 \rangle _\text{sys,theo}$ of up to $0.03\text{fm}^2$ on the changes in mean-squared charge radii for the isotopes investigated. Taking such uncertainties in $F_\text{eff}$ into account leads to the conclusion that the two experimental methods are in agreement with one another.
Conclusions {#sec:conc}
===========
Long-lived Pu isotopes have been studied using two complementary laser spectroscopic methods, resonance ionisation spectroscopy using the Mainz atomic beam unit and collinear laser spectroscopy at the <span style="font-variant:small-caps;">Igisol</span> facility of the University of Jyväskylä. The measurements using HR-RIS included the use of an injection-locked pulsed Ti:Sapphire laser with an intrinsic linewidth of $\sim13\text{MHz}$ affording a direct comparison of the two techniques.
Isotope shifts have been measured on the ground state $5f^67s^2\ ^7F_0 \rightarrow 5f^56d^27s\ (J=1)$ and metastable state $5f^67s^2\ ^7F_1 \rightarrow 5f^67s7p\ (J=2)$ atomic transitions using the HR-RIS method and the hyperfine factors have been extracted for the odd mass nuclei $^{239,241}$Pu. Collinear laser spectroscopy was performed on the $5f^67s\ ^8F_{1/2} \rightarrow J=1/2\; (27523.61\text{cm}^{-1})$ ionic transition with the hyperfine $A$ factors measured for $^{239}$Pu. The King plot method was used to perform a consistency check of the two techniques as well as providing an empirical extraction of the field shift factors for all three optical transitions. Changes in mean-squared charge radii are consistent with those determined by non-optical muonic $X$-ray studies, however, have a precision approximately one order of magnitude greater when only comparing statistical uncertainties.
A thorough analysis of experimental systematic uncertainties has been performed. Unforeseen systematic errors in the wavelength determination or the method of data acquisition, or possible perturbations to the excited state caused by the high laser power used in the ionisation step, may account for any discrepancy in absolute value between the two techniques. Within the dominating uncertainty of $\sim10\%$ on $\delta \langle r^2 \rangle^{240,A}$ due to the effective field shift factors, the changes in mean-squared charge radii extracted from the different transitions and techniques are consistent.
This work will hopefully stimulate future theoretical efforts in calculating the field shift and mass shift factors for actinide elements, where especially the specific mass shift constant is of importance. Such calculations would provide invaluable input for the current King plots. Furthermore, this work would benefit from additional measurements of absolute charge radii, e.g. through electronic $K$- or muonic $X$-ray isotope shifts, in order to provide constraints for future calculations. In addition, it is of high interest to probe the isotope shifts of a single transition which can be accessed by both experimental techniques.
Pu is now the heaviest element studied using the collinear laser spectroscopy technique. In the immediate future, efforts are under way to expand the high-resolution studies to other actinide elements, notably thorium and uranium.
We thank P. Thörle-Pospiech and J. Runke for preparing the Pu filaments. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 654002, the Academy of Finland under the Finnish Centre of Excellence Programme 2012–2017 (Project No. 251353, Nuclear and Accelerator-Based Physics Research at <span style="font-variant:small-caps;">Jyfl</span>), the Science and Technology Facilities Council (<span style="font-variant:small-caps;">Stfc</span>) of the United Kingdom, the <span style="font-variant:small-caps;">Fwo</span>-Vlaanderen (Belgium), <span style="font-variant:small-caps;">Goa/2010/010</span> (<span style="font-variant:small-caps;">Bof</span> KU Leuven), the <span style="font-variant:small-caps;">Iap</span> Belgian Science Policy (<span style="font-variant:small-caps;">BriX</span> network P7/12) and a Grant from the European Research Council (<span style="font-variant:small-caps;">Erc-2011-Adg-291561-Helios</span>).
---
abstract: 'A formal framework is given for the characterizability of a class of belief revision operators, defined using minimization over a class of partial preorders, by postulates. It is shown that for partial orders characterizability implies a definability property of the class of partial orders in monadic second-order logic. Based on a non-definability result for a class of partial orders, an example is given of a non-characterizable class of revision operators. This appears to be the first non-characterizability result in belief revision.'
author:
- |
György Turán[^1]\
University of Illinois at Chicago\
MTA-SZTE Research Group on Artificial Intelligence
- |
Jon Yaggie$^*$\
University of Illinois at Chicago
bibliography:
- 'MSDb.bib'
title: |
Non-characterizability in belief revision:\
an application of finite model theory
---
Introduction
============
The main approach to belief change is the AGM approach pioneered by [@AGM]. It provides many characterization results for belief change operators in terms of rationality postulates [@H1999]. *Are there cases where no characterization can be given?* Answering this non-axiomatizability question presupposes a formal definition of a postulate. However, as noted in the survey paper [@AGM25]
> “theories of belief change developed in the AGM tradition are not logics in a strict sense, but rather informal axiomatic theories of belief change. Instead of characterizing the models of belief and belief change in a formalized object language, the AGM approach uses a natural language (ordinary mathematical English) to characterize the mathematical structures under study.”
Fermé and Hansson then proceed to describe modal and dynamic logic approaches to modeling belief change (see, e.g., [@DHWKB]). As far as we know, the question of non-characterizability by postulates has not been considered before in those frameworks either.
In this note we provide a formal framework for studying characterizability, based on the approach of Katsuno and Mendelzon [@KM]. A revision operator $*$ in [@KM] is considered to assign a revised knowledge base $K * \varphi$ to every knowledge base $K$ and every revising formula $\varphi$. However, the results remain valid if one considers $*$ to act on a fixed knowledge base $K$ and an arbitrary revising formula $\varphi$. We are not discussing iterated revision here, and so there is no interaction between the revisions of different knowledge bases. Katsuno and Mendelzon prove the following results.
[@KM] \[th:km\]
*a)* There is a finite set of postulates such that a revision operator satisfies these postulates iff there is a faithful *total preorder* representing it with minimization.
*b)* There is a finite set of postulates such that a revision operator satisfies these postulates iff there is a faithful *partial preorder* representing it with minimization.
*c)* There is a finite set of postulates such that a revision operator satisfies these postulates iff there is a faithful *partial order* representing it with minimization.
Part *a)* is a finite version of Grove’s characterization of the AGM postulates in terms of systems of spheres [@G]. The postulates for parts *b)* and *c)* are the same, and different from *a)*.
We consider the following general question.
\[pr:gen\] Let ${\cal R}$ be a family of partial preorders. Is there a finite set of postulates such that a revision operator satisfies these postulates iff there is a faithful *partial preorder from ${\cal R}$* representing it with minimization?
Our goal is to prove a negative answer for a particular family ${\cal R}$. The formal definition of characterizability aims at providing a formal framework for proving this negative result. Formulating frameworks for other types of non-characterizability results appears to be an interesting topic for future work.
Non-characterizability is proved using a translation from postulates to universal monadic second-order formulas over the language of partial preorders. It follows from this translation that for partial orders postulate characterization of the class of revision operators implies universal monadic second-order definability of the class of partial orders considered. Thus, non-definability of the class of partial orders implies non-characterizability by postulates. Non-definability in monadic second-order logic is a well-studied topic in finite model theory [@EF; @L]. We give such a non-definability result for a particular class of partial orders. This class is constructed to give a first example of non-characterizability. It remains an interesting problem to find more natural examples. A candidate is the class of 2-dimensional partial orders.
Preliminaries
=============
We consider propositional logic knowledge bases $K$ over a fixed finite set of variables. We write $K_n$ to indicate that $K$ is over $n$ variables. Truth assignments (or interpretations) are assignments of truth values to the variables. The set of truth assignments satisfying a formula $\varphi$ is denoted by $|\varphi|$. Given a set $A$ of truth assignments, $\langle A \rangle$ is some formula $\varphi$ such that $|\varphi| = A$. A knowledge base is represented by a single formula [^2].
Given a knowledge base $K$, a belief revision operator $*$ assigns a formula $K * \varphi$ to every formula $\varphi$. Here $\varphi$ is called the revising formula, and $K * \varphi$ is called the revised knowledge base.
A partial preorder is $R = (U, \le)$, where $U$ is a finite ground set and $\le$ is a reflexive, transitive binary relation. A partial order is, in addition, antisymmetric. We write $a \sim b$ if $a$ and $b$ are incomparable. The comparability graph of $R$ is the undirected graph over $U$ in which, for any pair of distinct vertices $a$ and $b$, $(a, b)$ is an edge iff $a \le b$ or $b \le a$. An element $a$ is minimal if there is no $b$ such that $b < a$, where $b < a$ iff $b \le a$ but $a \not\le b$. If $U' \subseteq U$ then $a$ is minimal in $U'$ if $a \in U'$ and there is no $b \in U'$ such that $b < a$. The set of minimal elements of $U'$ is denoted by $\min\nolimits_\le U'$.
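The operation $\min\nolimits_\le U'$ can be sketched directly, representing $\le$ as a set of ordered pairs (illustrative Python; the toy example is the four-element order with $a < b$, $c < d$ and no other strict comparabilities):

```python
def strictly_less(le, a, b):
    """a < b iff a <= b but not b <= a."""
    return (a, b) in le and (b, a) not in le

def minimal(le, subset):
    """min_<= U': elements of subset with no strictly smaller element in subset."""
    return {a for a in subset
            if not any(strictly_less(le, b, a) for b in subset)}

# Toy partial order: a < b and c < d (reflexive closure included)
U = {"a", "b", "c", "d"}
le = {(x, x) for x in U} | {("a", "b"), ("c", "d")}
```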
(Faithful partial preorder) A *faithful partial preorder* for a knowledge base $K_n$ is a pair $F = (R, t)$, where $R = (U, \le)$ is a partial preorder on $2^n$ elements and $t : U \to \{0, 1\}^n$ is a bijection between the elements of $U$ and truth assignments, such that
1. $a \in U$ is minimal iff $t(a)$ satisfies $K_n$,
2. if $t(a)$ satisfies $K_n$ and $t(b)$ falsifies $K_n$ then $a < b$.
In the standard definition the partial preorder is defined over the set of truth assignments. For our discussion it is more convenient to separate the partial preorder and the labeling of its elements by truth assignments.
The basic construction used in Theorem \[th:km\] is that of a revision operator determined by a faithful partial preorder using minimization.
(Revision using minimization) \[def:min\] The *revision operator $*_F$ for $K$, determined by a faithful partial preorder $F$ for $K$, using minimization* is $$\label{eq:min} \nonumber
K *_F \varphi = \langle \min\nolimits_{\le} t^{-1}(|\varphi|)\rangle.$$
Thus the revised knowledge base is satisfied by the minimal satisfying truth assignments of the revising formula. Faithfulness implies that if the revising formula is consistent with the knowledge base then the revised knowledge base is the conjunction of the knowledge base and the revising formula.
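Definition \[def:min\] can be sketched operationally by identifying knowledge bases and formulas with their sets of models, here over $n = 2$ variables (an illustrative toy example, not tied to any particular formula syntax):

```python
def strictly_less(le, a, b):
    return (a, b) in le and (b, a) not in le

def revise(le, models_phi):
    """K *_F phi: the minimal models of phi under the faithful preorder."""
    return {a for a in models_phi
            if not any(strictly_less(le, b, a) for b in models_phi)}

# Faithful preorder for K with models {"00"}: "00" below everything,
# the remaining assignments pairwise incomparable.
U = {"00", "01", "10", "11"}
le = {(x, x) for x in U} | {("00", y) for y in U - {"00"}}

consistent = revise(le, {"00", "11"})     # phi consistent with K
inconsistent = revise(le, {"01", "11"})   # phi inconsistent with K
```

When the revising formula is consistent with $K$, the result is exactly the models of $K \wedge \varphi$, matching the faithfulness remark above.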
We will use some notions from finite model theory. General introductions to the topic are given in [@EF; @L]. The notions used are introduced in the later sections, so our discussion is essentially self-contained.
Postulates
==========
Consider the AGM postulates $$\label{eq:agm1}
\textrm{if $K$ is satisfiable then $K * \varphi$ is also satisfiable}$$ and $$\label{eq:agm2}
\textrm{if $(K * \varphi) \wedge \psi$ is satisfiable then
$K * (\varphi \wedge \psi) \vdash (K * \varphi) \wedge \psi$.}$$ Here $K, K * \varphi, \varphi$ and $\psi$ can be considered as unary predicates over the set of interpretations, and thus (\[eq:agm1\]) can be rewritten as $$\label{eq:agm3} \nonumber
[\exists x K(x)] \rightarrow [\exists x (K * \varphi)(x)]$$ and (\[eq:agm2\]) can be rewritten as $$\label{eq:agm4}
[\exists x ((K * \varphi)(x) \wedge \psi(x))]
\rightarrow [\forall y((K * (\varphi \wedge \psi))(y) \rightarrow ((K * \varphi)(y) \wedge \psi(y)))].$$
Postulates refer to a fixed knowledge base $K$, and are implicitly universally quantified over formula symbols such as $\varphi, \psi$. They express general requirements that are supposed to hold for all revising formulas. Generalizing these examples, a postulate is defined as follows.
(Postulate) \[def:post\] A *postulate* $P$ is a first-order sentence with unary predicate symbols $K, \varphi_1, \ldots, \varphi_\ell$ and $K * \mu_1, \ldots, K * \mu_m$, where $\mu_1, \ldots, \mu_m$ are Boolean combinations of $\varphi_1, \ldots, \varphi_\ell$.
A revision operator satisfies a postulate for a knowledge base $K$ if the postulate holds for all $\varphi_1, \ldots, \varphi_\ell$, with the variables ranging over the set of truth assignments.
This definition covers all postulates in [@KM] and in Section 7.3 of [@H1999].
Characterizability
==================
As we consider partial preorders that are faithful for a knowledge base, we introduce the following property of partial preorders.
(Regular partial preorders) \[def:reg\] A partial preorder is *regular* if
1. every minimal element is smaller than any non-minimal element
2. the number of elements is a power of 2.
An example of a non-regular partial preorder is the 4-element partial order with $a < b, c < d$ and no other comparability. Condition 1 is satisfied, for example, if there is a unique minimal element.
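Both conditions of Definition \[def:reg\] are mechanically checkable on small examples. The following sketch (hypothetical helper code, not from the paper) confirms that the 4-element example above fails condition 1: its minimal elements are $a$ and $c$, but $a$ is not smaller than the non-minimal element $d$.

```python
# Hypothetical checker for Definition [def:reg] (regular partial preorders).

def is_regular(elements, less):
    """Condition 1: every minimal element is below every non-minimal one.
    Condition 2: the number of elements is a power of 2."""
    minimal = [x for x in elements if not any(less(y, x) for y in elements)]
    cond1 = all(less(m, x) for m in minimal
                for x in elements if x not in minimal)
    cond2 = len(elements) & (len(elements) - 1) == 0
    return cond1 and cond2

pairs = {("a", "b"), ("c", "d")}
less = lambda x, y: (x, y) in pairs
print(is_regular(["a", "b", "c", "d"], less))  # -> False
```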
([R]{}-revision operator) Let ${\cal R}$ be a family of regular partial preorders. Let $K$ be a knowledge base and $*$ be a revision operator for $K$. Then $*$ is an ${\cal R}$-revision operator iff there is a faithful partial preorder $F = (R, t)$ for $K$, with $R \in {\cal R}$, representing $*$ using minimization.
Using this definition, we can now define characterizability formally.
(Characterization, characterizability) A finite set of postulates ${\cal P}$ *characterizes* ${\cal R}$-revision operators if for every knowledge base $K$ and every revision operator $*$ for $K$ the following holds: $*$ satisfies the postulates in ${\cal P}$ iff it is an ${\cal R}$-revision operator.
The family of ${\cal R}$-revision operators is *characterizable* if there is a finite set of postulates characterizing ${\cal R}$-revision operators.
It may be assumed *w.l.o.g.* that ${\cal P}$ consists of a single postulate.
A non-characterizable class
===========================
A partial order is a *crown* if it has elements $a_1, \ldots, a_s$ and $b_1, \ldots, b_s$ for some $s$, and the comparabilities are $a_i > b_i$ and $a_i > b_{i+1}$ for every $i$, where the indices are meant cyclically. A partial order is a *double crown* if it consists of two crowns with pairwise incomparable elements. An *extended double crown* has additional elements that are smaller than all other elements. An *extended crown* is a crown with additional elements that are smaller than all other elements. Thus extended double crowns and extended crowns satisfy the first condition of Definition \[def:reg\] and if their size is a power of 2 then they are regular.
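The comparabilities of a crown are easy to generate explicitly. The sketch below (hypothetical code, with element names chosen for the illustration) builds them for a crown on $2s$ elements, following the definition above.

```python
# Hypothetical generator for the comparabilities of a crown:
# a_i > b_i and a_i > b_{i+1}, indices taken cyclically.

def crown(s):
    """Return the set of pairs (x, y) meaning x > y in the crown on 2s elements."""
    down = {(f"a{i}", f"b{i}") for i in range(s)}
    down |= {(f"a{i}", f"b{(i + 1) % s}") for i in range(s)}
    return down

c = crown(4)
print(len(c))  # -> 8: each of the 4 maximal elements covers exactly two b's
```

The comparability graph of a crown is thus a cycle of length $2s$, which is what makes the locality arguments of the following sections applicable.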
Let ${\cal R}_0$ be the family of extended double crowns with size a power of 2, and ${\cal R}_1$ be the family of extended crowns with size a power of 2. The following theorem gives a sufficient condition for non-characterizability. Its proof is given in the next two sections.
\[th:main\] Let ${\cal R}$ be any family of regular partial orders containing ${\cal R}_0$ and disjoint from ${\cal R}_1$. Then the family of ${\cal R}$-revision operators is not characterizable.
As a special case of the theorem, we formulate one specific non-characterizable family of revision operators. A partial preorder $R$ is *regular-disconnected* if it is regular, and the partial preorder obtained from $R$ by removing its minimal elements has a disconnected comparability graph.
Let ${\cal R}$ be the family of regular-disconnected partial orders. Then the family of ${\cal R}$-revision operators is not characterizable.
The corollary follows directly from Theorem \[th:main\] and the definitions.
Translation lemma
=================
Now we define a translation of postulates into sentences over an extension of the language of partial preorders. The language of partial preorders contains a binary relation symbol $\le$ and equality.
The translated sentences also contain additional unary predicate symbols $A_1, \ldots, A_\ell$. These correspond to propositional formulas $\varphi_1, \ldots, \varphi_\ell$ occurring in the postulates. Given a Boolean combination $\mu$ of $\varphi_1, \ldots, \varphi_\ell$, we denote by $\hat{\mu}$ the first-order formula obtained by replacing the $\varphi$’s with $A$’s. For instance, $\mu(x) = (\varphi_1 \wedge \varphi_2)(x)$ becomes $\hat{\mu}(x) = A_1(x) \wedge A_2(x)$.
Given a formula $\nu$ over the language $\le, A_1, \ldots, A_\ell$ with a single free variable $x$ we write $\min\nolimits_{\le}^{\nu}$ for a formula expressing that $x$ is a minimal element satisfying $\nu$, i.e., $$\min\nolimits_{\le}^{\nu}(x) \,\, \equiv \,\, \nu (x) \wedge \forall y (\nu(y) \rightarrow \neg (y < x)).$$ Minimal elements in the partial preorder are defined by $$\min\nolimits_{\le}(x) \,\, \equiv \,\, \forall y (\neg (y < x)).$$
(Translation) \[def:tr\] The *translation* $\tau(P)$ of a postulate $P$ is the sentence obtained from $P$ by replacing
1. every occurrence of $K(x)$ with $\min\nolimits_{\le}(x)$
2. every occurrence of $\varphi_i(x)$ and $\mu_i(x)$ with their “hat" versions
3.  every occurrence of $(K * \mu_i)(x)$ with $\min\nolimits_{\le}^{\hat{\mu}_i}(x)$
The translation is a first-order sentence over the predicate symbols $\le, A_1, \ldots, A_\ell$.
(Translation of postulate (\[eq:agm4\])) Let us replace $\varphi$ and $\psi$ with $\varphi_1$ and $\varphi_2$ to be consistent with the general notation: $$\nonumber
[\exists x ((K * \varphi_1)(x) \wedge \varphi_2(x))]
\rightarrow [\forall y((K * (\varphi_1 \wedge \varphi_2))(y) \rightarrow ((K * \varphi_1)(y) \wedge \varphi_2(y)))].$$ Applying Definition \[def:tr\] we get $$[\exists x (\min\nolimits_{\le}^{A_1}(x) \wedge A_2(x))]
\rightarrow [\forall y(\min\nolimits_{\le}^{A_1 \wedge A_2}(y) \rightarrow (\min\nolimits_{\le}^{A_1}(y) \wedge A_2(y)))].$$
Given $K, \varphi_1, \ldots, \varphi_\ell$ and a faithful partial preorder $F$ for $K$, the $(\varphi_1, \ldots, \varphi_\ell)$-extension of $F$ is determined in the standard way, by interpreting the unary predicate symbols $A_1, \ldots, A_\ell$ as $A_i(a) = \varphi_i(t(a))$. The following proposition is a direct consequence of the definitions.
\[pr:tech\] Let $K$ be a knowledge base, $F = (R, t)$ be a faithful partial preorder for $K$ and let $*_F$ be the revision operator determined by $F$ using minimization. Let $\varphi_1, \ldots, \varphi_\ell$ be propositional formulas and $P$ be a postulate. Then $P$ is satisfied by $K$ for $\varphi_1, \ldots, \varphi_\ell$ iff the $(\varphi_1, \ldots, \varphi_\ell)$-extension of $F$ satisfies $\tau(P)$.
Now we formulate the connection to definability. In this formulation we restrict ourselves to partial orders.
A *universal monadic second-order sentence* is of the form $$\Phi = \forall A_1, \ldots, A_\ell \Psi,$$ where $A_1, \ldots, A_\ell$ range over unary predicates (or subsets) over the universe, and $\Psi$ is a first-order sentence using the unary predicate symbols $A_1, \ldots, A_\ell$ in addition to the original language (in our case $\le$ and equality). An existential monadic second-order sentence is of the form $\Phi = \exists A_1, \ldots, A_\ell \Psi$.
(Universal monadic second-order definability) A family ${\cal R}$ of regular partial orders is universal monadic second-order definable if there is a universal monadic second-order sentence $\Phi$ such that for every regular partial order $R$ it holds that $R \in {\cal R}$ iff it satisfies $\Phi$.
\[lem:uni8\] Let ${\cal R}$ be a family of regular partial orders. If the family of ${\cal R}$-revision operators is characterizable then ${\cal R}$ is universal monadic second-order definable.
*Proof* Let ${\cal R}$ be a family of regular partial orders such that ${\cal R}$-revision operators are characterized by a postulate $P$. We claim that ${\cal R}$ is defined by the universal monadic second-order sentence $$\Phi = \forall A_1, \ldots, A_\ell \,\, \tau(P).$$
Assume that the regular partial order $R = (U, \le)$ is in ${\cal R}$. Let the number of its elements be $2^n$. Let $t : U \to \{0, 1\}^n$ be an arbitrary bijection between $U$ and the set of truth assignments. We get a faithful partial preorder $F = (R, t)$ for some knowledge base $K_n$, and thus the corresponding revision operator $*_F$ is an ${\cal R}$-revision operator. Therefore $*_F$ satisfies $P$. Consider arbitrary unary relations $A_1, \ldots, A_\ell$ over the elements. Applying Proposition \[pr:tech\] to the propositional formulas $\varphi_1, \ldots, \varphi_\ell$ corresponding to $A_1, \ldots, A_\ell$, it follows that $\le, A_1, \ldots, A_\ell$ satisfy $\tau(P)$. Thus $R$ satisfies $\Phi$.
Now assume that the regular partial order $R$ is not in ${\cal R}$. Again, let $t : U \to \{0, 1\}^n$ be an arbitrary bijection between $U$ and the set of truth assignments. We get a faithful partial preorder $F = (R, t)$ for some knowledge base $K_n$. This determines a revision operator $*_F$.
We claim that $*_F$ is not an ${\cal R}$-revision operator. This follows if we show that, up to isomorphism, $F$ is the only faithful partial order determining $*_F$. Assume that the revision operator $*_{F'}$ determined by $F' = (R', t')$ with $R' = (U', \le')$ is the same as $*_F$. Then, as revision operators are defined using minimization, for every pair of truth assignments $u, v$ it holds that
$t^{-1}(u) < t^{-1}(v) \,\, \textrm{iff} \,\, K_n *_F \langle u, v \rangle = \langle u \rangle
\,\, \textrm{iff} \,\, (t')^{-1}(u) < (t')^{-1}(v)$
and
$t^{-1}(u) \sim t^{-1}(v)$ iff $K_n *_F \langle u, v \rangle = \langle u, v \rangle$ iff $(t')^{-1}(u) \sim (t')^{-1}(v)$.
Thus $(t')^{-1} \circ t$ is an isomorphism from $R$ to $R'$, so every faithful partial order determining $*_F$ is isomorphic to $R$, which is not in ${\cal R}$. Hence $*_F$ is not an ${\cal R}$-revision operator, and, as $P$ characterizes ${\cal R}$-revision operators, $*_F$ does not satisfy $P$. So there are propositional formulas $\varphi_1, \ldots, \varphi_\ell$ such that the corresponding instance of $P$ is false. By Proposition \[pr:tech\] the corresponding unary predicates $A_1, \ldots, A_\ell$ falsify $\tau(P)$. Hence $R$ falsifies $\Phi$. $\Box$
Proof of Theorem \[th:main\]
============================
Two first-order structures are $q$-equivalent (denoted by $\equiv_q$) if they satisfy the same first-order sentences of quantifier rank at most $q$. The $q$-round Ehrenfeucht–Fraïssé game over two structures is played by two players, Spoiler and Duplicator. In each round Spoiler picks one of the structures and an element of that structure. Duplicator responds by picking an element in the other structure. After $q$ rounds Duplicator wins if the substructures induced by the picked elements in the two structures are isomorphic. Otherwise Spoiler wins. Duplicator has a winning strategy iff the two structures are $q$-equivalent.
The following result uses the notion of neighborhood in a general relational structure. We only use this result for undirected graphs with colored vertices, where the $r$-neighborhood of a vertex $v$ is the set of vertices reachable from $v$ by paths of length at most $r$.
(Hanf-locality of first-order logic, see [@L]) \[lem:hanf\] Let $\Psi$ be a first-order sentence with quantifier rank $q$, and let $r = (3^q - 1)/2$. Let $S_1$ and $S_2$ be two structures with a bijection $f$ between their elements such that for every element $a$ of $S_1$ it holds that the $r$-neighborhoods of $a$ in $S_1$ and of $f(a)$ in $S_2$ are isomorphic. Then $S_1$ satisfies $\Psi$ iff $S_2$ satisfies it.
The proof of Theorem \[th:main\] is based on the undefinability of graph connectivity by existential monadic second-order sentences [@F; @H]. The basic fact is that every existential monadic second-order sentence satisfied by all cycles is also satisfied by some graph which is the union of two cycles. The following lemma is a variant of this result, based on the presentation in [@L], with the required modifications. The modifications are needed as a crown is somewhat different from a cycle, and, due to the presence of bottom elements, Hanf-locality cannot be applied directly.
\[lem:mainle\] Let $\Phi$ be a universal monadic second-order sentence over the language of $\le$ and equality. If every partial order in ${\cal R}_1$ falsifies $\Phi$ then some partial order in ${\cal R}_0$ also falsifies $\Phi$.
*Proof* Given an extended crown or extended double crown $M$, we define a structure $G_M$ over the language $E, L_1, L_2, L_3$, where $E$ is a binary relation and $L_1, L_2, L_3$ are unary relations. The ground sets of $M$ and $G_M$ are the same. Relation $E$ is the comparability relation of the extended crown or extended double crown, and $L_1, L_2, L_3$ correspond to the maximal elements, the minimal elements of the crown or double crown, and the bottom elements, respectively. Thus $G_M$ is an undirected graph with vertices colored by the three colors $L_1, L_2, L_3$. We refer to $G_M$ as the colored graph of $M$. The underlying undirected graph of an extended crown (resp., extended double crown) consists of a cycle (resp., two cycles) together with a complete bipartite graph between the cycle vertices and an independent set of bottom vertices. The structures $M$ and $G_M$ are inter-definable by simple first-order sentences. In one direction $$(a < b) \equiv [E(a, b) \wedge ((L_2(a) \wedge L_1(b)) \vee (L_3(a) \wedge L_2(b)) \vee (L_3(a) \wedge L_1(b)))].$$ In the other direction $E(a, b) \equiv (a < b) \vee (b < a)$, $L_3(a) \equiv \min\nolimits_\le (a)$ and $L_1, L_2$ can be defined similarly.
For the rest of the argument it is more convenient to switch to existential sentences. We show that if an existential monadic second-order sentence $$\Phi = \exists A_1, \ldots, A_\ell \Psi$$ over the language $E, L_1, L_2, L_3$ is satisfied by the colored graph $G_{M_1}$ of every extended crown $M_1$ then it is also satisfied by the colored graph $G_{M_2}$ of some extended double crown $M_2$. The lemma then follows directly.
Let $q$ be the quantifier rank of $\Psi$. Let $r = (3^q - 1)/2$ and $T = 2 \cdot 2^{\ell (2 r + 1)}$.
\[lem:exis\] Let $M_1$ be an extended crown on at least $(4 r + 4) \cdot T$ elements. For every $(A_1, \ldots, A_\ell)$-extension $G_{M_1}'$ of $G_{M_1}$ there is an extended double crown $M_2$ of the same size and an $(A_1, \ldots, A_\ell)$-extension $G_{M_2}'$ of $G_{M_2}$ such that $G_{M_1}' \equiv_q G_{M_2}'$.
*Proof of Lemma \[lem:exis\]* Let us consider the substructure $C_1$ of $G_{M_1}$ corresponding to vertices labeled $L_1, L_2$, and its $(A_1, \ldots, A_\ell)$-extension $C_1'$. Thus $C_1$ is a 2-colored cycle. The $r$-neighborhood of a vertex $a$ in $C_1$ is the set of vertices which can be reached from $a$ by a path of length at most $r$; it consists of two arcs of $r$ edges each. The $r$-neighborhoods of vertices labeled $L_1$ (resp., $L_2$) in $C_1$ are isomorphic. The extension adds an additional coloring with $2^\ell$ colors, so there are at most $T$ isomorphism types of $r$-neighborhoods in $C_1'$. Hence there are two elements $a$ and $b$ such that their distance on the cycle is at least $2 r + 2$ and their $r$-neighborhoods are isomorphic. Let $a'$ and $b'$ be the successors of $a$ and $b$ on the cycle (using some orientation).
Form $C_2'$ from $C_1'$ by deleting edges $(a, a')$ and $(b, b')$, and adding edges $(a, b')$ and $(b, a')$. It follows from Lemma \[lem:hanf\] that $C_1' \equiv_q C_2'$.
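The edge surgery just described can be checked mechanically. The following sketch (hypothetical code, not from the paper; vertex labels chosen for the illustration) verifies that deleting $(a, a')$ and $(b, b')$ and adding $(a, b')$ and $(b, a')$ splits one cycle into two disjoint cycles.

```python
# Hypothetical check of the cycle surgery: one cycle becomes two.

def components(edges, vertices):
    """Count connected components of an undirected graph by DFS."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), 0
    for v in vertices:
        if v not in seen:
            comps += 1
            stack = [v]
            while stack:
                w = stack.pop()
                if w not in seen:
                    seen.add(w)
                    stack.extend(adj[w] - seen)
    return comps

n = 12
cycle = {(i, (i + 1) % n) for i in range(n)}
a, b = 0, 6                      # two vertices far apart on the cycle
ap, bp = a + 1, b + 1            # their successors a' and b'
swapped = (cycle - {(a, ap), (b, bp)}) | {(a, bp), (b, ap)}
print(components(cycle, range(n)), components(swapped, range(n)))  # -> 1 2
```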
Let $G_{M_2}'$ be the extended colored graph obtained from $C_2'$ by adding colored bottom vertices as in $G_{M_1}'$ . Form $G_{M_2}$ from $G_{M_2}'$ by deleting the unary relations $A_1, \ldots, A_\ell$ and let $M_2$ be the extended double crown represented by $G_{M_2}$. The sizes of $M_1$ and $M_2$ are the same.
We claim that $G_{M_1}' \equiv_q G_{M_2}'$. It is sufficient to show that Duplicator wins the $q$-round first-order game on the two structures. As $C_1' \equiv_q C_2'$, Duplicator wins the $q$-round first-order game on $C_1'$ and $C_2'$. Also, Duplicator trivially wins the $q$-round first-order game on the two isomorphic sets of bottom vertices. As all edges are present between the cycles and the bottom vertices, the combination of the two winning strategies gives a winning strategy on $G_{M_1}'$ and $G_{M_2}'$. $\Box$ $\Box$
To conclude the proof of Theorem \[th:main\] assume that ${\cal R}$-revision operators are characterizable by a postulate $P$. Then ${\cal R}$ is universal monadic second-order definable by Lemma \[lem:uni8\]. Thus there is a universal monadic second-order sentence satisfied by every partial order in ${\cal R}_0$ and falsified by every partial order in ${\cal R}_1$, contradicting Lemma \[lem:mainle\]. $\Box$
[^1]: Partially supported by NSF grant CCF-0916708
[^2]: We note that because of finiteness this representation corresponds to the belief set framework. Computational complexity issues are not discussed here thus the details of the representation are irrelevant.
---
abstract: 'We make the first attempt to estimate and interpret the biphase data for astronomical time series. The biphase is the phase of the bispectrum, which is the Fourier domain equivalent of the three-point correlation function. The bispectrum measures two key nonlinear properties of a time series – its reversibility in time, and the symmetry of its flux distribution about the mean – for triplets of frequencies. Like other Fourier methods, it is especially valuable for working with time series which contain large numbers of cycles at the period of interest, but in which the signal-to-noise at a given frequency is small in any individual cycle, either because of measurement errors, or because of the contributions from signals at other frequencies. This has long been the case for studies of X-ray binaries, but is increasingly becoming true for stellar variability (both intrinsic and due to planetary transits) in the Kepler era. We also present some simple examples to give readers a more intuitive understanding of the meaning of the bispectrum, and to help show where it may be applicable in astronomy. In particular, we give illustrative examples of what biphases may be shown by common astrophysical time series such as pulsars, eclipsers, stars in the instability strip, and solar flares. We then discuss applications of the biphase data to understanding the shapes of the quasi-periodic oscillations of GRS 1915+105 and the coupling of the quasi-periodic oscillations to the power-law noise in that system.'
author:
- |
Thomas J. Maccarone\
Department of Physics, Box 41051, Science Building, Texas Tech University, Lubbock TX 79409-1051\
School of Physics and Astronomy, University of Southampton, SO16 4ES\
email:thomas.maccarone@ttu.edu
title: 'The biphase explained: understanding the asymmetries in coupled Fourier components of astronomical timeseries'
---
\[firstpage\]
methods: statistical – X-rays:binaries – stars:variables:general
Introduction
============
Astronomy is one of the first sciences to make use of time series analysis, with the studies of the orbits of planets in the solar system (see e.g. Way et al. 2012 and references within for some examples). Perhaps because of the historical emphasis on orbits, astronomical time series analysis has traditionally focused on the frequencies at which the strongest variability is found, with comparatively less emphasis on the phases of different Fourier components of time series.
In a variety of systems in nature, and in laboratory studies of dynamical systems, power spectra of sources reveal strong variability over a wide range of frequencies. In order to be able to demonstrate that power exists on a large range of timescales, one ideally will have time series on which to work which are uninterrupted (or at least regularly sampled) and which are long relative to the timescales of interest. In X-ray binaries, the fast timescales of variation and strong variability allow one to probe a wide range of timescales effectively, and it has been known for about four decades that aperiodic variability can be strong in these objects (e.g. Terrell 1972). Time series of magnetograms from active regions of the Sun, which can also be made at high cadence over long time spans, also show power spectra well modelled by power laws over wide ranges of frequencies, rather than by power at a few discrete frequencies (Abramenko 2005). In recent years, the Kepler satellite has taken long, nearly uninterrupted time series of many other stars, and components in the power spectra of solar-type stars which span a broad range of frequencies have been observed (e.g. Jiang et al. 2011).
Simple tools exist for studying the time profiles of oscillations when the variability is strictly periodic (albeit, perhaps non-sinusoidal). In cases of bright sources, with variations much larger than the noise level on individual data points, one can simply examine the raw time series in the time domain. When larger noise components are present (whether they are physical, such as the noise due to stellar activity on a star with planetary transits, or are simply noise due to measurement uncertainties), one can fold the time series on the period and see the mean profile. What has generally not been done in astronomy is to examine the phase couplings of aperiodic variability.
In the cases where phase dependences are studied, often, the emphasis is on lags between different photon wavelengths (as e.g. in reverberation mapping of active galactic nuclei – e.g. Edelson & Krolik 1998 – or studies of time lags in X-ray binaries – e.g. Nowak et al. 1999) – although, in some cases, some information about the nonlinearity of a system can be obtained solely through studies of the power spectrum and the cross-spectrum or cross correlation function (e.g. Maccarone, Coppi & Poutanen 2000; Shaposhnikov 2012). The reasons for this are twofold. First, under some circumstances the phase lag between two wavelengths of light has an immediately obvious interpretation. For example, in the case of reverberation mapping, it gives the additional light travel time of the route that passes through the line emission region.
Secondly, making measurements of nonlinear variability in systems which possibly have red noise contributions requires a large number of high quality independent measurements of the Fourier spectrum (or some alternative statistical measure of the Fourier spectrum). This is rarely the case in astronomy. Furthermore, the real advantages of non-linearity analyses in the Fourier domain are seen when the signal-to-noise on individual measurements is poor, but a very large number of measurements exist; and/or there is substantial aperiodic variability, or there are a very large number of frequencies contributing to the variability, so that simple folding of the data on a characteristic period does not capture all that is happening in the system.
X-ray binaries represent a particularly good example of a class of systems ripe for sophisticated nonlinearity analyses. They show variability on a wide range of timescales. They are typically the subjects of very low background rate observations where individual photons are counted and, with many X-ray observatories, the count rate per time resolution element is significantly less than unity. In recent years satellites such as Kepler and CoRoT have obtained very long, high precision uninterrupted observations of bright stars, which are likely to be affected by some combination of asteroseismic modes, planetary transits, coronal activity and atmospheric turbulence (e.g. Jiang et al. 2011), and which also allow the study of the evolution of the rapid variability of cataclysmic variables (Scaringi et al. 2012).
There have been some attempts made to characterize and understand the nonlinear variability of X-ray binaries. In the very early era of X-ray astronomy, some attempts were made to measure, for example, the asymmetries of light curves (Priedhorsky et al. 1979). More recently, the presence of an rms-flux relation proved that there is some nonlinearity (Uttley & McHardy 2001).
In this paper, we present a more detailed treatment than has been presented in the past of what can be learned from use of the bispectrum. Because the topic is fairly new to astronomical time series analysis, we will take a more pedagogical tone than is taken in most typical papers in astronomy, and will develop some ideas already well known in other fields of research. We will show, in particular, that the bispectrum presents a good means for determining whether a time series is reversible (in a statistical sense) and whether a time series has a symmetric flux distribution.
The bispectrum: a tutorial
==========================
The bispectrum is an example of a higher order time series analysis technique which can be used to understand the phase correlations in a single time series. Successful applications have been made in studies of brain waves (e.g. Gajraj et al. 1998), of speech patterns (Fackrell 1997), of vibrations of machinery (Rivola & White 1998), of plasma physics (van Milligen et al. 1995), and of ocean waves, for which it was first developed (Hasselmann et al. 1963). Spatial, rather than temporal, bispectra have been studied widely in astronomy, for purposes of understanding the non-gaussianity of the cosmic microwave background (e.g. Kamionkowski et al. 2011). A large fraction of the literature on the bispectrum was developed for the study of ocean waves, and we will draw heavily on what has already been developed in that field for building up our understanding of what we can learn from the bispectrum.
The bispectrum is the first in a series of polyspectra – analogies to the classical Fourier spectrum which take into account more than one timescale. The bispectrum of two frequencies, $k$ and $l$, $B(k,l)$ is defined by:
$$B(k,l)=\frac{1}{K} \sum_{i=0}^{K-1} X_i(k)X_i(l)X^*_i(k+l),
\label{bispeceqn}$$
where there are $K$ segments to a time series, and $X_i(f)$ denotes the Fourier transform of the $i$th segment of the light curve at frequency $f$. The asterisk is used in the final term to denote that a complex conjugate is being taken.
One can see that, in order to produce useful measurements of the bispectrum, one needs to have a large number of independent measurements of the time series, each with high signal to noise, and with the power spectrum stationary over the duration of the observations. The expectation value of the bispectrum is unaffected by Gaussian noise, but its value can be strongly affected by Poisson noise (e.g. Uttley et al. 2005), since Poisson noise is nonlinear.
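The estimator in equation \[bispeceqn\] can be computed directly with one FFT per segment. The sketch below (hypothetical code; the signal parameters are chosen purely for illustration) recovers a strong, nearly real bispectrum at a quadratically coupled frequency triplet buried in Gaussian noise.

```python
import numpy as np

def bispectrum(x, seg_len, k, l):
    """Estimate B(k, l) by averaging X_i(k) X_i(l) X_i*(k+l) over segments."""
    segs = x[:len(x) // seg_len * seg_len].reshape(-1, seg_len)
    X = np.fft.fft(segs, axis=1)
    return np.mean(X[:, k] * X[:, l] * np.conj(X[:, k + l]))

rng = np.random.default_rng(0)
t = np.arange(256 * 64)
# components at bins 4, 9 and 13 of each 64-sample segment, phase-coupled,
# plus additive Gaussian noise (which does not bias the bispectrum)
x = (np.cos(2 * np.pi * 4 * t / 64) + np.cos(2 * np.pi * 9 * t / 64)
     + 0.5 * np.cos(2 * np.pi * 13 * t / 64)
     + 0.1 * rng.standard_normal(t.size))
B = bispectrum(x, 64, 4, 9)
print(abs(B) > 1e3, abs(np.angle(B)) < 0.05)  # -> True True
```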
The bispectrum is related to two quantities of a time series, the skewness and the asymmetry. The skewness is related to the mean cube of the values of the data points in a distribution in the same way that the variance is related to the mean square of the data points in a distribution – i.e. it is the third moment of the flux distribution. The asymmetry is related to the directionality of the time series, in a manner similar, but not identical, to the time skewness statistic developed by Priedhorsky et al. (1979) which is applied in the time domain and considers only a single characteristic timescale. In Maccarone & Coppi (2002), we adapted the time skewness statistic of Priedhorsky et al. (1979) by rearranging some terms and dividing by the cube of the standard deviation to non-dimensionalize the time skewness.[^1]
$$TS(\tau) = \frac{1}{\sigma^3}\frac{1}{K}
\sum_{t=0}^{K-1} \left[(s(t)-\bar{s})^2(s(t-\tau)-\bar{s}) - (s(t)-\bar{s})(s(t-\tau)-\bar{s})^2\right]$$
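As a sketch (hypothetical code, not from the cited papers), the dimensionless time skewness can be computed in a few lines; a sawtooth that rises slowly and falls quickly gives a positive value, and its time reverse the opposite sign.

```python
import numpy as np

def time_skewness(s, tau):
    """Dimensionless time skewness TS(tau) of time series s at lag tau."""
    d = s - s.mean()
    a, b = d[tau:], d[:-tau]  # (s(t) - sbar) and (s(t - tau) - sbar)
    return np.mean(a**2 * b - a * b**2) / s.std()**3

# sawtooth with a slow linear rise and an instantaneous fall
saw = np.tile(np.linspace(0.0, 1.0, 50, endpoint=False), 200)
print(time_skewness(saw, 5) > 0, time_skewness(saw[::-1], 5) < 0)  # -> True True
```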
Given that the bispectrum is simply a complex number, it can be thought of as consisting of a magnitude and a phase. The phase is called the biphase, and for reasons that will become clear in the next section, the biphase must be defined over the full $2\pi$ interval, and not simply as the arctangent of the imaginary part of the bispectrum divided by the real part, as is sometimes done in the bispectrum literature. A version of the magnitude, known as the bicoherence, has been considered most heavily in previous astronomical time series papers. It is quite similar to the cross-coherence used to test whether the time lags between two energy bands are constant – it takes on a value from 0 to 1, with 0 indicating that there is no nonlinear coupling of the phases of the different Fourier components between different observations, and 1 indicating total coupling. The most commonly used expression for the bicoherence is that of Kim & Powers (1979):
$$b^2(k,l) =
\frac{\left|\sum{X_i(k)X_i(l)X^*_i(k+l)}\right|^2}{\sum{\left|X_i(k)X_i(l)\right|^2}\sum{\left|X_i(k+l)\right|^2}},
\label{bicoeqn}$$
where $b^2(k,l)$ is the squared bicoherence – although see e.g. Hinich & Wolinsky (2004) who point out that other methods of normalization are more sensitive to some types of non-linear behavior. The Kim & Powers (1979) normalization has the attractive property that, for a system with power at only three frequencies, the squared bicoherence represents the fraction of the power at the third frequency that can be explained by coupling of the three modes (see also Elgar & Guza 1985); such a simple interpretation, however, is not possible in the cases of broadband coupling (McComas & Briscoe 1980).
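The three-mode interpretation above can be checked numerically. The following sketch (hypothetical code, not taken from the cited papers) constructs segments with random phases at two frequency bins and a third bin phase-locked to their sum; the Kim & Powers estimator then returns $b^2 = 1$.

```python
import numpy as np

def bicoherence2(segments, k, l):
    """Kim & Powers squared bicoherence b^2(k, l) from an array of segments."""
    X = np.fft.fft(segments, axis=1)
    triple = X[:, k] * X[:, l] * np.conj(X[:, k + l])
    num = np.abs(triple.sum())**2
    den = np.sum(np.abs(X[:, k] * X[:, l])**2) * np.sum(np.abs(X[:, k + l])**2)
    return num / den

rng = np.random.default_rng(1)
t = np.arange(64)
# random phases per segment, but the bin-13 phase is locked to bins 4 and 9
segs = np.array([np.cos(2 * np.pi * 4 * t / 64 + p1)
                 + np.cos(2 * np.pi * 9 * t / 64 + p2)
                 + np.cos(2 * np.pi * 13 * t / 64 + p1 + p2)
                 for p1, p2 in rng.uniform(0, 2 * np.pi, (200, 2))])
print(round(bicoherence2(segs, 4, 9), 3))  # -> 1.0 for full phase coupling
```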
Understanding the biphase
-------------------------
The biphase holds powerful information about the shape of the light curve. Masuda & Kuo (1981) worked out some key implications of the biphase. In the early papers on the bispectrum (e.g. Hasselmann et al. 1963), the focus was on the skewness, which is closely related to the real part of the bispectrum. Masuda & Kuo examined a few simple cases, in which only three commensurate frequencies were considered, and presented examples showing that a positive skewness when considering a particular set of timescales will manifest itself in a positive real component for the bispectrum. The real component describes the extent to which the flux distribution of the source is skewed, and the imaginary component describes the extent to which the time series is asymmetric in time in a statistical sense. Poisson noise thus affects only the real component.
### Skewness of the flux distribution
A positive skewness results from an asymmetric distribution of fluxes, with a long tail to high flux. Accreting objects often show log-normal flux distributions (see e.g. Lyutyj & Oknyanskij 1987; Gaskell 2004; Uttley, McHardy & Vaughan 2005) – note also that the fact that the optical fluxes of active galactic nuclei follow a log-normal distribution is obscured, to some extent, by the use of the logarithmic magnitude scale, rather than fluxes, for most optical work (Gaskell 2004). The log-normal distribution has a positive skewness. Therefore, any cases of negative real components of bispectra (or, alternatively, of biphases in the range from $\pi/2$ to $3\pi/2$) are especially interesting, as they indicate particular timescales in particular observations on which one can immediately determine that the flux distribution is not the “standard” log-normal distribution.
The log-normal distribution is often found to provide a good first order description of the distribution of values of a range of phenomena, both in nature (e.g. Makuch et al. 1979) and in the social sciences (the log-normal distribution is an underlying assumption in the Black & Scholes 1973 formula for option pricing – although it has been argued to underpredict rare events – e.g. Haug & Taleb 2011). As a result, in some cases, it may make sense to apply time series analysis techniques to the logarithms of the measured values, rather than to the values themselves – in such a case finding a substantial value of the real component of the bispectrum would indicate that the distribution deviated from log-normal, which might be more enlightening than demonstrating that the distribution deviates from being symmetric about the mean. We do not make calculations of the properties of the log of the count rate distributions in this paper, but rather we simply note that it may be worth doing under certain circumstances (e.g. it may make more sense to work with magnitudes than fluxes when dealing with bright optical sources, if the optical flux distribution is expected to be log-normal).
### Asymmetry of the time series
The asymmetry of the time series is related to the imaginary component of the bispectrum. A sawtooth wave, for example, has a symmetric flux distribution, and hence a zero real component for the bispectrum, but complete asymmetry in time, and hence a purely imaginary bispectrum, with a biphase of $\pm{\pi}/2$. The sign convention is such that positive imaginary components of the bispectrum correspond to sawtooths which rise more slowly than they fall off. Again, there are some indications of what to expect from past measurements. For X-ray binaries in the hard state, for example, Maccarone, Coppi & Poutanen (2000) suggested on the basis of the combination of hard time lags and narrower autocorrelation functions at higher energies that the characteristic variability pattern for these systems must be a relatively slow rise followed by a relatively fast fall-off, and that this slow rise must be slower and start earlier at low energies than it does at higher energies. This basic idea, at least for the rapid variability, was verified by use of the time skewness statistic (Maccarone & Coppi 2002).
We note that one can also think of the asymmetry in time as being the skewness of the Hilbert transform of the time series. The Hilbert transform, in the Fourier domain, can be executed by shifting the phases of all positive frequency components by $-90$ degrees and all of the negative frequency components by $+90$ degrees. Since this converts $\sin x$ to $-\cos x$, and $\cos x$ to $\sin x$, we can see that it bears a relation to the negative derivative of the function (although no weighting by the frequencies is applied). This thus yields some similarity with the time skewness statistic, which is a flux-weighted average of the slope of the light curve on a particular timescale.
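This relationship is straightforward to verify numerically. The sketch below (an illustration with arbitrary sampling choices, not code used elsewhere in this paper) implements the phase-shift definition of the Hilbert transform and shows that a sawtooth wave, whose flux distribution is symmetric, has a strongly skewed Hilbert transform:

```python
import numpy as np

def hilbert_transform(x):
    """Shift positive-frequency components by -90 degrees and
    negative-frequency components by +90 degrees, as described above."""
    n = len(x)
    m = np.zeros(n, dtype=complex)
    m[1:n // 2] = -1j             # positive frequencies: multiply by -i
    m[n // 2 + 1:] = 1j           # negative frequencies: multiply by +i
    return np.real(np.fft.ifft(np.fft.fft(x) * m))

def skewness(x):
    d = x - np.mean(x)
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

# A linearly rising, instantly resetting sawtooth: symmetric flux
# distribution, but strongly asymmetric in time.
t = np.arange(4096) / 1024.0
saw = (t % 1.0) - 0.5
print(skewness(saw))                      # essentially zero
print(skewness(hilbert_transform(saw)))   # clearly non-zero
```

The skewness of the sawtooth itself vanishes (its values are uniformly distributed), while the skewness of its Hilbert transform does not, capturing the time asymmetry.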
Plots for extreme cases
=======================
We now present schematic diagrams for a few “extreme” cases of light curves that can occur astrophysically. These can be used to develop an intuitive picture of what light curves should look like for different values of the biphase.
“Ideal” pulsars: harmonics with biphase = 0
-------------------------------------------
First, let us consider an overly simplified description of a pulsar light curve. If we suppose that the time series consists of a series of $\delta$-functions appearing periodically, where the source is bright, with zero flux at all other times, then clearly there is no asymmetry to the time series, but there is a strong skewness in the flux distribution, with most of the data points at values much less than the mean. Such a time series will have a positive skewness on all timescales on which there is power in the power spectrum. The biphase will thus generically be 0 wherever there is power. For pulse shapes with some asymmetry to them, there will be biphases different from zero, but always with positive real components to the bispectrum.
Eclipses: harmonics with biphase = $\pi$
----------------------------------------
Next, we can consider the opposite case: drops in flux of a constant amount which occur periodically. Apart from a DC offset and a constant of proportionality, this scenario is essentially the same as taking each flux value to be the negative of the flux values of the scenario above. This gives a strong negative skewness, and no asymmetry. The biphase should thus be $\pi$ for all cases where there is any Fourier power. The plots of time series which have biphases of 0 and $\pi$ are presented in figure \[biphasezeropi\].
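These two limiting cases can be checked with a few lines of code. The sketch below (our illustration; the spike spacing is arbitrary) computes the bispectrum of a periodic spike train, and of its negative, from a single Fourier transform:

```python
import numpy as np

def bispectrum_peak(x, k1, k2):
    """B(k1, k2) = X[k1] X[k2] conj(X[k1 + k2]) from a single FFT."""
    X = np.fft.fft(x - np.mean(x))
    return X[k1] * X[k2] * np.conj(X[k1 + k2])

n = 1024
spikes = np.zeros(n)
spikes[::64] = 1.0                # a "pulsar": periodic delta-functions
k = n // 64                       # FFT bin of the fundamental

b_pulse = bispectrum_peak(spikes, k, k)     # positive real: biphase 0
b_eclipse = bispectrum_peak(-spikes, k, k)  # negative real: biphase pi
print(np.angle(b_pulse) / np.pi, np.angle(b_eclipse) / np.pi)
```

The spike train gives a purely positive real bispectrum (biphase 0), and negating the light curve flips every Fourier amplitude, giving a purely negative real bispectrum (biphase $\pi$).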
Sawtooth oscillations: harmonics with biphase of $\pm\pi/2$
-----------------------------------------------------------
While sawtooth oscillations are not common in astrophysics, there are a few cases where they are seen. For example, solar flare radio emission can sometimes show linear rises followed by rapid drops in flux (Klassen et al. 2001) – and these sawtooth oscillations are common in other kinds of magnetic reconnection scenarios (Zweibel & Yamada 2009). Classical Cepheids and RR Lyrae stars show the opposite: sharp rises in flux, followed by slow, linear decays. Rapidly rising, linearly fading sawtooths will give $-\pi/2$ for the biphase and linearly rising, rapidly fading sawtooths will give $\pi/2$ for the biphase. Generically, any function which rises more sharply than it fades will have a negative imaginary component of the bispectrum, and any function which fades more sharply than it rises will have a positive imaginary component of the bispectrum.
A strict sawtooth wave is defined as the summation over all integer values of $j$ of $\frac{1}{j}\cos\left[j \omega_0 t + (j-1)\pi/2\right]$. The constant added phase may alternatively be multiplied by $-1$ to allow the opposite sense of asymmetry. We plot the summation of the first 40 terms of a sawtooth oscillation in figure \[sawtooth\]. As the number of terms approaches infinity, the waveform approaches a strict instantaneous rise, linear decay (or linear rise, instantaneous decay).
![This is a summation of the first 40 harmonics for a sawtooth oscillation with a biphase of $-\pi/2$.[]{data-label="sawtooth"}](sawtooth_40_new.eps){width="3.5"}
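The construction of this figure can be reproduced as follows (a minimal sketch; the sampling and number of periods are arbitrary choices). The biphase recovered from the fundamental and second harmonic of the partial sum is $-\pi/2$, as expected:

```python
import numpy as np

n, periods = 4096, 4
j = np.arange(1, 41)[:, None]             # the first 40 harmonics
t = np.arange(n) / n * periods
w0 = 2 * np.pi                            # fundamental angular frequency
saw = np.sum(np.cos(j * w0 * t + (j - 1) * np.pi / 2) / j, axis=0)

X = np.fft.fft(saw)
k = periods                               # FFT bin of the fundamental
biphase = np.angle(X[k] * X[k] * np.conj(X[2 * k]))
print(biphase / np.pi)                    # close to -0.5
```

Since each harmonic $j$ carries phase $(j-1)\pi/2$, every triplet with $f_1+f_2=f_3$ gives $\phi_1+\phi_2-\phi_3=-\pi/2$.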
Beyond the simple examples
--------------------------
It is important to remember, also, that the biphase alone does not determine the shape of a time series. The examples listed above are the easiest cases to visualize for their particular values of the biphase. However, these are all cases where the power spectrum is composed solely of a fundamental frequency and its overtones, and where the amplitudes of the different harmonics are set in a specific manner. In a sawtooth wave, for example, the normalization of the sine wave corresponding to a particular harmonic is inversely proportional to its frequency. A system with the same set of biphases, but a different power spectrum, would similarly be strongly asymmetric in time, but could have a light curve with a qualitatively different shape.
For example, in figure \[mod\_saw\] we plot a modification of the sawtooth wave. As in figure \[sawtooth\], we plot the sum of 40 harmonics, all as cosine waves with an added phase of $\pi/2 \times (j-1)$ for the $j$th harmonic. Instead of using a normalization of $1/j$ for the Fourier spectrum, we use $1/j^2$, which gives more weight to the lower harmonics, and thus results in a more curved shape to the time series near the peak. The time series is still symmetric in its flux distribution, and asymmetric in time, and so it still has a biphase of $-\pi/2$.
![This is a modified sawtooth. The frequencies which contribute, and the phases at those frequencies are the same as for the sawtooth wave in figure \[sawtooth\], but the amplitudes of the cosine waves at the different harmonics have been changed to scale as $1/j^2$ instead of scaling as $1/j$. This sawtooth wave has biphase $-\pi/2$ for all combinations of harmonics in which the two lower frequencies add up to the higher frequency.[]{data-label="mod_saw"}](mod_sawtooth_new.eps){width="3.5"}
Beyond a pure harmonic structure
--------------------------------
The easiest examples for which to attempt to visualize the bispectrum are the cases where all the relevant frequencies are integer multiples of a fundamental frequency. The real power spectra of many interesting classes of astrophysical objects, including, but not limited to, accreting compact objects with low magnetic fields, have quite broad power spectra. At the present time, it has been shown only for GRS 1915+105 that this broad power spectrum carries a nonlinear relationship with the quasi-periodic oscillations seen in the source (Maccarone et al. 2011).
Given this finding, it is of interest to show what the light curves will look like for different types of power spectra and different values of the biphase, in order to help develop an intuition for the meaning of the biphase. We thus present some calculations of simulated light curves for such Fourier spectra. We generate a Fourier spectrum using an approach similar to that taken in Timmer & König (1995) – see also Davies & Harte (1987) – with some small modifications.
First, we consider examples where the two noise components are both at lower frequencies than the QPO frequency. We draw a random amplitude at each frequency such that the power spectrum will take a value uniformly distributed between 0 and 2 times the desired power spectrum level at that frequency. We draw phases randomly for the lower non-QPO frequency, and then force the combination of the higher non-QPO frequency’s phase and the QPO frequency’s phase to give the biphase at the desired value – this forces the biphase to have a particular value for the case where $f_1+f_2=f_{QPO}$. There is then no consistent value of the biphase for the coupling within the noise component, but there is a consistent value of the biphase of the QPO.
We compute some illustrative examples of time series with different values of the biphase. We treat the noise as a power spectrum with a broken power law, with flat power below a break frequency, and a $\nu^{-1}$ slope above the break frequency. The break frequency is set to be 1/4 of the QPO frequency. The QPO is modelled as a peak at a single frequency (i.e. it is taken to be strictly periodic) for the sake of simplicity. Above the QPO frequency, we assume there is no Fourier power. We consider the cases with biphases of 0, $\pi/2$, $\pi$, and $-\pi/2$, and plot them in figure \[lfnoise\_examples\]. We then perform the same procedure as above, except for a case where the QPO frequency is set to be about 2/3 of the break frequency of the noise power spectrum. The results of these calculations are plotted in figure \[lfqpo\_examples\].
The basic structures of the signals still agree with the idealized cases discussed above. When the biphase is 0, one can see that the deviations from the mean are sharper, but less frequent in the positive direction than in the negative direction, while the opposite is true for biphase of $\pi$. Furthermore, computation of the skewness of the time series yields positive values for the former case and negative values for the latter case. When the biphase is $-\pi/2$, one can see that there are sharp rises from the mean, followed by slower decays in the value of the time series, and the opposite is true for a biphase of $\pi/2$.
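The generation procedure described above can be sketched as follows (a simplified illustration of the method, with arbitrary normalizations for the noise and QPO powers; the bin numbers are illustrative, not those used for the figures):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8192
k_qpo = 256                      # QPO bin (taken to be strictly periodic)
k_break = k_qpo // 4             # noise break at 1/4 the QPO frequency
target_biphase = -np.pi / 2

# Broken power-law noise: flat below the break, 1/f above it,
# and no Fourier power above the QPO frequency.
k = np.arange(1, n // 2)
power = np.where(k <= k_break, 1.0, k_break / k)
power[k > k_qpo] = 0.0
power[k == k_qpo] = 50.0         # the QPO itself

# Timmer & Koenig-style randomized amplitudes and random phases...
amp = np.sqrt(power * rng.uniform(0, 2, size=power.size))
phase = rng.uniform(-np.pi, np.pi, size=power.size)

# ...then force phi(f1) + phi(f2) - phi(f_qpo) to the target biphase
# whenever f1 + f2 = f_qpo, by fixing the phase of the upper partner.
for k1 in range(1, k_qpo // 2 + 1):
    k2 = k_qpo - k1
    if k2 > k1:
        phase[k2 - 1] = phase[k_qpo - 1] + target_biphase - phase[k1 - 1]

X = np.zeros(n, dtype=complex)
X[1:n // 2] = amp * np.exp(1j * phase)
X[n // 2 + 1:] = np.conj(X[1:n // 2][::-1])   # enforce a real light curve
lc = np.real(np.fft.ifft(X))

bp = np.angle(X[10] * X[k_qpo - 10] * np.conj(X[k_qpo]))
print(bp / np.pi)                # close to -0.5 by construction
```

Within the noise component itself the phases remain random, so only the noise–QPO coupling carries a consistent biphase, as described in the text.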
Nonlinear coupling with zero bicoherence
----------------------------------------
There [*are*]{} cases where coupling clearly exists between different frequencies in a time series, but where the bispectrum will tend to zero. An example is a square wave. Square waves have a symmetric flux distribution and are symmetric in time. Triangle waves have nonlinear couplings among their harmonics in the same manner, but likewise have no non-zero bispectral components. These can both be understood in a straightforward manner by considering the harmonics that contribute to the waves. Both triangle waves and square waves are composed of the sums of odd integer harmonics only. Since no two odd numbers can sum to make a third odd number, it is not possible to find sets of frequencies in square waves or in triangle waves for which the bispectrum will take a non-zero value.
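This counting argument can be verified directly (a minimal numerical check, with arbitrary sampling): the even-harmonic bins of a square wave carry no power, so any harmonic triplet $(f_1, f_2, f_1+f_2)$ includes at least one empty bin:

```python
import numpy as np

n = 4096
t = (np.arange(n) + 0.5) / n              # offset avoids exact zero crossings
square = np.sign(np.sin(2 * np.pi * 8 * t))   # 8 cycles: odd harmonics only

X = np.fft.fft(square)
k = 8                                     # fundamental bin
print(abs(X[k]), abs(X[3 * k]))           # odd harmonics: clearly non-zero
print(abs(X[2 * k]))                      # even harmonic: essentially zero
# Any triplet f1 + f2 = f3 drawn from odd harmonics requires an even bin,
# so the bispectrum of a square wave vanishes at all harmonic triplets:
print(abs(X[k] * X[k] * np.conj(X[2 * k])))
```

The same argument applies term by term to a triangle wave, whose Fourier series also contains only odd harmonics.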
Still higher order polyspectra can, in principle, be used to investigate the even higher moments of light curves that can characterize such waveforms. At present, we stick with bispectral analysis because even it is not yet well understood, at least in astronomy, and because the statistical errors on the trispectrum, the next higher moment, are likely to be too large to make good use of that statistic with existing data sets.[^2]
In the near term, one could consider simply constructing flux distributions and attempting to determine whether the kurtosis of the flux distribution differs strongly from the expectations for a pure log-normal distribution. If so, then the trispectrum would be worth computing to try to isolate the timescales contributing most strongly to the kurtosis. If not, then the trispectrum may still contain information not contained within the power spectrum and the bispectrum, but the computational difficulties in computing the trispectrum, along with the difficulties in visualizing and interpreting it for non-harmonic frequencies may make the exercise unproductive.
The LOFT mission (Feroci et al. 2012) should open up the possibility of computing trispectra in cases where RXTE can be used only to compute bispectra. With about 20 times the collecting area, it should be possible to deal with an additional term with associated uncertainties in the numerator of the polyspectrum, but a set of simulations and discussion of this topic goes well beyond the scope of this paper. The difficulties in interpreting the trispectrum would remain, but an approach like that in this paper, considering first some simple cases like square waves and triangle waves, could be used as a starting point if some statistically significant signals were detected.
An application outside high energy astrophysics: exoplanet searches
===================================================================
One of the key sources of background for exoplanet transit detections is stellar activity (e.g. Aigrain, Favata & Gilmore 2004). Planetary transits should fit the following characteristics:
1. A time-symmetric time series (apart from some very weak effects due to the rotation of the star being eclipsed – the Rossiter-McLaughlin effect)
2. Strong harmonic structure, with the relative intensities of the different harmonics depending on the duration of the eclipse.
3. Biphase of $\pi$.
The bispectrum may then allow the detection of moderate strength transits even in stars with strong noise. While the flickering from a sea of weak stellar variability may make it difficult to prove, from a power spectrum alone, that a particular star is being transited by a planet, the bispectrum may in some cases indicate that particular frequencies show the coupling expected for an eclipse. An added benefit is that the soft X-rays produced by stars with strong activity are strongly absorbed by oxygen, so there might be hope to search for oxygen in these planets’ atmospheres more readily than can be done in the optical. In cases of elliptical orbits where effects such as Doppler beaming, ellipsoidal modulations, and reflection may have asymmetric modulations on the orbital period, the expectation value of the biphase may not be strictly $\pi$ – but in general, the amplitudes of such variations will be a few orders of magnitude less than the amplitude of variations due to transits (Loeb & Gaudi 2003). In fact, it may be more likely that the bispectrum can be used to help establish the nature of periodicities due to non-eclipsing planets than that it will hinder the use of the bispectrum for detecting planets.
The passages of star spots may also produce bispectra with biphases near $\pi$. A key difference is that the effects of star spots are not consistent from orbit to orbit. The periods change as the spots move away from the equator, and even at constant latitude, the phases change as spots are created and destroyed. As a result, the bicoherence may be a good means of separating starspot activity from planetary transits.
Application to the biphase of GRS 1915+105
==========================================
In a previous paper (Maccarone et al. 2011 – M11), we showed that the X-ray binary GRS 1915+105 shows strong bicoherence in interactions between its strong quasi-periodic oscillations and its broadband noise. For this system, several different patterns of variability were seen in the bicoherence plots in the data. The biphase of that system was not, however, considered in our previous paper. We use the same computations of the bispectrum from exactly the same data sets we used in M11. Those computations were made by taking long observations of GRS 1915+105 during which the power spectrum appeared stationary, taking a series of Fourier transforms, and then following equation \[bispeceqn\] and equation \[bicoeqn\] above. In the process of preparing this paper for publication, we realized that a clerical error in M11 resulted in the wrong time resolution being given in the text for observation 10408-01-25-00 – it was 1/128 second, rather than 1/64 second.
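For reference, the estimator can be sketched as follows (a schematic re-implementation of the standard segment-averaged definitions of the bispectrum and squared bicoherence, not the actual code used for M11):

```python
import numpy as np

def bicoherence(x, nseg, k1, k2):
    """Segment-averaged bispectrum B(k1, k2) and squared bicoherence
    b^2 = |sum X1 X2 X3*|^2 / (sum |X1 X2|^2 * sum |X3|^2)."""
    segs = np.array_split(np.asarray(x, dtype=float), nseg)
    m = min(len(s) for s in segs)
    num = 0.0 + 0.0j
    d1 = d2 = 0.0
    for s in segs:
        X = np.fft.fft(s[:m] - np.mean(s[:m]))
        num += X[k1] * X[k2] * np.conj(X[k1 + k2])
        d1 += abs(X[k1] * X[k2]) ** 2
        d2 += abs(X[k1 + k2]) ** 2
    return num / nseg, abs(num) ** 2 / (d1 * d2)

# Two perfectly coupled harmonics: biphase = 0 + 0 - pi/2 = -pi/2, b^2 = 1.
i = np.arange(8192)
x = np.cos(2 * np.pi * 16 * i / 1024) + \
    0.5 * np.cos(2 * np.pi * 32 * i / 1024 + np.pi / 2)
B, b2 = bicoherence(x, 8, 16, 16)
print(np.angle(B) / np.pi, b2)
```

The biphase is the argument of the averaged bispectrum, and the squared bicoherence reaches unity for perfectly phase-locked harmonics, as in this deterministic test signal.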
A review of the bicoherence results
-----------------------------------
If one considers the two lower frequencies involved in coupling, with the understanding that the third frequency follows trivially from the first two, then we can consider which values of $f_1,f_2$ show strong coupling with $f_1+f_2$. The three characteristic patterns found were called the “web”, “cross” and “hypotenuse” patterns.
The web pattern is characterized by strong bicoherence for $f_1+f_2$ = $f_{QPO}$ where $f_{QPO}$ is the strongest quasi-periodic oscillation in the power spectrum; for $f_1=f_2=f_{QPO}$, and for $f_1=f_2=2f_{QPO}$; and is weaker, but still clearly detectable for $f_1=2f_{QPO}, f_2>2f_{QPO}$. The hypotenuse pattern is quite similar in appearance to the web pattern, except that only the $f_1+f_2$ = $f_{QPO}$ and the harmonic are seen strongly. The cross pattern shows strong bicoherence whenever the QPO frequency is either $f_1$ or $f_2$, but not for the case $f_1+f_2=f_{QPO}$.
RXTE observation 10408-01-25-00: a low frequency QPO with a “web” pattern bicoherence: dips in the light curve for the fundamental plus second harmonic
-------------------------------------------------------------------------------------------------------------------------------------------------------
First, we present the bicoherence plot for observation 10408-01-25-00, in figure \[bico1\]. We have modified the plot from the version presented in M11, so that the regions we discuss in the text here can be more readily identified. Our aim in this paper is not to discuss the strength of the bicoherence, but merely to show that we are calculating the biphase for specific regions of strong bicoherence.
In this system we can examine the biphases of the few strongest peaks in the bispectrum to get a feel for the properties of the strongest set of waves in the system. The peak of our bicoherence is at $f_1=36, f_2=37$, with the units of the frequency here being the frequency resolution of the power spectrum, 0.03125 Hz (i.e. 1/32 Hz). The biphases for all combinations where $f_1$ and $f_2$ both range from 35 to 38 times the frequency resolution are found in the range from $0.82\pi$ to $0.97\pi$, with the mean being $0.86\pi$ and the standard deviation on the phase, estimated from the variance in the measured values, being $0.05\pi$ – this can be seen in the bicoherence plot as the region marked H1 in figure \[bico1\]. For the case where the first and second harmonics interact to produce the third harmonic, with $f_1$ from bins 35 to 38 and $f_2$ from bins 71 to 75, the mean value of the biphase is $0.22\pi$, with a standard deviation of $0.08\pi$ and a standard deviation of the mean of $0.02\pi$ – this can be seen as region H2 in figure \[bico1\]. For the case where the second harmonic interacts with itself to produce the fourth harmonic (i.e. where $f_1$ and $f_2$ range from 71 to 75 times the frequency resolution of the power spectrum), we find that the mean biphase is $0.11\pi$, with a standard deviation of $0.17\pi$ and a standard deviation of the mean of $0.04\pi$ – this can be seen as region H3 in figure \[bico1\], and it is clear from this figure that the bicoherence is quite weak at this peak. Because the fundamental and the second harmonic are stronger than the third and fourth harmonics, and the bicoherence is also stronger for the first two frequencies, the phase couplings of the first two frequencies have the major impact on the shape of the QPO in the light curve.
The biphase $0.86\pi$ is quite close to $\pi$ itself. This means that the shape of the oscillation, as determined by just the two strongest frequencies, is one marked by a relatively smooth profile apart from deep dips occurring on the fundamental frequency, with phase width of less than $\pi$. The folded light curve of the data, made using the `efold` tool within FTOOLS, shown in figure \[folded10408\], looks as expected.[^3]
We can also consider the case where the two noise frequencies add up to the fundamental QPO frequency, in the region marked NN in figure \[bico1\]. This, in fact, is the real power of the bispectrum, as this information cannot be obtained by folding the data on the QPO period. Here, we take all bispectrum measurements where $f_1+f_2$ ranges from 35 to 38 in units of the frequency resolution. We find all the values of the biphase to be between $-0.12\pi$ and $0.33\pi$, with a mean value of the biphase of $0.14\pi$, a standard deviation of $0.09\pi$, and a standard deviation of the mean of $0.01\pi$. The value of this biphase is then a bit less than $\frac{1}{7}\pi$ – i.e. it is much closer to zero than to $\pi/2$, and the major property of the behavior of the source, in terms of the interactions of the noise and the QPO, should be that expected for a “pulsar”-like system. That is, the “envelope” on which the QPO is imposed should be one of spikes shooting well above a baseline flux which is slightly below the mean. This is, in fact, the case. When the light curve is binned on a timescale of $\frac{1}{16}$ second, the mean count rate is 8164 counts/sec, the minimum is 4160 counts/sec, and the maximum is 16272 counts/sec, roughly a factor of two above and below the mean. The skewness of the flux distribution is positive, as calculated using `lcstats` from within the FTOOLS. The system thus largely follows the expectation for a lognormal flux distribution, as is often observed for X-ray binaries’ light curves (Uttley et al. 2005). The deviation from a biphase of exactly zero suggests that on long timescales, the intensity of the QPO rises sharply and falls more slowly. While some power is present in the bicoherence above the QPO frequency, this power is weak, so we do not investigate the biphase there.
RXTE observation 20402-01-15-00: a medium frequency QPO with a “cross” pattern bicoherence: a nearly sawtooth pattern for the fundamental plus second harmonic
--------------------------------------------------------------------------------------------------------------------------------------------------------------
In observation 20402-01-15-00, the “cross” pattern is seen – the bicoherence is strong when the QPO has either the median or the lowest frequency of the three frequencies being considered, but the bicoherence is not above the noise level for the case where $f_1+f_2=f_{QPO}$, and $f_1,f_2$ are two noise frequencies. We can now look at the biphases of the source. The QPO here peaks in frequency bin 72 (corresponding to a frequency of 2.2 Hz, given the 1/32 Hz frequency resolution), and shows a strong second harmonic.
First let us examine the properties of the bispectrum for the frequencies in which $f_1\approx{f_2}$ is the QPO frequency and $f_1+f_2$ is the frequency of the second harmonic of the QPO. We take the range of frequency bins from 69 to 76 in units of the frequency resolution – the region labelled H1 in figure \[bico2\]. The biphases here range from $-0.35\pi$ to $-0.16\pi$, with a mean value of $-0.29\pi$, a standard deviation of $0.05\pi$, and a standard deviation of the mean of $0.01\pi$. As the biphase is $-0.29\pi$, the real component of the bispectrum is positive, indicating a flux distribution skewed toward values above the mean. The imaginary component is negative, indicating a fast rise, slow decay shape to the oscillation.
We can then look at the interactions between the QPO and the noise component, which are strong for the triplets of frequencies of $f_1,
f_{QPO}$ and $f_1+f_{QPO}$, where $f_1<f_{QPO}$ – the region labelled NQ in figure \[bico2\]. In this case, the measurement errors on the biphases are large, and the values span nearly the full range of $2\pi$, so phase wrapping prevents us from using the mean and dispersion of the biphase values directly as has been done for the previous cases. In order to find a mean biphase, we average the sines of the biphases and the cosines of the biphases. We find that the final biphase has a value of $-0.21\pi$, and we take the variance in the mean value of the sines and cosines of the biphases and use standard error propagation to find a standard deviation of $0.09\pi$ and a standard deviation of the mean of $0.01\pi$. The shape of the light curve on these timescales is thus fairly similar to the shape of the QPO. As in the previous observation, while some power is present in the bicoherence above the QPO frequency, this power is weak, so we do not investigate the biphase there.
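The averaging procedure just described can be written compactly as follows (our sketch; the circular standard deviation shown is the standard circular-statistics estimator, a slight variant of the error propagation described in the text):

```python
import numpy as np

def circular_mean_std(phases):
    """Mean direction and circular spread from averaged sines and cosines,
    avoiding the phase-wrapping bias of a naive arithmetic mean."""
    s, c = np.mean(np.sin(phases)), np.mean(np.cos(phases))
    mean = np.arctan2(s, c)
    R = np.hypot(s, c)               # mean resultant length, 0 <= R <= 1
    return mean, np.sqrt(-2.0 * np.log(R))

# Phases clustered around pi, where a naive arithmetic mean would give ~0:
m, s = circular_mean_std(np.array([0.95, -0.95, 1.0]) * np.pi)
print(m / np.pi, s)
```

For phases tightly clustered around $\pm\pi$, the naive mean is badly biased toward zero, while the circular mean correctly recovers a value near $\pi$ with a small spread.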
RXTE observation 30184-01-01-000: a high frequency QPO with a “hypotenuse” pattern bicoherence: a nearly sawtooth pattern for the fundamental plus second harmonic
------------------------------------------------------------------------------------------------------------------
In this observation, the bicoherence shows strong power on the timescale of the harmonic, and for the case where $f_1+f_2=f_{QPO}$, but not for any frequency higher than the fundamental frequency of the QPO, except at the harmonics of the QPO. For this observation, the QPO is at a frequency of approximately 3.4 Hz, peaking in bin 57 with a frequency resolution of 1/16 Hz. The bicoherence plot is given in figure \[bico3\].
First, we examine the coupling between the fundamental and the harmonic, the region marked H1 in figure \[bico3\]. Taking all frequency bins from 55 to 59, we find that the biphases all lie between $1.31\pi$ and $1.55\pi$, with a mean of $1.42\pi$, a standard deviation of $0.07\pi$, and a standard deviation of the mean of $0.01\pi$. The flux distribution of the source here is thus nearly symmetric, while the time series itself has some asymmetry. The biphase for the interaction between the fundamental and the second harmonic indicates that the QPO rises quickly and decays slowly, like a decaying sawtooth wave. We note that the values of the squared bicoherence in this observation, even at the harmonic, are less than 0.05, indicating that the relative phases of the fundamental and the harmonic show substantial variation, and hence so does the shape of the oscillation – at the same time, the statistical significance of the difference between the bicoherence and zero is quite strong (M11), so the mean shape of the oscillation must be something like an inverse sawtooth wave.
The interaction of the QPO with the noise component shows a different behavior. Because the QPO is extremely strong in this observation, we must move far off the QPO centroid in order not to have the biphase estimates affected substantially by the wings of the QPO. We take the means of the sines and cosines of the biphases for all cases with $f_1,f_2<45$ in units of the frequency resolution, and $f_{QPO}$ from 55 to 59 in the same units – this is the region marked NN in figure \[bico3\]. We find that the mean biphase is $0.32\pi$, with a standard deviation of $0.2\pi$ and a standard deviation of the mean of $0.02\pi$. The flux distribution is thus skewed to positive values. The time symmetry is that of a sawtooth wave with slow rise and fast decay – the QPO amplitude is rising slowly and turning off more quickly. Thus, when we compare with observation 10408-01-25-00, we see that in both observations, the interactions with the noise component are fairly similar, while the shapes of the oscillations themselves are quite different.
Discussion
==========
Since our previous paper, we have become aware of some mechanisms for producing some of the observed bicoherence patterns in a tidy manner. In particular, the “hypotenuse” pattern is reproduced very well by a bilinear oscillator (Rivola & White 1998; White 2009). The bilinear oscillator is a system described by a differential equation quite similar to that for a simple harmonic oscillator, except that the restoring force has a different normalization for positive and negative displacements. The bilinear oscillator is of interest to engineers because it provides a good mathematical description of a cracked or fatigued beam within a machine, and hence measuring the bispectrum of the machine in response to being driven by vibrations can allow the crack to be detected without taking apart the machine or waiting for the machine to suffer a catastrophic breakdown.
While obviously cracked beams do not exist in astrophysical situations, other types of force law with similar mathematical dependences may exist in accretion disks. If we find that the power spectrum and the bispectrum of the bilinear oscillator can give a good mathematical description of the time series we observe, then we can focus theoretical efforts on producing physical models that are mathematically similar to the bilinear oscillator.
Following White (2009), we calculate a bilinear oscillator which follows the equation: $$\frac{d^2y}{dt^2}+c\frac{dy}{dt}+\kappa(y)y=x(t)$$
For a first run, we set $\kappa$ equal to 200000 for $y<0$ and to 1000000 for $y>0$. We set $c$ equal to 50, and we integrate over discretized time steps of 0.0003 time units, with an initial value of $y$ of 200. We drive the oscillator with a white noise process. The driving force has a mean expectation value of 0.0 and an expected standard deviation of $5.52\times10^6$.
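This integration can be sketched as follows (a schematic re-implementation using a simple symplectic Euler step; the integration scheme, random seed, and number of steps are our own choices here, not necessarily those used for the runs described in the text):

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 3e-4                      # time step quoted in the text
c = 50.0                       # damping coefficient
k_neg, k_pos = 2.0e5, 1.0e6    # kappa for y < 0 and y > 0
sigma = 5.52e6                 # standard deviation of the white-noise drive

n = 200_000
y, v = 200.0, 0.0              # initial displacement as in the text
out = np.empty(n)
for i in range(n):
    kappa = k_pos if y > 0 else k_neg   # the bilinear restoring force
    a = rng.normal(0.0, sigma) - c * v - kappa * y
    v += a * dt                # symplectic Euler: update v first...
    y += v * dt                # ...then advance y with the new v
    out[i] = y
```

The resulting time series can then be fed to the same bispectrum and bicoherence machinery used for the observed light curves.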
This run produces a bicoherence plot that looks quite similar to the hypotenuse pattern seen in the real data for observation 30184-01-01-000. The biphase for the QPO’s interaction with the harmonic here is very nearly 0. That is, the flux distribution is skewed to positive values, but the time series is symmetric in time. The same is true for the interactions of two noise components that add up to the QPO frequency. We next perform a simulation which is identical to the first, except that we exchange the values of $\kappa$ with respect to $y=0$ – i.e. we set $\kappa$ equal to 200000 for $y>0$ and to 1000000 for $y<0$. While the details of the time series are different, the key statistical properties measured by the biphase are the same. We conduct a few experiments with stronger damping. Making the system more strongly damped decreases the bicoherence and broadens the QPO, but does not change the biphase substantially.
It thus appears that models mathematically similar to the bilinear oscillator are unlikely to produce the observed bispectra of X-ray binary light curves. There remains a catch, of course – the inherently oscillating parameter in a quasi-periodic oscillation is almost certainly not just the count rate. Models for the few Hz QPOs we discuss in this paper include models in which the disk undergoes a global oscillation due to the Lense-Thirring precession (e.g. Stella & Vietri 1998; Fragile et al. 2007), and thermal-viscous oscillations (Abramowicz et al. 1989; Chen & Taam 1994). In the Lense-Thirring precession model, the parameter oscillating is the inclination angle of the inner accretion disk, and the X-ray count rate will be a function of that inclination angle and may be modulated additionally, for example, by fluctuations in the accretion rate changing the surface brightness of the disk in ways independent of (or, at least, not directly tied to) the observer’s inclination angle (some initial exploration of this possibility has been made in Ingram & Done 2011). Numerical calculations made to date do not cover enough cycles of the oscillation to allow calculations of the biphase of oscillations from the Lense-Thirring precession, but it is intriguing that the one published numerical calculation which includes ray tracing does seem to show qualitatively similar phenomenology to that seen in observation 30184-01-01-000 (Dexter & Fragile 2011). It would also be straightforward for the Lense-Thirring model to produce periodic occultations of part of the accretion disk, and hence to produce a biphase of approximately $\pi$ – this has been considered in Ingram & Done (2012), although the bispectrum of the simulation has not yet been computed.
In principle, the same type of ray tracing calculations used in Dexter & Fragile (2011) could be run over a wider range of parameter space to determine whether the biphase properties could be matched in a manner that is also consistent with the additional information given from other system parameters such as the mass accretion rate. In some cases, also, higher order corrugation modes may be present (see e.g. Tsang & Lai 2009).
The situation for the thermal viscous oscillation is perhaps even more difficult to reconcile with the data. The simulated light curves of Chen & Taam (1994) show slow rises, followed by quick decays of the flux. This would be expected to produce a biphase near $\pi/2$, giving opposite behavior to that seen, for example, in observation 30184-01-01-000. Other models exist for explaining the low frequency quasi-periodic oscillations in X-ray binaries and the noise components that accompany them, but at present, simulated light curves have not been presented for these models which would allow us to determine under what conditions the observed biphases might be reproduced (e.g. Varnière & Tagger 2002; Machida & Matsumoto 2008).
In future work, the biphase analysis may also be extended to help develop a better understanding of the light curves of sources without strong quasi-periodic oscillations. In particular, the recent development of an analytic method for calculating simulated light curves from propagation models (Ingram & van der Klis 2013) should allow one to study these models efficiently, to determine how well they match the observed data in the biphase. Given that the flux distributions of X-ray binaries are widely found to be log-normal when they are dominated by noise components (Uttley et al. 2005), we can reasonably expect that in all cases the real component of the biphase will be positive. We can also expect the imaginary components to be positive, given that the fastest variability is expected at the highest count rates due to the higher rate of energy generation in the inner part of the accretion flow. The exact value of the biphase is likely to trace the emissivity of the accretion flow.
Summary
=======
We have presented an introduction to the use of the biphase aimed at astronomers wishing to apply it to time series analysis. First, we briefly summarize the meaning of the biphase, so that a quick look at its value can be used to develop an intuition about the properties of a time series.
1. Time series which are symmetric in time have purely real bispectra, and time series which have flux distributions symmetric about the mean have purely imaginary bispectra.
2. When the real component is positive, the flux distribution is skewed to positive values.
3. Thus spiky time series like pulsar light curves will have biphases near 0.
4. Time series like eclipsing binary light curves will have biphases near $\pi$.
5. When the imaginary component is positive, the time series rises slowly and fades quickly. This yields biphases near $\pi/2$.
6. When the imaginary component is negative, the time series rises quickly and fades slowly. This yields biphases near $-\pi/2$.
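The sign conventions in points 5 and 6 are easy to check numerically. The sketch below is our own illustration, not code from this paper (function names such as `biphase` are ours, and a real analysis would average the bispectrum over many light-curve segments, as in M11); it computes the biphase of a sawtooth at its fundamental frequency using only `numpy`:

```python
import numpy as np

def bispectrum(x, k1, k2):
    """Single-segment bispectrum B(k1, k2) = X[k1] X[k2] conj(X[k1 + k2])."""
    X = np.fft.fft(x)
    return X[k1] * X[k2] * np.conj(X[k1 + k2])

def biphase(x, k1, k2):
    """Argument of the bispectrum, in radians."""
    return np.angle(bispectrum(x, k1, k2))

N, k = 256, 8                     # 256 samples containing 8 full cycles
t = np.arange(N)
slow_rise = (k * t % N) / N       # sawtooth: rises slowly, drops sharply
fast_rise = slow_rise[::-1]       # time reverse: rises sharply, fades slowly

print(biphase(slow_rise, k, k))   # ~ +pi/2: positive imaginary part
print(biphase(fast_rise, k, k))   # ~ -pi/2: negative imaginary part
```

Time-reversing the series conjugates the bispectrum, so the biphase flips sign while its modulus is unchanged, matching points 5 and 6 above.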
We have also applied the biphase to several observations of GRS 1915+105. We have found that the QPO profiles obtained by folding the light curves on the QPO period match well with the predictions from the biphase data. We have also found that the actual values of the biphases vary widely from observation to observation. No simple models that we have considered can reproduce the observed biphase data.
Acknowledgments
===============
I am grateful to Michiel van der Klis, Chris Fragile, Patricia Arévalo, Simon Vaughan, and to Paul White of the University of Southampton’s Institute for Sound and Vibration Research for extremely interesting and valuable discussions. I also thank the Astrophysics Institute of the Canary Islands for hospitality while a portion of this work was completed. Finally, I thank the referee, Adam Ingram, for a report which was both prompt and helpful, and which has led to improvements in the clarity and content of the paper.
[99]{}
Aigrain S., Favata F., Gilmore G., 2004, A&A, 414, 1139
Abramenko V.I., 2005, ApJ, 629, 1141
Abramowicz M.A., Szuszkiewicz E., Wallinder F., 1989, in Theory of Accretion Disks, eds. F. Meyer, W.J. Duschl, J. Frank, E. Meyer-Hofmeister (Dordrecht: Kluwer), 141
Davies R.B., Harte D.S., 1987, Biometrika, 74, 95
Dexter J., Fragile P.C., 2011, ApJ, 730, 36
Edelson R.A., Krolik J.H., 1988, ApJ, 333, 646
Elgar S., Guza R.T., 1985, J. Fluid Mech., 161, 425
Fackrell J., 1997, PhD Thesis, University of Edinburgh
Feroci M., et al., 2012, Experimental Astronomy, 34, 415
Gajraj R.J., Doi M., Matzaridis H., Kenny G.N., 1998, British Journal of Anaesthesia, 80, 46
Gaskell C.M., 2004, ApJ, 612, L21
Hasselman K.W., Munk W., MacDonald G., 1963, in Time Series Analysis, ed. M. Rosenblatt (New York: John Wiley), 125
Haug E.G., Taleb N.N., 2011, Journal of Economic Behavior and Organization, 77
Hesse K.H., Wielebinski R., 1974, A&A, 31, 409
Ingram A., Done C., 2011, MNRAS, 415, 2323
Ingram A., Done C., 2012, MNRAS, 419, 2369
Jiang C., et al., 2011, ApJ, 742, 120
Kamionkowski M., Smith T.L., Heavens A., 2011, Phys. Rev. D, 83, 023007
Klassen A., Aurass H., Mann G., 2001, A&A, 370, L41
Loeb A., Gaudi B.S., 2003, ApJ, 588, L117
Lyutyj V.M., Oknyanskij V.L., 1987, AZh, 64, 465
Maccarone T.J., Coppi P.S., 2002, MNRAS, 336, 817
Maccarone T.J., Coppi P.S., Poutanen J., 2000, ApJ, 537, L107
Maccarone T.J., Uttley P., van der Klis M., Wijnands R.A.D., Coppi P.S., 2011, MNRAS, 413, 1819 (M11)
Machida M., Matsumoto R., 2008, PASJ, 60, 613
Makuch R.W., Freeman D.H., Johnson M.F., 1979, Journal of Chronic Disease, 32, 245
Masada A., Kuo Y.-Y., 1981, Deep Sea Research, 28A, 213
McComas C.H., Briscoe M.G., 1980, Journal of Fluid Mechanics, 97, 205
Nowak M.A., Vaughan B.A., Wilms J., Dove J.B., Begelman M.C., 1999, ApJ, 510, 874
Priedhorsky W., Garmire G.P., Rothschild R., Boldt E., Serlemitsos P., Holt S., 1979, ApJ, 233, 350
Rivola A., White P., 1998, Journal of Sound and Vibration, 216, 889
Scaringi S., Körding E., Uttley P., Knigge C., Groot P.J., Still M., 2012, MNRAS, 421, 2854
Shaposhnikov N., 2012, astro-ph/1205.0748
Terrell N.J., 1972, ApJ, 174, L35
Timmer J., Koenig M., 1995, A&A, 300, 707
Uttley P., McHardy I.M., 2001, MNRAS, 323, L26
Uttley P., McHardy I.M., Vaughan S., 2005, MNRAS, 359, 345
van Milligen B.P., Sanchez E., Estrada T., Hidalgo C., Branas B., Carreras B., Garcia L., 1995, Physics of Plasmas, 2, 3017
Varnière P., Tagger M., 2002, A&A, 394, 329
Way M.J., Scargle J.D., Ali K.M., Srivastava A.N., 2012, Advances in Machine Learning and Data Mining for Astronomy, CRC Press: Boca Raton
White P.R., 2009, in [*Encyclopedia of Structural Health Monitoring*]{}, eds. C. Boller, F.-K. Chang, Y. Fujino, Wiley: Hoboken
Zweibel E.G., Yamada M., 2009, ARA&A, 47, 291
\[lastpage\]
[^1]: We determined empirically in that paper that this is a good way to non-dimensionalize the skewness, since this approach yields skewnesses that are generally quite similar to one another across energy bands with different count rates, as long as the source count rate dominates over the background count rate.
[^2]: We note that we have not attempted to make any computations of the trispectrum yet, and that our pessimism may be unwarranted, so we do not wish to be overly discouraging to others trying to compute it with RXTE data.
[^3]: The other observations also show folded light curves consistent with the qualitative expectations based on the biphases, but we do not plot them in the interests of keeping the paper from becoming too long.
---
abstract: 'We construct families of Newton-Okounkov bodies for the free group character varieties and configuration spaces of any connected reductive group.'
author:
- Christopher Manon
title: 'Newton-Okounkov polyhedra for character varieties and configuration spaces'
---
Keywords: Character Variety, Configuration Space, Newton-Okounkov Body
Introduction
============
For a commutative algebra $A$ (for our purposes taken over $\C$), and a valuation $v: A \to \Z^M$ of rank $M= dim(A),$ the image $v(A)$ is an affine semigroup contained in a convex body $C_v$ called the Newton-Okounkov body of $v.$ Newton-Okounkov bodies have recently become a subject of intense study, starting with the papers of Kaveh, Khovanskii [@KK] and Lazarsfeld, Mustaţă [@LM]. When $A$ is taken to be a coordinate ring of a scheme $X$ (e.g. projective or affine), $C_v$ behaves like the Newton polytope of a toric variety, providing combinatorial models from which many geometric and algebraic invariants can be computed. Newton-Okounkov bodies play directly into the geometry of $X$ in two related ways. In the case when $C_v$ is polyhedral, there is a flat degeneration $X \Rightarrow X_{C_v}$, where $X_{C_v}$ is the toric variety attached to $C_v$. Additionally, Harada and Kaveh have linked Newton-Okounkov bodies $C_v$ to the study of integrable systems in $X$, when $X$ satisfies some additional conditions, [@HK]. This construction is also useful in combinatorics, as the set $v(A) \subset C_v$ provides a polyhedral labelling of a basis of $A$ which can be brought to bear when the underlying vector space of $A$ has an enumerative meaning. With all of these applications in mind, the purpose of this paper is to construct large families of Newton-Okounkov bodies for two classes of spaces whose geometry, algebra and combinatorics are important in representation theory: the free-group character varieties and the configuration spaces of a reductive group $G.$
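For orientation, we recall a standard one-dimensional example (added here for illustration; it is not one of the cases treated below).

```latex
Let $A = \bigoplus_{m \geq 0} H^0(\mathbb{P}^1, \mathcal{O}(m))$, so that the
degree-$m$ piece consists of the binary forms of degree $m$, and let $t$ be a
local coordinate at a point of $\mathbb{P}^1$.  Setting
$$v(f) = (m, \mathrm{ord}_t(f)) \quad \text{for } f \in H^0(\mathbb{P}^1, \mathcal{O}(m)),$$
one finds $v(A) = \{(m, j) \in \Z^2 \; : \; 0 \leq j \leq m\}$, so the
Newton-Okounkov body (the slice at $m = 1$ of the closed convex hull of
$v(A)$) is the unit segment $[0,1]$, i.e. the Newton polytope of
$\mathbb{P}^1$.
```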
The character variety $\mathcal{X}(\pi, G)$ associated to a finitely generated group $\pi$ and a connected reductive group $G$ is the moduli space of representations of $\pi$ in $G$, defined as the $GIT$ quotient $\mathcal{X}(\pi, G) = Hom(\pi, G)/G.$ When $\pi$ is the fundamental group of a smooth manifold $M$, $\mathcal{X}(\pi, G)$ is the moduli space of flat, topological principal $G$ bundles on $M$. When $M$ is taken to be a surface, the character variety $\mathcal{X}(\pi, G)$ naturally serves as a non-commutative generalization of Teichmüller space [@FG], [@Go]. Stemming from these moduli interpretations, character varieties also appear as classical spaces in gauge theory, and their coordinate algebras $\C[\mathcal{X}(\pi, G)]$ appear in topological quantum field theory, [@Ba]. In this paper we use combinatorial elements of this field theoretic interpretation to build Newton-Okounkov bodies for $\mathcal{X}(F_g, G)$, where $F_g$ is a free group.
\[character\] To the following information we associate a convex polyhedral cone $C_{\bold{i}}(\Gamma)$ realized as the Newton-Okounkov body of a valuation $v_{\bold{i}, \Gamma}$ on $\C[\mathcal{X}(F_g, G)]$.
1. A trivalent graph $\Gamma$ with no leaves and $\beta_1(\Gamma) = g$.\
2. Total orderings on the non-leaf edges $E(\Gamma)$ and non-leaf vertices $V(\Gamma).$\
3. A spanning tree $\tree \subset \Gamma.$\
4. An orientation on the edges $\vec{e} = E(\Gamma) \setminus E(\tree)$\
5. An assignment $\bold{i}: V(\Gamma) \to R(w_0)$ of a reduced decomposition of the longest word $w_0$ in the Weyl group of $G$ to each vertex $v \in V(\Gamma)$.\
Note that any generating set $\{w_1, \ldots, w_g\}$ of $F_g$ defines an automorphism $\Psi_{\vec{w}}: \mathcal{X}(F_g, G) \to \mathcal{X}(F_g, G)$, which can then be precomposed with the maps $\Phi_{\tree, \vec{e}}$. This produces a large set of filtrations on $\C[\mathcal{X}(F_g, G)]$ by pullback which carries an action by the automorphisms of $F_g$.
A Newton-Okounkov body $C_v$ associated to a projective coordinate ring $R_{\mathcal{L}}$ of a projective variety $X$ with ample line bundle $\mathcal{L}$ is naturally a cone over a compact convex body $\bar{C_v}$. The techniques we develop to produce the valuations $v_{\bold{i}, \Gamma}$ can also be applied to construct such a compact body for another class of algebraic varieties related to the representation theory of $G.$ Let $\vec{\lambda} = \lambda_1, \ldots, \lambda_n \in \Delta$ be dominant weights of $G$, and let $P_1, \ldots, P_n$ be the parabolic subgroups which respectively stabilize the highest weight vectors in the representations $V(\lambda_1^*), \ldots, V(\lambda_n^*).$ Recall that the flag variety $G/P_i$ has a $G-$linearized line bundle $\mathcal{L}_{\lambda_i^*}$, with $H^0(G/P_i, \mathcal{L}_{\lambda_i^*}) = V(\lambda_i).$ We let $P_{\vec{\lambda}^*}(G)$ be the following diagonal $GIT$ quotient.
$$P_{\vec{\lambda}^*}(G) = G \backslash_{\vec{\lambda^*}} \prod G/P_i\\$$
This is called the configuration space of $G$-flags associated to $\vec{\lambda}^*.$ Our second main theorem produces a combinatorial family of polyhedral Newton-Okounkov bodies for the canonical line bundle $\mathcal{L}_{\vec{\lambda}^*}$ on $P_{\vec{\lambda}^*}(G)$ associated to this quotient construction.
\[configuration\] To the following information we associate a polytope $C_{\bold{i}}(\tree, \vec{\lambda}),$ realized as the Newton-Okounkov body of a valuation $v_{\bold{i}, \tree}$ on the projective coordinate ring $\C[P_{\vec{\lambda}^*}(G)] =$ $\bigoplus_{m \geq 0} H^0(P_{\vec{\lambda}^*}(G), \mathcal{L}(m\vec{\lambda}^*))$.
1. A trivalent tree $\tree$ with an ordering on leaves.\
2. A total ordering on the non-leaf edges $E(\tree)$ and non-leaf vertices $V(\tree).$\
3. An assignment $\bold{i}: V(\tree) \to R(w_0)$ of a reduced decomposition of the longest word $w_0$ in the Weyl group of $G$ to each vertex $v \in V(\tree)$.\
In particular the integer points of $C_{\bold{i}}(\tree, \vec{\lambda})$ are in bijection with a basis of the invariant tensors $(V(\lambda_1) \otimes \ldots \otimes V(\lambda_n))^G = H^0(P_{\vec{\lambda}^*}(G), \mathcal{L}(\vec{\lambda}^*)).$
The $C_{\bold{i}}(\tree, \vec{\lambda})$ are cross-sections of a cone $C_{\bold{i}}(\tree)$, which serves as a Newton-Okounkov body of an affine master configuration space $P_n(G),$ defined as the following affine $GIT$ quotient.
$$P_n(G) = G \backslash (G/U)^n\\$$
Here $U \subset G$ is a maximal unipotent subgroup. Any flag variety $G/P$ with linearization $\mathcal{L}_{\lambda^*}$ can be obtained from $G/U$ as a right $\lambda^*-$linearized $GIT$ quotient by a maximal torus $T \subset G$. Accordingly, $P_{\vec{\lambda}^*}(G)$ is obtained from $P_n(G)$ by a right $T^n$ quotient. Using the same methods as in the proof of Theorem \[character\], we produce a $T^n$ invariant valuation $v_{\bold{i}, \tree}$ on $\C[P_n(G)]$ with Newton-Okounkov body $C_{\bold{i}}(\tree)$.
Methods
-------
We construct the valuations $v_{\bold{i}, \Gamma}$ by building filtrations on the coordinate ring $\C[\mathcal{X}(F_g, G)]$ in two steps, given in Sections \[step1\] and \[step2\]. When the associated graded algebra of a filtration is a domain, we say it is a “strong filtration”. The following (almost tautological) proposition allows us to use the notions of strong filtration and valuation interchangeably.
\[equivalenceprop\] Let $A$ be a domain, and let $\Z^M, <$ have the structure of an ordered group. The information of a strong increasing filtration $A = \cup_{w \in \Z^M} F_{\leq w}$ is equivalent to a valuation $v: A \to \Z^M$.
Starting with a filtration $F$, define $v_F$ by $v_F(a) = min\{w \; | \; a \in F_{\leq w}\}.$ For a valuation $v$ define $F^v_{\leq w} \subset A$ by $F^v_{\leq w} = \{a \; | \; v(a) \leq w\}.$ The property $v(ab) = v(a) + v(b)$ implies that $F^v$ is a strong filtration. Similarly, $F$ being a strong algebra filtration implies that $v_F(ab) = v_F(a) + v_F(b).$ We leave it to the reader to check the rest.
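A toy instance of this dictionary (a standard example, added here for illustration):

```latex
Take $A = \C[x]$, $M = 1$, and $F_{\leq w}$ the space of polynomials of
degree at most $w$.  Then $v_F(f) = \deg(f)$, and since $A$ is a domain,
$$v_F(fg) = \deg(fg) = \deg(f) + \deg(g) = v_F(f) + v_F(g),$$
so $F$ is a strong filtration; its associated graded algebra is again
$\C[x]$ (now graded by degree), which is a domain.
```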
In Section \[step1\] we build a filtration inspired by one of the applications of character varieties to gauge theory. For a maximal compact $K \subset G$, BF theory on an appropriately chosen triangulated manifold $M$ is quantized by $L^2(\mathcal{X}(F_g, K)),$ which can be identified with the coordinate ring $\C[\mathcal{X}(F_g, G)]$, see [@Ba]. The states in $L^2(\mathcal{X}(F_g, K))$ are spanned by the spin diagrams of $K$ (equivalently, of $G$), which are defined as follows.
Let $\Gamma$ be an oriented graph. A spin diagram with topology $\Gamma$ consists of the following information.
1. An assignment $\eta : E(\Gamma) \to \Delta,$ of dominant weights to the edges of $\Gamma.$\
2. An assignment of $G-$linear maps $\rho$ to the vertices $v \in V(\Gamma)$ which intertwine the incoming representations $\bigotimes_{e \to v} V(\eta(e))$ with the outgoing representations $\bigotimes_{v \to f} V(\eta(f))$.\
$$\begin{xy}
(0, 0)*{\bullet} = "A1";
(0, 10)*{\bullet} = "A2";
(0, 13)*{\phi};
(9, 15)*{\bullet} = "A3";
(-10, 15)*{\bullet} = "A4";
(-10, 12)*{\psi};
(-10, 25)*{\bullet} = "A5";
(-20, 10)*{\bullet} = "A6";
(3, 5)*{\lambda};
(5, 16)*{\eta};
(-8, 22)*{\mu};
(-17, 15)*{\alpha};
(-5, 15)*{\beta};
"A1"; "A2";**\dir{-}? >* \dir{>};
"A2"; "A3";**\dir{-}? >* \dir{>};
"A2"; "A4";**\dir{-}? >* \dir{>};
"A4"; "A5";**\dir{-}? >* \dir{>};
"A4"; "A6";**\dir{-}? >* \dir{>};
\end{xy}$$\
The purpose of Section \[step1\] is to show that for a fixed trivalent $\Gamma$ with $\beta_1(\Gamma) = g$, the spin diagrams with topology $\Gamma$ define a filtration of $\C[\mathcal{X}(F_g, G)]$. We identify the associated graded algebra of this filtration, and note that it is not an affine semigroup algebra unless $G$’s semisimple part is a product of copies of $SL_2(\C).$
In order to enhance these filtrations we must carefully choose a basis with amenable multiplication and combinatorial properties in the intertwiner spaces at each vertex $v \in V(\Gamma).$ This is provided by the dual canonical basis constructed by Lusztig, [@Lu]. The dual canonical basis can be used to define a basis in each invariant space $B(\mu, \lambda, \eta) \subset (V(\mu) \otimes V(\lambda) \otimes V(\eta))^G$, which are in turn identified with intertwiner spaces, [@BZ1], [@BZ2]. We study filtrations built from this basis in Section \[step2\].
For each choice $\bold{i} \in R(w_0)$ of a reduced decomposition of the longest element of the Weyl group of $G$, there is a labelling of the dual canonical basis by tuples of non-negative integers $b \to \vec{t} \in \Z^N$ called string parameters. In this way, the choice $\bold{i}$ assigns the elements of $(V(\mu)\otimes V(\lambda) \otimes V(\eta))^G$ to integer points in a convex polytope $C_{\bold{i}}(\mu, \lambda, \eta)$ studied in [@BZ2]. We use the inequalities of these polytopes to define the polyhedra in Theorems \[character\] and \[configuration\].
Our use of the dual canonical basis in this role follows previous work of Caldero [@C], and Alexeev, Brion [@AB], (see also Kaveh [@K]), who use a filtration on the string parameters of the dual canonical basis to define full rank valuations on the coordinate rings of spherical varieties. We combine the filtrations from Section \[step1\] with the string parameter filtrations of Section \[step2\] with the following construction.
\[compositeprop\] Let $A$ be a domain, with $F$ a strong filtration on $A$ by $\Z^M, <_1,$ and let $G$ be a strong filtration on $gr_F(A)$ by $\Z^L, <_2$ which is compatible with the induced grading. There is a strong filtration $F\circ G$ on $A$ by $\Z^{M+L}, <_1\circ <_2,$ where $<_1\circ <_2$ is the composite order built lexicographically by first ordering by $<_1$ and breaking ties with $<_2.$ This filtration has associated graded algebra $gr_G(gr_F(A)).$
Each space $F_{\leq w}/F_{< w} \subset gr_F(A)$ has a filtration $\ldots \subset G_{w, u} \subset \ldots.$ We pull the spaces $G_{w, u}$ back to a filtration $F\circ G_{w, u} \subset F_w$. By construction each space $F \circ G_{w, u}$ contains $F_{< w}$, this implies that $F \circ G_{w', u'} \subset F\circ G_{w, u}$ if $w' < w.$ If $w' = w$, then $u' < u$ and $F\circ G_{w', u'} \subset F\circ G_{w, u}$ by construction. It is straightforward to check the strong filtration property and the identity $gr_{F\circ G} = gr_G(gr_F(A))$.
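Concretely, the composite order $<_1 \circ <_2$ is lexicographic comparison of concatenated value vectors, which Python tuples implement directly. The following toy sketch is our own illustration, not from the text: $v_F$ is a total-degree value in $\Z^1$ and $v_G$ an exponent vector in $\Z^2$, for a few monomials in $\C[x,y]$.

```python
def composite_value(vF, vG):
    """Concatenate values in Z^M and Z^L into Z^(M+L); Python tuples
    then compare exactly by the composite lexicographic order."""
    return vF + vG

# (vF, vG) per monomial: total degree first, then the exponent vector.
monomials = {
    "x":   ((1,), (1, 0)),
    "x^2": ((2,), (2, 0)),
    "x*y": ((2,), (1, 1)),
    "y^2": ((2,), (0, 2)),
}
order = sorted(monomials, key=lambda m: composite_value(*monomials[m]))
print(order)  # ['x', 'y^2', 'x*y', 'x^2']: degree first, ties by exponents
```

Sorting by the concatenated tuple orders first by $<_1$ (degree) and breaks ties by $<_2$ (exponents), exactly as in Proposition \[compositeprop\].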
Remarks
-------
Lawton [@La] and Sikora [@S] have given structure theorems for the coordinate rings $\C[\mathcal{X}(F_g, G)]$, and Lawton, Florentino give descriptions of the topology [@FL2] and the singular locus [@FL1] of $\mathcal{X}(F_g, G)$ in certain cases. It would be interesting to relate the degenerations constructed here to a Gröbner theory of their defining equations.
Theorem \[configuration\] gives a construction of a basis for the tensor product invariant spaces $(V(\lambda_1)\otimes \ldots \otimes V(\lambda_n))^G$ which is labelled by the lattice points in a convex, rational polytope. Howe, Jackson, Lee, and Tan [@HJLT], and Howe, Tan, Willenbring [@HTW] use a SAGBI construction to achieve this for triple tensor product invariant spaces in the case $G = SL_m(\C).$ The cone $\Omega_3$ resulting from their construction is a cross section of the cone of Gel’fand-Tsetlin patterns, and is linearly equivalent to $C_{\bold{i}}(3)$. The algebraic structure of these cones is not very well understood outside the cases $SL_m(\C)$, $m = 2, 3, 4$. We also point out that the space $P_n(SL_2(\C))$ is the affine cone of the Plücker embedding of the Grassmannian variety $Gr_2(\C^n)$, and that the degenerations we construct in this case coincide with those constructed by Speyer and Sturmfels in [@SpSt].
Other enumeration problems in representation theory could plausibly be studied with the methods in this paper. Polyhedra which control Levi branching problems $L \subset G$ have been defined by Berenstein and Zelevinsky in [@BZ2]. These can be adapted along the lines of the program used in Sections \[step1\], \[step2\], and realized as Newton-Okounkov bodies.
The definition of Newton-Okounkov body we use is more general than the one in [@LM] and [@KK], where the valuation used to construct the Newton-Okounkov body comes from a flag of subspaces of the variety. It would be interesting to realize the tensor product polytope $C_{\bold{i}}(\tree, \vec{\lambda})$ as the Newton-Okounkov body attached to a flag $\mathcal{F}$ in a variety birational to $P_{\vec{\lambda}}(G)$.
The work of Harada and Kaveh [@HK] suggests that each $C_{\bold{i}}(\tree, \vec{\lambda})$ and $C_{\bold{i}}(\Gamma)$ should be the momentum image of an integrable system in $P_{\vec{\lambda}^*}(G)$ and $\mathcal{X}(F_g, G),$ respectively. A construction of such an integrable system for each polyhedra would be interesting for the symplectic geometry of $\mathcal{X}(F_g, G)$ and $P_{\vec{\lambda}^*}(G)$. It would also be interesting to see geometric relationships between the integrable systems associated to different valuations given by our construction. Partial results in this direction appear in [@HMM] for $G = SL_2(\C).$
Finally, we remark that Theorem \[configuration\] essentially appears in the unpublished notes [@M1], along with other remarks on the use of valuations in the study of branching problems.
Acknowledgements
----------------
We thank Kiumars Kaveh for numerous helpful conversations about Newton-Okounkov bodies. We also thank Sean Lawton and Adam Sikora for useful conversations about character varieties.
Branching filtrations {#step1}
=====================
In this section we make use of the ordering on dominant weights to construct filtrations of the coordinate rings of character varieties and configuration spaces. These filtrations are not fine enough to give affine semigroup associated graded algebras; however, this construction reduces the problem to constructing such a filtration on $P_3(G)$ which is $T^3$-stable. Spin diagrams for the group $G$ emerge from this construction as labels for the graded components of the associated graded algebras we construct. We finish the section with an alternative $GIT$ construction of the character variety $\mathcal{X}(F_g, G)$ which makes the connection with spin diagrams more transparent.
Horospherical contraction and the algebra $\C[G]$
-------------------------------------------------
We briefly review the theory of horospherical contraction, due to Popov [@Po]. We choose a maximal torus $T \subset G$, with triangular decomposition $U_- T U_+ \subset G,$ and Weyl chamber $\Delta.$ Recall the Peter-Weyl theorem, which gives an isotypical decomposition of the vector space underlying the coordinate ring $\C[G].$
$$\C[G] = \bigoplus_{\lambda \in \Delta} V(\lambda) \otimes V(\lambda^*)\\$$
We let $b_{\lambda} \in V(\lambda)$ denote the highest weight vector with respect to $U_+$. Horospherical contraction relates $G$ to the affine variety $G/U$, which has coordinate ring $\C[G/U] = \bigoplus_{\lambda \in \Delta} V(\lambda) \otimes \C b_{\lambda^*} \subset \C[G]$. Multiplication in $\C[G/U]$ is computed by dualizing the map $C: V(\lambda + \eta) \to V(\lambda) \otimes V(\eta)$ which sends $b_{\lambda + \eta}$ to $b_{\lambda} \otimes b_{\eta}$.
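The smallest example may help fix conventions (a standard fact, added here for illustration).

```latex
For $G = SL_2(\C)$ and $U = U_+$ the upper triangular unipotent subgroup,
$G/U \cong \C^2 \setminus \{0\}$ and
$$\C[G/U] = \C[x, y] = \bigoplus_{n \geq 0} V(n),$$
where $V(n)$, the space of degree-$n$ forms, is the irreducible
representation of highest weight $n$, with highest weight vector
$b_n = x^n$.  Cartan multiplication $V(n) \otimes V(m) \to V(n + m)$ is
ordinary multiplication of forms, sending $b_n \otimes b_m$ to $b_{n+m}$.
```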
Recall that there is a natural partial ordering on the dominant weights $\lambda \in \Delta$, where $\lambda > \eta$ if $\lambda - \eta$ can be expressed as a non-negative sum of positive roots. This ordering induces a $G\times G$-stable filtration on $\C[G].$
\[hcontract\]\[Horospherical contraction\] The dominant weight filtration on $\C[G]$ induced by $\Delta$ has associated graded algebra isomorphic to $\C[T\backslash (G/U_+ \times U_- \backslash G)],$ where the $T-$action has isotypical spaces $V(\lambda) \otimes V(\lambda^*) \subset \C[G/U_+ \times U_- \backslash G]$.
This follows from Chapter $3,$ Section $15$ of [@G].
As $\C[T \backslash (G/U_+ \times U_- \backslash G)]$ is a domain, any prolongation of the partial ordering on $\Delta$ to a complete ordering which is compatible with addition of weights defines a strong filtration $\cup_{\lambda \in \Delta} F_{\leq \lambda} = \C[G]$, where $F_{\leq \lambda} = \bigoplus_{\eta \leq \lambda} V(\eta)\otimes V(\eta^*).$ There are many such prolongations; we define one below.
Let $G$ be a simple complex group, with Weyl chamber $\Delta,$ and simple coweights $H_{\alpha_1}, \ldots, H_{\alpha_r}$. The total order $\bold{<}$ is defined by lexicographically organizing the orderings defined by $\lambda(H_{\alpha_i}) \in \Z$.
Let $Lie(G) = \mathfrak{g} = \mathfrak{z} \oplus \bigoplus \mathfrak{g}_i,$ with $\mathfrak{g}_i$ a simple Lie algebra, and $\mathfrak{z}$ the Lie algebra of the center $Z \subset G$. The Weyl chamber $\Delta$ has a corresponding product decomposition $\mathfrak{z}^* \times \prod \Delta_i$. Now we can define $\bold{<}$ on $\Delta$ by ordering the $\mathfrak{g}_i$ and using the induced lexicographic organization of the orderings $\bold{<}_i.$ We then break ties with any lexicographic ordering on $\mathfrak{z}.$ The following is straightforward.
\[proh\] The total order $\bold{<}$ respects addition of weights, and refines the partial dominant weight ordering $<$.
Branching algebras
------------------
We apply horospherical contraction to obtain filtrations on the coordinate rings of a class of spaces $B(\phi)$ called branching varieties. There is one such variety for each map $\phi: H \to G$ of complex, connected reductive groups. The space $P_n(G)$ is recovered as the branching variety $B(\delta_n)$, where $\delta_n: G \to G^{n-1}$ is the diagonal embedding. We choose maximal unipotent subgroups $U_H \subset H$, $U_G \subset G,$ and Weyl chambers $\Delta_H, \Delta_G,$ and define $B(\phi)$ as the following affine $GIT$ quotient.
$$B(\phi) = H \backslash [H/U_H \times G/U_G]\\$$
Here the action of $H$ is defined through $\phi.$ The coordinate ring of $B(\phi)$ is graded by the multiplicity spaces $W(\mu, \lambda)$ of $H$ irreducible representations in the irreducible representations of $G,$ as branched over the map $\phi.$
$$V(\lambda) = \bigoplus_{\mu \in \Delta_H} W(\mu, \lambda) \otimes V(\mu)$$
$$\C[B(\phi)] = \bigoplus_{\mu, \lambda \in \Delta_H \times \Delta_G} W(\mu, \lambda)\\$$
The branching algebras $\C[B(\phi)]$ come with special filtrations defined by diagrams in the category of reductive groups. We let $\phi = \pi \circ \psi$ be a factorization of $\phi$.
$$\begin{CD}
H @>\psi>> K @>\pi>> G\\
\end{CD}$$
The map $\psi$ defines an action of $H$ on $K$, and $\pi$ defines an action of $K$ on $G$. Using these actions, we can identify $B(\phi)$ with the following $GIT$ quotient.
$$B(\phi) = H \times K \backslash [H/U_H \times K \times G/U_G]\\$$
The space $K \backslash K \times G/U_G$ is isomorphic to $G/U_G$, and likewise the resulting action of $H$ on $H/U_H \times G/U_G$ is induced through $\phi = \pi \circ \psi.$ The direct sum decomposition $\C[K] = \bigoplus_{\eta \in \Delta_K} V(\eta)\otimes V(\eta^*)$ induces a decomposition of $\C[B(\phi)]$.
$$\C[B(\phi)] = \bigoplus_{\lambda, \eta, \mu \in \Delta_G, \Delta_K, \Delta_H} W(\mu, \eta) \otimes W(\eta, \lambda)\\$$
This defines a $T_H \times T_G$-stable filtration $F^{\psi, \pi}$ of $\C[B(\phi)]$ by the dominant weights $\eta \in \Delta_K.$
$$F^{\psi, \pi}_{\leq \eta} = \bigoplus_{\mu, \gamma \leq \eta, \lambda} W(\mu, \gamma) \otimes W(\gamma, \lambda)\\$$
\[degrep\] The associated graded algebra of $F^{\psi, \pi}$ is the affine $GIT$ quotient $\C[T_K \backslash B(\pi) \times B(\psi)]$, where the isotypical spaces of $T_K$ are the $W(\mu, \gamma)\otimes W(\gamma, \lambda).$
The filtration $F^{\psi, \pi}$ is induced from the horospherical filtration on $\C[K]$. The $K\times K$ stability of horospherical contraction implies that the associated graded algebra is the domain $\C[B(\pi) \times B(\psi)]^{T_K}.$ This filtration is $T_H \times T_G$-stable by construction.
Notice that the associated graded algebra $\C[B(\pi) \times B(\psi)]^{T_K}$ has a residual algebraic action of $T_K.$
We finish this subsection by applying this construction to the diagonal map $\delta_n: G \to G^{n-1}.$ Recall that $B(\delta_n) = P_n(G)$. Let $\tree$ be a tree with two internal vertices and $n$ leaves labelled $0, \ldots, n-1.$ Let $k + 1$ be the number of edges incident on the vertex connected to the $0$ vertex, and $m$ be the number of leaves connected to the other internal vertex. This structure defines a factorization $\delta_n = (Id^{s} \times \delta_m \times Id^{t}) \circ \delta_k: G \to G^{n-1},$ where $s + t = k-1.$ Proposition \[degrep\] implies there is a filtration $F^{\tree}$ on $\C[P_n(G)]$, with associated graded algebra $\C[P_m(G) \times P_k(G)]^T$.
$$\begin{xy}
(0, 0)*{\bullet} = "A1";
(0, 10)*{\bullet} = "A2";
(0, 13)*{\delta_3};
(9, 15)*{\bullet} = "A3";
(-10, 15)*{\bullet} = "A4";
(-10, 12)*{\delta_3};
(-10, 25)*{\bullet} = "A5";
(-20, 10)*{\bullet} = "A6";
(3, 5)*{G};
(5, 17)*{G};
(-8, 22)*{G};
(-17, 15)*{G};
(-5, 15)*{G};
"A2"; "A1";**\dir{-}? >* \dir{>};
"A3"; "A2";**\dir{-}? >* \dir{>};
"A4"; "A2";**\dir{-}? >* \dir{>};
"A5"; "A4";**\dir{-}? >* \dir{>};
"A6"; "A4";**\dir{-}? >* \dir{>};
\end{xy}$$\
Given a trivalent tree $\tree$ with $n$ leaves, and an ordering on $E(\tree),$ we iterate this construction to obtain a filtration $F^{\tree}$ on $\C[P_n(G)]$.
\[treeweight\] For a trivalent tree $\tree$ with $n$ leaves, and an ordering on $E(\tree)$ there is a $T^n$-invariant filtration on $\C[P_n(G)]$ with associated graded algebra the coordinate ring of $P_{\tree}(G) = [\prod_{v \in V(\tree)} P_3(G)]/ T^{E(\tree)}.$
An ordering $E(\tree) = \{e_1, \ldots, e_{n-3}\}$ induces a length $n-3$ chain of collapsing maps on trees, $\pi_i: \tree_{i-1} \to \tree_i$, where $\tree_0 = \tree$, and $\tree_i$ is obtained from $\tree_{i-1}$ via $\pi_i$ by collapsing the edge $e_i.$ The tree $\tree_{n-3}$ has a single internal vertex, and $\tree_{n-4}$ has a single internal edge. By the previous construction there is a $T^n$-stable filtration on $\C[P_n(G)]$ with associated graded algebra $P_{\tree_{n-4}}(G) = [P_{v(u)}(G) \times P_{v(w)}(G)]/T$, where $v(u)$ and $v(w)$ are the valences of the two internal vertices $u, w \in V(\tree_{n-4})$. The map $\pi_{n-4}$ collapses the edge $e_{n-4}$ to either $u$ or $w$, yielding a corresponding filtration on $\C[P_{v(u)}(G)]$ or $\C[P_{v(w)}(G)].$ This filtration is invariant with respect to the $T$ above, and so induces a filtration on $\C[P_{v(u)}(G) \times P_{v(w)}(G)]/T$. We can now apply Proposition \[compositeprop\] to obtain a filtration on $\C[P_n(G)]$. Continuing this way, we obtain the proposition.
Character varieties and the master configuration space
------------------------------------------------------
Next we show that a similar family of filtrations can be constructed for the character variety $\mathcal{X}(F_g, G)$. This variety is constructed as the following $GIT$ quotient.
$$\mathcal{X}(F_g, G) = G^g/_{ad} G\\$$
Here the $ad$ subscript indicates the adjoint action of $G$ on the product: $g \circ_{ad} (x_1, \ldots, x_g) =$ $(gx_1g^{-1}, \ldots, gx_gg^{-1}).$ The coordinate ring $\C[\mathcal{X}(F_g, G)]$ is therefore the algebra of adjoint $G$ invariants in $\C[G^g].$ By Proposition \[hcontract\], the horospherical contraction of $\C[G]$ to $\C[T \backslash (G/U_+ \times U_- \backslash G)]$ is $G\times G$ invariant, therefore we may place the induced $G\times G$-stable filtration on $\C[\mathcal{X}(F_g, G)]$ to obtain the following.
\[characterdegconfig\] There is a filtration on $\C[\mathcal{X}(F_g, G)]$ with associated graded ring equal to the coordinate ring of $[ T \backslash (G/U_+ \times U_- \backslash G)]^g /_{ad} G = P_{2g}(G)/T^g.$ Here the invariants of the torus $T^g$ are the tensor products $(V(\lambda_1) \otimes \ldots \otimes V(\lambda_{2g}))^G$, where $\lambda_{2k-1}^* = \lambda_{2k}.$
Now that we have connected the character variety $\mathcal{X}(F_g, G)$ with the master configuration space $P_{2g}(G),$ we may use the valuations we constructed with Proposition \[treeweight\].
\[characterstep1\] For every choice of a trivalent graph $\Gamma$, spanning tree $\tree \subset \Gamma$, an ordering on $E(\Gamma)$ and an orientation on $E(\Gamma) \setminus E(\tree) = \{e_1, \ldots, e_g\}$, there is a filtration on $\C[\mathcal{X}(F_g, G)]$ with associated graded ring the coordinate ring of $P_{\Gamma}(G) = [\prod_{v \in V(\Gamma)} P_3(G)]/T^{E(\Gamma)}.$
We identify the ordered, oriented edges $e_1, \ldots, e_g$ with the components of $G^g$, where the orientation distinguishes the left and right hand sides of each component. The filtration above then yields associated graded algebra $\C[P_{2g}(G)/T^g]$. We split each edge $e_i$ into two edges $f_{2i-1}, f_{2i}$, and build the trivalent tree $\tree'$ using the topology of the spanning tree $\tree.$ This defines a filtration on $\C[P_{2g}(G)]$ with associated graded algebra $\C[P_{\tree'}(G)] = [\prod_{v \in V(\Gamma)} P_3(G)]/T^{E(\tree')}.$ By the $T^{2g}$ stability of the filtration and Proposition \[compositeprop\], we may now induce a filtration on $\C[\mathcal{X}(F_g, G)]$ with associated graded algebra the quotient $P_{\Gamma}(G) = P_{\tree'}(G)/T^g.$
Graph construction of character varieties
-----------------------------------------
We present an alternative construction of the variety $\mathcal{X}(F_g, G)$, which motivates the graph filtrations of Proposition \[characterstep1\]. We fix a trivalent graph $\Gamma$ with no leaves, and consider the forest $\hat{\Gamma}$ obtained by splitting each edge in $E(\Gamma).$ This construction has also been discovered by Florentino and Lawton, [@FL].
We associate a copy of $M_3(G) = G \backslash G^3$ to each connected component of $\hat{\Gamma}.$ We then act on the product $\prod_{v \in V(\hat{\Gamma})} M_3(G)$ with $E(\Gamma)$ copies of $G$, where the component corresponding to $e \in E(\Gamma)$ acts on the right hand sides of the two components associated to the pair of edges in $\hat{\Gamma}$ obtained by splitting $e$. We define $M_{\Gamma}(G)$ to be the GIT quotient by this action.
$$M_{\Gamma}(G) = [\prod_{v \in V(\Gamma)} M_3(G)]/G^{E(\Gamma)}$$
For each choice of a spanning tree $\tree \subset \Gamma$, an ordering on the edges $\vec{e} = E(\Gamma) \setminus E(\tree),$ and an orientation of each edge in $E(\Gamma),$ there is an isomorphism $\Phi_{\tree, \vec{e}}: M_{\Gamma}(G) \to \mathcal{X}(F_g, G).$
We split the edges $e_i \in \vec{e} \subset E(\Gamma)$ into pairs $e_{2i-1}, e_{2i}$, ordered using the orientation on $e_i$; this gives a trivalent tree $\tree'.$ We construct $M_{\tree'}(G) = \prod_{v \in V(\Gamma)} M_3(G) / G^{E(\tree')},$ and note that $M_{\tree'}(G)/G^{\vec{e}} = M_{\Gamma}(G).$ We will prove that $M_{\tree'}(G) \cong G \backslash G^{2g}$ in Lemma \[treecontract\]. We use the isomorphism $G_{2i-1}\times G_{2i}/G = G,$ $(g_{2i-1}, g_{2i}) \to g_{2i-1}g_{2i}^{-1}.$ This intertwines the left diagonal $G$ action on $G^{2g}$ with the adjoint action on $G^g.$
Note that we can define $M_{\Gamma}(G)$ for any graph, regardless of the valence of the vertices. In this sense we consider $M_{\tree}(G)$ for non-trivalent trees in the following lemma.
\[treecontract\] For any tree $\tree$ with $n$ leaves, each orientation on the edges of $\tree$ gives an isomorphism to the left quotient $M_{\tree}(G) \cong G \backslash G^n.$
It suffices to treat the case where $\tree$ has one internal edge $e,$ as this calculation can then be iterated to show the result by induction. Let $\partial(e) = \{u, w\}$, with the orientation pointing from $u$ to $w.$ We view $M_{\tree}(G)$ as $G^{V(u)} \times G^{V(w)}$ with a left action of $G \times G$ and a right action of $G$ on two components $G_{e, u} \subset G^{V(u)}, G_{e, w} \subset G^{V(w)}.$ We use the map $G^{V(u)} \times G^{V(w)} \to G^{V(u) + V(w) - 2}$ given by the following.
$$(g_1, \ldots, g_{V(u)-1}, g_{e, u}) \times (g_{e, w}, g_{V(u) +2}, \ldots, g_{V(w)}) \to$$
$$(g_1, \ldots, g_{V(u)-1}, g_{e, w}g_{e, u}^{-1}, g_{V(u) +2}, \ldots, g_{V(w)}) \to$$
$$(g_1, \ldots, g_{V(u)-1}, g_{e, u}g_{e, w}^{-1}g_{V(u) +2}, \ldots, g_{e, u}g_{e, w}^{-1}g_{V(w)})$$
This map is $G \times G \times G$-equivariant, where the first component acts diagonally on the left of $ G^{V(u) + V(w) - 2}$, and the second and third components act trivially. Quotienting everything by $G\times G\times G$ then yields the isomorphism.
We may also view $M_{\Gamma}(G)$ as the following quotient.
$$\prod_{v \in V(\Gamma)} M_3(G) / G^{E(\Gamma)} = G^{V(\Gamma)} \backslash \prod_{v \in V(\Gamma)} G^3 / G^{E(\Gamma)}$$
$$= G^{V(\Gamma)} \backslash \prod_{e \in E(\Gamma)} [(G\times G)/G] = G^{V(\Gamma)} \backslash G^{E(\Gamma)}.$$
We can now recover Proposition \[characterstep1\] by replacing the rightmost term with the horospherical degeneration $G^{V(\Gamma)} \backslash \prod_{e \in E(\Gamma)} [G/U_- \times U_+ \backslash G]/T = P_{\Gamma}(G).$
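The group-theoretic identity underlying $\Phi_{\tree, \vec{e}}$ and Lemma \[treecontract\] — that $(g_1, g_2) \mapsto g_1g_2^{-1}$ intertwines diagonal left translation with conjugation — holds in any group, and can be checked numerically. A minimal stdlib-only sketch over integer $SL_2$ matrices (an informal check, with all helper names ours):

```python
def mul(a, b):  # product of 2x2 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv(a):  # inverse of a 2x2 matrix with determinant 1
    return [[a[1][1], -a[0][1]], [-a[1][0], a[0][0]]]

def phi(g1, g2):  # the map (g_1, g_2) -> g_1 g_2^{-1}
    return mul(g1, inv(g2))

h = [[1, 1], [0, 1]]
g1 = [[1, 0], [2, 1]]
g2 = [[1, 3], [0, 1]]

# Diagonal left translation by h on the pair corresponds to
# conjugation by h on phi(g1, g2).
lhs = phi(mul(h, g1), mul(h, g2))
rhs = mul(mul(h, phi(g1, g2)), inv(h))
print(lhs == rhs)  # True
```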
Valuations from the dual canonical basis {#step2}
========================================
In the previous section we established that for each graph $\Gamma$ (resp. tree $\tree$) with compatible information, the coordinate rings $\C[\mathcal{X}(F_g, G)]$ and $\C[P_n(G)]$ have the following direct sum decompositions, where $W(\lambda, \eta, \mu)$ denotes the invariant vectors in $V(\lambda) \otimes V(\eta) \otimes V(\mu)$.
$$\C[\mathcal{X}(F_g, G)] = \bigoplus_{\lambda: E(\Gamma) \to \Delta} \bigotimes_{v \in V(\Gamma)} [W(\lambda(v, i), \lambda(v, j), \lambda(v, k))]$$
$$\C[P_n(G)] = \bigoplus_{\lambda: E(\tree) \to \Delta} \bigotimes_{v \in V(\tree)} [W(\lambda(v, i), \lambda(v, j), \lambda(v, k))]$$
Here $(v, i)$ denotes a vertex $v$ with incident edge $i$. In this section we show how to obtain finer combinatorial pictures of $\C[\mathcal{X}(F_g, G)]$ and $\C[P_n(G)]$ by structuring the intertwiner spaces $W(\lambda, \eta, \mu)$ using the dual canonical basis.
String parameters and polytopes for tensor product multiplicities
-----------------------------------------------------------------
We recall the construction of polyhedral cones $C_{\bold{i}}(3)$ which control tensor product multiplicities for a reductive group $G.$ We take $G$ to be semisimple, but we will later remove this restriction. Lusztig [@Lu] constructs a basis $\mathbb{B}$ of the subalgebra $\mathcal{U}_q(\mathfrak{u}_+)$ of the quantized universal enveloping algebra $\mathcal{U}_q(\mathfrak{g}).$ Specialization at $q = 1$ yields the canonical basis for each irreducible representation $V(\lambda) \subset \mathcal{U}(\mathfrak{u}_+)$. The dual pairing between $\mathcal{U}(\mathfrak{u}_+)$ and $\C[U_+]$ induces a dual basis $B(\lambda^*) \subset V(\lambda^*) \subset \C[U_+]$. A basis $B \subset \C[U_- \backslash G]$ can then be constructed by taking the union $B = \coprod_{\lambda} B(\lambda) \times \{\lambda\}$.
We fix a reduced decomposition $\bold{i} \in R(w_0)$. The entries of $\bold{i}$ correspond to simple roots $\alpha_{i_1}, \ldots, \alpha_{i_N}$, which in turn correspond to raising operators $e_{i_1}, \ldots, e_{i_N} \in \mathfrak{u}_+.$ These are used to define a function $w_{\bold{i}}: \C[U_+] \to \Z_{\geq 0}^N$ as follows. First we compute $t_1 = \min\{t \mid e_{i_1}^{t + 1} \circ_{\ell} f = 0\}$; this gives us the first $\bold{i}-$string parameter, as well as a new function $f_1 = e_{i_1}^{t_1} \circ_{\ell} f$. We then perform the same construction with $e_{i_2}$ and $f_1$, then $e_{i_3}$ and $f_2$, and so on. This process produces a vector $w_{\bold{i}}(f) \in \Z_{\geq 0}^N.$
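The iterative extraction of string parameters can be prototyped directly when the raising operators are given as nilpotent matrices acting on a representation (a toy model of the action on $\C[U_+]$; all names in the sketch are ours). Here we take the standard representation of $\mathfrak{sl}_3$ and the reduced word $\bold{i} = (1, 2, 1)$:

```python
def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def string_parameters(ops, v):
    """For each nilpotent raising operator e, taken in the order given by
    the reduced word, find the largest t with e^t v != 0, record t, and
    replace v by e^t v before moving to the next operator."""
    params = []
    for e in ops:
        t = 0
        while any(matvec(e, v)):
            t += 1
            v = matvec(e, v)
        params.append(t)
    return params

# sl3 raising operators e1 = E_{12}, e2 = E_{23}; reduced word i = (1, 2, 1)
e1 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
e2 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
v_low = [0, 0, 1]  # lowest weight vector of the standard representation
print(string_parameters([e1, e2, e1], v_low))  # [0, 1, 1]
```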
The function $w_{\bold{i}}$ is a valuation on $\C[U_+]$ when the string parameters are lex ordered first to last.
By convention we set $w_{\bold{i}}(0) = -\infty,$ and for any non-zero $C \in \C$, $w_{\bold{i}}(C) = 0.$ For $f, g \in \C[U_+]$ let $e_{i_k}$ be the first raising operator for which the string parameters differ, with say $t_k(f) > t_k(g)$. By definition, $w_{\bold{i}}(f + g) = w_{\bold{i}}(f).$ If no such $k$ exists, then $w_{\bold{i}}(f + g) \leq w_{\bold{i}}(f) = w_{\bold{i}}(g)$.
Applying $e_{i_1}^M$ to $fg$ yields the following.
$$e_{i_1}^M(fg) = \sum_{p + q = M} \binom{M}{p} e_{i_1}^p(f)e_{i_1}^q(g)$$
If $M > t_1(f) + t_1(g),$ all terms in this sum must vanish. If $M \leq t_1(f) + t_1(g),$ then the fact that $\C[U_+]$ is a domain implies that this sum is non-zero, as this is the case for $M = t_1(f) + t_1(g),$ where the sum has exactly one term, a multiple of $e_{i_1}^{t_1(f)}(f)e_{i_1}^{t_1(g)}(g).$ We may repeat this calculation with $e_{i_2}$ on this term. By induction this yields $w_{\bold{i}}(fg) = w_{\bold{i}}(f) + w_{\bold{i}}(g).$
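The multiplicativity argument can be seen in miniature for $G = SL_2$, where $\C[U_+] = \C[x]$ and the single raising operator acts (up to normalization) as $d/dx$: the string parameter of $f$ is its degree, and additivity of $w_{\bold{i}}$ on products reduces to $\deg(fg) = \deg f + \deg g$. A small sketch of this toy case (helper names ours):

```python
def ddx(p):  # derivative of a polynomial given by coefficients [c0, c1, ...]
    return [i * c for i, c in enumerate(p)][1:]

def t1(p):
    """Largest t with e^t p != 0, for e = d/dx acting on C[x]."""
    t = 0
    while any(ddx(p)):
        t += 1
        p = ddx(p)
    return t

def mulpoly(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

f, g = [1, 2, 3], [0, 1]  # f = 1 + 2x + 3x^2, g = x
print(t1(f), t1(g), t1(mulpoly(f, g)))  # 2 1 3
```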
We obtain a valuation $v_{\bold{i}}$ on the coordinate ring $\C[U_- \backslash G] \subset \C[U_+ \times T]$, with image in $\Z^N \times \Delta$, by breaking ties in the $<$ ordering on $\Delta$ with $w_{\bold{i}}$. Berenstein and Zelevinsky [@BZ1] and Alexeev and Brion [@AB] give inequalities for the image of this valuation using devices derived from the fundamental representations $V(\omega_i)$ of the Langlands dual group $\breve{G}$, called $\bold{i}-$trails. An $\bold{i}-$trail from a weight $\gamma$ to a weight $\eta$ in the weight polytope of a representation $V$ is a sequence of weights $(\gamma = \gamma_0, \gamma_1, \ldots, \gamma_{\ell-1}, \gamma_{\ell} = \eta),$ such that consecutive differences of weights are integer multiples of simple roots from $\bold{i}$, $\gamma_{k-1} - \gamma_k = c_k \alpha_{i_k},$ and the application of the raising operators $e_{i_1}^{c_1} \circ \ldots \circ e_{i_{\ell}}^{c_{\ell}}: V_{\eta} \to V_{\gamma}$ is non-zero. For any $\bold{i}-$trail $\pi,$ Berenstein and Zelevinsky define $d_k(\pi) = \frac{1}{2}(\gamma_{k-1} + \gamma_k)(H_{\alpha_{i_k}}).$ In what follows, the entries of the Cartan matrix $A$ are denoted $a_{ij},$ and the element of the Weyl group $W$ corresponding to $\alpha_i$ is denoted $s_i.$ For the following see [@AB], [@K], and [@BZ1].
\[flagimage\] The image $v_{\bold{i}}(B) = v_{\bold{i}}(\C[U_- \backslash G])$ is equal to the integral points in a convex polyhedral cone $C_{\bold{i}} \subset \Z^N \times \Delta,$ defined by the following inequalities.
1. $\sum_k d_k(\pi) t_k \geq 0$ for any $\bold{i}-$trail $\omega_j \to w_0s_j\omega_j$ in $V(\omega_j),$ for all fundamental weights $\omega_j$ of the dual Langlands group.\
2. $t_k \leq \lambda(H_{\alpha_{i_k}}) -\sum_{\ell = k+1}^N a_{i_{\ell}, i_k} t_{\ell}$ for $k = 1, \ldots, N.$\
Any set $e_1, \ldots, e_m$ of $k-$linear nilpotent operators on a $k-$algebra $A$ defines a valuation in this way. It would be very useful to know general sufficient conditions for the body $v_{\vec{e}}(A) \subset \R_{\geq 0}^m$ to be polyhedral.
For $b \in B$, with $v_{\bold{i}}(b) = (\lambda, \vec{t}) \in C_{\bold{i}}$ the tuple $\vec{t} \in \Z^N$ is called the string parameter of $b$ associated to $\bold{i}.$ As constructed, the basis $B$ is composed of $T\times T$-weight vectors, with weights $(\sum t_i\alpha_i - \lambda, \lambda).$ In particular, $v_{\bold{i}}$ is a $T\times T$ stable valuation. The filtration $F^{\bold{i}}$ corresponding to this valuation is given by $T\times T$-stable subspaces $F^{\bold{i}}_{\leq (\vec{t}, \lambda)} \subset \C[U_- \backslash G]$, each of which has a basis of those $b \in B$ with $v_{\bold{i}}(b) \leq (\vec{t}, \lambda).$
Now we consider the spaces $V_{\beta, \gamma}(\lambda) \subset V(\lambda),$ defined as the collection of those vectors of weight $\gamma$ which are annihilated by the raising operators $e_i^{\beta(H_{\alpha_i}) + 1}.$ The basis $B$ has the “good basis” property (see [@Mat]); this implies that $B_{\beta, \gamma}(\lambda) = B \cap V_{\beta, \gamma}(\lambda)$ is a basis for the space $V_{\beta, \gamma}(\lambda)$. In the case $\beta = \eta$ and $\gamma = \mu^* - \eta$ this space is classically known to be isomorphic to the invariant space $W(\mu, \lambda, \eta)$ (see [@Zh]; we will also provide a proof in the next subsection). Berenstein and Zelevinsky characterize the string parameters $\vec{t}$ corresponding to the $b \in B_{\eta, \mu^* - \eta}(\lambda)$ as follows.
\[tenspolytope\] The decomposition $\bold{i}$ gives a labelling of $B_{\eta, \mu^* - \eta}(\lambda)$ by the points in $\Z_{\geq 0}^N$ such that the following hold.
1. $\sum_k d_k(\pi)t_k \geq 0$ for any $\bold{i}-$trail from $\omega_j$ to $w_0 s_j\omega_j$ in $V(\omega_j),$ for all fundamental weights $\omega_j$ of the dual Langlands group.\
2. $-\sum_k t_k \alpha_k + \lambda + \eta = \mu^*$\
3. $\sum_k d_k(\pi) t_k \geq \eta(H_{\alpha_j})$ for any $\bold{i}-$trail from $s_j\omega_j$ to $w_0\omega_j$ in $V(\omega_j),$ for all fundamental weights $\omega_j$ of the dual Langlands group.\
4. $t_k + \sum_{j > k} a_{i_k, i_j} t_j \geq \lambda(H_{\alpha_{i_k}})$\
These are the integral points in a polytope $C_{\bold{i}}(\mu, \lambda, \eta).$
The first and last conditions say that $(\lambda, \vec{t})$ is a member of $C_{\bold{i}}$ in the fiber over the weight $\lambda,$ the second condition says that the basis members lie in the weight $\mu^* - \eta$ subspace of $V(\lambda),$ and the third condition says that the appropriate raising operators $e_i^{\eta(H_{\alpha_i})+1}$ annihilate them. We realize these polytopes as slices of the following polyhedral cone.
For a string parameterization $\bold{i},$ the cone $C_{\bold{i}}(3)$ is defined by the following inequalities on $(\lambda, \vec{t}, \eta) \in C_{\bold{i}} \times \Delta \subset \Delta \times \Z_{\geq 0}^N \times \Delta.$
1. $\sum_k d_k(\pi)t_k \geq 0$ for any $\bold{i}-$trail from $\omega_j$ to $w_0 s_j\omega_j$ in $V(\omega_j),$ for all fundamental weights $\omega_j$ of the dual Langlands group.\
2. $-\sum_k t_k \alpha_k + \lambda + \eta \in \Delta$\
3. $\sum_k d_k(\pi) t_k \geq \eta(H_{\alpha_j})$ for any $\bold{i}-$trail from $s_j\omega_j$ to $w_0\omega_j$ in $V(\omega_j),$ for all fundamental weights $\omega_j$ of the dual Langlands group.\
4. $t_k + \sum_{j > k} a_{i_k, i_j} t_j \geq \lambda(H_{\alpha_{i_k}})$\
We let $\pi_1(\lambda, \vec{t}, \eta) = (-\sum_k t_k \alpha_k + \lambda + \eta)^*,$ $\pi_2(\lambda, \vec{t}, \eta) = \lambda$, and $\pi_3(\lambda, \vec{t}, \eta) = \eta.$ By construction $C_{\bold{i}}(\mu, \lambda, \eta)$ is the fiber of these maps over $(\mu, \lambda, \eta).$
The tensor product ring map
---------------------------
The $T\times T-$stable subspace $\bigoplus V_{\eta, \mu^*-\eta}(\lambda)t^{\eta}$ $\subset \C[U_- \backslash G \times T]$ inherits a basis $B_3$ from $B \times \Delta \subset \C[U_- \backslash G \times T]$. In this subsection we show that this space is an algebra, isomorphic to $\C[P_3(G)]$. As a result we will obtain both a basis $B_3 \subset \C[P_3(G)]$ and a $T^3-$invariant valuation $v_{\bold{i},3}: \C[P_3(G)] \to C_{\bold{i}}(3)$ with $v_{\bold{i}, 3}(B_3)$ equal to the integer points in $C_{\bold{i}}(3).$
We use the isomorphism $P_3(G) \cong (U_- \backslash G \times U_- \backslash G \times G/U_+)/G \cong (U_- \backslash G \times U_- \backslash G)/U_+.$ Under this map, the torus $T^3$ which acts on $W(\mu, \lambda, \eta) \subset \C[P_3(G)]$ with character $(\mu, \lambda, \eta)$ corresponds to a torus $T_1 \times T_2 \times T_3$ acting on $(U_- \backslash G \times U_- \backslash G)/U_+$. Here, the tori $T_2$ and $T_3$ act on the left hand sides of the components $U_- \backslash G \times U_- \backslash G$ through the inverse, and the torus $T_1$ acts diagonally on these components through the dual. We make use of the following commutative diagram of affine varieties.
$$\begin{CD}\label{birat}
[T \times U_+ \times T \times U_+]/U_+ @<\pi<< T \times U_+ \times T \\
@VVV @VVV\\
[U_- \backslash G \times U_- \backslash G]/U_+ @<<< U_- \backslash G \times T\\
\end{CD}$$
The top row is the map $\pi: (s, u, t) \to (s, u, t, Id)$; this is an isomorphism with inverse $(s, x, t, y) \to (s, xy^{-1}, t)$. Notice that this map intertwines the left $U_+ \times U_+$ action on $(U_+ \times U_+)/ U_+$ with the left and right actions of $U_+$ on itself. The bottom row is defined similarly, and the vertical arrows are given by the map $(s, u) \to su \in TU_+ \subset U_- \backslash G.$ Note that the map $\pi$ also intertwines the actions of $T_1 \times T_2 \times T_3$ on the two rows.
We consider both the left $\circ_{\ell}$ and right $\circ_r$ actions of $U_+$ on itself and its coordinate ring. The irreducible representation $V(\lambda)$ has the following description as a subspace of $\C[U_+]$ ([@Mat], [@Zh]).
$$V(\lambda) = \{f \in \C[U_+] | e_i^{\lambda(H_{\alpha_i}) +1} \circ_{\ell} f = 0\}$$
Here $1 \in \C[U_+]$ is identified with the highest weight vector $v_{\lambda} \in V(\lambda).$ We let $V_{\eta}(\lambda) \subset V(\lambda)$ denote the space of functions $f$ which satisfy $e_i^{\eta(H_{\alpha_i}) + 1} \circ_r f = 0.$
The following diagram commutes, and the top row is an isomorphism of vector spaces.
$$\begin{CD}
(V(\lambda) \otimes V(\eta))^{U_-} @>>> V_{\eta}(\lambda)\\
@VVV @VVV\\
\C[(T \times U_+ \times T \times U_+)/U_+ ] @>\pi^*>> \C[T \times U_+ \times T]\\
\end{CD}$$
We take a function $f \in V(\lambda) \otimes V(\eta) \subset \C[(T \times U_+ \times T \times U_+)/U_+]$ and analyze the pullback $\pi^*(f).$ The function $f$ satisfies the equations $e_i^{\lambda(H_{\alpha_i}) + 1} \circ_{\ell} f = 0$ in the first $U_+$ component and $e_i^{\eta(H_{\alpha_i}) + 1} \circ_{\ell} f = 0$ in the second. By definition of $\pi$, these equations are satisfied if and only if $e_i^{\lambda(H_{\alpha_i}) + 1} \circ_{\ell} \pi^*(f) = 0$, and $e_i^{\eta(H_{\alpha_i}) + 1} \circ_r \pi^*(f) = 0.$
Now we consider what happens when $f \in (V(\lambda)\otimes V(\eta))^{U_+}$ has weight $\mu^*$; this is the case when $f$ represents an intertwiner $V(\mu^*) \to V(\lambda) \otimes V(\eta).$ In the coordinate ring $\C[G] = \bigoplus_{\lambda \in \Delta} V(\lambda) \otimes V(\lambda^*)$, specialization at $Id$ is contraction of $V(\lambda)$ against the dual $V(\lambda^*).$ The coordinate ring $\C[U_- \backslash G] \subset \C[G]$ is the subalgebra of spaces $\C v_{-\eta} \otimes V(\eta);$ it therefore follows that $\pi^*(f) \in \C[U_+]$ is the coefficient of the $v_{\eta}$ component of $f$. Since $f$ was chosen to have weight $\mu^*$, $\pi^*(f) \in V_{\eta}(\lambda)$ must be a $\mu^* - \eta$ weight vector.
The space $W(\mu, \lambda, \eta) \cong (V(\lambda) \otimes V(\eta))^{U_-}_{\mu^*}$ is isomorphic to $V_{\eta, \mu^* - \eta}(\lambda).$
We have already established a $1-1$ map $\pi^*: (V(\lambda) \otimes V(\eta))^{U_+}_{\mu^*} \to V_{\eta, \mu^* - \eta}(\lambda)$. To show that this map is also onto, we observe that $(V(\lambda) \otimes V(\eta))^{U_+}$ is a direct sum of its dominant weight spaces, each of which maps to a distinct $V_{\eta, \mu^* - \eta}(\lambda) \subset V_{\eta}(\lambda),$ and that $(V(\lambda) \otimes V(\eta))^{U_+} \cong V_{\eta}(\lambda).$
The torus $T_1 \times T_2 \times T_3$ acts on the space $V_{\eta, \mu^* - \eta}(\lambda)$ with character $((\mu^* - \eta + \eta)^*, \lambda, \eta) =(\mu, \lambda, \eta).$ We have now established a $T^3-$stable map of algebras, identifying $\C[P_3(G)]$ with the subspace $\bigoplus_{\mu, \lambda, \eta} V_{\eta, \mu^* - \eta}(\lambda)t^{\eta} \subset \C[U_- \backslash G \times T].$
\[3tensor\] For each $\bold{i}$ there is a $T^3$ stable valuation $v_{\bold{i}, 3}$ on $\C[P_3(G)]$, with associated graded ring equal to the affine semigroup algebra $\C[C_{\bold{i}}(3)].$ The torus $T_1 \times T_2 \times T_3$ acts on $\C[C_{\bold{i}}(3)]$ with characters $\pi_1, \pi_2, \pi_3: C_{\bold{i}}(3) \to \Delta$. Furthermore, $v_{\bold{i}, 3}(B_3)$ coincides with the integer points in $C_{\bold{i}}(3).$
The valuation $v_{\bold{i}, 3}$ is constructed from $\bold{<}$ on $\Delta$ and $v_{\bold{i}}$ on $\C[U_- \backslash G].$ It is invariant with respect to $T_1 \times T_2 \times T_3,$ because $v_{\bold{i}}, \bold{<}: \C[U_- \backslash G \times T] \to C_{\bold{i}}$ is $T^4$-invariant. By construction the character with respect to the torus action on $\C[C_{\bold{i}}(3)]$ corresponds to the maps $\pi_1, \pi_2, \pi_3$. The image $v_{\bold{i}, 3}(\C[P_3(G)])$ is then the image of those $b\otimes t^{\eta} \in \C[U_-\backslash G \times T]$ which lie in $\bigoplus_{\mu, \lambda, \eta} V_{\eta, \mu^* - \eta}(\lambda)t^{\eta}$ under $v_{\bold{i}}, \bold{<}$. This coincides with the integer points in $C_{\bold{i}}(3)$ by construction.
This exposition has been for the semisimple case, but as in [@AB], everything can be generalized readily to the reductive case. The weights that define a non-zero $W(\mu, \lambda, \eta)$ are of the form $\mu' + \tau_1,$ $\eta' + \tau_2$ and $\lambda' + \tau_3$ where the $\tau_i$ are characters of the center $Z(G)$ with $\tau_1 + \tau_2 + \tau_3 = 0,$ and $\mu', \eta', \lambda'$ are dominant weights of the semisimple part of $G.$ The subspace $V_{\eta, \mu - \eta}(\lambda)$ is the same as the subspace $V_{\eta', \mu' - \eta' + (\tau_1 - \tau_2)}(\lambda' + \tau_3) = V_{\eta', \mu' -\eta' + \tau_3}(\lambda' + \tau_3) = V_{\eta', \mu' - \eta'}(\lambda')\otimes \C\tau_3.$ So this space inherits the subset of the dual canonical basis of the semisimple part of $G$ coming from $V_{\eta', \mu' - \eta'}(\lambda')$ tensored with the character $\tau_3.$ Everything else goes through as above after a total order has been chosen on the characters of the center $\mathfrak{z} \subset \mathfrak{g}$.
Proof of Theorems \[character\] and \[configuration\]
======================================================
We can now construct filtrations on $\C[\mathcal{X}(F_g, G)]$ and $\C[P_n(G)]$ with toric associated graded algebras $\C[C_{\bold{i}}(\Gamma)]$ and $\C[C_{\bold{i}}(\tree, \vec{\lambda})].$ We focus on the filtration of the algebra $\C[\mathcal{X}(F_g, G)]$: we choose $\Gamma$, with a total ordering on $E(\Gamma)$, a total ordering on $V(\Gamma)$, and an assignment $\bold{i}: V(\Gamma) \to R(w_0).$
The efforts of Section \[step1\] give a filtration on $\C[\mathcal{X}(F_g, G)]$ by $(\Delta, <)^{E(\Gamma)}$ with associated graded algebra $\C[P_{\Gamma}(G)].$ In Section \[step2\] we construct a $T^3$ invariant valuation on $\C[P_3(G)]$ with associated graded algebra $\C[C_{\bold{i}}(3)].$ We use the ordering on $V(\Gamma)$ and the assignment $\bold{i}: V(\Gamma) \to R(w_0)$ to define a full rank, $T^{E(\Gamma)}-$stable valuation on $\bigotimes_{v \in V(\Gamma)} \C[P_3(G)]$ with associated graded algebra $\bigotimes_{v \in V(\Gamma)} \C[C_{\bold{i}(v)}(3)]$. Passing to $T^{E(\Gamma)}$ invariants gives a filtration on $\C[P_{\Gamma}(G)]$, and by Theorem \[3tensor\] the associated graded algebra is the semigroup algebra of the following polyhedral cone.
We let $\pi_{v, i}$ be the projection map on $C_{\bold{i}(v)}(3)$ defined by the edge $i$ incident on the vertex $v \in V(\Gamma).$
We define $C_{\bold{i}}(\Gamma)$ to be the toric fiber product cone in $\prod_{v \in V(\Gamma)} C_{\bold{i}(v)}(3)$ defined by the conditions $\pi_{v, i} = \pi_{u, i}^*$ for all edges $i$ with endpoints $u, v$.
Now Theorem \[character\] follows from Proposition \[compositeprop\]. The same program can be carried out on the algebra $\C[P_n(G)]$, giving a $T^n$-stable valuation with associated graded algebra $\C[C_{\bold{i}}(\tree)].$ Theorem \[configuration\] then follows by specializing the weights at the leaves of $\tree$ to $\vec{\lambda},$ using Theorem \[3tensor\]. The following are also immediate.
\[configbasis\] For every choice of a trivalent tree $\tree$ with $n$ ordered leaves, and an assignment $\bold{i}: V(\tree) \to R(w_0)$ we have
1. A basis $B(\tree, m\vec{\lambda}) \subset H^0(P_{\vec{\lambda}^*}(G), \mathcal{L}(m\vec{\lambda}^*)) =$ $(V(\lambda_1) \otimes \ldots \otimes V(\lambda_n))^G$,\
2. A labelling $v_{\tree, \bold{i}}: B(\tree, m\vec{\lambda}) \to C_{\bold{i}}(\tree, m\vec{\lambda}) \subset (\Delta \times \Z_{\geq 0}^N \times \Delta)^{V(\tree)}$.\
We conclude by remarking that the images of the valuations we’ve constructed coincide with all of the lattice points in their corresponding convex bodies.
The integer points in $C_{\bold{i}}(\Gamma)$ (resp. $C_{\bold{i}}(\tree)$) are in bijection with the images of the induced valuations $v_{\bold{i}, \Gamma}$ and $v_{\bold{i}, \tree},$ respectively.
Each basis member of $\C[\mathcal{X}(F_g, G)]$ gives an element of $C_{\bold{i}}(\Gamma)$ by the constructions in Sections \[step1\], \[step2\]. If $\vec{t} \in C_{\bold{i}}(\Gamma)$ is an integer point, it is likewise an integer point in $(\Delta \times \Z_{\geq 0}^N \times \Delta)^{V(\Gamma)}$, and therefore corresponds to a product $\otimes_{v \in V(\Gamma)} b_{v}$ of dual canonical basis elements with compatible dominant weight data, which is in $B(\Gamma)$ by construction.
Examples {#example}
========
We describe the cones $C_{\bold{i}}(3)$ for $G= SL_m(\C)$ and all rank $2$ simple groups. For $G = SL_m(\C)$ we describe particular instances of $B_{\bold{i}}(\Gamma), B_{\bold{i}}(\tree).$ The inequalities we present are culled from both [@BZ2] and the treatment by Littelmann [@Li].
Type A
------
For $G = SL_m(\C)$, we take $\bold{i}$ to be the “nice” decomposition (see [@Li]).
$$w_0 = s_1(s_2s_1)(s_3s_2s_1)\ldots (s_{m-1}\ldots s_1)$$
The polyhedron $C_{\bold{i}}(3)$ is then the cone $BZ_3(SL_m(\C))$ of Berenstein-Zelevinsky triangles [@BZ3], for more on these objects see [@MZ].
For this definition we refer to Figure \[triangle\]. A BZ triangle $T \in BZ_3(SL_m(\C))$ is an assignment of non-negative integers to vertices of the version of the diagram in Figure \[triangle\] with $2(m-1)$ vertices on a side. If $v, w$ are a pair of vertices which are across a hexagon from a pair $u, y$, then $T(v) + T(w) = T(u) + T(y).$
We let $a_1, \ldots, a_{2m-2},$ $b_1, \ldots, b_{2m-2},$ $c_1, \ldots, c_{2m-2}$ label the vertices clockwise around the boundary of the diagram. This lets us define the following three projection maps $\pi_1, \pi_2, \pi_3: BZ_3(SL_m(\C)) \to \Delta_{SL_m(\C)}.$
$$\pi_1(T) = (a_1 + a_2, \ldots, a_{2m-3} + a_{2m-2})$$
$$\pi_2(T) = (b_1 + b_2, \ldots, b_{2m-3} + b_{2m-2})$$
$$\pi_3(T) = (c_1 + c_2, \ldots, c_{2m-3} + c_{2m-2})$$
The maps $\pi_i$ are constructed to coincide with their counterparts in Section \[step2\].
We can associate a dual graph to each $T \in BZ_3(SL_m(\C))$ by replacing each entry of weight $a$ with an edge to the center of its adjacent hexagon weighted $a$. The resulting graphs are also called honeycombs, and have been studied by a number of authors, see e.g. [@KTW], [@GP].
The cone of $\Gamma-$BZ quilts $BZ_{\Gamma}(SL_m(\C)) \subset \prod_{v \in V(\Gamma)}BZ_3(SL_m(\C))$ is then defined to be those tuples $(T_v)$ with $\pi_{i, v}(T_v) = \pi_{i, u}(T_u)^*$ when an edge $i$ joins $u$ and $v,$ see Figures \[quilt\] and \[quilt2\].
We represent gluing triangles $T_1, T_2$ with matching boundary components as weighted graphs on composite diagrams, as in Figure \[quilt\]. Paths at the meeting boundaries of $T_1, T_2$ are joined by an arrangement of weighted paths in an $X$ configuration. When the path has weight $1$, this is either a line (see the left corner of the quilt in Figure \[quilt\]) or a crooked line (see the right corner of the quilt in Figure \[quilt\]). The cones $BZ_{\tree}(SL_3(\C))$ are studied by the author and Zhou in [@MZ].
We finish this subsection with an analysis of the generators of the semigroup algebra $\C[BZ_{\Gamma}(SL_2(\C))]$. The semigroup $BZ_3(SL_2(\C))$ is the free semigroup on three generators; we depict an element of this semigroup as an arrangement of three types of paths in a dual trinode $\tau$, see Figure \[graphweight\].
For an element $T \in BZ_3(SL_2(\C))$, counting the number of endpoints in each edge $e, f, g$ of $\tau$ produces an integer weighting $w_T: \{e, f, g\} \to \Z_{\geq 0}.$ These three numbers must satisfy the triangle inequalities, $|w_T(e) - w_T(f)| \leq w_T(g) \leq w_T(e) + w_T(f),$ and $w_T(e) + w_T(f) + w_T(g) \in 2\Z.$
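These two conditions (the triangle inequalities together with an even total weight) are mechanical to test; a minimal sketch, with the function name ours:

```python
def admissible(we, wf, wg):
    """Clebsch-Gordan conditions at a trinode for SL2: the triangle
    inequalities together with an even total weight."""
    return abs(we - wf) <= wg <= we + wf and (we + wf + wg) % 2 == 0

print(admissible(1, 1, 2))  # True
print(admissible(1, 1, 1))  # False: odd total weight
print(admissible(3, 1, 1))  # False: triangle inequality fails
```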
The semigroup $BZ_{\Gamma}(SL_2(\C))$ can then be described as the set of weightings $w: E(\Gamma) \to \Z_{\geq 0}$ which satisfy these properties at each $v \in V(\Gamma).$ We associate a planar arrangement of paths $P(w)$ in $\Gamma$, by replacing the weights at each trinode with an arrangement of paths as above. For an edge $e \in E(\Gamma)$, the endpoints of these paths in $e$ are then connected in the unique planar way. Symmetrically, a path $\gamma$ in the graph $\Gamma$ has an associated weighting $w_{\gamma}: E(\Gamma) \to \Z_{\geq 0}$ obtained by setting $w_{\gamma}(e)$ equal to the number of times $\gamma$ passes through $e.$ For a more in-depth account of this construction, see [@M2].
The semigroup $BZ_{\Gamma}(SL_2(\C))$ is generated by the $w: E(\Gamma) \to \Z_{\geq 0}$ with $w(e) \leq 2$.
Fix a $w \in BZ_{\Gamma}(SL_2(\C))$, and consider the induced planar arrangement of paths $P(w)$ in $\Gamma$ with multiweight $w.$ Suppose $w(e) > 2$ for some edge $e \in E(\Gamma).$ We pick a path $\gamma \in P(w)$ which passes through $e$. If $\gamma$ passes through $e$ with weight $1$, then we may remove $w_{\gamma}$ to obtain a weighting $w'$ with strictly smaller total weight $\sum_{f \in E(\Gamma)} w'(f).$
If $\gamma$ weights $e$ greater than $1$, we assign an orientation to $\gamma.$ If two components at $e$ have the same direction, we may alter the weighting as in Figure \[graphproof\], yielding two closed paths $\gamma' \cup \gamma''.$ Without loss of generality we assume that $P(w) = \{\gamma\},$ so that the weightings $w_{\gamma'}$ and $w_{\gamma''}$ satisfy $w_{\gamma'} + w_{\gamma''} = w.$ In this case we pull off the new closed path $\gamma',$ which has strictly smaller total weight. If $w(e) > 2$, at least two components through $e$ must have the same direction.
As a corollary, a set of functions in $\C[\mathcal{X}(F_g, SL_2(\C))]$ which represent the set of $w \in BZ_{\Gamma}(SL_2(\C))$ with $w(e) \leq 2$ form a finite subduction basis for the filtration defined by $\Gamma.$
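As a small worked example (ours, not from the text), take $\Gamma$ to be the theta graph for $g = 2$: two trivalent vertices joined by the same three edges, so a weighting is admissible exactly when its triple of edge weights satisfies the trinode conditions once. Enumerating the weightings with all $w(e) \leq 2$ gives the bounded set appearing in the proposition (a superset of a minimal generating set, since doubles such as $(0,2,2) = 2\cdot(0,1,1)$ also appear):

```python
from itertools import product

def admissible(a, b, c):
    # Clebsch-Gordan conditions at a trivalent vertex for SL2:
    # triangle inequalities and even total weight.
    return abs(a - b) <= c <= a + b and (a + b + c) % 2 == 0

# Theta graph: both vertices see the same three edges, so one check suffices.
gens = [w for w in product(range(3), repeat=3) if admissible(*w)]
print(len(gens))  # 11 weightings, including the zero weighting
```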
$G_2$
-----
We give inequalities for the tensor product cones for $G_2$, for $R(w_0) = \{\alpha_1\alpha_2\alpha_1\alpha_2\alpha_1\alpha_2, \alpha_2\alpha_1\alpha_2\alpha_1\alpha_2\alpha_1\}.$ These cones have six string parameters $t_1, t_2, t_3, t_4, t_5, t_6,$ and six weight parameters $\lambda = (\lambda_1, \lambda_2), \eta = (\eta_1, \eta_2), \mu = (\mu_1, \mu_2)$. The cone $C_{\alpha_1\alpha_2\alpha_1\alpha_2\alpha_1\alpha_2}(3)$ is defined by the following inequalities.
$$6t_2 \geq 2t_3 \geq 3t_4 \geq 2t_5 \geq 6t_6 \geq 0; \ \ \lambda_2 \geq 2t_6\\$$
$$\eta_1 \geq t_1 - 3t_2 - t_3 - 3t_4 - t_5 - 3t_6, \ \ t_3 - 3t_4 - t_5 - 3t_6, \ \ t_5 - 3t_6\\$$
$$\eta_2 \geq t_6, \ \ t_4 - t_5 - 3t_6, \ \ t_2 - t_3 - 3t_4 - t_5 - 3t_6\\$$
$$2t_1 - 3t_2 + 2t_3 -3t_4 + 2t_5 -3t_6 = \lambda_1 + \eta_1 - \mu_1\\$$
$$-t_1 +2t_2 -t_3 +2t_4 -t_5 +2t_6 = \lambda_2 + \eta_2 - \mu_2$$
The alternative cone $C_{\alpha_2\alpha_1\alpha_2\alpha_1\alpha_2\alpha_1}(3)$ is defined by the following inequalities.
$$2t_2 \geq 2t_3 \geq t_4 \geq 2t_5 \geq 2t_6 \geq 0\\$$
$$\lambda_1 \geq t_6; \ \ \lambda_2 \geq t_2 + t_4 - t_5, \ \ t_2 + t_5 - t_6, \ \ t_3 - t_4 - t_6, \ \ t_5 - 3t_6, \\$$
$$t_2 + t_3 -2t_4, \ \ 2t_2 - t_4, \ \ 3t_2 - t_3, \ \ t_3 - t_5, \ \ t_4 - 2t_6, \ \ 2t_4 - t_5 - t_6, \ \ 3t_4 - 2t_5$$
$$\eta_1 \geq t_6, \ \ t_4 -3t_5 - t_6, \ \ t_2 - 3t_3 - t_4 - 3t_5 - t_6\\$$
$$\eta_2 \geq t_5 - t_6, \ \ t_3 - t_4 -3t_5- t_6, \ \ t_1 - t_2 - 3t_3 - t_4 - 3t_5 - t_6\\$$
$$2(t_2 + t_4 +t_6) - 3(t_1 + t_3 + t_5) = \lambda_1 + \eta_1 - \mu_1\\$$
$$2(t_1 + t_3 + t_5) - (t_2 + t_4 + t_6) = \lambda_2 + \eta_2 - \mu_2$$
$SP_4$
------
We give the inequalities for the $SP_4$ tensor product cones corresponding to the decompositions $R(w_0) = \{\alpha_1\alpha_2\alpha_1\alpha_2, \alpha_2\alpha_1\alpha_2\alpha_1\}$. There are four string parameters $t_1, t_2, t_3, t_4$, and dominant weight parameters $\lambda = (\lambda_1, \lambda_2), \eta = (\eta_1, \eta_2), \mu = (\mu_1, \mu_2)$. The cone $C_{\alpha_1\alpha_2\alpha_1\alpha_2}(3)$ is defined by the following inequalities.
$$2t_2 \geq t_3 \geq 2t_4 \geq 0\\$$
$$\lambda_2 \geq t_4; \ \ \lambda_1 \geq 2t_3 - 2t_4, \ \ 2t_2 - 2t_3 - 2t_4, \ \ 2t_1 + 2t_2\\$$
$$\eta_1 \geq t_1 -t_2 + 2t_3 -t_4, \ \ t_3 -t_4 ; \ \ \eta_2 \geq t_2 -2t_3 + 2t_4, \ \ t_4\\$$
$$2t_1 -2t_2 +2t_3 -2t_4 = \lambda_1 +\eta_1 - \mu_1\\$$
$$-t_1 +2t_2 -t_3 +2t_4 = \lambda_2 + \eta_2 - \mu_2$$
The cone $C_{\alpha_2\alpha_1\alpha_2\alpha_1}(3)$ is defined by the following inequalities.
$$t_2 \geq t_3 \geq t_4 \geq 0\\$$
$$\lambda_2 \geq 2t_1, t_2; \ \ \lambda_1 \geq 2t_1\\$$
$$\eta_1 \geq t_3 -t_4, \ \ t_1 -t_2 + 2t_3; \ \ \eta_2 \geq t_2 - 2t_3 + 2t_4, \ \ t_4\\$$
$$t_1 + t_2 + t_3 + t_4 = \lambda_1 +\eta_1 - \mu_1\\$$
$$t_2 + t_4 - t_1 - t_3 = \lambda_2 + \eta_2 - \mu_2$$
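A quick sanity check (our observation, not part of the original derivation): in either $SP_4$ cone, setting all string parameters to zero satisfies every inequality for dominant $\lambda, \eta$, and the two equality constraints then force

```latex
t_1=t_2=t_3=t_4=0 \;\Longrightarrow\;
\mu_1=\lambda_1+\eta_1,\quad \mu_2=\lambda_2+\eta_2,
\qquad\text{i.e.}\quad \mu=\lambda+\eta,
```

which is consistent with the Cartan component $V_{\lambda+\eta}\subseteq V_\lambda\otimes V_\eta$ appearing with multiplicity one.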
Christopher Manon:\
Department of Mathematics,\
George Mason University\
Fairfax, VA 22030 USA
---
abstract: 'We study higher-degree generalizations of symplectic groupoids, referred to as [*multisymplectic groupoids*]{}. Recalling that Poisson structures may be viewed as infinitesimal counterparts of symplectic groupoids, we describe “higher” versions of Poisson structures by identifying the infinitesimal counterparts of multisymplectic groupoids. Some basic examples and features are discussed.'
address:
- 'IMPA, Estrada Dona Castorina 110, Rio de Janeiro, 22460-320, Brasil '
- 'Departamento de Matemática Aplicada - IM, UFRJ. Av. Athos da Silveira Ramos 149 (CT bloco C) Cidade Universitaria 21941-909 - Rio de Janeiro, RJ - Brasil - Caixa-postal: 68530'
- 'Departamento de Matemática Fundamental, Facultad de Matemáticas, Universidad de la Laguna, Spain'
author:
- 'Henrique Bursztyn, Alejandro Cabrera and David Iglesias'
title: Multisymplectic geometry and Lie groupoids
---
[*In memory of Jerry Marsden*]{}
Introduction
============
Multisymplectic structures are higher-degree analogs of symplectic forms which arise in the geometric formulation of classical field theory much in the same way that symplectic structures emerge in the hamiltonian description of classical mechanics, see [@Got; @Hel; @KS] and references therein. This symplectic approach to field theory was explored in a number of Marsden’s publications, which treated (as it was typical in Marsden’s work) theoretical as well as applied aspects of the subject, see e.g. [@GIMM; @GIM; @Mar1; @Mar2]. Multisymplectic geometry (as in [@CID2; @CID]) also arises in other settings, such as the study of homotopical structures [@CR], categorified symplectic geometry [@BHR], and geometries defined by closed forms [@MS].
Poisson structures are generalizations of symplectic structures which are central to geometric mechanics[^1] and permeate Marsden’s work. A natural problem in multisymplectic geometry is the identification of “higher” analogs of Poisson structures bearing a relation to multisymplectic forms that extends the way Poisson geometry generalizes symplectic geometry. In this note we discuss one possible approach to tackle this issue.
Our viewpoint relies on the relationship between Poisson geometry and objects known as [*symplectic groupoids*]{} [@CDW; @We87]. This relationship is part of a generalized Lie theory in which Poisson structures arise as infinitesimal, or linearized, counterparts of symplectic groupoids, in a way analogous to how Lie algebras correspond to Lie groups. In order to find higher analogs of Poisson structures the route we take is to first consider higher-degree versions of symplectic groupoids, referred to as [*multisymplectic groupoids*]{}, and then to identify the geometric objects arising as their infinitesimal counterparts. Recalling that symplectic groupoids are Lie groupoids equipped with a symplectic structure that is compatible with the groupoid multiplication, in the sense that the symplectic form is [*multiplicative*]{} (see below), multisymplectic groupoids are defined analogously, as Lie groupoids endowed with a multiplicative multisymplectic structure. Our identification of the infinitesimal objects corresponding to multisymplectic groupoids builds on the infinitesimal description of general multiplicative differential forms obtained in [@AC; @bc].
For a manifold $M$, our “higher-degree” analogs of Poisson structures can be conveniently expressed (in the spirit of Dirac geometry [@courant]) in terms of subbundles $$\label{eq:L}
L\subset TM\oplus \wedge^k T^*M$$ satisfying suitable properties, including an involutivity condition with respect to the “higher” Courant-Dorfman bracket on the space of sections of $TM\oplus \wedge^k T^*M$ (see e.g. [@Hi Sec. 2]). Related geometric objects have been recently considered in the study of higher analogs of Dirac structures in [@Zambon] (see also [@VYM]). But, as it turns out, the higher Poisson structures that arise from multisymplectic groupoids are not particular cases of the higher Dirac structures of [@Zambon] (for example, comparing with [@Zambon Def. 3.1], the higher Poisson structures considered here are not necessarily lagrangian subbundles, though always isotropic). An alternative characterization of these objects, more in the spirit of the bivector-field description of Poisson structures, is presented in Prop. \[prop:D\].
Another perspective on higher Poisson structures relies on the view of Poisson structures as Lie brackets on the space of smooth functions of a manifold. A natural issue in this context is finding an appropriate extension of the Poisson bracket defined by a symplectic form (see ) to multisymplectic manifolds. This problem involves notorious difficulties and much work has been done on it, see e.g. [@Forger1; @Kan; @CR]. The approach to higher Poisson structures in this note follows a different path and does not address any of the issues involved in the algebraic study of higher Lie-type brackets.
The paper is structured as follows. We review Poisson structures and their connection with symplectic groupoids in Section \[sec:poisson\]. In Section \[sec:multi\] we recall the basics of multisymplectic forms. The main results are presented in Section \[sec:multigrp\], in which we introduce multisymplectic groupoids and identify their infinitesimal counterparts. In Section \[sec:higherc\] we give different descriptions of these objects and explain some of their properties, while examples are discussed in Section \[sec:examples\].
As one should expect, higher Poisson structures naturally arise in connection with symmetries in multisymplectic geometry. This aspect of the subject is not treated here, though we hope to explore it, as well as its relations with field theory, in future work. Parallel ideas to those in this note can also be carried out in the context of polysymplectic geometry, see [@Nicolas].
[**Acknowledgments**]{}: H.B. and A.C. thank the organizers of the [*Focus Program on Geometry, Mechanics and Dynamics: the Legacy of Jerry Marsden*]{}, held at the Fields Institute in July of 2012, for their hospitality during the program, as well as MITACS for travel support (for which we also thank J. Koiller). H. B. was partly funded by CNPq and Faperj. D. I. thanks MICINN (Spain) for a “Ramón y Cajal" research contract; he is partially supported by MICINN grants MTM2009-13383 and MTM2009-08166-E and Canary Islands government project SOLSUB200801000238. We have benefited from many stimulating conversations with M. Forger, J. C. Marrero, N. Martinez, C. Rogers and M. Zambon. We also thank the referees for several useful comments that improved the presentation of this note.
Poisson structures and symplectic groupoids {#sec:poisson}
===========================================
We start by recalling a few different viewpoints on Poisson structures.
A [*Poisson structure*]{} on a smooth manifold $M$ is a Lie bracket $\{\cdot,\cdot\}$ on $C^\infty(M)$ which is compatible with the pointwise product of functions via the Leibniz rule: $$\label{eq:leibniz}
\{f,gh\}=\{f,g\}h + \{f,h\}g,\;\;\; f,g,h \in C^\infty(M).$$ The Leibniz condition implies that $\{\cdot,\cdot\}$ is necessarily defined by a bivector field $\pi
\in \Gamma(\wedge^2 TM)$ via $$\pi(df,dg) = \{f,g\},\;\;\; f,g \in C^\infty(M).$$ This leads to the alternative description of Poisson structures on $M$ as bivector fields $\pi \in \Gamma(\wedge^2 TM)$ satisfying $[\pi,\pi]=0$, where $[\cdot,\cdot]$ is the Schouten-Nijenhuis bracket on multivector fields. (The vanishing of $[\pi,\pi]$ accounts for the Jacobi identity of $\{\cdot,\cdot\}$.) We denote Poisson manifolds by either $(M,\pi)$ or $(M,\{\cdot,\cdot\})$.
Symplectic manifolds are naturally equipped with Poisson structures. Given a symplectic manifold $(M,\omega)$, and denoting by $X_f$ the hamiltonian vector field associated with $f\in C^\infty(M)$ via $$\label{eq:hamv}
i_{X_f}\omega = df,$$ the Poisson bracket on $M$ is given by $$\label{eq:poissons}
\{f,g\} = \omega(X_g,X_f).$$
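As a concrete illustration of these sign conventions (a worked example added here, not in the original text): on $\mathbb{R}^{2n}$ with the standard symplectic form, the conditions $i_{X_f}\omega=df$ and $\{f,g\}=\omega(X_g,X_f)$ yield the classical formulas

```latex
\omega=\sum_{i=1}^n dq_i\wedge dp_i,\qquad
X_f=\sum_{i=1}^n\Big(\frac{\partial f}{\partial p_i}\,\partial_{q_i}
-\frac{\partial f}{\partial q_i}\,\partial_{p_i}\Big),\qquad
\{f,g\}=\sum_{i=1}^n\Big(\frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i}
-\frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i}\Big).
```

In particular, the corresponding bivector field is $\pi=\sum_{i}\partial_{q_i}\wedge\partial_{p_i}$.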
A more recent perspective on Poisson structures, which is the guiding principle of this note, relies on another type of connection between Poisson structures and symplectic manifolds. It is based on the fact that Poisson geometry fits into a generalized Lie theory, naturally expressed in terms of Lie algebroids and groupoids, see e.g. [@CDW]. In this context, Poisson manifolds are seen as infinitesimal counterparts of global objects called [*symplectic groupoids*]{} [@We87], analogously to how Lie algebras are regarded as infinitesimal versions of Lie groups. We will briefly recall the main aspects of the theory.
Let ${\mathcal{G}}\toto M$ be a Lie groupoid (the reader can find definitions and further details in [@CW]). We use the following notation for its structure maps: ${\mathsf{s}}$, ${{\mathsf{t}}}: {\mathcal{G}}\to M$ for the source, target maps, $m: {\mathcal{G}}{_{\mathsf{s}}\times_{{\mathsf{t}}}} {\mathcal{G}}\to {\mathcal{G}}$ for the multiplication map[^2], $\epsilon: M\hookrightarrow {\mathcal{G}}$ for the unit map, and ${{\mathsf{inv}}}:{\mathcal{G}}\to {\mathcal{G}}$ for the groupoid inversion. We will often identify $M$ with its image under $\epsilon$ (the submanifold of ${\mathcal{G}}$ of identity arrows).
A differential form $\omega \in \Omega^r({\mathcal{G}})$ is called [*multiplicative*]{} if it satisfies $$\label{eq:multip}
m^*\omega = {{\mathrm{pr}}}_1^*\omega + {{\mathrm{pr}}}_2^*\omega,$$ where ${{\mathrm{pr}}}_i: {\mathcal{G}}{_{\mathsf{s}}\times_{{{\mathsf{t}}}}}{\mathcal{G}}\to {\mathcal{G}}$, $i=1,2$, is the natural projection onto the $i$-th factor[^3]. A [*symplectic groupoid*]{} is a Lie groupoid ${\mathcal{G}}\toto
M$ equipped with a multiplicative symplectic form $\omega \in
\Omega^2({\mathcal{G}})$. In this case, condition is equivalent to the graph of the multiplication map $m$ being a lagrangian submanifold of ${\mathcal{G}}\times {\mathcal{G}}\times \overline{{\mathcal{G}}}$, where $\overline{{\mathcal{G}}}$ is equipped with the opposite symplectic form $-\omega$. Symplectic groupoids first arose in symplectic geometry in the context of quantization (see e.g. [@BW Sec. 8.3]) but turn out to provide a convenient setting for the study of symmetries and reduction [@MiWe].
In order to explain how symplectic groupoids are related to Poisson structures, recall that a [*Lie algebroid*]{} is a vector bundle $A\to M$ equipped with a bundle map $\rho: A\to TM$, called the [*anchor*]{}, and a Lie bracket $[\cdot,\cdot]$ on $\Gamma(A)$ such that $$[u,fv] = f[u,v] + ({\mathcal L}_{\rho(u)}f)v,$$ for $u,v \in \Gamma(A),\; f\in C^\infty(M)$. Lie algebroids are infinitesimal versions of Lie groupoids: for a Lie groupoid ${\mathcal{G}}\toto M$, its associated Lie algebroid is defined by $A=\ker(d{\mathsf{s}})|_M$, with anchor map $d{{\mathsf{t}}}|_A: A\to TM$ and Lie bracket on $\Gamma(A)$ induced by the Lie bracket of right-invariant vector fields on ${\mathcal{G}}$. Much of the usual theory relating Lie algebras and Lie groups carries over to Lie algebroids and groupoids, a notorious exception being Lie’s third theorem, i.e., not every Lie algebroid arises as the Lie algebroid of a Lie groupoid (see [@CF] for a thorough discussion of this issue).
The first indication of a connection between Poisson geometry and Lie algebroids/groupoids is the fact that, if $(M,\pi)$ is a Poisson manifold, then its cotangent bundle $T^*M\to M$ inherits a Lie algebroid structure, with anchor map given by $$\label{eq:anchor}
\pi^\sharp: T^*M \to TM,\;\;\; \pi^\sharp(\alpha)=i_\alpha\pi,$$ and Lie bracket on $\Gamma(T^*M)=\Omega^1(M)$ given by $$\label{eq:liep}
[\alpha,\beta]={\mathcal L}_{\pi^\sharp(\alpha)}\beta -
{\mathcal L}_{\pi^\sharp(\beta)}\alpha - d(\pi(\alpha,\beta)).$$
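On exact 1-forms this bracket reproduces the Poisson bracket of functions. Indeed, writing $X_f:=\pi^\sharp(df)$, so that ${\mathcal L}_{X_f}dg=d(X_f g)=d\{f,g\}$, a one-line computation (supplied here) gives

```latex
[df,dg]
={\mathcal L}_{X_f}\,dg-{\mathcal L}_{X_g}\,df-d\big(\pi(df,dg)\big)
=d\{f,g\}-d\{g,f\}-d\{f,g\}
=d\{f,g\}.
```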
The precise relation between Poisson structures and symplectic groupoids is as follows. First, given a symplectic groupoid $({\mathcal{G}}\toto M,\omega)$, its space of units $M$ inherits a natural Poisson structure $\pi$, uniquely determined by the fact that the target map ${{\mathsf{t}}}: {\mathcal{G}}\to M$ is a Poisson map (while ${\mathsf{s}}: {\mathcal{G}}\to
M$ is anti-Poisson); moreover, denoting by $A$ the Lie algebroid of ${\mathcal{G}}$, there is a canonical identification between $A$ and the Lie algebroid structure on $T^*M$ induced by $\pi$, explicitly given by $$\mu: A \stackrel{\sim}{\to} T^*M,\;\; \mu(u) = i_u\omega|_{TM}.$$ Here we view $TM$ as a subbundle of $T{\mathcal{G}}|_M$ via $\epsilon:
M\hookrightarrow {\mathcal{G}}$, so that we can write $$\label{eq:decomp}
T{\mathcal{G}}|_M = TM\oplus A.$$ In other words, the Lie groupoid ${\mathcal{G}}$ integrates the Lie algebroid $T^*M$ defined by $\pi$.
Conversely, given a Poisson manifold $(M,\pi)$ and assuming that its associated Lie algebroid is integrable (i.e., can be realized as the Lie algebroid of a Lie groupoid[^4]), then its ${\mathsf{s}}$-simply-connected integration ${\mathcal{G}}\toto M$ inherits a symplectic groupoid structure. (As shown in [@catfel], one can obtain ${\mathcal{G}}$ by means of an infinite-dimensional Marsden-Weinstein reduction.)
The upshot of this discussion is that [*Poisson manifolds are the infinitesimal versions of symplectic groupoids.*]{}
Some of the prototypical examples of symplectic groupoids are traditional phase spaces in mechanics. For example, any cotangent bundle $T^*Q$, equipped with its canonical symplectic form, is a symplectic groupoid over $Q$ with respect to the groupoid structure given by fibrewise addition of covectors; in this case, source and target maps coincide, both being the bundle projection $T^*Q\to Q$, and the corresponding Poisson structure on $Q$ is trivial: $\pi=0$. A more interesting example is given by the cotangent bundle of a Lie group $G$. In this case, besides the symplectic groupoid structure over $G$ that we just described, $T^*G$ is also a symplectic groupoid over $\mathfrak{g}^*$, where $\mathfrak{g}$ denotes the Lie algebra of $G$. The groupoid structure $$T^*G\toto \mathfrak{g}^*$$ is induced by the co-adjoint action of $G$ on $\mathfrak{g}^*$ (see e.g. [@MiWe]); source and target maps are given by the momentum maps for the cotangent lifts of the actions of $G$ on itself by left and right translations, while the corresponding Poisson structure on $\mathfrak{g}^*$ is just its natural Lie-Poisson structure. The fact that the target map is a Poisson map may be viewed as the [*Lie-Poisson reduction theorem*]{} (see e.g. [@MR Sec. 13.1]), another one of Marsden’s favorite topics. The correspondence between Poisson structures and symplectic groupoids extends much of the theory relating $\mathfrak{g}^*$ and $T^*G$ to more general settings.
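Another standard example, included here for illustration: for any symplectic manifold $(M,\omega)$, the pair groupoid

```latex
{\mathcal{G}}=M\times M\;\toto\; M,\qquad
{\mathsf{s}}(x,y)=y,\quad {{\mathsf{t}}}(x,y)=x,\quad
m\big((x,y),(y,z)\big)=(x,z),
```

carries the multiplicative symplectic form ${{\mathrm{pr}}}_1^*\omega-{{\mathrm{pr}}}_2^*\omega$; multiplicativity is the telescoping identity $(\omega_x-\omega_y)+(\omega_y-\omega_z)=\omega_x-\omega_z$, and the induced Poisson structure on $M$ is the one determined by $\omega$ itself.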
Multisymplectic structures {#sec:multi}
==========================
A [*multisymplectic structure*]{} [@CID2; @CID] on a manifold $M$ is a differential form $\omega\in \Omega^{k+1}(M)$ which is closed and nondegenerate, in the sense that $i_X\omega=0$ implies that $X=0$, for $X\in \Gamma(TM)$. Equivalently, the nondegeneracy condition says that the bundle map $$\label{eq:mnondeg}
\omega^\sharp: TM \to \wedge^kT^*M,\;\;\; X\mapsto i_X\omega,$$ is injective. As in [@BHR; @CR], we refer to a multisymplectic form of degree $k+1$ as a [*$k$-plectic*]{} form. Hence a $1$-plectic form $\omega$ is a usual symplectic structure, in which case the map is necessarily surjective; note that the wedge powers $\omega^r$, $r=2,\ldots,\dim(M)$, are natural examples of higher degree multisymplectic forms. For completeness, we briefly recall some other examples, see e.g. [@CID].
For a manifold $Q$, the total space of the exterior bundle $\wedge^k
T^*Q$ carries a canonical $k$-plectic form $\omega_{can}$, generalizing the canonical symplectic structure on $T^*Q$. Indeed, there is a “tautological” $k$-form $\theta$ on $\wedge^k T^*Q$ given by $$\theta_{\xi}(X_1,\ldots,X_k)=\xi(dp(X_1),\ldots,dp(X_k)),$$ where $p: \wedge^k T^*Q \to Q$ is the natural bundle projection, $\xi \in \wedge^k T^*Q$, and $X_i$, $i=1,\ldots,k$, are tangent vectors to $\wedge^k T^*Q$ at $\xi$. Then $$\label{eq:can}
\omega_{can} = d\theta$$ is a $k$-plectic form on $\wedge^kT^*Q$. These $k$-plectic manifolds are closely related to the multi-phase spaces in field theory (see e.g. [@GIMM; @Hel] and references therein).
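For $k=1$ this construction recovers the canonical symplectic form on $T^*Q$: in bundle coordinates $(q_i,p_i)$, the tautological form and its differential read

```latex
\theta=\sum_i p_i\,dq_i,\qquad
\omega_{can}=d\theta=\sum_i dp_i\wedge dq_i
```

(note that with the convention $\omega_{can}=d\theta$ used here, the sign is opposite to the also-common convention $\omega_{can}=-d\theta=\sum_i dq_i\wedge dp_i$).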
Other examples of $k$-plectic manifolds include $(k+1)$-dimensional orientable manifolds equipped with volume forms. An important class of 2-plectic manifolds is given by compact, semi-simple Lie groups $G$, equipped with the Cartan 3-form $H\in \Omega^3(G)$, i.e., the bi-invariant 3-form uniquely defined by the condition $H(u,v,w)={{\left\langle {{u,[v,w]}} \right\rangle}}$, where $u$, $v$, $w\in \mathfrak{g}$ and ${{\left\langle {{\cdot,\cdot}} \right\rangle}}$ is the Killing form (see e.g. [@BHR; @CID]). Hyper-Kähler manifolds are examples of 3-plectic manifolds: if $\omega_1$, $\omega_2$, $\omega_3$ are the three Kähler forms on a hyper-Kähler manifold $M$, then the form $\omega_1\wedge\omega_1 +
\omega_2\wedge\omega_2 + \omega_3\wedge\omega_3 \in \Omega^4(M)$ is 3-plectic [@CID; @MS].
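In the volume-form example, nondegeneracy is immediate (a one-line check, added here): if $\Omega$ is a volume form on a $(k+1)$-dimensional manifold and $X\neq 0$, complete $X=e_1$ to a local frame $e_1,\ldots,e_{k+1}$; then

```latex
(i_X\Omega)(e_2,\ldots,e_{k+1})=\Omega(e_1,e_2,\ldots,e_{k+1})\neq 0,
```

so $i_X\Omega\neq 0$, i.e., the map $\omega^\sharp$ is injective.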
In physical applications (such as quantization), an important issue concerns the identification of an appropriate analog of the Poisson bracket on a $k$-plectic manifold $(M,\omega)$; there is an extensive literature on this problem, see [@CID2; @Forger1; @Kan; @CR]. As a starting point, one usually considers forms $\alpha\in \Omega^{k-1}(M)$ for which there exists a (necessarily unique) vector field $X_\alpha$ such that $i_{X_\alpha}\omega = d\alpha$; such forms are called [*hamiltonian*]{}. Then, on the space of hamiltonian $(k-1)$-forms, one defines the bracket $$\label{eq:hpoisson}
\{\alpha,\beta\}=i_{X_\alpha}i_{X_\beta}\omega,$$ which is a direct generalization of the Poisson bracket when $k=1$. This skew-symmetric bracket turns out to be well defined on the space of hamiltonian $(k-1)$-forms, but the Jacobi identity usually fails (see e.g. [@CID2; @CR]): $$\label{eq:jac}
\{\alpha,\{\beta,\gamma\}\} + \{\gamma,\{\alpha,\beta\}\} +
\{\beta,\{\gamma,\alpha\}\} = -d
i_{X_\alpha}i_{X_\beta}i_{X_\gamma}\omega.$$
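For $k=1$ the right-hand side vanishes identically, since contracting the 2-form $\omega$ with three vector fields gives zero:

```latex
k=1:\qquad i_{X_f}\,i_{X_g}\,i_{X_h}\,\omega=0
\quad\text{for }\omega\in\Omega^2(M),
\qquad\text{so}\qquad
\{f,\{g,h\}\}+\{h,\{f,g\}\}+\{g,\{h,f\}\}=0,
```

recovering the Jacobi identity of the usual Poisson bracket (here hamiltonian $0$-forms are just functions).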
Much work has been done to deal with this “defect” on the jacobiator of , either by forcing its elimination or by somehow making sense of it. One approach relies on noticing that closed $(k-1)$-forms are automatically hamiltonian, so one can consider the quotient space of hamiltonian forms modulo closed forms (see e.g. [@CID2]); the bracket descends to this quotient and, since the right-hand side of is exact, the quotient inherits a genuine Lie-algebra structure[^5]. By using multivector fields, one can also consider hamiltonian forms of other degrees and show that these Lie algebras fit into larger graded Lie algebras. A more recent approach, see [@BHR; @CR], shows that, without taking quotients (so as to force the vanishing of the jacobiator), the bracket on hamiltonian forms can be naturally understood in terms of structures from homotopy theory; namely, this bracket is part of a Lie $k$-algebra (a special type of $L_\infty$-algebra). A missing ingredient in these generalizations of the Poisson bracket is a corresponding analog of the Leibniz rule . For a discussion in this direction, see e.g. [@Hra; @Kan].
Just as symplectic manifolds are particular cases of Poisson manifolds, one could wonder about the analog of Poisson manifolds in multisymplectic geometry. As recalled in Section \[sec:poisson\], the Leibniz rule is central for the general definition of a Poisson structure. So, as indicated by the previous discussion on Poisson brackets on $k$-plectic manifolds, it is not evident how to define such analogs in terms of algebraic/Lie-type structures on spaces of forms. A different, more geometric, perspective to this problem will be discussed next.
Multisymplectic groupoids and their infinitesimal versions {#sec:multigrp}
==========================================================
We start with a straightforward generalization of symplectic groupoids to multisymplectic geometry: A [*multisymplectic groupoid*]{} is a Lie groupoid equipped with a multisymplectic form that is multiplicative, in the sense of . We will also use the terminology [*$k$-plectic groupoid*]{} when the multisymplectic form has degree $k+1$.
Recalling that Poisson structures arise as infinitesimal versions of symplectic groupoids, as briefly explained in Section \[sec:poisson\], we will now identify the infinitesimal objects corresponding to multisymplectic groupoids.
Let ${\mathcal{G}}\toto M$ be an ${\mathsf{s}}$-simply-connected Lie groupoid, let $A\to M$ be its Lie algebroid, with anchor map $\rho: A\to TM$. The following result is established in [@AC; @bc]: there is a 1-1 correspondence between closed, multiplicative forms $\omega \in
\Omega^{k+1}({\mathcal{G}})$ and vector-bundle maps $\mu: A\to \wedge^k T^*M$ (covering the identity map on $M$) satisfying: $$\begin{aligned}
&i_{\rho(u)}\mu(v) = -i_{\rho(v)}\mu(u),\label{eq:IM1}\\
&\mu([u,v]) = {\mathcal L}_{\rho(u)}\mu(v) - i_{\rho(v)}d (\mu(u)),
\label{eq:IM2}\end{aligned}$$ for $u, v \in \Gamma(A)$. Such maps $\mu$ are called (closed) [*IM $(k+1)$-forms*]{} (where IM stands for [*infinitesimally multiplicative*]{}). Using , one can write the explicit relation between $\omega$ and $\mu$ as $$\label{eq:rel}
i_{X_k} \ldots i_{X_1}\mu_x(u) = \omega_x(u,X_1,\ldots,X_k),$$ for $u \in A|_x$ and $X_i\in TM|_x$, $x\in M$.
We now discuss a slight refinement of this result taking into account the nondegeneracy condition of multisymplectic forms. We will need a few properties of multiplicative forms on Lie groupoids, all of which follow from . If $\omega$ is a multiplicative form on ${\mathcal{G}}$, then the following holds: $$\label{eq:propm}
\epsilon^*\omega=0,\qquad {{\mathsf{inv}}}^*\omega=-\omega,$$ and $$\label{eq:mu}
i_{u^r}\omega = {{\mathsf{t}}}^* \mu(u),\;\;\; \forall u\in \Gamma(A),$$ where $u^r$ is the vector field on ${\mathcal{G}}$ determined by $u \in
\Gamma(A)$ via right translations; see [@bcwz Sec. 3] for the proofs of these identities (the proofs there work in any degree, though the statements refer to 2-forms). Using the second equation in and , we also obtain $$\label{eq:mu2}
i_{\overline{u}^l}\omega = -{\mathsf{s}}^* \mu(u),$$ where $\overline{u}^l = {{\mathsf{inv}}}_*(u^r)$ (note that this vector field coincides with the one defined by left translations of $\overline{u}=d{{\mathsf{inv}}}(u) \in \Gamma(\ker(d{{\mathsf{t}}})|_M)$).
\[prop:nondeg\] A closed, multiplicative form $\omega \in \Omega^{k+1}({\mathcal{G}})$ is nondegenerate if and only if its corresponding IM form $\mu: A \to \wedge^k T^*M$ satisfies
- $\ker \mu = \{0\}$,
- $({{\mathrm {Im}}}(\mu))^\circ = \{X\in TM\,|\, i_X\mu(u)=0\;\forall u\in A\} = \{0\}$.
Assume that $\omega$ is nondegenerate, and let us verify that $(1)$ and $(2)$ hold. If $u\in \ker \mu$, then (by ) $i_u\omega ={{\mathsf{t}}}^*\mu(u) =0$, so $u=0$ and $(1)$ follows. Let now $X\in ({{\mathrm {Im}}}(\mu))^\circ |_x$, $x\in M$. Then $i_ui_X\omega=-i_X{{\mathsf{t}}}^*\mu(u)={{\mathsf{t}}}^* i_X\mu(u)=0$ for all $u\in
A|_x$. We claim that this implies that $i_X\omega=0$, so that $X=0$ by nondegeneracy, and hence $(2)$ holds. To see that, it suffices to check that $i_{Z_k} \ldots i_{Z_1}i_X\omega =0$ for arbitrary $Z_i\in T{\mathcal{G}}|_x$, $i=1,\ldots,k$. Using , we write $Z_i = X_i + u_i$, for $X_i\in TM|_x$ and $u_i\in A|_x$. Expanding out $i_{Z_k} \ldots i_{Z_1}i_X\omega$ using multilinearity, we see that the term $i_{X_k} \ldots i_{X_1}i_X\omega$ vanishes by the first condition in , and all the other terms vanish as a consequence of the fact that $i_ui_X\omega=0 \; \forall u\in
A$.
Conversely, suppose that $(1)$ and $(2)$ hold, and let $X\in
T_g{\mathcal{G}}$ be such that $i_X\omega=0$. Then $$i_{u^r}i_X\omega=0=-i_X({{\mathsf{t}}}^*\mu(u))$$ for all $u \in \Gamma(A)$, which means that $d{{\mathsf{t}}}(X)\in
({{\mathrm {Im}}}(\mu))^\circ$, so $d{{\mathsf{t}}}(X)=0$ by $(2)$. Hence $X$ is tangent to the ${{\mathsf{t}}}$-fiber at $g$, and we can find $v\in \Gamma(A)$ so that ${{\mathsf{inv}}}_*(v^r) |_g=\overline{v}^l |_g =X$. By , at the point $g$ we have $$i_X\omega= i_{\overline{v}^l}\omega = -{\mathsf{s}}^* \mu(v),$$ so $i_X\omega=0$ implies that $\mu(v)=0$, hence $v=0$ by $(1)$, and $X= \overline{v}^l |_g =0$.
It follows that the infinitesimal counterpart of a $k$-plectic groupoid is a closed IM $(k+1)$-form $\mu: A\to \wedge^k T^*M$ additionally satisfying conditions (1) and (2) of Prop. \[prop:nondeg\]. A natural terminology for the resulting object is [*IM $k$-plectic form*]{}. In this paper, we will alternatively refer to them as [*higher Poisson structures of degree $k$*]{}, or simply [*$k$-Poisson structures*]{} (being aware that this may clash with the terminology for different objects in the literature). Before giving different characterizations of $k$-Poisson structures and examples, we briefly explain how 1-Poisson structures are the same as ordinary Poisson structures.
The case $k=1$ {#subsec:k1}
--------------
For a bundle map $\mu: A\to T^*M$, note that condition $(1)$ in Prop. \[prop:nondeg\] says that $\mu$ is injective, while $(2)$ says that $\mu$ is surjective. It follows that a 1-Poisson structure is a bundle map $\mu: A\to T^*M$ satisfying , (i.e., a closed IM 2-form), which is moreover an isomorphism.
Note that given a Poisson structure $\pi$ on $M$, if we consider the associated Lie algebroid $A=T^*M$, see and , it is clear that $$\label{eq:id}
\mu= {{\mathrm {Id}}}: A \to T^*M$$ is a 1-Poisson structure. It turns out that any 1-Poisson structure is equivalent[^6] to one of this type. To justify this claim, it will be convenient to view Poisson structures from the broader perspective of Dirac geometry [@courant].
Let us consider the bundle ${\mathbb{T}M}:= TM \oplus T^*M \to M$ equipped with the non-degenerate, symmetric fibrewise bilinear pairing ${{\left\langle {{\cdot,\cdot}} \right\rangle}}$ given at each $x\in M$ by $$\label{eq:pairing}
{{\left\langle {{(X,\alpha),(Y,\beta)}} \right\rangle}}:= \beta(X) + \alpha(Y),$$ for $X,Y\in T_xM,\; \alpha,\beta \in T_x^*M$, and with the Courant-Dorfman bracket ${[\![\cdot,\cdot]\!]}: \Gamma({\mathbb{T}M})\times
\Gamma({\mathbb{T}M})\to \Gamma({\mathbb{T}M})$, $$\label{eq:courant}
{[\![(X,\alpha),(Y,\beta)]\!]}:=([X,Y],{\mathcal L}_X\beta-i_Yd\alpha).$$ Poisson structures on $M$ are equivalent to subbundles $L\subset
{\mathbb{T}M}$ satisfying
- $L=L^\perp$, i.e., $L$ is [*lagrangian*]{} with respect to ${{\left\langle {{\cdot,\cdot}} \right\rangle}}$,
- $L\cap TM=\{0\}$,
- ${[\![\Gamma(L),\Gamma(L)]\!]}\subseteq
\Gamma(L)$.
Condition (d1) is equivalent to $L$ being isotropic, i.e., $L
\subseteq L^\perp$, and the dimension condition $\mathrm{rank}(L)=\dim(M)$. Using the exact sequence $$L\cap TM \to L \to T^*M$$ induced by the natural projection ${{\mathrm{pr}}}_2:{\mathbb{T}M}\to T^*M$, we see that (d2) is equivalent to saying that $L$ projects isomorphically onto $T^*M$. It follows that conditions (d1) and (d2) can be alternatively written as
- (d1’) $L \subseteq L^\perp$,
- (d2’) ${{\mathrm{pr}}}_{2}|_L: L \to T^*M$ is an isomorphism.
Given a subbundle $L\subset {\mathbb{T}M}$, conditions (d1’) and (d2’) are equivalent to $L$ being the graph of a skew-adjoint bundle map $T^*M\to TM$; such maps are always of the form $\alpha\mapsto
i_\alpha\pi$, where $\pi$ is a bivector field. The involutivity condition (d3) amounts to $[\pi,\pi]=0$.
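To make the first equivalence explicit: for the graph $L=\{(i_\alpha\pi,\alpha)\,|\,\alpha\in T^*M\}$ of the bundle map $\alpha\mapsto i_\alpha\pi$, the pairing evaluates to $${{\left\langle {{(i_\alpha\pi,\alpha),(i_\beta\pi,\beta)}} \right\rangle}} = \beta(i_\alpha\pi) + \alpha(i_\beta\pi) = \pi(\alpha,\beta) + \pi(\beta,\alpha) = 0,$$ so the isotropy condition (d1’) on the graph corresponds precisely to the skew-symmetry of the bundle map, i.e., to $\pi$ being a bivector field.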
Let $\mu: A\to T^*M$ be a 1-Poisson structure, and let us consider the bundle map $$\label{eq:rmmap}
(\rho,\mu): A \to {\mathbb{T}M},$$ where $\rho: A\to TM$ is the anchor. Since $\mu$ is an isomorphism, the map is injective, and its image is a subbundle $L\subset {\mathbb{T}M}$ satisfying (d2’). Note that condition for $\mu$ amounts to condition (d1’) for $L$, while becomes (d3). It follows that $L$ represents a Poisson structure on $M$, explicitly given by $$\pi(\alpha,\beta) = i_{\rho(\mu^{-1}(\alpha))}\beta,\qquad
\alpha,\beta\in T^*M.$$ It is clear from that $\mu: A\to T^*M$ is an isomorphism of Lie algebroids, where $T^*M$ has the Lie-algebroid structure induced by $\pi$ (as in and ), showing the equivalence between $\mu$ and the 1-Poisson structure associated with $\pi$.
As we see next, one has a similar interpretation of general $k$-Poisson structures in terms of higher Courant-Dorfman brackets (as in [@Hi Sec. 2]), leading to objects closely related to those studied in [@Zambon].
Descriptions of $k$-Poisson structures {#sec:higherc}
======================================
Let us consider the vector bundle $${\mathbb{T}M}^{(k)}:= TM\oplus \wedge^kT^*M;$$ we denote by ${{\mathrm{pr}}}_1: {\mathbb{T}M}^{(k)} \to TM$ and ${{\mathrm{pr}}}_2: {\mathbb{T}M}^{(k)} \to
\wedge^kT^*M$ the natural projections. The same expressions as in and lead to a symmetric $\wedge^{k-1}T^*M$-valued pairing ${{\left\langle {{\cdot,\cdot}} \right\rangle}}$ on the fibres of ${\mathbb{T}M}^{(k)}$ and a bracket ${[\![\cdot,\cdot]\!]}$ on $\Gamma({\mathbb{T}M}^{(k)})$, that we will keep referring to as the Courant-Dorfman bracket.
Given a subbundle $L\subset {\mathbb{T}M}^{(k)}$, we keep denoting by $L^\perp$ its orthogonal relative to ${{\left\langle {{\cdot,\cdot}} \right\rangle}}$; note that, for $k>1$, it may happen that $L^\perp$ does not have constant rank (see Section \[sec:examples\]). We will keep calling $L$ [*isotropic*]{} if $L\subset L^\perp$, and [*involutive*]{} if its space of sections $\Gamma(L)$ is closed under ${[\![\cdot,\cdot]\!]}$. For a subbundle $D\subseteq \wedge^k T^*M$, we let $$D^\circ:=\{X\in TM\,|\, i_X\alpha =0\, \forall \alpha\in D\}$$ be its annihilator.
Whenever $L\subset {\mathbb{T}M}^{(k)}$ is an isotropic and involutive subbundle, it inherits a Lie-algebroid structure with anchor map ${{\mathrm{pr}}}_1|_L : L\to TM$ and Lie bracket ${[\![\cdot,\cdot]\!]}|_{\Gamma(L)}$ on $\Gamma(L)$. In particular, it follows that the distribution $$\label{eq:dist}
{{\mathrm{pr}}}_1(L)\subseteq TM$$ is integrable and its integral leaves (the “orbits” of the Lie algebroid) define a singular foliation on $M$, see [@DZ Sec. 8.1]. One may also directly check that $$\label{eq:IML}
{{\mathrm{pr}}}_2|_L : L \to \wedge^{k}T^*M$$ is a closed IM $k$-form. Since $\ker({{\mathrm{pr}}}_2|_L)= L\cap TM$ and $$({{\mathrm{pr}}}_2(L))^\circ = L^\perp \cap TM \supseteq L\cap TM,$$ it is clear that is a $k$-Poisson structure if and only if $$\label{eq:nondegL}
L^\perp\cap TM=\{0\}.$$ By considering the bundle map , we will think of any isotropic, involutive subbundle $L\subseteq {\mathbb{T}M}^{(k)}$ satisfying as a $k$-Poisson structure. It turns out that all $k$-Poisson structures on $M$ are of this type.
\[prop:L\] Any $k$-Poisson structure $\mu: A\to \wedge^k T^*M$ is equivalent to a subbundle $L\subset {\mathbb{T}M}^{(k)}$ that is isotropic, involutive, and satisfies .
Let $ \mu: A \to \wedge^k T^*M$ be a $k$-Poisson structure. The bundle map $(\rho,\mu): A \to {\mathbb{T}M}$ is an isomorphism onto its image (due to condition $(1)$ in Prop. \[prop:nondeg\]), which is a subbundle $L\subset {\mathbb{T}M}^{(k)}$ that is isotropic, involutive, and satisfies (as a result of , and condition $(2)$ in Prop. \[prop:nondeg\], respectively). It is clear that $(\rho,\mu): A \to L$ is an isomorphism of Lie algebroids, which establishes the desired equivalence.
We conclude that the infinitesimal versions of $k$-plectic groupoids can be seen as isotropic, involutive subbundles $L\subset {\mathbb{T}M}^{(k)}$ satisfying . Note that the condition $L=L^\perp$ (see (d1)) may not hold for $k> 1$ (we will see simple examples in Section \[sec:examples\]); in the case $k=1$, the condition $L^\perp \cap TM = ({{\mathrm{pr}}}_2(L))^\circ =\{0\}$ implies that ${{\mathrm{pr}}}_2(L)=T^*M$, so that $L=L^\perp$.
There is yet another characterization of $k$-Poisson structures, closer in spirit to the description of Poisson structures via bivector fields.
\[prop:D\] There is a one-to-one correspondence between subbundles $L\subset
{\mathbb{T}M}^{(k)}$ as in Prop. \[prop:L\] and pairs $(D,\lambda)$, where $D\subseteq \wedge^k T^*M$ is a subbundle and $\lambda: D\to TM$ is a bundle map (covering the identity) satisfying the following conditions: [(a)]{} $D^\circ =\{0\}$, [(b)]{} $i_{\lambda(\alpha)}\beta
= -i_{\lambda(\beta)}\alpha$, for $\alpha,\beta \in D$, and [(c)]{} the space $\Gamma(D)$ is involutive with respect to the bracket (c.f. ) $$\label{eq:lbrk}
[\alpha,\beta]_\lambda := {\mathcal L}_{\lambda(\alpha)}\beta -
i_{\lambda(\beta)}d\alpha = {\mathcal L}_{\lambda(\alpha)}\beta -
{\mathcal L}_{\lambda(\beta)}\alpha - d(i_{\lambda(\alpha)}\beta),$$ and $\lambda: \Gamma(D)\to \Gamma(TM)$ preserves brackets.
Given a $k$-Poisson structure $L\subset TM\oplus \wedge^k T^*M$, note that ${{\mathrm{pr}}}_2|_L: L\to \wedge^k T^*M$ is injective (since ${{\mathrm {ker}}}({{\mathrm{pr}}}_2|_L)=L\cap TM\subseteq L^\perp\cap TM=\{0\}$). Setting $D={{\mathrm{pr}}}_2(L)$ and $\lambda={{\mathrm{pr}}}_1\circ ({{\mathrm{pr}}}_2|_L)^{-1}$, we see that $L=\{(\lambda(\alpha),\alpha)\,|\, \alpha \in D\}$. Then is equivalent to condition $(a)$, while $(b)$ means that $L$ is isotropic. The involutivity of $L$ is equivalent to condition $(c)$.
For $k=1$, as previously remarked, $D=T^*M$ (as a result of $(a)$), while $(b)$ says that $\lambda = \pi^\sharp$, for a bivector field $\pi$. The involutivity condition in $(c)$ is automatically satisfied, and the bracket-preserving property is equivalent to the Poisson condition $[\pi,\pi]=0$ (see e.g. [@BC Lem. 2.3]).
For a $k$-Poisson structure defined by $(D,\lambda)$ as in Prop. \[prop:D\], $D$ acquires a Lie algebroid structure with bracket and anchor $\lambda$, in such a way that ${{\mathrm{pr}}}_2|_L:L\to D$ is an isomorphism of Lie algebroids. In terms of $(D,\lambda)$, the singular foliation on $M$ determined by the $k$-Poisson structure (see ) is given by the integral leaves of the distribution $\lambda(D)\subseteq TM$. Moreover, each leaf $\mathcal{O}$ inherits a $(k+1)$-form $\omega$ by $$\label{eq:leafform}
\omega(Y_0,Y_1,\ldots,Y_k)= i_{Y_k}\ldots i_{Y_1}\alpha,$$ where $Y_i\in \lambda(D)|_{\mathcal{O}}=T\mathcal{O}$, and $\alpha
\in D$ is such that $Y_0=\lambda(\alpha)$; indeed, property $(b)$ in Prop. \[prop:D\] assures that $\omega$ is well defined. One may also verify, using $(c)$ in Prop. \[prop:D\], that $\omega$ is closed. For $k=1$, one recovers the symplectic foliation that underlies any Poisson structure and completely determines it. However, for $k>1$, it is no longer true that the leafwise closed $(k+1)$-forms are nondegenerate, nor that a $k$-Poisson structure is uniquely determined by them, see Remark \[rem:fol2\] (c.f. [@Zambon Prop. 3.8]).
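To see the well-definedness claimed above in one line: if $\lambda(\alpha)=\lambda(\alpha')$, then property $(b)$ in Prop. \[prop:D\] gives, for any $\beta\in D$, $$i_{\lambda(\beta)}(\alpha-\alpha') = -i_{\lambda(\alpha)}\beta + i_{\lambda(\alpha')}\beta = 0,$$ so $\alpha-\alpha'$ is annihilated by contraction with any vector in $\lambda(D)$, and the right-hand side of the formula for $\omega$ is independent of the choice of $\alpha$ with $Y_0=\lambda(\alpha)$.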
The description of $k$-Poisson structures in Prop. \[prop:D\] also makes the notion of morphism of $k$-Poisson manifolds more evident: if $(D_i,\lambda_i)$ is a $k$-Poisson structure on $M_i$, $i=1,2$, then a map $\phi: M_1\to M_2$ is a [*$k$-Poisson morphism*]{} if, for all $x\in M_1$, $\phi^*(D_2|_{\phi(x)})\subseteq D_1|_x$ and $d\phi(\lambda_1(\phi^*\alpha))=\lambda_2(\alpha)$, for all $\alpha\in D_2|_{\phi(x)}$.
Some examples and final remarks {#sec:examples}
===============================
We now give some examples of $k$-Poisson structures. The first two examples are from [@Zambon].
\[ex:multisymp\] Let $\omega\in \Omega^{k+1}(M)$ be a $k$-plectic form. Then its graph $$L=\{(X,i_X\omega),\; X\in TM\} \subset {\mathbb{T}M}^{(k)}$$ satisfies $L=L^\perp$ and is involutive (as a consequence of $\omega$ being closed, see [@Zambon Prop. 3.2]). Also, $L^\perp\cap TM = L\cap TM = \ker(\omega)=\{0\}$ by nondegeneracy. In terms of Prop. \[prop:D\], $D=\mathrm{Im}(\omega^\sharp)$ and $\lambda= (\omega^\sharp)^{-1}:D\to TM$. So, just as any symplectic structure is a Poisson structure, any $k$-plectic form is a particular type of $k$-Poisson structure. A $k$-plectic groupoid integrating this $k$-Poisson structure is the pair groupoid $M\times
M$, with $k$-plectic structure $p_1^*\omega - p_2^*\omega$ where $p_i$, $i=1,2$, denote the two natural projections from $M\times M$ to $M$.
Considering a $k$-plectic groupoid ${\mathcal{G}}\toto M$ with the $k$-Poisson structure of Example \[ex:multisymp\], one may use to check that the target map ${{\mathsf{t}}}: {\mathcal{G}}\to M$ is a $k$-Poisson morphism, extending the well-known property of symplectic groupoids, see Section \[sec:poisson\].
We saw in Section \[subsec:k1\] that Poisson bivector fields are the same as 1-Poisson structures. Other types of higher Poisson structures are obtained from top-degree multivector fields as follows.
\[ex:multivector\] Let $\pi \in \Gamma(\wedge^{k+1}TM)$ be a multivector field of top degree, i.e., $k=\dim(M)-1$. Then its graph $$L=\{(i_\alpha\pi,\alpha)\;|\; \alpha\in \wedge^kT^*M\}\subseteq
{\mathbb{T}M}^{(k)}$$ is isotropic and involutive – and, besides Poisson bivector fields, these are the only examples of non-zero multivector fields whose graphs have these properties, see [@Zambon Prop. 3.4]. Also, since ${{\mathrm{pr}}}_2(L)=\wedge^k T^*M$, it is clear that ${{\mathrm{pr}}}_2(L)^\circ =
L^\perp\cap TM =\{0\}$, so $L$ is a $k$-Poisson structure. The foliations defined by these $k$-Poisson structures are usually singular: leaves are either open subsets of $M$ or singular points (where $\pi$ vanishes). The restriction of $\pi$ to each open leaf is nondegenerate, and the induced $(k+1)$-forms $\omega$ on these leaves (see ) are the volume forms dual to $\pi$, i.e., they are defined by $i_{(i_\alpha \pi)} \omega = \alpha,
\forall \alpha \in \wedge^kT_x^*M$. The groupoids integrating these $k$-Poisson structures have been mostly studied when $\dim(M)=2$ (so $\pi$ is a bivector field), see [@GL; @Mart].
The fact that the particular $k$-Poisson structures of Examples \[ex:multisymp\] and \[ex:multivector\] are infinitesimal versions of $k$-plectic groupoids was observed in [@Zambon Prop. 3.7].
In the preceding examples, the bundle $L$ always satisfied $L=L^\perp$. For examples where this condition fails, consider subbundles $$\label{eq:Lk}
L\subseteq \wedge^kT^*M \subset {\mathbb{T}M}^{(k)}.$$ These are automatically isotropic and involutive. Note that $$L^\perp
= L^\circ \oplus \wedge^k T^*M, \;\; \mbox{ and }\;\;
L^\perp \cap TM = L^\circ.$$ So $L$ is a $k$-Poisson structure as long as $L^\circ=\{0\}$, and $L
\subsetneq L^\perp$ as long as $L$ is properly contained in $\wedge^kT^*M$. A $k$-plectic groupoid integrating it is $L$ itself, viewed as a vector bundle (with groupoid structure given by fibrewise addition), equipped with the $k$-plectic form given by the pullback of the canonical multisymplectic form on $\wedge^kT^*M$ (see ); the fact that this pullback is nondegenerate boils down to the condition $L^\perp \cap TM=L^\circ = \{0\}$.
\[ex:T\*k\] For $L=\wedge^k T^*M$, note that $L^\circ=\{0\}$ (and hence $L$ is a $k$-Poisson structure on $M$) if and only if $\dim(M)\geq k$.
Let $\xi$ be a nondegenerate $k$-form on $M$, and let $L\subset \wedge^kT^*M$ be the line bundle generated by $\xi$, $$L|_x=\{c\xi_x \; | \; c\in \mathbb{R}\},\;\;\; x\in M.$$ Then $L^\circ = \ker(\xi)=0$, so $L$ is a $k$-Poisson structure.
\[rem:fol2\] Note that all $k$-Poisson structures of the type determine the same foliation, the leaves of which are the points of $M$.
A general observation is that one can take direct products of $k$-Poisson structures: if $L_1$ and $L_2$ are $k$-Poisson structures on $M_1$ and $M_2$, respectively, we define their product by $$L := \{ (X+ Y, \alpha + \beta)\;|\; (X,\alpha)\in L_1,\,
(Y,\beta)\in L_2 \} \subseteq TM\oplus \wedge^kT^*M,$$ where $M=M_1\times M_2$ and we simplify the notation by identifying forms on $M_i$ with their pullbacks to $M$ via the projections. One may directly verify that $L$ is a $k$-Poisson structure on $M$. Moreover, if $({\mathcal{G}}_i\toto M_i,\omega_i)$ is a $k$-plectic groupoid integrating $L_i$, $i=1,2$, the direct product ${\mathcal{G}}_1\times {\mathcal{G}}_2\toto M_1\times M_2$ (equipped with the $k$-plectic form $\omega_1 + \omega_2$) is a $k$-plectic groupoid that integrates $L$. The following is a concrete example.
\[ex:prod\] Let $(M,\omega)$ be a $k$-plectic manifold, and let $N$ be a manifold with $\dim(N)\geq k$. Then the subbundle $$L=\{(X, i_X\omega +\alpha)\;|\; X\in TM, \alpha\in \wedge^k T^*N\}
\subset T(M\times N)\oplus \wedge^k T^*(M\times N)$$ is a $k$-Poisson structure on $M\times N$ (c.f. [@Zambon Thm. 3.12]), the direct product of the $k$-plectic form on $M$ with the $k$-Poisson structure $L=\wedge^kT^*N$ on $N$ (see Example \[ex:T\*k\]). The leaves of $L$ are $M\times \{t\}$, $t\in N$, with induced $(k+1)$-form (as in ) given by $\omega$.
The next observation illustrates that $k$-Poisson structures become more rigid than Poisson structures when $k>1$.
Let $M$ and $N$ be as in Example \[ex:prod\], let $f\in
C^\infty(N)$, and consider the smooth family $\omega_t =
f(t)\omega$, $t\in N$, of $k$-plectic forms on $M$. For $k=1$, this family defines a Poisson structure on $M\times N$, uniquely determined by the fact that its symplectic leaves are $(M\times
\{t\},\omega_t)$. A higher generalization of this Poisson structure is given by the (isotropic) subbundle $L\subset T(M\times N)\oplus
\wedge^k T^*(M\times N)$ defined by $$L|_{(x,t)}=\{(X, i_X\omega_t +\alpha)\;|\; X\in T_xM, \alpha\in
\wedge^k T_t^*N\}.$$ As it turns out, for $k>1$, one may verify that such $L$ is involutive if and only if $df=0$, i.e., $f$ is (locally) constant.
We finally mention another product-type operation for multisymplectic manifolds leading to higher Poisson structures that are not multisymplectic.
Let $(M_i,\omega_i)$ be a $k_i$-plectic manifold, $i=1,2$. Let $M=M_1\times M_2$ and $\omega = \omega_1 \wedge \omega_2 \in
\Omega^{k_1+k_2 + 2}(M)$ (we keep the simplified notation of identifying forms on $M_i$ with their pullbacks to $M$ via the projections $M\to M_i$). Then $$L =\{(X,i_X\omega)=(X,(i_X\omega_1)\wedge \omega_2) \;|\; X\in
TM_1\} \subset TM\oplus \wedge^{k_1+k_2+1}T^*M$$ can be checked to be a $(k_1+k_2+1)$-Poisson structure. Its leaves are of the form $M_1\times \{y\}$, for $y\in M_2$, and the induced $(k_1+k_2+2)$-form on each leaf is zero. An integrating $k$-plectic groupoid is given by the direct product of the pair groupoid $M_1
\times M_1$ (see Example \[ex:multisymp\]) and the trivial groupoid over $M_2$, endowed with the multiplicative $(k_1+k_2 +
2)$-form given by $(p_1^*\omega_1 - p_2^*\omega_1)\wedge \omega_2$.
[99]{}
Arias Abad, C., Crainic, M., The Weil algebra and the Van Est isomorphism. [*Ann. Inst. Fourier (Grenoble)*]{}, [**61**]{} (2011), 927–970.
Baez, J., Hoffnung, H., Rogers, C., Categorified symplectic geometry and the classical string, [*Comm. Math. Phys.*]{} [**293**]{} (2010), 701–715.
Bates, S., Weinstein, A., [*Lectures on the geometry of quantization*]{}, Berkeley Mathematics Lecture Notes, 8. American Mathematical Society, Providence, RI; Berkeley Center for Pure and Applied Mathematics, Berkeley, CA, 1997.
Bursztyn, H., Cabrera, A., Multiplicative forms at the infinitesimal level. [*Math. Ann.*]{} [**353**]{} (2012), 663–705.
Bursztyn, H., Crainic, M.: Dirac structures, moment maps and quasi-Poisson manifolds. In [*The breadth of symplectic and Poisson geometry*]{}, Progr. Math 232, Birkhauser, 2005, 1–40.
Bursztyn, H., Crainic, M., Weinstein, A., Zhu, C., Integration of twisted Dirac brackets, [*Duke Math. J.*]{} [**123**]{} (2004), 549–607.
Cannas da Silva, A., Weinstein, A., [*Geometric models for noncommutative algebras*]{}. Berkeley Mathematics Lecture Notes, 10. American Mathematical Society, Providence, RI; Berkeley Center for Pure and Applied Mathematics, Berkeley, CA, 1999.
Cantrijn, F., Ibort, A., de Leon, M., Hamiltonian structures on multisymplectic manifolds. [*Rend. Sem. Mat. Univ. Pol. Torino*]{}, [**54**]{} (1996), 225–236. Geom. Struc. for Phys. Theories, I.
Cantrijn, F., Ibort, A., de Leon, M., On the geometry of multisymplectic manifolds. [*J. Austral. Math. Soc.*]{} Ser. A, [**66**]{} (1999), 303–330.
Cattaneo, A., Felder, G., [*Poisson sigma models and symplectic groupoids*]{}. Quantization of singular symplectic quotients, 61–93, Progr. Math., [**198**]{}, Birkhauser, Basel, 2001.
Coste, A., Dazord, P., Weinstein, A., *Groupoïdes symplectiques*. Publications du Département de Mathématiques. Nouvelle Série. A, Vol. 2, i–ii, 1–62, Publ. Dép. Math. Nouvelle Sér. A, 87-2, Univ. Claude-Bernard, Lyon, 1987.
Courant, T., Dirac manifolds, [*Trans. Amer. Math. Soc.*]{} [**319**]{} (1990), 631–661.
Crainic, M., Fernandes, R., Integrability of Lie brackets. [*Ann. of Math.*]{} [**157**]{} (2003), 575–620.
Crainic, M., Fernandes, R., Integrability of Poisson brackets. [*J. Differential Geom.*]{} [**66**]{} (2004), 71–137.
Dufour, J.-P., Zung, N.-T., [*Poisson structures and their normal forms*]{}, Progress in Mathematics, 242, Birkhauser Boston, 2005.
Forger, M., Paufler, C., Römer, H.: The Poisson Bracket for Poisson Forms in Multisymplectic Field Theory. [*Rev. Math. Phys.*]{} [**15**]{} (2003), 705–743.
Gotay, M., A multisymplectic framework for classical field theory and the calculus of variations. I. Covariant Hamiltonian formalism. In: [*Mechanics, analysis and geometry: 200 years after Lagrange*]{}. Elsevier, New York, 1991, 203–235.
Gotay, M., Isenberg, J., Marsden, J., Montgomery, R., Momentum Maps and Classical Relativistic Fields. Part I: Covariant Field Theory, arXiv:physics/9801019.
Gotay, M., Isenberg, J., Marsden, J., Momentum Maps and Classical Relativistic Fields. Part II: Canonical Analysis of Field Theories, arXiv:math-ph/0411032.
Gualtieri, M., Li, S.: Symplectic groupoids of log symplectic manifolds. arXiv:1206.3674.
Hélein, F., Multisymplectic formalism and the covariant phase space, in [*Variational Problems in Differential Geometry*]{}, London Mathematical Society Lecture Note Series 394, Cambridge University Press, 2012, p. 94-126.
Hitchin, N.: Generalized [C]{}alabi-[Y]{}au manifolds,[*Q. J. Math.*]{} **54** (2003), 281–308.
Hrabak, S. P., On a Multisymplectic Formulation of the Classical BRST symmetry for First Order Field Theories Part I: Algebraic Structures. arXiv:math-ph/9901012.
Iglesias Ponte, D., Marrero, J. C., Vaquero, M.: Poly-Poisson structures. arXiv:1209.4003.
Kanatchikov, I. V.: On field theoretic generalizations of a Poisson algebra, [*Reports on Math. Phys.*]{} [**40**]{} (1997), 225–234.
Kijowski, J., Szczyrba, W.: A canonical structure for classical field theories, [*Comm. Math. Phys.*]{} [**46**]{} (1976), 183–206.
Mackenzie, K., Xu, P., Integration of Lie bialgebroids. [*Topology*]{} [**39**]{} (2000), 445–467.
Madsen, T., Swann, A., Closed forms and multi-moment maps. arXiv:1110.6541.
Marsden, J. E., Patrick, G., Shkoller, S., Multisymplectic geometry, variational integrators and nonlinear PDEs, [*Comm. Math. Phys.*]{} [**199**]{} (1998), 351–395.
Marsden, J. E., Pekarsky, S., Shkoller, S., West, M., Variational methods, multisymplectic geometry and continuum mechanics, [*J. Geom. Phys.*]{} [**38**]{} (2001), 253–284.
Marsden, J., Ratiu, T.: [*Introduction to Mechanics and Symmetry*]{}, Text in Applied Mathematics, Vol. 17, Springer-Verlag, 1994.
Martinez, N.: Work in progress.
Martinez-Torres, D.: A note on the separability of canonical integrations of Lie algebroids. [*Math. Res. Lett.*]{} [**17**]{} (2010), 69–75.
Mikami, K., Weinstein, A.: Moments and reduction for symplectic groupoid actions. [*Publ. RIMS, Kyoto Univ.*]{} [**24**]{} (1988), 121–140.
Rogers, C., $L_\infty$-algebras from multisymplectic geometry, [*Lett. Math. Phys,*]{} [**100**]{} (2012), 29–50.
Vankerschaver, J., Yoshimura, H., Marsden, J., Multi-Dirac Structures and Hamilton-Pontryagin Principles for Lagrange-Dirac Field Theories. arXiv:1008.0252.
Weinstein, A., Symplectic groupoids and [P]{}oisson manifolds, [*Bull. Amer. Math. Soc. (N.S.)*]{} [**16**]{} (1987), 101–104.
Zambon, M., $L_\infty$-algebras and higher analogues of Dirac structures and Courant algebroids. [*J. of Symplectic Geom.*]{} [**10**]{} (2012), 563–599.
[^1]: E.g., in the description of the interplay between hamiltonian dynamics and symmetries [@MR], and in the transition from classical to quantum mechanics [@CW].
[^2]: Here the fibred product ${\mathcal{G}}{_{\mathsf{s}}\times_{{\mathsf{t}}}}{\mathcal{G}}= \{(g,h) \in {\mathcal{G}}\times {\mathcal{G}}\,|\,
{\mathsf{s}}(g)={{\mathsf{t}}}(h)\}$ represents the space of composable arrows.
[^3]: For a function $f\in \Omega^0({\mathcal{G}})=C^\infty({\mathcal{G}})$, condition becomes $f(gh)=f(g)+f(h)$, i.e., it says that $f$ is a groupoid morphism into $\mathbb{R}$ (viewed as an abelian group).
[^4]: See e.g. [@We87] for a nonintegrable example and [@CF2] for a discussion of obstructions to integrability.
[^5]: In the case of exact $k$-plectic manifolds, a different way to eliminate the jacobiator defect is presented in [@Forger1], based on a modification of the bracket using the $k$-plectic potential.
[^6]: We say that two IM $(k+1)$-forms $\mu_1:A_1
\to \wedge^kT^*M$ and $\mu_2:A_2 \to \wedge^kT^*M$ are [*equivalent*]{} if there is a Lie-algebroid isomorphism $\phi: A_1\to
A_2$ such that $\mu_2\circ \phi = \mu_1$; these are infinitesimal versions of isomorphism of Lie groupoids preserving multiplicative forms.
---
abstract: 'We present high spatial resolution (FWHM $\approx$ 0.3–0.8) BIHK[$^{\prime}$]{}-band imaging of a sample of ultraluminous infrared galaxies (L$_{\rm ir} >10^{12}\ L_{\sun}$; ULIGs) with “cool” mid-infrared colors ([*f*]{}$_{\rm 25\mu m}$/[*f*]{}$_{\rm 60\mu m} < 0.2$) which select against AGN-like systems and which form a complementary sample to the “warm” ULIGs of Surace et al. (1998). We find that all of the cool ULIGs are either advanced mergers or are pre-mergers with evidence for still-separate nuclei with separations greater than 600 pc. Extended tidal features such as tails and loops as well as clustered star formation are observed in most systems. This extended tidal structure suggests a common progenitor geometry for most of the ULIGs: a plunging disk collision where the disks are highly inclined with respect to each other. The underlying host galaxies have H-band luminosities of 1–2.5 [*L*]{}$^*$, very similar to that found in the “warm” ULIGs. The nuclear regions of these galaxies have morphologies and colors characteristic of a recent burst of star formation mixed with hot dust and mildly extinguished by [*A*]{}$_{\rm V}$=2–5 magnitudes; only in one case (IRAS 22491$-$1808) is there evidence for a compact emission region with colors similar to an extinguished QSO. Most of the observed star-forming knots appear to have very young (10 Myr) ages based on their optical/near-infrared colors. These star-forming knots are typically insufficiently luminous to provide more than 10% of the high bolometric luminosity of the systems.'
author:
- 'Jason A. Surace'
- 'D. B. Sanders'
- 'A.S. Evans'
title: 'High Resolution Optical/Near-Infrared Imaging of Cool Ultraluminous Infrared Galaxies'
---
Accepted for Publication in [*The Astrophysical Journal*]{}
Introduction
============
One of the most important results from the [*Infrared Astronomical Satellite*]{}[^1] ([*IRAS*]{}) all-sky survey was the discovery of a significant population of galaxies that emit the bulk of their luminosity in the far-infrared (e.g. Soifer et al. 1984). Studies of the properties of these “infrared galaxies” showed systematic trends coupled to the total far-infrared luminosity; more luminous systems were more likely to appear to be merger remnants or interacting pairs, and were more likely to possess AGN-like emission line features. A more complete review of the properties of luminous infrared galaxies is given by Sanders & Mirabel (1996). Much attention has been focused on so-called ultraluminous infrared galaxies (ULIGs), objects with infrared luminosities, $L_{\rm ir}$,[^2] greater than $10^{12}\ L_{\sun}$, which corresponds to the bolometric luminosity of QSOs [^3] (assuming the blue luminosity criterion $M_{\rm B} < -22.1$, adjusting for our adopted cosmology: Schmidt & Green 1983). Multiwavelength observations of a complete sample of 10 ULIGs led Sanders et al. (1988a) to suggest that these objects might plausibly represent the initial dust-enshrouded stage in the evolution of optically selected QSOs, and that the majority, if not all, QSOs may begin their lives in such an intense infrared phase.
Considerable attention has been devoted to so-called “warm” systems, which have mid-infrared colors $f_{25}/f_{60} > 0.2$. [^4] Sanders et al. (1988b) found that these systems predominantly have AGN optical spectra, very large molecular gas masses (M$_{\rm H_2} \sim 10^{10}
M_\odot$), and advanced merger morphologies, and postulated that they represented the immediate transition phase between ULIGs and optically-selected QSOs.
An examination of deep far-infrared flux-limited samples such as the $f_{60} = 1$ Jy survey (Kim & Sanders 1998), however, shows that the majority (80%; 90/115) of ULIGs are “cool” systems (i.e., [*f*]{}$_{25\mu m}$/[*f*]{}$_{60\mu m} < 0.2$). Thus, these galaxies are similar to the majority of ULIG systems previously studied by others (i.e. Sanders et al. 1988a, Kim 1995), rather than the smaller fraction of “warm” AGN-like systems like those discussed by Sanders et al. (1988b), Surace et al. (1998; hereafter Paper I) and Surace & Sanders (1999a; hereafter Paper II). Results derived in this paper for a sample of “cool” ULIGs are therefore likely to reflect the properties of ULIGs as a whole. Paper II discussed the possibility that the warm ULIGs are a transition state between cool ULIGs and QSOs. If this is true, then the cool ULIGs are expected to have properties similar to, yet less evolved than, the warm sample.
Many of the cool ULIGs have been imaged before at optical (Sanders et al. 1988, Kim 1995, Murphy et al. 1996) and near-infrared (Carico et al. 1990, Kim 1995, Murphy et al. 1996) wavelengths. However, these observations suffered from poor spatial resolution (FWHM $\geq$ 1.0) and lack of depth. Their wavelength coverage was limited predominantly to R and K-band, and was insufficient to disentangle reddening effects from intrinsic colors. Finally, several of the objects in the cool sample presented here have never been imaged before.
We present here new multiwavelength observations with 1.5 and 4$\times$ the spatial resolution of previous ground-based observations at optical and near-infrared wavelengths; despite being ground-based, they allow us to isolate interesting features such as the star-forming knots detected in the warm ULIG sample.
The Sample
==========
A sample of 18 “cool” (i.e., [*f*]{}$_{\rm 25\mu m}$/[*f*]{}$_{\rm 60\mu m} < 0.2$) ULIGs was drawn from the [*IRAS*]{} Bright Galaxy Sample of Sanders et al. (1988) as well as the IRAS 1-Jy sample (Kim & Sanders 1998). The cool ULIG sample was chosen to complement the samples of “warm” ULIGs and infrared-excess PG QSOs as part of a study of the possible evolutionary connection between ULIGs and optically-selected QSOs (Surace 1998). A key observational fact known from spectroscopic studies of the larger parent samples is that cool ULIGs, warm ULIGs, and PG QSOs represent a spectroscopic sequence that ranges from objects whose distribution of spectral types is biased towards H II-like spectra (HII-40%, LINER-40%, Sy2-20%; e.g. Veilleux, Kim, & Sanders 1999), to objects dominated by Seyferts (HII-10%, LINER-20%, Sy2-40%, Sy1-30%; Veilleux et al. 1997), and finally to optically-selected Sy 1s (Schmidt & Green 1983; i.e. part of the definition of QSOs). The 18 cool ULIGs discussed here have a similar distribution of spectral types (HII-54%, LINER-35%, Sy2-11%) as their parent sample. The “warm” sample has been discussed previously in Papers I & II, and the infrared-excess PG QSOs are the subject of a forthcoming paper.
All of the “cool” ULIGs have been chosen to lie within the volume [*z*]{} $<$ 0.16. This is the same volume limit as the “warm” ULIG sample of Paper I, and is very close to the completeness limit for ULIGs in deep [*IRAS*]{} surveys. Also, this is sufficiently nearby that the spatial resolution achievable from the ground can provide information on scales known from Papers I & II to be physically meaningful (typically a few hundred parsecs). Since there are over 100 such ULIGs known, this sample was selected first to include the original 7 cool ULIGs in the BGS sample, as these are most well-studied. The remaining cool ULIGs were selected such that their redshift distribution was similar to the “warm” ULIG and PG QSO samples of Sanders et al. (1988b) and Surace (1998). Specific objects were randomly chosen to lie in regions of the sky more amenable to observation from Mauna Kea, and to ameliorate crowding of the observing program in spring. Since this selection criterion is unrelated to the physical properties of the ULIGs, it should not bias the sample.
Observations and Data Reduction
===============================
The data were taken between October 1995 and March 1998 at the f/31 focus of the UH 2.2m telescope using a fast tip/tilt image stabilizer. This image stabilizer consists of a piezo-driven secondary with a pick-off mirror and guider CCD for guide-star acquisition. It is described by Jim (1995) and Pickles et al. (1994), and was used for the observations of Papers II & III. When used in off-axis guiding mode, it eliminates common-mode vibration of the telescope, as well as some seeing effects. This results in near-diffraction limited imaging (FWHM $\approx$ 0.3) in the near-infrared most of the time, since the seeing at the UH 2.2m site is extremely good. The system is not as effective at optical wavelengths, where the seeing is much poorer. The spatial resolution at I-band is 0.5–1, with 0.75 being typical. At B-band it is usually 1.
The near-infrared data were taken with the QUIRC 1024$\times$1024 HgCdTe camera in a manner identical to that of the “warm” ULIG sample in Paper II. The observations were made at H (1.6$\mu$m) and K[$^{\prime}$]{}(2.1$\mu$m). The H filter was chosen since it is the longest wavelength filter which is still relatively unaffected by thermal dust emission; dust hot enough to emit significantly at this wavelength would be above the dust sublimation temperature. The choice of the University of Hawaii K[$^{\prime}$]{}filter, which is bluer than both the Johnson K and the 2MASS K$_S$, was motivated by the lower thermal sky background in the K[$^{\prime}$]{}-band (Wainscoat & Cowie 1992). This improves the detectability of faint features such as star-forming knots. Throughout this paper we exclusively refer to K[$^{\prime}$]{}. Comparison to work by other authors is made using the conversion of Wainscoat & Cowie (1992).
The near-infrared data were reduced in the same manner as that described in Paper II. The data were initially sky-subtracted using consecutive, dithered frames; because the QUIRC field of view is so large (60), it was possible to dither the target on-chip, thereby increasing telescope efficiency by a factor of 2. The images were then flattened using median flats constructed from images of the illuminated dome interior. Each image was masked by hand to exclude bad pixels and regions contaminated by negative emission introduced by the sky subtraction. The images were aligned using the IMALIGN task in IRAF, which uses a marginal centroiding routine that calculates a best fit solution to a number of (user-supplied) reference stars in the field. Typical alignment errors were estimated (on the basis of the fit) to be about 0.25 pixels. Given that the data were typically sampled by 5 pixels FWHM for a point source, alignment errors are unlikely to be important. Images were scaled according to their exposure times and then, in order to account for any variable sky background, an offset was subtracted from each image based on the background measured in that frame. The images were combined by medianing using IMCOMBINE and rejecting pixels outside the linear regime of the array.
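The sequence of steps above (sky subtraction from the other dithered frames, exposure-time scaling, removal of a per-frame residual background offset, and median combination) can be sketched in NumPy as follows. This is an illustrative reimplementation under simplifying assumptions (no bad-pixel masks, frames already aligned), not the IRAF pipeline actually used; the function name `reduce_nir_stack` is hypothetical.

```python
import numpy as np

def reduce_nir_stack(frames, exptimes):
    """Illustrative sketch of the near-infrared reduction described in the
    text (hypothetical helper, not the authors' IRAF pipeline)."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    reduced = []
    for i, (img, t) in enumerate(zip(frames, exptimes)):
        # Sky estimate from the median of the other dithered frames
        # (the target moves on-chip, so it largely medians away).
        sky = np.median([f for j, f in enumerate(frames) if j != i], axis=0)
        out = (img - sky) / t      # sky-subtract and scale by exposure time
        out -= np.median(out)      # remove any residual variable background
        reduced.append(out)
    # Median combination rejects remaining outliers.
    return np.median(reduced, axis=0)
```

With many dither positions the sky estimate improves, since each source occupies a given pixel in only a small fraction of the frames.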
The optical data were taken with several different instrument configurations. The UH Tektronix 2048 and Orbit 2048 cameras were used at the f/31 focus of the UH 2.2m. In both cases, the data were binned 2$\times$2 to provide better spatial sampling (with adopted binned pixel sizes of 0.14$''$ and 0.09$''$, respectively). The Orbit 2048 was also used in conjunction with the HARIS spectrograph; by withdrawing the dispersion components it was possible to image through the spectrograph. The telescope f/31 beam was reimaged at f/10, resulting in an unbinned image scale of 0.14$''$ pixel$^{-1}$.
The optical data reduction involved several steps. First, the CCD bias pattern was removed by subtracting from each image a high S/N median bias frame constructed from sequences of 20–30 bias frames taken at the beginning and end of each night. Pixel-to-pixel response variations were then corrected by dividing each image by a high S/N flat produced by making dithered observations of the twilight sky in each filter. Typical twilight exposures were 2–3 seconds each, short enough to avoid detectable flux from field stars, yet long enough to avoid flat-field errors introduced by the radial shutter used at the UH 2.2m. The estimated S/N of the flats (based on Poisson statistics and the gain of the CCD) was between 250 and 500. Neither CCD showed any evidence of measurable dark current, based on an examination of long closed-shutter exposures. The images were then corrected to normal orientation by transposition and rotation using the ROTATE task in IRAF, based on the known field rotation of the Cassegrain focus of the UH 2.2m, which is accurate to better than 1 degree. The CCD overscan regions were trimmed using IMCOPY. The images were shifted and aligned using the method detailed above for the near-infrared data. The images were then averaged using an algorithm that rejects pixels inconsistent with the known noise properties of the CCD, allowing for rejection of cosmic rays. The shifted images were combined onto an image larger than the original data frames, thereby increasing the total field of view covered by the dithering process. This was valuable primarily because it increased the availability of PSF stars, since the size of the camera focal plane was much larger than the measurable extent of any of the galaxies.
In some cases the telescope Cassegrain focus was rotated during the night in order to acquire brighter guide stars, thereby allowing faster guide rates and improving the quality of the tip/tilt guiding. Additional flats were made, whenever feasible, at the rotation angles used. This was necessary because changes in the illumination of dust and other defects in the telescope optics tended to produce notable changes in the flat-field response, resulting in strong background gradients in the vicinity of such defects.
The data were calibrated through observations of standard stars in the optical and near-infrared (Landolt 1983, 1992; Elias et al. 1982). Typically, 4–5 standard stars were observed throughout the night at airmasses similar to those of the science targets. The stars were interleaved with the science targets; this strategy also enabled them to be used for refocusing the telescope. In most cases the nights were photometric, with 1$\sigma$ uncertainties in the photometric calibration of 0.05 magnitudes. For non-photometric nights, the data were calibrated by forcing agreement with large-aperture photometry already in the literature. In particular, the optical data for UGC 5101 were calibrated using Sanders et al. (1988a), while the IRAS 00091-0738 and IRAS 01199-2307 data were calibrated using the photometry of Kim (1995). K-corrections have not been applied to any magnitude reported in this paper. Since the most distant object is at redshift [*z*]{}=0.16, K-corrections are likely to be quite small. For a 10 Myr old starburst of the sort discussed below at the median redshift, the K-corrections are $\delta_M$(B,I,H,K[$^{\prime}$]{})=(0.12,0.13,0.07,-0.11), and for a 2 Gyr old starburst population they are $\delta_M$(B,I,H,K[$^{\prime}$]{})=(0.61,0.19,0.08,-0.16). Kim (1995) and Trentham et al. (1999) computed K-corrections based on very large aperture spectra for ULIGs at redshifts similar to those of this sample, and found that optical K-corrections were typically less than 0.25 magnitudes, and near-IR K-corrections less than 0.1 magnitudes. To make it easier to compare the measurements in this paper to those of others, magnitudes are presented without the uncertainty introduced by an assumed spectral shape. While the bandpass compression term is known for each ULIG, a bandpass-compression-only “pseudo K-correction” is omitted, since the confusion it would create outweighs the very small advantage of including it.
The point-spread-function (PSF) was calibrated with actual stars in the final combined science image using DAOPHOT in the manner described in Paper II. All of the stars were identified, scaled, shifted, and combined using a sigma-clipping algorithm and weighting according to total flux, thus creating as high a S/N PSF image as possible. In those few cases where no stars were found in the science images, the PSF was estimated by using the closest temporally adjacent standard star. Since the tip/tilt guiding has little effect on atmospheric distortions at short wavelengths, this technique works well for optical data. Similarly, since the seeing remains stable on timescales of many minutes, this technique is also effective in the near-infrared.
Additionally, some of the cool ULIGs have been observed by [*HST*]{}/WFPC2 through the F814W ([*I*]{}-band) filter as part of the Borne et al. (1997) ULIG snapshot survey and are publicly available from the [*HST*]{} archive (see Table 1). These data were reduced in the same manner as the WFPC2 data in Paper I, and were included primarily for comparison with the ground-based data. Since they cover only a single wavelength corresponding directly to one of our ground-based filters, they cannot easily be used for the multiwavelength color analysis presented here, and their much higher spatial resolution prevents a direct comparison without significant aperture effects.
In two cases we do not have complete data. Near-infrared data were not taken of Arp 220 because it had already been observed with [*HST/NICMOS*]{} (Scoville et al. 1998); these data have been retrieved for use here. Mrk 273 was not observed in the near-infrared because it had already been observed with adaptive optics at CFHT (Knapen et al. 1997).
As in Paper II, in some cases high spatial resolution techniques such as deconvolution were applied to the data in order to enhance the detectability of features. The Richardson-Lucy algorithm implemented in IRAF/STSDAS was used along with the data-derived PSFs, and was allowed to iterate 20–50 times, until noticeable artifacts appeared. Previous use of this technique has shown the recovered structure to be reliable; in the case of the 6 systems with data from WFPC2 this was checked directly. Magnitudes and colors were not derived from the deconvolved data; rather, the deconvolved data were used to clarify morphological details. The actual photometry was measured from the raw data using aperture photometry and PSF-derived aperture corrections. For further details, see Surace (1998).
Results
=======
Morphology
----------
### Large-Scale Features
Images of each ULIG in the “cool” sample in each of the four observed filters are presented in Figure 1. Figure 2 presents near-truecolor images constructed from the optical data, following Surace et al. (1998). The [*B*]{}- and [*I*]{}-band images were linearly interpolated to provide the color information, and the resulting truecolor images should appear similar to the actual colors perceived by the eye. [^5] Despite its higher spatial resolution, the near-infrared data show smoother, less structured morphology; therefore a near-infrared color image was not included.
The cool ULIGs exhibit a wide variety of morphologies, the details of which are tabulated in Table 2. At least 6 of the 14 systems (43%) have obvious double galaxy nuclei, as evidenced by the manner in which tidal tails and spiral structure are centered on high surface brightness, extended emission regions. This is similar to the value of 47% found by Murphy et al. (1996) for a sample of 53 ULIGs that were not selected by far-infrared color. However, an additional 4 systems [*may*]{} have double nuclei, which would bring the total double-nucleus fraction to as high as 72% (10/14).
Projected separations of the definite double nuclei span an order of magnitude, from 25 kpc in IRAS01199$-$2307 to just 2.5 kpc in IRAS22491$-$1808. In all cases the double nucleus systems have extended tidal structure, indicating that these systems have already passed initial perigalacticon and are now in an advanced merger state. None of the systems which appeared to have single nuclei in previous observations were found to have double nuclei when observed at higher spatial resolution. This may be an effect of confusion: as noted in Paper I and illustrated by the cautionary tale of Mrk 231 (Armus et al. 1994), it is difficult to differentiate true galactic nuclei (spheroidal bulge remnants) from unresolved aggregates of luminous, dusty young star-forming regions. Conversely, at the median redshift of the sample, typical spatial resolutions of 0.4$''$ at K[$^{\prime}$]{}are $\approx$ 600 pc, so at least a few systems with projected separations of 0.5–2 kpc should have been detectable. Even in the most confused of the cool ULIGs (IRAS22491$-$1808), the two galaxy nuclei can be clearly differentiated from the star-forming clusters on the basis of their K[$^{\prime}$]{}emission. Furthermore, the extremely high spatial resolution HST observations of three of the single nucleus objects also fail to detect additional nuclei. Several apparently single nucleus systems have extended optical nuclear regions with apparent bifurcating dust lanes, very similar in appearance to the bifurcated core of IRAS05189$-$2524. It is possible that the original progenitor nuclei, if they have not already coalesced, lie in this chaotic central region: they are not found anywhere else in the system, and it seems unlikely that either heavy dust obscuration or a remarkable alignment along the line of sight could hide them. In the case of the nearest such system, Arp 220, the nuclei are indeed known to be hidden within the optical structure.
The failure to discover any additional systems with previously unknown double nuclei, despite the much higher spatial resolution of the near-infrared observations, is similar to the result found in Paper I for warm ULIGs newly observed by [*HST*]{}. In order to evolve from the very widely separated systems to the single nucleus merger systems, the ULIGs must pass through a stage where the nuclei are separated by 0.5–2.5 kpc. If we assume a uniform distribution of physical separations between 25 kpc and fully merged systems, then the probability of selecting at random a sample with as few small-separation systems as observed here is much less than 1% (based on Monte Carlo simulations). This implies a bimodal distribution of separations for the underlying population of ULIGs. It may indicate that the timescale for final coalescence of the nuclei is comparatively short, so that there is a natural depletion of intermediate-separation systems. Alternatively, the ULIG luminosity selection criterion may be selecting two different populations: systems that are ultraluminous at large separations, and systems that are ultraluminous only after merger.
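The kind of Monte Carlo calculation invoked above can be sketched as follows. This is illustrative only: the paper does not state its simulation details, so the sample size, separation window, and uniform-separation assumption below are placeholders, and the sketch simply estimates how often a random sample would contain at most a given number of intermediate-separation systems.

```python
import numpy as np

def prob_few_small_separations(n_systems, window=(0.5, 2.5), max_sep=25.0,
                               max_hits=0, n_trials=100_000, seed=1):
    """Monte Carlo estimate of the chance that a sample of n_systems
    mergers, with projected separations drawn uniformly on [0, max_sep]
    kpc, contains at most max_hits systems inside the given
    intermediate-separation window (all numbers illustrative)."""
    rng = np.random.default_rng(seed)
    seps = rng.uniform(0.0, max_sep, size=(n_trials, n_systems))
    hits = ((seps >= window[0]) & (seps <= window[1])).sum(axis=1)
    return (hits <= max_hits).mean()
```

For a uniform distribution the result should track the analytic binomial probability $(1 - w/s_{\max})^{n}$ of zero hits, where $w$ is the window width; the estimate converges on that value as the trial count grows.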
None of the cool ULIGs shows evidence for the same kind of bright, extremely compact AGN-like nuclei as are found in several of the warm ULIGs such as Mrk 231. However, the [*HST*]{} images of Mrk 273 and UGC 5101 (Figure 3) reveal that both of these systems have single, very compact (radius $\approx$ 100 pc) high surface brightness features at [*I*]{}-band in their nuclear regions. Because there is also considerable structure in the nuclei of these systems, our ground-based [*I*]{}-band images with limited spatial resolution cannot spatially separate these central nuclei from their luminous surroundings. Similarly, both our ground-based images and the [*HST*]{} data show the presence of a nearly unresolved object in IRAS 12112+0305. IRAS23365+3604 also appears to have very compact nuclear structure, although it also has one of the lowest percentages of its total emission in its central 2.5 kpc region. These results point to the presence of compact “nuclei” in a small fraction (28%) of the cool ULIGs that are morphologically similar to the most extended, low surface brightness “nuclei” in the warm ULIGs, such as IRAS 12071$-$0444 and IRAS 15206+3342. Three of these four cool ULIGs are single nucleus systems (Mrk 273, UGC 5101, and IRAS23365+3604), and all four have considerable numbers of star-forming knots visible near their nuclei in the [*HST*]{} data; all of this is suggestive of transition objects between the greater population of cool ULIGs (e.g., IRAS 14348$-$1447) and the AGN-like compact-nucleus warm ULIGs (e.g., Mrk 231). That two of these systems (Mrk 273 and UGC 5101) are the only two cool ULIGs with known Seyfert spectra makes this explanation even more compelling (Sanders et al. 1988a; Khachikian & Weedman 1974).
All 14 systems show well-developed tidal tails and plumes. Many of these tails are curved to form circular or semi-circular ring-like structures, and it is likely that many of the linear tails are similar structures seen edge-on. The tails have total projected lengths averaging 35 kpc and ranging from 9 kpc to 100 kpc; the ringlike structures are generally 15-30 kpc in radius. In almost every case, these tidal features appear to be circular rings or ring segments oriented nearly perpendicular to each other. In addition to these obvious cases where the disks and tails lie in a plane either parallel or orthogonal to the projected plane of the sky, many of the other galaxy systems can be explained as projections of this same geometry combined with additional rotation. In many cases it appears that the progenitor galaxies may have been rotating in opposite directions, judging from the opposing opening angles of the tidal tails, which have presumably inherited the angular momentum of their progenitors (e.g., IRAS 01199$-$2307, IRAS 23233+0946). Figure 4 shows an n-body simulation taken from Barnes (1992). The upper left panel shows the model disk encounter geometry, which may be similar to that of many of the cool ULIGs, while the bottom right panel shows the cool ULIG UGC 5101 at I-band for comparison.
### Star-Forming Knots and Small-Scale Structure
Many of the systems show evidence of the same clustered star formation seen in the “warm” ULIGs. Only four show no obvious high surface brightness compact knot-like features. Since these are the four systems at the highest redshifts ([*z*]{} $ > $ 0.13), this may be an effect of the limited spatial resolution achievable from the ground; however, the physical spatial resolution for IRAS00091$-$0738 is nearly as poor at [*z*]{} = 0.12, yet it shows considerable evidence for star-formation. Moreover, these systems are 4 of the 6 double nucleus systems and 3 of them have the largest projected separations, which may indicate a lack of clustered star formation in mergers with large nuclear separations. This is similar to other results which indicate that there may be a delay in star formation until some time after the first contact between galaxies (Joseph et al. 1984, Surace & Sanders 1999b). The role of star formation as characterized by far-infrared activity in very widely separated pairs is discussed widely in the literature (Bushouse et al. 1988, Haynes & Herter 1988, Surace et al. 1993). Figure 5 illustrates Richardson-Lucy deconvolved data for these star-forming regions. Figure 3 presents images of the [*HST*]{}/WFPC2 data for our sample objects, which can then be compared to the ground-based images in Figure 5.
In order to be recognized as real features in the deconvolved images, the small-scale structure must at least be recognizable in the undeconvolved images. This provides a means of discriminating real features from the amplified, highly correlated noise that produces the “mottling” effect in the deconvolved data. As in Paper I, the knots are defined as compact emission sources with closed isophotes that are more than 3$\sigma$ above the local background in the undeconvolved images. This distinction is made in an attempt to discriminate between the “knots”, which appear to be compact bursts of star formation, and the more extended “condensations”, which appear to be a result of large-scale tidal structure. Examples of condensations can be seen in the southern arc of IRAS 12112+0305 and the western tail of UGC 5101. As was noted in Paper I, it is likely that all of the “knots” are actually unresolved aggregates of star-forming clusters like those seen in other, more nearby interacting galaxies. Given the poorer spatial resolution of the ground-based images (as opposed to those from [*HST*]{}), many of the star-forming knots are likely to be even more confused than in Paper I, and this limits our ability to recognize them. Because of this, the analyses of the luminosity functions, etc., carried out in Paper I cannot be meaningfully repeated here. Table 4 gives aperture photometry for the star-forming knots that could actually be recognized as such; it probably misses many more. It also does not list photometry for features that appeared to be more extended tidal structure, although in many cases there appears to be tidal structure in the form of arms and wisps even at very small scales. Details of this additional structure are discussed in §4.2. Positions are given relative to the brightest feature in I-band, which corresponds to the apparent galaxy “nucleus”.
Most of the star-forming knots are within a radius of a few kpc of the nuclei, a result suspected from previous aperture photometry studies which showed systematic color changes at small galactocentric radii (Carico et al. 1990). As in the warm ULIGs, knots and condensations are also seen along the tidal features; this is particularly apparent in IRAS00091$-$0738, UGC 5101, Mrk 273, IRAS 12112+0305, IRAS14348$-$1447, and IRAS 22491$-$1808.
Luminosities
------------
The “cool” ULIGs do not show morphological evidence for the compact putative AGN found in the “warm” ULIGs. Instead, most of the “nuclei” appear to be extended with evidence of starburst activity, based on their colors as detailed below. Furthermore, many of the nuclei have complex morphological structures similar to star-forming regions. Therefore, only the luminosities of the host galaxies and the star-forming knots are discussed since a comparison between the nuclei and AGN is unwarranted.
### Host Galaxies
Luminosities were computed using the formula given in Paper I (equation 1), which corrects for distance but does not include K-corrections. As in Paper II, we consider the H-band luminosity the best indicator of the total mass of the old stellar population, and hence of the combined mass of the merger progenitors. It is additionally much less affected by extinction than shorter wavelength observations, and K-corrections at H-band are likely to be less than 0.1 magnitude, based both on a modeled starburst population and on the empirical results of Kim (1995). Unfortunately, this is somewhat more complicated in the case of the cool ULIGs than it was for the warm ULIGs and will be for quasar host galaxies (e.g., Surace 1998; Surace & Sanders 1999a). In particular, since the warm ULIGs each appeared to contain a compact AGN-like nucleus, it was easy to subtract this nuclear component, as well as any emission from compact star-forming knots, from the extended host galaxy. In many of the cool ULIGs, there is no clear AGN component. Instead, the “nuclei” of the cool ULIGs often appear to be diffuse, extended regions with complex structure; a large fraction of the luminosity in the nuclear regions is likely to be old starlight, in which case it should not be subtracted from the global H-band luminosity.
Several approaches are considered in computing the total H-band luminosity of the host galaxies. First, the total flux at H-band, including any star-forming knots, galaxy nuclei, etc., is considered. This sets an upper limit on the host galaxy luminosity, since it necessarily includes additional young stellar population components. The integrated photometry was derived by measuring the total flux of the system in an aperture large enough to encompass the optical extent of the galaxy at a flux level below 1$\sigma$. Errors in total luminosity are typically 0.07 magnitudes. These values are given in Table 3. In this way it is found that the cool ULIGs are similar in luminosity to the warm ULIGs. In Paper II the H-band luminosity of an [*L*]{}$^*$ galaxy was estimated to be between [*M*]{}$_{\rm H}$=$-$23.8 and $-$24.1; we adopt here [*M*]{}$_{\rm H}$=$-$23.9. The cool ULIGs are found to have total luminosities ranging from [*M*]{}$_{\rm H}$=$-$23.38 (Arp 220) to [*M*]{}$_{\rm H}$=$-$24.70 (IRAS14348$-$1447, IRAS22206$-$2715), with a mean value of [*M*]{}$_{\rm H}$=$-$24.23. This range corresponds to 0.6–2.1 [*L*]{}$^*$, with a mean of 1.4 [*L*]{}$^*$ and with roughly half of the ULIGs lying between 1.5–2.5 [*L*]{}$^*$. However, there are no 5–7 [*L*]{}$^*$ systems as there were among the warm ULIGs.
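The conversion between the quoted absolute H magnitudes and multiples of [*L*]{}$^*$ is the usual magnitude–luminosity relation; a one-line sketch, adopting the [*M*]{}$^*_{\rm H}$ = $-$23.9 of the text:

```python
def l_over_lstar(m_abs, m_star=-23.9):
    """Luminosity in units of L*: L/L* = 10**(-0.4 * (M - M*)),
    with M*_H = -23.9 as adopted in the text."""
    return 10 ** (-0.4 * (m_abs - m_star))
```

With the quoted extremes, `l_over_lstar(-23.38)` gives about 0.6 and `l_over_lstar(-24.70)` about 2.1, reproducing the 0.6–2.1 [*L*]{}$^*$ range, and the mean of $-$24.23 gives about 1.4 [*L*]{}$^*$.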
This result is complicated by the finding that the colors of the nuclear regions of the cool ULIGs are consistent with a line-of-sight extinction of [*A*]{}$_{\rm V}$ = 2–5 magnitudes (see §4.3). As was indicated in Paper II for IRAS08572+3915, it is therefore likely that the unreddened global values derived above are slight underestimates. One visual magnitude of reddening corresponds to [*A*]{}$_{\rm H}$ = 0.18 magnitudes (Rieke & Lebofsky 1985). In the most extreme (hypothetical) case that all of the luminosity lay within the inner 2.5 kpc (since these reddening values were determined inside this region), dereddening would at most increase the luminosity of the galaxies by 1 magnitude at [*H*]{}. In actuality the effect is much more modest, as can be demonstrated by dereddening just the observed luminosity in the nuclear regions and adding this to the outer galaxy luminosity; typically this increases the galaxy luminosity by 0.1 magnitudes, ranging from 0.08 magnitudes in IRAS23233+0946 to 0.4 magnitudes in UGC5101. The dereddened luminosities then range from [*M*]{}$_{\rm H}$=$-$23.8 to [*M*]{}$_{\rm H}$=$-$24.9, or 0.9–2.5 [*L*]{}$^*$. Excluding the anomalous case of IRAS01003$-$2238, this is very similar to the range found for the warm ULIGs, and is consistent with the merger of two galaxies, a result also suggested by the extended morphologies of the ULIGs.
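The dereddening arithmetic above follows directly from [*A*]{}$_{\rm H}$ = 0.18 [*A*]{}$_{\rm V}$ (Rieke & Lebofsky 1985); a minimal sketch:

```python
def deredden_h(m_h, a_v, a_h_per_a_v=0.18):
    """Dereddened H magnitude for a foreground screen of a_v visual
    magnitudes, using A_H = 0.18 A_V (Rieke & Lebofsky 1985).
    Dereddening brightens the magnitude (makes it more negative)."""
    return m_h - a_h_per_a_v * a_v
```

For the [*A*]{}$_{\rm V}$ = 2–5 magnitude screens inferred in §4.3, this brightens [*M*]{}$_{\rm H}$ by only 0.36–0.9 magnitudes, consistent with the at-most $\sim$1 magnitude extreme case quoted above.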
At the other extreme, it is possible to examine the luminosity of the outer galaxy by simply excluding all of the flux in the 2.5 kpc diameter nuclear region. On average, 37% of the H-band flux lies within a radius of 1.25 kpc of the center of each ULIG, varying from 52% (IRAS00091$-$0738) to 17% (IRAS23365+3604). Excluding the central regions, the luminosity of the outer regions of these galaxies ranges from [*M*]{}$_{\rm H}$=$-$23.0 (0.4 [*L*]{}$^*$) to [*M*]{}$_{\rm H}$=$-$24.5 (1.7 [*L*]{}$^*$). Obviously, these are considerable underestimates; the previous dereddened results are probably closer to the truth. Furthermore, the fraction of H-band light inside the central regions is similar to that found in the bulges of disk galaxies (Kent 1985), further indicating that the central region contains a considerable amount of old starlight.
### Star-Forming Knots
The star-forming knots have [*B*]{}-band luminosities ranging from [*M*]{}$_{\rm B}$=$-$14.4 to [*M*]{}$_{\rm B}$=$-$18.5, similar to or slightly higher than those observed for the “warm” ULIGs. The higher values are likely to result from confusion. The more limited spatial resolution of the ground-based optical observations (typically 0.7$''$–0.8$''$ at [*B*]{}) yields a physical resolution of 1 kpc at the median redshift. Any given knot optically detected from the ground is therefore likely to contain at least several of the knots detected by [*HST*]{}. An examination of Figure 3 confirms that in some cases (i.e., Mrk 273) this occurs, while in others (IRAS 22491$-$1808) the effect may be much less. Also, in Paper I it was found that the luminosity function of the knots continued to increase down to the detection limit of [*M*]{}$_{\rm B}$=$-$12, but confusion and limited spatial resolution prevent our reaching such faint detection limits here. Finally, any differences in the ages of the stellar populations of the knots would be expected to change their luminosities by several magnitudes. The total integrated luminosity of the star-forming knots (the sum of all knot luminosities) at B-band ranges from [*M*]{}$_{\rm B}$=$-$17.2 in IRAS 00091$-$0738 to [*M*]{}$_{\rm B}$=$-$19.5 in IRAS 14348$-$1447, with a median value of [*M*]{}$_{\rm B}$=$-$18.3. This is approximately 6 times more luminous than the mean integrated [*B*]{}-band luminosity of the knots in the warm ULIGs. The total fraction of the galaxy [*B*]{}-band luminosity found in the star-forming knots ranges from 6% to 25%, with a median of 11%. By comparison, the warm ULIGs vary from less than 1% (PKS 1345+12) to nearly 40% (IRAS 15206+3342), but this much larger spread is due to the presence of the putative active nuclei. The highest percentage (and total luminosity) are found in the two double nucleus systems with detectable star-forming knots: IRAS 22491$-$1808 and IRAS 14348$-$1447. This may be indicative of the increased luminosity of knots at younger ages, if the presence of double nuclei actually implies a younger merger age.
Very few of the knots are detected at near-infrared wavelengths. Significant numbers of detections occur only in IRAS 12112+0305, IRAS 15250+3609, and IRAS 22491$-$1808. The K[$^{\prime}$]{}-band luminosities of the detected knots range from [*M*]{}$_{\rm K^{\prime}}$=$-$18.0 to [*M*]{}$_{\rm K^{\prime}}$=$-$23.0, which again is similar to those found in most of the “warm” ULIGs. The implications of the non-detections are discussed below. The total integrated luminosity of the star-forming knots detected at K[$^{\prime}$]{}ranges from [*M*]{}$_{\rm K^{\prime}}$=$-$18.0 to [*M*]{}$_{\rm K^{\prime}}$=$-$23.5. Considering the upper limits imposed by the non-detections, the typical cool ULIG has an [*M*]{}$_{\rm K^{\prime}}$ originating in the star-forming knots of no more than $-$21.2. In those warm ULIGs with knots detectable in the near-infrared, this same quantity varied from $-$20.6 to $-$24.0, a very similar range.
Colors
------
### Models
A multi-color approach as used by Surace & Sanders (1999) is adopted here. The three colors ([*B$-$I*]{}),([*I$-$H*]{}),([*H$-$K[$^{\prime}$]{}*]{}) define a spectral shape. Two representations of this color space are presented in Figures 6 & 7. These 3-color diagrams contain the set of all possible SEDs that can be defined (within the upper and lower bounds of the axes) by the four photometric measurements. The tracks made by the models (described below) represent the subset of all possible SEDs consistent with those models. Thus, the points on the model tracks lying closest to the real data represent the best possible fit of the data to the model. The two figures show the same color space under different rotations with respect to the line-of-sight reddening vector. Figure 6 is rotated such that the plane of the page is orthogonal to this vector, and hence the location of a point in the projection of the 3-color space onto the page is independent of line-of-sight extinction. It is therefore possible to fit the data to the models independent of extinction. Figure 7 is rotated such that the vector lies in the plane of the page, allowing an immediate estimate of the magnitude of line-of-sight extinction. The value of the two rotated projections is that they reduce the 3-color diagram to the more familiar 2-color diagram, with the special property that the effects of line-of-sight extinction have been separated out.
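The extinction-free projection behind Figure 6 amounts to removing the component of each ([*B$-$I*]{}, [*I$-$H*]{}, [*H$-$K[$^{\prime}$]{}*]{}) point along the reddening vector. A minimal sketch (the numerical reddening vector used in the test is purely hypothetical; the paper's actual vector depends on the adopted extinction curve and redshift):

```python
import numpy as np

def extinction_free_projection(colors, reddening_vector):
    """Project 3-color points onto the plane orthogonal to the
    line-of-sight reddening vector.  The projected position is then
    invariant under adding any amount of foreground screen extinction,
    since that moves a point only along the reddening direction."""
    v = np.asarray(reddening_vector, dtype=float)
    v = v / np.linalg.norm(v)                 # unit reddening direction
    c = np.atleast_2d(np.asarray(colors, dtype=float))
    return c - np.outer(c @ v, v)             # remove component along v
```

By construction, two points differing only by foreground extinction project to the same location, which is what allows the model fitting in Figure 6 to proceed independently of extinction.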
Figures 6 & 7 also show the colors of several modeled populations, as well as various reddening effects (see the figure caption). A more thorough explanation of these models is given by Surace & Sanders (1999). For comparison, the median colors of the “warm” ULIGs, which for reasons presented in Paper II are likely to be AGN viewed along a complex, lightly extinguished path, are shown with a large circle.
Rather than K-correct the data with an unknown SED, a different approach was used wherein the models were corrected instead. Since the models of the emission processes, starbursts, and QSOs have detailed SEDs, inverse K-corrections can be made to their rest-frame colors at the redshift of our targets. This is done by convolving the synthetic spectra with the known detector and filter bandpasses. The magnitude zeropoint calibration of the filters is derived using the Kurucz model spectrum of Vega (BC95). For brevity, Figures 6 and 7 are calibrated to the median ULIG redshift, z=0.1. The K-corrections affect the modeled stellar colors by $\delta$([*B$-$I,I$-$H,H$-$K[$^{\prime}$]{}*]{})=($-$0.01,0.06,0.18) for a young (10 Myr) starburst, and by (0.42,0.11,0.24) for an old (2 Gyr) population. The effects of K-corrections can therefore be quite large depending on the modeled population, and hence the representation of stellar colors in Figures 6 & 7 carries a sizable uncertainty. Larger redshifts also shift the dust emission curves to a more vertical orientation, as the rest-frame filter bandpasses become bluer. Actual comparisons to the models made in the text were performed using models corrected to the appropriate redshift.
### Data
The same confusion over the definition of “nuclei” that plagues the determination of the underlying galaxy luminosity also creates problems for the color analysis. In order to help eliminate redshift dependence, “nuclear” is defined to mean the central region of the galaxy 2.5 kpc in diameter. At the redshift of the most distant object in our sample ([*z*]{} = 0.152), this corresponds to 0.8$''$, which under the worst conditions is roughly the size of one resolution element at optical wavelengths. The nuclear magnitudes were measured using circular aperture photometry, to which the aperture corrections derived from the PSFs were applied. The latter are the dominant source of error, and the nuclear magnitudes have uncertainties of 0.1 magnitudes. Table 3 also gives “nuclear” magnitudes in each of the 4 observed filters. The ([*B$-$I*]{}),([*I$-$H*]{}),([*H$-$K[$^{\prime}$]{}*]{}) colors of the central 2.5 kpc regions of the cool ULIGs are shown in Figures 6 and 7.
Most of the cool ULIG nuclei have colors consistent with a young (10–100 Myr) stellar population combined with hot (800 K) dust emission that contributes 30–40% of the K[$^{\prime}$]{}flux, or with a mixture of stars and absorbing dust with a total optical depth approaching [*A*]{}$_{\rm V}$ = 30–50 magnitudes. However, the more complete mixed stars-and-dust model with scattering that was discussed in Paper II indicates that it is difficult to achieve such reddened colors in this way, which strongly suggests that the ([*H$-$K*]{}[$^{\prime}$]{}) excess is actually due to hot dust emission. These colors are similar to the range observed by Carico et al. (1990b) in the “LIGs” ([*L*]{}$_{\rm IR} >$ 10$^{11}$ [*L*]{}$_{\sun}$). Regardless of whether they are stars with additional hot dust emission or just stars mixed with extinguishing dust, they additionally appear to be reddened by a uniform foreground dust screen of A$_V$ = 1–5 magnitudes, considerably higher than any foreground screen found in the “warm” ULIGs for either the nuclei or the star-forming knots. The presence of a greater foreground reddening screen than in the warm ULIGs is qualitatively consistent with the evolution scenario in which an obscuring dust screen is blown away after the initial merger stage. Alone among the cool ULIGs, IRAS22491$-$1808e appears to have optical/near-infrared colors almost identical to the median “warm” ULIG colors. This is possible evidence that IRAS 22491$-$1808 harbors an AGN, although its K[$^{\prime}$]{} luminosity ([*M*]{}$_{\rm K^{\prime}}$ = $-$22.37) is more than an order of magnitude fainter than that of a QSO (Surace 1998). It is perhaps surprising that UGC 5101 does not also have AGN-like colors. However, the [*HST*]{} images reveal structure in the vicinity of the compact [*I*]{}-band nucleus that cannot be resolved from the ground optically, and it is likely that this structure is contaminating our results. Results from NICMOS indicate that the point-like nucleus actually does have QSO-like colors (Scoville et al. 1999).
The colors of the star-forming knots are more problematic to analyze. This is because they are not as well determined as the nuclear colors, due to the irregular shapes and small sizes of the knots, and because in many cases the knots are not detected in the near-infrared despite the superior resolution at long wavelengths. For the non-detections, the upper limits on the near-infrared luminosities of the knots constrain their colors, and hence their ages. Figure 8 shows the ([*B$-$I*]{}),([*I$-$H*]{}) colors of the BC95 instantaneous starburst used in Papers I & II. It is apparent that for starburst colors of ([*I$-$H*]{}) $<$ 1 the modeled starburst age is constrained to less than 10 Myrs. As noted in Paper I, with detections only at B and I and upper limits in the near-infrared, it is only possible to set upper limits on the knot ages; dereddening will always [*decrease*]{} the estimated ages. Many of the cool ULIGs with detectable star-forming knots (IRAS 00091$-$0738, UGC 5101, IRAS 14348$-$1447, IRAS 20414$-$1651, IRAS 22491$-$1808 and IRAS 23365+3604) have at least several knots whose ages cannot be more than 5–7 Myrs. However, several also have knots that are sufficiently red that their age limits can only be estimated to be less than 1 Gyr (UGC 5101, IRAS 12112+0305, Mrk 273, IRAS 22491$-$1808) or a few hundred Myrs (IRAS 15250+3609 and IRAS 22491$-$1808). Thus, while we can show the presence of young stars, we cannot easily determine a lower age limit and thus demonstrate the presence of intermediate-age stars. The presence of young stars seems to be much more prevalent among cool ULIGs than among warm ULIGs. While this could arguably be an effect of greater reddening in the warm ULIGs, the results presented here run counter to this in that the cool ULIG nuclei seem to suffer greater foreground reddening than that found in the warm ULIGs.
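The statement that dereddening always decreases the estimated ages can be made concrete with a short sketch of how removing a foreground screen shifts a knot in the ([*B$-$I*]{}),([*I$-$H*]{}) plane. The extinction ratios below are approximate Rieke & Lebofsky (1985)-style values quoted from memory, and the example colors are purely illustrative; nothing here is taken from the paper's own photometry.

```python
# Sketch: how dereddening moves a knot in the (B-I, I-H) color plane.
# Extinction ratios A_band/A_V are approximate Rieke & Lebofsky (1985)
# values quoted from memory; treat them as illustrative.
EXT = {"B": 1.32, "I": 0.48, "H": 0.175}

def deredden_colors(b_i, i_h, a_v):
    """Remove a_v magnitudes of foreground screen extinction from
    observed (B-I) and (I-H) colors."""
    b_i0 = b_i - (EXT["B"] - EXT["I"]) * a_v
    i_h0 = i_h - (EXT["I"] - EXT["H"]) * a_v
    return b_i0, i_h0

# A hypothetical knot observed at (B-I) = 2.0, (I-H) = 1.5 behind an
# A_V = 2 mag screen:
print(deredden_colors(2.0, 1.5, 2.0))
```

Removing the screen moves the knot blueward in both colors, and since the BC95 tracks are bluer at younger ages, an age inferred from the observed (reddened) colors is an upper limit.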
Relationship of Optical/Near-Infrared Emission to Bolometric Luminosity
=======================================================================
Surace & Sanders (1999) found that in many cases the putative AGN in the warm ULIGs could be made to account for the high bolometric luminosities of the systems by assuming a QSO-like SED and extrapolating from the observed optical/near-infrared luminosities. It is possible to apply similar techniques to the emission in the cool ULIGs. Starburst models are chosen since the optical/near-infrared colors appear to be characteristic of young stars. The bolometric luminosity will then be determined based on both empirical and theoretical models.
The theoretical SEDs are based on the BC95 models. As before, a modeled instantaneous starburst with a Salpeter IMF and upper and lower mass cutoffs of 0.1 and 125 [*M*]{}$_{\sun}$ was used. For any given age, a bolometric correction (BC) can be determined from the models to derive a bolometric luminosity based on the luminosity in some specific filter. Figure 9 shows the bolometric correction as a function of age for K-band. It is immediately apparent that the BC hinges critically on the age of the starburst. Prior to 5 Myrs, the luminosity is dominated by short-wavelength emission from high-mass OB stars; very little of the bolometric luminosity originates in the late-type stars that emit strongly at long wavelengths. At 10 Myrs this changes radically as the most massive stars age and emit the bulk of their luminosity at progressively longer wavelengths, and hence the bolometric correction spans fully 6 magnitudes depending on the age. With this model, an ultraluminous starburst could have M$_K < -$21.2 for a young burst, or M$_K < -$27 for an old one. Ironically, the change in bolometric correction with age is much less at shorter wavelengths (Figure 9 also shows the BC as a function of age for B-band), but the uncertainty in luminosity caused by dust extinction at short wavelengths may offset any gain. It can be argued that ages of 10 Myrs and shorter for [*all*]{} of the knots are unlikely for several reasons. The cool ULIGs span a considerable range in interaction morphology. The presence of star-forming knots in most of these systems (as well as in the warm ULIGs presented in Paper I, which are more dynamically evolved based on the presence of single nuclei), however, indicates that the star formation history for the knots as a whole must be comparable to the dynamical timescale, i.e., hundreds of Myrs. Similarly, the wide range in colors seen in many of the knots in the ULIGs may be evidence for a spread in knot ages, although this may also be due to reddening.
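The magnitude arithmetic behind these limits is simple to reproduce. In the sketch below the bolometric-correction values, the sign convention BC$_K$ = M$_K$ $-$ M$_{\rm bol}$, and M$_{\rm bol,\sun}$ = 4.74 are our own illustrative assumptions, not values taken from the BC95 models themselves.

```python
import math

M_BOL_SUN = 4.74  # assumed solar bolometric magnitude

def m_k_for_ultraluminous(bc_k, l_bol_lsun=1e12):
    """K-band absolute magnitude of a starburst with bolometric
    luminosity l_bol_lsun (solar units), given a bolometric
    correction BC_K = M_K - M_bol (sign convention assumed here)."""
    m_bol = M_BOL_SUN - 2.5 * math.log10(l_bol_lsun)
    return m_bol + bc_k

# BC_K of roughly +4 for a very young (<5 Myr) burst versus roughly
# -1.7 for an old one (illustrative values chosen to reproduce the
# ~6 mag span quoted in the text):
print(m_k_for_ultraluminous(4.0))    # young burst: M_K near -21.3
print(m_k_for_ultraluminous(-1.7))   # old burst:   M_K near -27.0
```

The ~6 magnitude spread in BC$_K$ maps one-to-one onto the spread between the two quoted M$_K$ thresholds for a 10$^{12}$ [*L*]{}$_{\sun}$ burst.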
While B-band observations are strongly affected by dust extinction (as noted earlier), we can constrain the maximum amount of foreground extinction on the basis that the starlight can only be dereddened to the colors of the bluest young starbursts. Most of the star-forming knots, therefore, cannot be dereddened by more than 2 magnitudes. For a young (less than 10 Myr), dereddened starburst, the fraction of the bolometric luminosity contributed by the star formation actually detected at B-band then ranges between 1% (IRAS 00091$-$0738) and 100% (IRAS 22491$-$1808), with a median value of 6%. If the starburst is older (100 Myrs), then these percentages fall by a factor of 6. An additional uncertainty results from the large geometric corrections to the luminosity discussed in Paper II. However, the scattering models indicate that it is relatively difficult to achieve the red colors observed in Figures 6 & 7 via a stellar ensemble mixed with dust alone, and that they are more likely to be a result of hot dust emission and foreground extinction. If the stars are embedded in a thick dusty medium then their luminosities are underestimated by factors of 3–6.
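The luminosity factor implied by a given amount of dereddening follows directly from the definition of the magnitude scale; a minimal sketch:

```python
def deredden_factor(a_mag):
    """Multiplicative flux correction corresponding to a_mag
    magnitudes of foreground extinction in a given band."""
    return 10.0 ** (0.4 * a_mag)

# The ~2 mag maximum of B-band dereddening allowed by the bluest
# starburst colors corresponds to at most a factor of ~6.3 in flux.
print(deredden_factor(2.0))
```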
Results for the K-band are least affected by uncertainties in dust extinction, and hence should give the best luminosity estimate. We have examined those cases where star formation can be morphologically separated from the underlying host galaxy stellar population, i.e., those systems that show evidence for star-forming knots. The total K-band flux was dereddened by the amount indicated by the nuclear optical/near-infrared colors, and then the portion attributable to starlight (typically 50–70%, again determined by the colors) was separated out. These extinction estimates are likely to be overestimates, given the high extinctions derived from the nuclear colors and the maximum extinctions derived from just the knot optical colors. As discussed above, the BC95 models cannot readily constrain the bolometric luminosity based on the K-band luminosity due to the enormous age dependence of the BC. If the star-forming knots are very young (less than 1 Myr), then every cool ULIG with detectable knots at K[$^{\prime}$]{} could conceivably derive its entire bolometric luminosity from the star-forming knots alone. The upper limits for the knots not detected at K[$^{\prime}$]{} can only constrain the bolometric luminosity, under this assumption of extremely young stellar age, to being just under the measured ULIG bolometric luminosities. If the knots are more than 10 Myrs in age, then it is likely that none of the ULIGs could have contributions to their bolometric luminosities from star-forming knots much above 50%, and in most cases it would be less than 10%.
The empirical model is based on the bolometric correction from K-band to [*L*]{}$_{\rm ir}$ found in the LIGs (Carico et al. 1990; equation 7 of Paper II):
$${\rm log} L_{\rm ir}=-{{M_{\rm K^{\prime}}-6.45}\over{2.63}}$$
Assuming that nearly all ($\approx$95%) of the bolometric luminosity is emitted in the far-infrared (Sanders et al. 1988a), this is equivalent to a bolometric correction of 0–0.3 for M$_{K^{\prime}}$ = $-$20 to $-$25, or roughly equivalent to the modeled value for a starburst 10 Myrs old (i.e., a BC of 0–0.3 will convert between M$_{K^{\prime}}$ and M$_{\rm bol}$ in LIGs). Using equation 1, the derived total bolometric luminosity for the detected star-forming knots ranges from 10$^{10}$ to 10$^{11.4}$ [*L*]{}$_{\sun}$. The typical ULIG that has star-forming knots detected in [*any*]{} band has a contribution to the bolometric luminosity of not more than 10$^{10.5}$ [*L*]{}$_{\sun}$. This falls short of the average cool ULIG bolometric luminosity by a factor of 50. It thus appears that [*nothing*]{} detected optically or in the near-infrared in the cool ULIGs is capable of generating the high bolometric luminosity, assuming the kinds of SEDs we have used here. Note that this result does not preclude the existence of an ultraluminous starburst or AGN, since ultimately something must provide the known bolometric luminosity. Rather, this result implies that no such object is directly observable in the optical or near-infrared. Whatever the power source, it must be more highly obscured in the cool ULIGs than in the warm, a result supported by the estimated extinctions derived from the optical/near-infrared colors.
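Equation 1 and the implied bolometric correction are straightforward to evaluate numerically. In the sketch below, M$_{\rm bol,\sun}$ = 4.74 is our own adopted solar value, so the exact bolometric correction returned depends on that assumption:

```python
import math

def log_l_ir(m_kprime):
    """Equation (1): log of L_ir (solar units) from the K' absolute
    magnitude, the empirical LIG relation quoted in the text."""
    return -(m_kprime - 6.45) / 2.63

def bc_kprime(m_kprime, fir_fraction=0.95):
    """Implied bolometric correction M_K' - M_bol, assuming that a
    fraction fir_fraction of L_bol emerges in the far-infrared.
    M_bol,sun = 4.74 is an assumed value."""
    log_l_bol = log_l_ir(m_kprime) - math.log10(fir_fraction)
    m_bol = 4.74 - 2.5 * log_l_bol
    return m_kprime - m_bol

print(log_l_ir(-20.0))  # ~10.06
print(log_l_ir(-25.0))  # ~11.96
```

For M$_{K^{\prime}}$ between $-$20 and $-$25 the relation gives log [*L*]{}$_{\rm ir}$ of roughly 10.1–12.0; the precise bolometric correction obtained in this way shifts by a few tenths of a magnitude with the adopted solar constants.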
Conclusions
===========
We have presented high spatial resolution images of a sample of “cool” ULIGs. We find that:
1\. All of the systems are major mergers, as manifested by prominent tails and other extended tidal structure.
2\. A large fraction (at least 43% and as high as 72%) have resolvable double galactic nuclei. Their projected separations span the range 2.5–25 kpc. Double nuclei could have been detected with separations as small as 600 pc. The lack of small-separation (0.6–2.5 kpc) systems may support earlier similar findings that indicate that the time for final coalescence of the nuclei is comparatively brief.
3\. Most of the “cool” ULIGs have evidence for compact star-forming knots, with the exception of the systems with the widest separations. This may indicate that clustered star formation begins in earnest only near the end stages of the merger, just before nuclear coalescence, although this result may also partly be due to the limited ground-based resolution.
4\. The nuclear colors of most of the cool ULIGs appear similar to that of a mixture of stars and extinguishing dust with a total optical depth of [*A*]{}$_{\rm V}$ = 30–50 magnitudes, or of young stars with a modest amount of the [*K*]{}[$^{\prime}$]{}emission (30%) originating in hot (800 K) dust. This hot dust emission is then further extinguished by a uniform dust screen [*A*]{}$_{\rm V}$ = 1–5 magnitudes thick. Unlike the “warm” ULIGs, the optical/near-infrared emission from the nuclear regions of the cool sample is probably stellar in nature.
5\. The dereddened [*H*]{}-band luminosities of the cool ULIG host galaxies lie in the range 0.9–2.5 [*L*]{}$^*$, and are thus essentially identical to those of the warm ULIGs and are consistent with their apparent merger origin. There are, however, no systems with [*L*]{}$_{\rm H} > $ 3 [*L*]{}$^*$, unlike 25% of the warm ULIGs.
6\. Very few of the star-forming knots are detected in the near-infrared, nor are any new knots revealed there (much like the “warm” ULIGs). Constraints imposed by the limits on their ([*I-H*]{}) colors imply very young ages ($<$ 5–7 Myrs) for many of these knots. They cannot be extinguished by more than a very mild foreground reddening screen ($<$ [*A*]{}$_{\rm V}$ = 2 magnitudes), and any additional knots must be very deeply embedded.
7\. As in the “warm” ULIGs, it appears that in most cases the star-forming knots are insufficiently luminous to be the source of the high bolometric luminosity, although in some cases they may provide a significant fraction. Although the constraints are found to be very model dependent, using assumptions similar to those used in Paper II, the observed optical/near-infrared emission observable in the knots provides typically only about 2% of the high bolometric luminosity, ranging from less than 1% to about 20%. This is very similar to the results found for the warm ULIGs. It appears unlikely that anything seen in the optical or near-infrared is related to the high bolometric luminosity, unless it has an SED much more biased towards the far-infrared.
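For reference, the projected separations quoted in point 2 relate to angular separations through the adopted cosmology ($H_{\rm o}$ = 75 km s$^{-1}$ Mpc$^{-1}$). The sketch below uses the simple low-redshift distance $D \approx cz/H_{\rm o}$, and the example inputs are illustrative values rather than measurements from this paper:

```python
import math

H0 = 75.0      # km/s/Mpc, the value adopted in this paper
C = 2.998e5    # speed of light, km/s

def projected_separation_kpc(theta_arcsec, z):
    """Projected separation for an angular separation theta_arcsec
    at low redshift z, using D ~ cz/H0 (a low-z approximation; the
    paper adopts q0 = 0.5, which matters only at higher z)."""
    d_mpc = C * z / H0
    theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)
    return theta_rad * d_mpc * 1000.0  # Mpc -> kpc

# e.g. a 1 arcsec double nucleus at z = 0.1 corresponds to roughly:
print(projected_separation_kpc(1.0, 0.1))  # ~1.9 kpc
```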
We thank the creators of the tip/tilt system and the instruments, Kevin Jim and Gerry Luppino. We thank John Dvorak, Chris Stewart, and Rob Whitely for operating the telescope, and Andrew Pickles for helping debug numerous telescope-related problems. We thank Josh Barnes for his useful comments on early drafts of this text and Atiya Hakeem for proofreading it. We also thank Catherine Ishida and Alan Stockton for helping to supply observing time to complete these observations. We thank an anonymous referee whose comments helped improve the presentation of this paper. D.B.S. was supported in part by JPL contract no. 961566 and J.A.S. was supported in part by NASA grant NAG5-3370.
Notes on Individual Objects
===========================
[*IRAS00091$-$0738*]{} — a system very similar in appearance to IRAS12112+0305 with an extremely complex nuclear core 3.5 kpc in diameter bifurcated N-S by dust lanes. A thick, perhaps edge-on tail extends 19 kpc (projected distance) to the south, while another plume-like tail extends 20 kpc to the north. At near-infrared wavelengths the core becomes a single source 0.7 in diameter, with a tidal tail to the north. This connects to and appears to be the base of the optical tidal tail which loops around back towards the south, and appears similar to the structure seen in the core of IRAS 12112+0305. Identified as having HII spectra by Veilleux et al. (1998).
[*IRAS01199$-$2307*]{} — double nucleus system separated by 25 kpc, the largest separation of any ULIG in this sample. Both nuclei are ellipsoidal in appearance and remain similar to spiral bulges; Veilleux et al. (1998) identifies these as having HII spectra. The NE nucleus is surprisingly faint in the near-infrared compared to the SW; additionally, at K[$^{\prime}$]{}it takes on the appearance of two lobes, the SW of which seems to correspond to the optical nucleus, and the NE to the arm extending from it. The western tail is 48 kpc in total length, with an apparent projected linear length of 40 kpc. The fainter eastern tail is 30 kpc long.
[*IRAS03521+0028*]{} — double nucleus system separated by 4.3 kpc. Short, stubby tails extend 9 kpc from each nucleus in an E-W direction. No additional star- forming knots or other structure are seen. This galaxy has LINER optical spectra (Veilleux et al. 1997).
[*UGC5101*]{} — a nearby system from the BGS, identified as a Seyfert 1.5 by Sanders et al. (1988a). A linear tail stretches 38 kpc to the west from the nucleus. A second tail runs clockwise from a position angle of $-$90 to form a nearly complete circle, 225 degrees around, with a radius of 17 kpc and a total length of 65 kpc. The optical morphology seems to suggest that the two sets of tails are actually similar features lying in planes perpendicular to each other. The [*HST*]{} images clearly reveal a set of spiral dust lanes to the north of the nucleus; the nucleus itself is dominated by a small (200 pc) emission region. This spiral structure, along with that in the core of Mrk 273, may resemble that seen in the warm ULIG Mrk 231 (Paper I), only rotated out of the plane of the sky.
[*IRAS12112+0305*]{} — this galaxy appears to consist of a double-lobed core similar to that seen in IRAS 00091$-$0738, with a bright tidal tail extending 18 kpc to the north and another arc, 30 kpc long, looping along the south. The orientation of the tails suggests that the northern tail is parallel to the line of sight, while the southern lies tilted by 45 . The southern arc has a blue condensation halfway along its length. A red star-like object is located 4 kpc SW of the core. Seen by Carico et al. (1990), this is unlikely to be a supernova since it has not faded noticeably in the intervening 7 years. It is not a foreground star, as recent near-infrared spectroscopy (Surace, in prep) indicates that it is at the same redshift as the nucleus to its north — if it is a starburst knot or AGN, then its total lack of U[$^{\prime}$]{} emission indicates a line-of-sight extinction of at least [*A*]{}$_{\rm V}$ = 3 magnitudes (Surace & Sanders 1999b, in press). Kim (1995) identifies this ULIG as having a LINER spectrum.
[*Mrk273*]{} — a very narrow ($\approx$ 2.5 kpc) linear tidal tail extends 41 kpc south of the nucleus. Two diffuse plumes extend north and northeast 40 kpc each. The new deep images suggest that these plumes actually connect to the NE, making a complete ring nearly 100 kpc in circumference. In this respect it may be very similar to UGC 5101, which it strongly resembles. The [*HST*]{} image of the galaxy core at I-band clearly shows a pattern of dust lanes running along the long axis of the nucleus, closely resembling those seen in edge-on spiral galaxies. The nucleus is dominated by a small, yet extended, emission region. Khachikian & Weedman (1974) identify this as a Seyfert 2. Several authors have shown evidence for an apparent double nucleus in Mrk 273 (Majewski et al. 1993; Condon et al. 1991), but more recent adaptive optics imaging at K-band indicates that this “nucleus” is more likely to be a luminous star-forming region (Knapen et al. 1997).
[*IRAS14348$-$1447*]{} — two nuclei separated by 5.3 kpc. A plume extends north 20 kpc from the NE nucleus. A second plume stretches 17 kpc to the SW from the SW nucleus, where it merges with a fan-shaped plateau of emission extending from the NE to the SW of the galaxy. At least a dozen star-forming knots are seen in the [*HST*]{} image. The circular ring of knots surrounding the SW nucleus are well detected in our B & I images, as are the knots in the base of the northern tail. We fail to detect these (or any other) knots in either near-infrared filter. Although Nakajima et al. (1991) claimed detection of a broad-line component in the SW nucleus, Veilleux et al. (1997) fail to confirm this and designate this galaxy as a LINER.
[*IRAS15250+3609*]{} — an apparently single nucleus system, this galaxy has at least three other galaxies nearby to the north, south and west. However, it is unclear if these galaxies are physically related to the merger system. A tidal feature appears to emerge from the SW side of the nucleus and loops around on the eastern side to create a closed ring 27 kpc in diameter and 80 kpc long. Another, apparently shorter arm extends from the northeast of the nucleus. Near-infrared imaging reveals several additional knots of star formation near the nucleus. Veilleux et al. (1995) classify this as a LINER.
[*Arp 220*]{} — the closest and hence most well-studied of all ULIGs. A short tail extends approximately 30 kpc to the northwest (Sanders et al. 1988a; Hibbard 1995). Graham et al. (1990) have shown evidence for a double “nucleus” in Arp 220. Recent NICMOS imaging has shown two nuclei separated by 360 pc, along with considerable star formation (Scoville et al. 1998). Because this ULIG appears to be radically different from most others in terms of its high degree of variable dust obscuration, the registration between our ground-based optical data and the NICMOS data is highly uncertain. Therefore, the 2.5 kpc aperture photometry should be regarded with caution; we have assumed that the bright near-infrared peaks are spatially coincident with the optical dust lane. Kim (1995) classifies this as a LINER galaxy. The reader is directed towards a wealth of literature on this object (Sanders et al. 1988a; Graham et al. 1990; Skinner et al. 1997; Scoville et al. 1998).
[*IRAS20414$-$1651*]{} — this complex system is somewhat different from the other ULIGs. It has a horseshoe-shaped main body with some sort of extended structure “corkscrewing” 17 kpc to the south, which then bends west and meets with a very blue stellar condensation. These condensations may be superimposed background objects, or newly formed high density regions in the tails themselves. Kim (1995) identifies this as having HII spectra.
[*IRAS22206$-$2715*]{} — two nuclei separated by 9.2 kpc. The northern nucleus is circular in shape, perhaps suggesting that it is viewed face on. The southern nucleus is elongated and bar-shaped, suggesting a spiral galaxy inclined by roughly 60, an idea supported by the zig-zag nature of the tidal tail emanating from it. Each tail is approximately 20 kpc in total length. Like many of the other galaxies here, this seems to be a collision between two galaxies with high relative inclination, resulting in one broad, almost circular tidal tail, and one that is seen edge-on, or nearly so. Veilleux et al. (1998) finds this galaxy to have an HII spectrum.
[*IRAS22491$-$1808*]{} — the most extreme example of clustered star formation in the sample. At optical wavelengths the knots of star formation create so much confusion as to preclude identification of the galaxy nuclei. Only in the near-infrared do the two nuclei, separated by 2.5 kpc, stand out. The system has two high surface brightness tails. The first extends 12.5 kpc NE from the main body of the galaxy and is essentially featureless. The second extends 16 kpc NW from the main body, but ends in a complex, face-on circular loop 10 kpc in diameter. This tail has two very red clumps of star formation at its base, and the circular disk at the end of the tail has many blue knots of star formation. The [*HST*]{} data seem to suggest that the NE tail may also terminate in a somewhat fainter version of this disk. This galaxy has an HII spectrum (Sanders et al. 1988a; Veilleux et al. 1995).
[*IRAS23233+0946*]{} — two nuclei separated by 8.5 kpc. A tidal tail 19 kpc in linear distance and apparently 28 kpc in total length extends to the SE. Both nuclei have colors consistent with an old stellar population. Veilleux et al. (1998) classify this as a LINER.
[*IRAS23365+3604*]{} — this galaxy has a single, point-like nucleus embedded in a face-on disk 20 kpc in diameter. Four large star-forming knots are contained in this disk, and are detected even in the near-infrared. The disk itself has a twisting spiral structure. A faint linear tail extends north for 60 kpc from the nucleus. Another, higher surface brightness tail extends 20 kpc due south from the nucleus. This tail has an odd projection halfway along its length — a short (few kpc) tail jutting west from its side, not unlike the feature seen in the linear tail of Mrk 273. Veilleux et al. (1995) classify this as a LINER.
Barnes, J.E. 1992, , 393, 484\
Barnes, J.E., & Hernquist, L. 1996, , 471, 115\
Borne, K.D., Bushouse, H., Colina, L., & Lucas, R.A. 1997, , 191, 2102\
Bruzual, G., & Charlot, S. 1993, , 405, 538\
Bushouse, H.A., Lamb, & Werner, M.W. 1988, , 335, 74\
Carico, D.P., Graham, J.R., Matthews, K., Wilson, T.D., Soifer, B.T., Neugebauer, G., & Sanders, D.B. 1990a, , 349, L39\
Carico, D.P., Sanders, D.B., Soifer, B.T., Matthews, K., & Neugebauer, G. 1990b, , 100, 70\
Condon, J.J., Huang, Z-P., Yin, Q.F., & Thuan, T.X. 1991, , 378, 65\
de Grijp, M.H., Miley, G.K., Lub, J., & de Jong, T. 1985, , 314, 240\
Elias, J.H., Frogel, J.A., Matthews, K., & Neugebauer, G. 1982, , 87, 1029\
Graham, J.R., Carico, D.P., Matthews, K., Neugebauer, G., Soifer, B.T., & Wilson, T.D. 1990, , 354, L5\
Haynes, M.P., & Herter, T. 1988, , 96, 504\
Hibbard, J.E. 1995, Ph.D. Thesis, Columbia, New York\
Jim, K.T. 1995, , 187, 1394\
Joseph, R.D., Meikle, W., Robertson, N.A., & Wright, G.S. 1984, , 209, 111\
Kandel, E.R., Schwartz, H., & Jessell, T.M., eds. 1991, Principles of Neural Science (Elsevier: New York), 467\
Kent, S.M. 1985, , 59, 115\
Kim, D-C. 1995, Ph.D. Thesis, University of Hawaii\
Kim, D-C., & Sanders, D.B. 1998, , 119, 41\
Kim, D-C., Veilleux, S., & Sanders, D.B. 1998, , 508, 627\
Khachikian, E.Y., & Weedman, D.W. 1974, , 192, 581\
Knapen, J.H., Laine, S., Yates, J.A., Robinson, A., Richards, A., Doyon, R., & Nadeau, D. 1997, , 490, L29\
Landolt, A. 1982, , 88, 439\
Landolt, A. 1992, , 104, 340\
Malin, D. 1993, Scientific American, 269, 72\
Majewski, S.R., Hereld, M., Koo, D.C., Illingworth, G.D., & Heckman, T.M. 1993, , 402, 125\
Murphy, T., Armus, L., Matthews, K., Soifer, B.T., Mazzarella, J.M., Shupe, D.L., Strauss, M.A., & Neugebauer, G. 1996, , 111, 1025\
Nakajima, T., Kawara, K., Nishida, M., & Gregory, B. 1991, , 373, 452\
Pickles, A.J., Young, T.T., Nakamura, W., Cowie, L.L., et al. 1994, Proceedings of the SPIE, 2199, 504\
Rieke, G.H., & Lebofsky, M.J. 1985, , 288, 619\
Sanders, D.B., Soifer, B.T., Elias, J.H., Madore, B.F., Matthews, K., Neugebauer, G., & Scoville, N.Z. 1988a, , 325, 74\
Sanders, D.B., Soifer, B.T., Elias, J.H., Neugebauer, G., & Matthews, K. 1988b, , 328, 35\
Scoville, N.Z., Evans, A.S., Dinshaw, N., Thompson, R., et al. 1998, , 492, L107\
Scoville, N.Z., Evans, A.S., Thompson, R., Rieke, M., Hines, D., Low, F.J., Dinshaw, N., & Surace, J.A. 1999, , in press\
Skinner, C.J., Smith, H.A., Sturm, E., Barlow, M.J., et al. 1997, , 386, 472\
Surace, J.A. 1998, Ph.D. Thesis, University of Hawaii\
Surace, J.A., Mazzarella, J.M., Soifer, B.T., & Wehrle, A.E. 1993, , 105, 864\
Surace, J.A., Sanders, D.B., Vacca, W.D., Veilleux, S., & Mazzarella, J.M. 1998, , 492, 116 (Paper I)\
Surace, J.A., & Sanders, D.B. 1999a, , 512, 162 (Paper II)\
Surace, J.A., & Sanders, D.B. 1999b, , in press (Paper III)\
Toomre, A. 1978, in “IAU Symposium: The Large Scale Structure of the Universe” (D. Reidel Publishing Co.: Dordrecht), 79, 109\
Veilleux, S., Kim, D-C., Sanders, D.B., Mazzarella, J.M., & Soifer, B.T. 1995, , 98, 171\
Veilleux, S., Kim, D-C., & Sanders, D.B. 1998, , in press\
Veilleux, S., Sanders, D.B., & Kim, D-C. 1997, , 484, 92
Figures 1-5 are images and are available in JPEG format either from this preprint archive or from http://humu.ipac.caltech.edu/preprints/cool.
[^1]: The Infrared Astronomical Satellite was developed and operated by the US National Aeronautics and Space Administration (NASA), the Netherlands Agency for Aerospace Programs (NIVR), and the UK Science and Engineering Research Council (SERC).
[^2]: $L_{\rm ir} \equiv$ L(8–1000) is computed using the flux in all four [*IRAS*]{} bands according to the prescription given by Perault (1987); see also Sanders & Mirabel (1996). Throughout this paper we use $H_{\rm o}$ = 75 km s$^{-1}$Mpc$^{-1}$, $q_{\rm o}$ = 0.5 (unless otherwise noted).
[^3]: Based on the bolometric conversion [*L*]{}$_{\rm bol}$ = 16.5 $\times \nu$[*L*]{}$_{\nu}$(B) of Sanders et al. (1989) for PG QSOs. Elvis et al. (1994) indicate a value of 11.8 for UVSX QSOs, increasing [*M*]{}$_{\rm B}$ to $-$22.5.
[^4]: The quantities $f_{12}$, $f_{25}$, $f_{60}$, and $f_{100}$ represent the [*IRAS*]{} flux densities in Jy at 12, 25, 60, and 100 $\mu$m, respectively.
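The Perault (1987) prescription referred to in footnote 2 is commonly quoted (e.g., by Sanders & Mirabel 1996) in the form sketched below; the coefficients and solar constants are reproduced from memory and should be checked against the original references before serious use.

```python
import math

def f_ir_wm2(f12, f25, f60, f100):
    """Total 8-1000 micron flux (W m^-2) from the four IRAS flux
    densities in Jy, in the commonly quoted form of the Perault
    (1987) prescription (coefficients quoted from memory)."""
    return 1.8e-14 * (13.48 * f12 + 5.16 * f25 + 2.58 * f60 + f100)

def l_ir_lsun(f12, f25, f60, f100, d_mpc):
    """L_ir in solar luminosities for a source at distance d_mpc."""
    mpc_m = 3.086e22     # metres per megaparsec
    l_sun_w = 3.846e26   # assumed solar luminosity in watts
    l_w = 4.0 * math.pi * (d_mpc * mpc_m) ** 2 * f_ir_wm2(f12, f25, f60, f100)
    return l_w / l_sun_w
```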
[^5]: Human vision is adapted to understand three-dimensional structure in terms of simultaneous luminosity and color information, and thus such true-color images provide the most intuitive means of understanding the structure of the star-forming regions and the reddening resulting from embedded dust (Kandel et al. 1991, Malin 1993).
---
abstract: 'We present a newly developed moving-mesh technique for the multi-dimensional Boltzmann-Hydro code for the simulation of core-collapse supernovae (CCSNe). What makes this technique different from others is the fact that it treats not only hydrodynamics but also neutrino transfer in the language of the 3+1 formalism of general relativity (GR), making use of the shift vector to specify the time evolution of the coordinate system. This means that the transport part of our code is essentially general relativistic, although in this paper it is applied only to moving curvilinear coordinates in flat Minkowski spacetime, since the gravity part is still Newtonian. The numerical aspects of the implementation are also described in detail. Employing the axisymmetric two-dimensional version of the code, we conduct two test computations: oscillations and runaways of a proto-neutron star (PNS). We show that our new method works well, tracking the motion of the PNS correctly. We believe that this is a major advancement toward realistic simulations of CCSNe.'
address:
- '$^1$TAPIR, Walter Burke Institute for Theoretical Physics, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125, USA'
- '$^2$Yukawa Institute for Theoretical Physics, Kyoto University, Oiwake-cho, Kitashirakawa, Sakyo-ku, Kyoto, 606-8502, Japan'
- '$^3$Advanced Research Institute for Science & Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555, Japan'
- '$^4$Center for Computational Astrophysics, National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan'
- '$^5$Numazu College of Technology, Ooka 3600, Numazu, Shizuoka 410-8501, Japan'
- '$^6$Department of Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555, Japan'
- '$^7$High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 308-0801, Japan'
- '$^8$University of Tsukuba, 1-1-1, Tennodai Tsukuba, Ibaraki 305-8577, Japan'
author:
- |
Hiroki Nagakura$^{1,2}$, Wakana Iwakami$^{2,3}$, Shun Furusawa$^{4}$, Kohsuke Sumiyoshi$^{5}$, Shoichi Yamada$^{3,6}$,\
Hideo Matsufuru$^{7}$ and Akira Imakura$^{8}$
title: |
Three-Dimensional Boltzmann-Hydro Code for core-collapse in massive stars
II\. The implementation of moving-mesh for neutron star kicks
---
Introduction {#sec:intro}
============
There are a number of observational and theoretical indications that the inner engine of core-collapse supernovae (CCSNe) is highly non-spherical. Recent observational developments include the three-dimensional direct mapping of $^{44}{\rm Ti}$ by NuStar, which reveals that the inner parts of the ejecta of SN 1987A and Cas A have experienced large-scale mixing and convection [@2014Natur.506..339G; @2015Sci...348..670B]. This is consistent with the earlier evidence from polarimetric observations that the explosion is not spherical in general and becomes more so as one sees deeper inside (see e.g., and references therein). The high spatial velocities of pulsars are suggested to result from the recoil in asymmetric explosions.
On the theoretical side, there have been various mechanisms proposed as the cause for these multi-dimensional features, which may be important also for the explosion mechanism itself. The stellar rotation may be the simplest. Unsteady accretions of turbulent matter due to strong convection in the last stage of stellar evolution could be the seed perturbations for the asymmetry of CCSNe, which would set off different kinds of hydrodynamical instabilities during the stalled-shock phase and enhance neutrino heating and increase turbulent energies in the post-shock flow [@2015ApJ...808L..21C]. The nascent field of gravitational wave astronomy will be capable of directly investigating such asymmetric dynamics in the vicinity of proto-neutron star (PNS). The multi-dimensional modelling of CCSNe is hence indispensable to unveil the mechanism of CCSNe.
The neutrino radiation-hydrodynamic simulations of CCSNe have made a remarkable progress with ever increasing computational resources in the last few decades (for the current status, see e.g., @2008ApJ...685.1069O [@2014ApJ...786...83T; @2015arXiv150106330K; @2015ApJ...800...10D; @2014arXiv1409.5779B; @2015ApJ...807L..31L; @2012ApJ...756...84M; @2015arXiv150102999J]). Although the present interest of supernova society is directed mainly to 3D hydrodynamical aspects, neutrino transfer is certainly one of the most important ingredients in realistic modelling of CCSNe. Since neutrinos are not in thermal equilibrium with matter in general, the time evolution of the neutrino distribution function at each spatial location should be determined in principle by solving the Boltzmann equation in six-dimensional phase space with both special and general relativistic effects taken into account properly.
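For reference, the equation in question takes the standard general relativistic form (a textbook expression, not this paper's specific conservative formulation):

$$p^{\mu}\frac{\partial f}{\partial x^{\mu}}
-\Gamma^{i}_{\ \mu\nu}\,p^{\mu}p^{\nu}\frac{\partial f}{\partial p^{i}}
=\left(\frac{\delta f}{\delta \tau}\right)_{\mathrm{coll}},$$

where $f(x^{\mu},p^{i})$ is the neutrino distribution function on the six-dimensional phase space, $\Gamma^{i}_{\ \mu\nu}$ are the connection coefficients, which encode both gravitational and coordinate (e.g., moving-mesh) effects, and the right-hand side is the collision term describing emission, absorption and scattering.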
Both high numerical cost and technical difficulties have prevented us from conducting such ab-initio simulations, however. As a matter of fact, various approximations have been employed even in the most sophisticated multi-dimensional simulations, which include the multi-group flux-limited diffusion (MGFLD) [@1998ApJ...495..911M; @2005ApJ...626..317W; @2013ApJS..204....7Z; @2015ApJ...800...10D], the isotropic diffusion source approximation (IDSA) [@2009ApJ...698.1174L; @2010PASJ...62L..49S; @2014ApJ...786...83T; @2016ApJ...817...72P], the two-moment method [@2015arXiv150102999J; @2015arXiv151200113S; @2015arXiv151107443O], the fast-multi-group transport (FMT) method [@2015MNRAS.448.2141M], the variable Eddington tensor method [@2012ApJ...756...84M] and the 2D Boltzmann transport method without $v/c$ corrections [@2004ApJ...609..277L; @2008ApJ...685.1069O]; some of them [@2012ApJ...756...84M; @2014arXiv1409.5779B; @2015ApJ...807L..31L] further employ the ray-by-ray plus approximation. It should be stressed that some of these approximations have been validated only under spherical symmetry, and their performance in non-spherical situations, which no doubt prevail in the post-bounce supernova core, is still uncertain (see also the validation studies of the ray-by-ray approximation). In fact, some recent simulations have yielded outcomes that seem at odds with each other, the reason for which may be the different approximations they adopted for neutrino transfer [@2012ApJ...756...84M; @2014ApJ...786...83T; @2014arXiv1409.5779B; @2015arXiv151200113S]. Multi-dimensional simulations with a Boltzmann solver, to which we refer in the following as Boltzmann-Hydro simulations, are hence indispensable to validate these approximations. They are obviously crucial to address the CCSNe mechanism.
Motivated by these facts, we have tackled the development of a Boltzmann-Hydro solver for the last few years. @2012ApJS..199...17S constructed a three-dimensional Newtonian Boltzmann solver, which was later coupled to a hydrodynamics solver with self-gravity and, more importantly, was extended to fully accommodate special relativity [@2014ApJS..214...16N]. The latter paper demonstrated, in 1D simulations of CCSNe, the capability of our new code based on two different energy grids: the Lagrangian-remapping grid and the laboratory-fixed grid. Having been fine-tuned for massively parallel supercomputers, the 2D version of this code is currently being run on the K supercomputer for axisymmetric simulations of CCSNe. This paper presents our solution to a problem we encountered in these computations.
The problem is the following: the nascent PNS starts to receive random kicks shortly after core bounce, when the matter that has experienced prompt convection falls onto the PNS. It is further kicked around later by the matter that has undergone hydrodynamical instabilities such as the standing accretion shock instability (SASI) or neutrino-driven convection. The PNS then moves at velocities of the order of $100$ km/s and is temporarily dislocated by several kilometers from the coordinate center (see also @2012MNRAS.423.1805N). It is also important to remember that if the shock wave is successfully revived, asymmetric ejecta will certainly produce PNS kicks. The fixed polar coordinates are not appropriate for following these proper motions of the PNS, since a dislocated sphere cannot be represented very well on this grid. Indeed, we found that simulations without any special treatment either crashed or produced unphysical outcomes once the PNS moved a few km away from the mesh center.
The previous studies treated this problem rather pragmatically. For example, @2013ApJ...770...66H [@2014ApJ...786...83T; @2014arXiv1409.5779B; @2015ApJ...807L..31L; @2015MNRAS.453..287M] restricted the motion of the PNS by artificially imposing spherical symmetry inside it. Although no numerical problem may have occurred with this prescription, it might have discarded some potentially important physics along the way. On the other hand, the MPA group employed a moving-mesh technique, adding to the advection terms in the equations of motion an extra velocity that compensates for the PNS motion. In their experimental simulations, @2010PhRvD..82j3016N also used a similar moving-mesh method to track the PNS motion, remapping the grid. We stress, however, that the latter two groups completely ignored the effects of the moving mesh on neutrino transfer.
In this paper, we propose an entirely new method that treats this issue properly not only for hydrodynamics but also for neutrino transfer. The basic idea is, as in the previous papers, to translate the coordinates globally so that the mass center of the PNS always stays very close to the mesh center. This should work, since the PNS remains almost spherical even when it oscillates or runs away[^1]. The important point is that the basic equations of both neutrino transfer and hydrodynamics must be modified on this moving grid, since it is not an inertial frame. The hydrodynamics equations are easy to extend [^2], while the modification of the Boltzmann equation is more complicated, because the coordinate acceleration affects the neutrino transport in non-trivial ways in the six-dimensional phase space. A general relativistic description is then a natural, and probably the only, choice for handling the various effects correctly. We deal with this within the 3+1 formulation of GR, using the conservation form of the general relativistic Boltzmann equation [@2014PhRvD..89h4073S]. This means that the current upgrade of the neutrino transport module in our code is equivalent to a GR extension, which was in fact planned as the next improvement to our Boltzmann solver. It should be mentioned, however, that in this paper the GR transport code is applied only to the flat Minkowski spacetime, since the treatment of gravity in our code is still Newtonian; the Newtonian version of the hydrodynamics code is employed in this work for the same reason (see also Section \[sec:Feedback\]). The GR capability of the code will be demonstrated elsewhere [@Nagainprep].
This paper is organized as follows. In Section \[sec:baseeq\], we reformulate the Boltzmann equation on the moving mesh in the language of the 3+1 formalism of GR. We explain the numerical implementation of this GR extension to our previous special relativistic (SR) code in Section \[sec:extension\]. The feedback to hydrodynamics is then described in Section \[sec:Feedback\]. We validate our new method with two tests: PNS oscillations around, and runaways from, the original position. The results are presented in Section \[sec:twodsim\]. Finally, we conclude the paper with a summary in Section \[sec:summary\]. Throughout this paper, Greek and Latin subscripts denote spacetime and space components, respectively. We use the metric signature of $- + + +$. Unless otherwise stated, we work in units with $c=G=1$, where $c$ and $G$ are the speed of light and the gravitational constant, respectively.
Basic Equations {#sec:baseeq}
===============
Following @2014PhRvD..89h4073S, we start with the conservation form of the Boltzmann equation in general relativity: $$\begin{aligned}
&& \frac{1}{\sqrt{-g}} \left. \frac{\partial}{\partial x^{\alpha}} \right|_{q_{i}}
\Biggl[ \Bigl( e^{\alpha}_{(0)} + \sum^{3}_{i=1} \ell_{i} e^{\alpha}_{i} \Bigr) \sqrt{-g} f \Biggr] \nonumber \\
&& - \frac{1}{\nu^2} \frac{\partial}{\partial \nu}( \nu^3 f \omega_{(0)} )
+ \frac{1}{{\rm sin}\bar{\theta}} \frac{\partial}{\partial \bar{\theta}}
( {\rm sin}\bar{\theta} f \omega_{(\bar{\theta})} ) \nonumber \\
&& + \frac{1}{ {\rm sin}^2 \bar{\theta}} \frac{\partial}{\partial \bar{\phi}} (f \omega_{(\bar{\phi})}) = S_{\rm{rad}}, \label{eq:basicBoltz}\end{aligned}$$ where $g$ and $x^{\alpha}$ are the determinant of the metric and the spacetime coordinates, respectively, and $f$ is the neutrino distribution function; $e^{\alpha}_{(\mu)} (\mu = 0, 1, 2, 3)$ denote a set of the tetrad bases for a local orthonormal frame; $\ell_{i}$ are the directional cosines of the direction of neutrino propagation with respect to $e^{\alpha}_{(i)}$ (see Fig. 1 in @2014PhRvD..89h4073S). The three components of $\ell_{i}$ can be written as $$\begin{aligned}
&& \ell_{(1)} = {\rm cos} \hspace{0.5mm} \bar{\theta}, \nonumber \\
&& \ell_{(2)} = {\rm sin} \hspace{0.5mm} \bar{\theta} {\rm cos} \hspace{0.5mm} \bar{\phi}, \nonumber \\
&& \ell_{(3)} = {\rm sin} \hspace{0.5mm} \bar{\theta} {\rm sin} \hspace{0.5mm} \bar{\phi}, \label{eq:el}\end{aligned}$$ where $\bar{\theta}$ and $\bar{\phi}$ stand for the polar and azimuthal angles [@1966AnPhy..37..487L]. We further define the coordinates $q_{i}$ in momentum space: $q_{1}~=~\nu, q_{2}~=~\bar{\theta}$ and $q_{3}~=~\bar{\phi}$, with $\nu$ being the neutrino energy in this local orthonormal frame, also expressed as $\nu \equiv - p_{\alpha} e^{\alpha}_{(0)}$ with the neutrino four-momentum $p^{\alpha}$. In this paper neutrinos are assumed to be massless. $\omega_{(0)}, \omega_{(\bar{\theta})}, \omega_{(\bar{\phi})}$ are given as $$\begin{aligned}
&& \omega_{(0)} \equiv \nu^{-2} p^{\alpha} p_{\beta} \nabla_{\alpha} e^{\beta}_{(0)}, \nonumber \\
&& \omega_{(\bar{\theta})} \equiv \sum^{3}_{i=1} \omega_{i} \frac{ \partial \ell_{(i)} }{\partial \bar{\theta} }, \nonumber \\
&& \omega_{(\bar{\phi})} \equiv \sum^{3}_{i=2} \omega_{i} \frac{ \partial \ell_{(i)} }{\partial \bar{\phi} }, \nonumber \\
&& \omega_{i} \equiv \nu^{-2} p^{\alpha} p_{\beta} \nabla_{\alpha} e^{\beta}_{(i)}. \label{eq:Omega}\end{aligned}$$ As shown in @2014PhRvD..89h4073S, these $\omega$’s can be expressed in terms of the Ricci rotation coefficients. $S_{\rm{rad}}$ on the right-hand side of Eq. (\[eq:basicBoltz\]) originates from the collision term for neutrino-matter interactions.
In the 3+1 formulation of GR, the line element is expressed as $$\begin{aligned}
ds^2 = (- \alpha^2 + \beta^k \beta_k ) dt^2 + 2 \beta_i dt dx^i + \gamma_{ij} dx^i dx^j, \label{eq:lineeleme}\end{aligned}$$ where $\alpha, \beta^{i}$ and $\gamma_{ij}$ denote the lapse function, shift vector and spatial 3-metric, respectively. In our extended Boltzmann code, the time-like basis $e^{\alpha}_{(0)}$ is chosen to coincide with the unit vector $n^{\alpha}$ normal to the spatial hypersurface with $t={\rm const}$. This choice is a natural extension of our previous SR Boltzmann solver (see Section \[sec:extension\] for more details). The three other spatial tetrad bases are then taken to be tangential to the spatial hypersurface. In this paper we assume that the spacetime is flat and is foliated with flat spatial hypersurfaces, on which we deploy the polar coordinates ($x^1 = r, x^2 = \theta, x^3 = \phi$). The non-vanishing components of the 3-metric are then $\gamma_{rr} = 1, \gamma_{\theta \theta} = r^{2}$ and $\gamma_{\phi \phi} = r^2 {\rm sin}^2 \theta$. The spatial tetrad bases are chosen so that $e_{(1)}$ is parallel to the radial coordinate, $e_{(2)}$ is tangential to the surface spanned by $\partial_r$ and $\partial_{\theta}$, and $e_{(3)}$ is orthogonal to the other two: $$\begin{aligned}
&& e^{\alpha}_{(1)} = (0, \gamma^{-1/2}_{rr}, 0, 0 ) \nonumber \\
&& e^{\alpha}_{(2)} = \Biggl(0, -\frac{\gamma^{-1/2}_{r \theta}}{\sqrt{\gamma_{rr} (\gamma_{rr} \gamma_{\theta \theta} - \gamma^2_{r \theta})}}, \sqrt{ \frac{\gamma_{rr}}{ \gamma_{rr} \gamma_{\theta \theta} - \gamma^2_{r \theta} } }, 0 \Biggr) \nonumber \\
&& e^{\alpha}_{(3)} = \Biggl(0, \frac{\gamma^{r \phi}}{\sqrt{\gamma^{\phi \phi}}} , \frac{\gamma^{\theta \phi}}{\sqrt{\gamma^{\phi \phi}}}, \sqrt{\gamma^{\phi \phi}} \Biggr). \label{eq:polartetrad}\end{aligned}$$ We refer to this orthonormal frame as the O-frame in the following. In accordance with the above foliation of spacetime, we set $\alpha = 1$. We utilize the shift vector to deal with the motion of the spatial coordinates (see Figure \[fig:shift\]). In fact, we set $\beta^{i} = \bar{V}^i$, where $\bar{V}^i$ is approximately the velocity of the PNS measured in the O-frame (see the next section for details). Note that the globally uniform shift vector employed in this paper is compatible with the use of other gauge conditions for the shift vector in possible applications of the current formulation to (dynamical) curved spacetimes. This completes the description of Eq. (\[eq:basicBoltz\]). We now turn to its numerical implementation.
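As a concrete illustration, the orthonormality of the spatial tetrad bases in Eq. (\[eq:polartetrad\]) with respect to the 3-metric, $\gamma_{ij} e^{i}_{(a)} e^{j}_{(b)} = \delta_{ab}$, can be checked numerically. The following sketch (illustrative Python, not part of the production code) handles only the diagonal flat-space 3-metric of the polar coordinates used in this paper:

```python
import numpy as np

def spatial_tetrad(gam):
    """Build the three spatial tetrad vectors for a DIAGONAL 3-metric
    gam in (r, theta, phi) coordinates; this is the special case of
    Eq. (polartetrad) that applies to flat polar coordinates, where
    gamma_{r theta} = gamma_{r phi} = gamma_{theta phi} = 0."""
    g_rr, g_thth, g_phph = gam[0, 0], gam[1, 1], gam[2, 2]
    e1 = np.array([1.0 / np.sqrt(g_rr), 0.0, 0.0])
    e2 = np.array([0.0, 1.0 / np.sqrt(g_thth), 0.0])
    e3 = np.array([0.0, 0.0, 1.0 / np.sqrt(g_phph)])
    return e1, e2, e3

r, th = 50.0, 0.7                      # a sample point (arbitrary units)
gam = np.diag([1.0, r**2, (r * np.sin(th))**2])
tetrad = spatial_tetrad(gam)

# Orthonormality: gamma_{ij} e_(a)^i e_(b)^j = delta_ab
for a, ea in enumerate(tetrad):
    for b, eb in enumerate(tetrad):
        assert np.isclose(ea @ gam @ eb, 1.0 if a == b else 0.0)
```

In a truly GR run the metric components would come from the spacetime solver, and the general (non-diagonal) expressions of Eq. (\[eq:polartetrad\]) would be evaluated instead.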
Numerical Implementation {#sec:extension}
========================
Shift vector {#subsec:shiftvector}
------------
Let us suppose that the basic equations are somehow finite-differenced and that all hydrodynamics and spacetime quantities have been obtained up to the $n$-th time step. The average velocity of the PNS at this time step ($V^{i(n)}$) is then given via the linear momentum ($P$) and mass ($M$) of the PNS as $$\begin{aligned}
&& V^{i(n)} = \frac{P^{i(n)}}{M^{(n)}}, \label{eq:PNSvelo} \nonumber \\
&& P^{i(n)} \equiv \int \rho^{(n)} v^{i(n)}_{o} dV_{\rm{PNS}}, \nonumber \\
&& M^{(n)} \equiv \int \rho^{(n)} dV_{\rm{PNS}}, \label{eq:PNS_MomandMass}\end{aligned}$$ where $\rho$, $v^{i}_{o}$ and $dV_{\rm{PNS}}$ denote the density, 3-velocity of matter (measured in the O-frame) and the volume element in the PNS, respectively. The PNS is defined as the region where the angle-averaged density ($\bar{\rho}$) is larger than $10^{13} {\rm g/cm}^3$. The time derivative of the velocity, or the acceleration of the PNS, at the same time step is given by the following relation: $$\begin{aligned}
\frac{dV^{i(n+\frac{1}{2})}}{dt} = \frac{( V^{i(n+1)} - V^{i(n)} )}{\Delta t^{(n)}}, \label{eq:accelPNS}\end{aligned}$$ where $\Delta t^{(n)}$ is the interval between the $(n+1)$-th and $n$-th time steps.
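A minimal sketch of the velocity estimate in Eqs. (\[eq:PNSvelo\])–(\[eq:PNS\_MomandMass\]) reads as follows (illustrative Python; the array names, shapes and the simple quadrature are our assumptions, not those of the production code):

```python
import numpy as np

def pns_velocity(rho, v_r, r, theta, rho_thresh=1.0e13):
    """Bulk radial velocity of the PNS: total linear momentum divided
    by total mass, integrated over the radial shells whose
    angle-averaged density exceeds rho_thresh (in g/cm^3).
    rho and v_r are (N_r, N_theta) arrays on an axisymmetric
    spherical mesh."""
    dr = np.gradient(r)
    dth = np.gradient(theta)
    # volume element r^2 sin(theta) dr dtheta; the 2*pi from the
    # phi integration cancels in the ratio P/M
    dV = (r**2 * dr)[:, None] * (np.sin(theta) * dth)[None, :]
    rho_bar = np.average(rho, axis=1, weights=np.sin(theta) * dth)
    pns = rho_bar > rho_thresh           # radial shells inside the PNS
    M = np.sum((rho * dV)[pns])          # mass integral
    P = np.sum((rho * v_r * dV)[pns])    # radial momentum integral
    return P / M
```

For a region moving with a uniform velocity, the quadrature weights cancel and the function returns that velocity exactly, which is a convenient sanity check.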
Note that we do not use these $V^i$ and $dV^i/dt$ as they are, for the following reasons. First, $V^{i}$ obtained in this way shows glitches from time to time, when the PNS surface traverses an interface of the radial mesh points. Second, if the tracking of the PNS motion were to be perfect, the acceleration of the PNS would have to be determined iteratively, since the velocity of the PNS at the next time step should be consistent with this acceleration but is obtained only after the advancement of the step. Such an iterative process would be very time-consuming. Fortunately, however, it is unnecessary to trace the motion of the PNS exactly, and it turns out that the following approximate treatment suffices to deal with the proper motion of the PNS.
We define the approximate PNS velocity $\bar{V}^{i}$ as follows: $$\begin{aligned}
&&\bar{V}^{i(n+1)} = \bar{V}^{i(n)} + \frac{d\bar{V}^{i(n)}}{dt} \Delta t^{(n)}, \label{eq:efPNSv}\end{aligned}$$ with $$\begin{aligned}
&&\frac{d\bar{V}^{i(n)}}{dt} = \frac{dV^{i(n-\frac{1}{2})}}{dt} + C^{(n)} + D^{(n)}, \nonumber \\
&& C^{(n)} \equiv ( V^{i(n)} - \bar{V}^{i(n)} )/ T, \nonumber \\
&& D^{(n)} \equiv X^{i(n)}_{m} / T^2, \label{eq:modacc}\end{aligned}$$ where $dV^{i(n-\frac{1}{2})}/dt$ is given by Eq. (\[eq:accelPNS\]) (but with the backward difference); $C^{(n)}$ and $D^{(n)}$ are terms that allow some deviations of the coordinate velocity and/or origin (denoted here by $X^{i(n)}_{m}$) from those of the PNS and thus avoid the glitches; $T$ is the recovering time and is set to $0.1$ ms in this paper. $C^{(n)}$ and $D^{(n)}$ also prevent secular drifts of the PNS. In fact, $C^{(n)}$ works as a damper that prohibits large differences between the two velocities, whereas $D^{(n)}$ serves as an attractor that pulls the coordinate origin toward the mass center of the PNS. As an additional measure to ensure smooth coordinate motions, we do not update the value of $d\bar{V}^{i(n)}/dt$ when the PNS surface crosses an interface of the radial mesh points. As demonstrated later, employing $\bar{V}^{i}$ as the shift vector, in combination with the evaluation of $d\bar{V}^{i(n)}/dt$ given above, is indeed sufficient to solve the problems associated with the proper motion of the PNS.
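The update of $\bar{V}^{i}$ in Eqs. (\[eq:efPNSv\])–(\[eq:modacc\]) can be sketched for a single component as follows (illustrative Python; the sign convention of $X_{m}$, taken positive when the PNS mass center is ahead of the coordinate origin, is our assumption):

```python
def advance_mesh_velocity(Vbar, dVdt_pns, V_pns, X_m, dt, T=1.0e-4):
    """One explicit step for one component of the mesh velocity.
    dVdt_pns: backward-difference PNS acceleration,
    V_pns:    measured PNS velocity at this step,
    X_m:      offset of the PNS mass center from the coordinate origin,
    T:        recovering time (0.1 ms in the text), all in CGS-like units."""
    C = (V_pns - Vbar) / T    # damper: suppresses velocity mismatch
    D = X_m / T**2            # attractor: pulls origin toward the center
    return Vbar + (dVdt_pns + C + D) * dt
```

With a constant target velocity and no origin offset, repeated application relaxes $\bar{V}$ to $V_{\rm PNS}$ on the timescale $T$, which is the intended damping behavior.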
Although the shift vector field thus obtained is spatially uniform, its derivatives with respect to $r, \theta$ and $\phi$ are non-vanishing, since the coordinates are curvilinear. The explicit forms of the Ricci rotation coefficients are rather involved, although their calculation is straightforward; we hence evaluate them numerically in the code. This will indeed be useful, since we will have to evaluate the Ricci rotation coefficients for numerically obtained metrics in truly GR simulations.
Modifications to SR code {#subsec:modificSR}
------------------------
Although the GR Boltzmann equation, Eq. (\[eq:basicBoltz\]), has a simple form, the consistent treatment of the advection and collision terms is complicated even for the flat spacetime. In @2014ApJS..214...16N, we surmounted the difficulties by introducing two energy grids: the [*Lagrangian remapped grid*]{} (LRG) and the [*Laboratory fixed grid*]{} (LFG). We also devised some other numerical techniques for the SR code (e.g., a semi-implicit method for the temporal sweep). It is therefore desirable that the GR extension of the current SR code retain these features as much as possible. In the following, we describe how we achieve this.
We first consider the Boltzmann equation (\[eq:basicBoltz\]) in flat spacetime. The tetrad bases, Eqs. (\[eq:polartetrad\]), are reduced in this case to $$\begin{aligned}
&& e^{\alpha (F)}_{(0)} = (1, 0, 0, 0 ), \nonumber \\
&& e^{\alpha (F)}_{(1)} = (0, 1, 0, 0 ), \nonumber \\
&& e^{\alpha (F)}_{(2)} = \Biggl(0, 0, \frac{1}{r}, 0 \Biggr), \nonumber \\
&& e^{\alpha (F)}_{(3)} = \Biggl(0, 0, 0, \frac{1}{r {\rm sin} \theta } \Biggr), \label{eq:polartetrad_flat}\end{aligned}$$ where the superscript “(F)” hereafter indicates quantities in the flat spacetime. Then we can evaluate the $\omega$ variables in Eq. (\[eq:Omega\]) as: $$\begin{aligned}
&& \omega_{(0)}^{(F)} = 0, \nonumber \\
&& \omega_{(\bar{\theta})}^{(F)} = - \frac{{\rm sin} \bar{\theta} }{r}, \nonumber \\
&& \omega_{(\bar{\phi})}^{(F)} = - \frac{ {\rm cot}\theta }{r} {\rm sin}^3 \bar{\theta} \hspace{0.5mm} {\rm sin} \bar{\phi}. \label{eq:Omega_flat}\end{aligned}$$ Substituting these results into Eq. (\[eq:basicBoltz\]) and using the determinant of the metric for the fixed polar coordinates in the flat spacetime ($\sqrt{-g^{(F)}} = r^2 {\rm sin}\theta$), we reproduce the SR Boltzmann equation we employed in @2014ApJS..214...16N.
As we mentioned earlier, in the GR extension we want to retain the various features already implemented in our SR Boltzmann-Hydro code. This is most easily achieved by casting Eq. (\[eq:basicBoltz\]) into the following form: $$\begin{aligned}
&& \frac{V}{\sqrt{-g^{(F)}}} \left. \frac{\partial}{\partial x^{\alpha}} \right|_{q_{i}}
\Biggl[ K^{\alpha} \biggl\{ \Bigl( e^{\alpha}_{(0)} + \sum^{3}_{i=1} \ell_{i} e^{\alpha}_{i} \Bigr) \sqrt{-g} \biggr\}^{(F)} f \Biggr] \nonumber \\
&& - \frac{1}{\nu^2} \frac{\partial}{\partial \nu}( \nu^3 f \omega_{(0)} )
+ \frac{1}{{\rm sin}\bar{\theta}} \frac{\partial}{\partial \bar{\theta}}
( {\rm sin}\bar{\theta} f \omega_{(\bar{\theta})}^{(F)}
+ {\rm sin}\bar{\theta} f \Delta \omega_{(\bar{\theta})} ) \nonumber \\
&& + \frac{1}{ {\rm sin}^2 \bar{\theta}} \frac{\partial}{\partial \bar{\phi}} (f \omega_{(\bar{\phi})}^{(F)} + f \Delta \omega_{(\bar{\phi})} ) = S_{\rm{rad}}, \label{eq:basicBoltz_nume}\end{aligned}$$ with $$\begin{aligned}
&& V \equiv \frac{ \sqrt{-g^{(F)}} }{\sqrt{-g} }, \nonumber \\
&& K^{\alpha} \equiv \frac{ \biggl\{ \Bigl( e^{\alpha}_{(0)} + \sum^{3}_{i=1} \ell_{i} e^{\alpha}_{i} \Bigr) \sqrt{-g} \biggr\} }{ \biggl\{ \Bigl( e^{\alpha}_{(0)} + \sum^{3}_{i=1} \ell_{i} e^{\alpha}_{i} \Bigr) \sqrt{-g} \biggr\}^{(F)} }, \nonumber \\
&& \Delta \omega_{(\bar{\theta})} \equiv \omega_{(\bar{\theta})} - \omega_{(\bar{\theta})}^{(F)}, \nonumber \\
&& \Delta \omega_{(\bar{\phi})} \equiv \omega_{(\bar{\phi})} - \omega_{(\bar{\phi})}^{(F)}. \label{eq:correcterms}\end{aligned}$$ It should be apparent that the four variables in Eq. (\[eq:correcterms\]) can be regarded as the GR corrections to the SR equation. This allows us to utilize our SR Boltzmann code directly in the GR extension. Although we employ the moving spherical coordinates in the Minkowski spacetime in this paper, the GR-extended code can accommodate any metric and gauge conditions, evaluating the various GR terms numerically. Note also that, unlike the other advection terms, the energy-derivative term represents the gravitational redshift, a purely GR effect, which we calculate on the LRG [@2014ApJS..214...16N][^3].
The treatment of the collision terms is also similar to that in the flat spacetime. Since the collision terms can be most easily calculated in the fluid-rest frame, we first evaluate them in this frame and then Lorentz-transform them to the laboratory frame, which is identical to the O-frame in the current formulation. This is done with the tetrads corresponding to these frames. We denote the tetrad bases of the fluid-rest frame as $\mbox{\boldmath $\hat{e}$}_{\hat{\mu}}$, which is expressed with $\mbox{\boldmath $e$}_{({\nu})}$ as $$\begin{aligned}
\mbox{\boldmath $\hat{e}$}_{(\hat{\mu})} \equiv \Lambda_{(\hat{\mu})}^{\hspace{2mm}(\nu)} \mbox{\boldmath $e$}_{({\nu})}, \label{eq:deffluidtetrad}\end{aligned}$$ where $\Lambda$ stands for the Lorentz boost transformation. The components of $\Lambda$ are given as $$\begin{aligned}
&&\Lambda_{(\hat{\mu})}^{\hspace{2mm}(\nu)} = ( \Lambda^{(\hat{\mu})}_{\hspace{2mm}(\nu)} )^{-1} , \\
&&\Lambda^{(\hat{\mu})}_{\hspace{2mm}(\nu)} =
\begin{pmatrix}
\gamma & - \gamma v^{(i)} \\
- \gamma v^{(j)} & \hspace{3mm} I^{3} + \frac{\gamma^2}{1+\gamma} v^{(i)} v^{(j)}
\end{pmatrix}
,\end{aligned}$$ where $I^{3}$ denotes the $3 \times 3$ identity matrix, $v^{(i)}$ and $\gamma$ are defined with the tetrad bases of the laboratory frame and the fluid 4-velocity $\mbox{\boldmath $u$}$ as $$\begin{aligned}
&&u_{(\mu)} \equiv \mbox{\boldmath $u$} \cdot \mbox{\boldmath $e$}_{(\mu)}, \label{eq:4velotetdn} \\
&&u^{(\mu)} = \eta^{ \mu \nu } u_{(\nu)}, \label{eq:4velotetup} \\
&&v^{(i)} \equiv \frac{u^{(i)}}{u^{(0)}} , \label{eq:3velotetup} \\
&&\gamma \equiv u^{(0)}, \label{eq:deflorentzfac}\end{aligned}$$ where $\eta^{\mu \nu}$ denotes the Minkowski metric. The 4-momentum of a neutrino is also projected onto $\mbox{\boldmath $\hat{e}$}_{(\hat{\mu})}$. Then the energy shift and aberration are determined by the Doppler factor given above, and our SR formulation can be carried over directly to the GR-extended code (see Section 4 in @2014ApJS..214...16N).
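For illustration, the boost matrix $\Lambda^{(\hat{\mu})}_{\hspace{2mm}(\nu)}$ above can be constructed and checked numerically, e.g., that it preserves the Minkowski metric and that its inverse is the boost with $-v^{(i)}$ (a Python sketch, not the production implementation):

```python
import numpy as np

def lorentz_boost(v):
    """4x4 boost matrix for a 3-velocity v given in tetrad components
    (units of c): gamma in the time-time slot, -gamma*v in the mixed
    slots, and I + gamma^2/(1+gamma) v v^T in the spatial block."""
    v = np.asarray(v, dtype=float)
    g = 1.0 / np.sqrt(1.0 - v @ v)      # Lorentz factor gamma
    L = np.empty((4, 4))
    L[0, 0] = g
    L[0, 1:] = -g * v
    L[1:, 0] = -g * v
    L[1:, 1:] = np.eye(3) + (g**2 / (1.0 + g)) * np.outer(v, v)
    return L
```

Because a pure boost is a Lorentz transformation, $\Lambda^{T} \eta \Lambda = \eta$, and $\Lambda(v)^{-1} = \Lambda(-v)$, which is how the inverse transform $\Lambda_{(\hat{\mu})}^{\hspace{2mm}(\nu)}$ is obtained in practice.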
Feedback to Matter {#sec:Feedback}
==================
In this section, we describe in detail the feedback from neutrino interactions to hydrodynamics. We first present the fully GR formulation and then take the Newtonian limit. Note that the hydrodynamics part of our code is also fully GR, but its Newtonian version is employed in this paper, since the self-gravity part of the code is still Newtonian.
The basic equations for matter dynamics consist of the conservation laws of baryon number, energy-momentum and electron number, which are written as, respectively, $$\begin{aligned}
(\rho_0 u^{\nu})_{;\nu} &=& 0, \label{eq:continuityeq} \\
T_{\rm{(hd)} \hspace{1.5mm} ;\nu}^{\mu \nu} &=& - G^{\mu}, \label{eq:TandGfinal} \\
N_{\rm{(e)} \hspace{1mm} ;\nu}^{\nu} &=& - \Gamma, \label{eq:NandGammafinal}\end{aligned}$$ where $\rho_0$, $T_{\rm{(hd)}}^{\mu \nu}$ and $N_{(e)}^{\nu}$ denote the rest-mass density of baryons, energy-momentum tensor of matter and the electron number 4-current, respectively. The right-hand sides of the latter two equations represent the feedback, which are related to the collision term of Boltzmann equation, Eqs (\[eq:basicBoltz\]) or (\[eq:basicBoltz\_nume\]), as follows: $$\begin{aligned}
G^{\mu} &\equiv& \sum_{\rm{i}} G_{\rm{i}}^{\mu}, \label{eq:Gsumdef} \\
G_{\rm{i}}^{\mu} &\equiv& \int p_{\rm{i}}^{\mu} \nu S_{\rm{rad} (\rm{i})} dV_p, \label{eq:Gdef} \\
\Gamma &\equiv& \Gamma_{\nu_{e}} - \Gamma_{\bar{\nu_{e}}}, \label{eq:Gammasumdef} \\
\Gamma_{\rm{i}} &\equiv& \int \nu S_{\rm{rad} (\rm{i})} dV_p. \label{eq:Gammadef}\end{aligned}$$ In these expressions, $dV_{p} (= \nu {\rm sin}\bar{\theta} d \nu d \bar{\theta} d \bar{\phi} )$ denotes the invariant volume element in momentum space. The subscript “$\rm{i}$” indicates the neutrino species and is omitted hereafter for simplicity.
In actual simulations, we first evaluate the tetrad components of $\mbox{\boldmath $G$}$ in the fluid-rest frame as $$\begin{aligned}
\hat{G}_{(\hat{\mu})} &\equiv& \int \hat{p}_{(\hat{\mu})} \hat{\nu} \hat{S}_{\rm{rad}} dV_p, \label{eq:Gfluidrest}\end{aligned}$$ where the hat indicates variables in the fluid-rest frame, i.e., $\hat{G}_{(\hat{\mu})} = \mbox{\boldmath $G$} \cdot \mbox{\boldmath $\hat{e}$}_{(\hat{\mu})}$, etc. Then the coordinate components of $\mbox{\boldmath $G$}$ can be expressed via $ \hat{G}_{(\hat{\nu})}$ and $\mbox{\boldmath $\hat{e}$}_{(\hat{\nu})}$ as: $$\begin{aligned}
G^{\mu} = \sum_{\hat{\nu}} \hat{G}_{(\hat{\nu})} \hat{e}_{(\hat{\nu})}^{\mu}, \label{eq:coordinatecompoAup}\end{aligned}$$ where $\hat{e}_{(\hat{\nu})}^{\mu}$ denotes the coordinate components of $\mbox{\boldmath $\hat{e}$}_{(\hat{\nu})}$.
In the 3+1 formalism the basic equations for matter dynamics can be expressed as $$\begin{aligned}
&& \partial_t {\rho}_{\ast} + \partial_j\left( {\rho}_{\ast} v^j \right) = 0 ,
\label{eq:conti3pra1} \\
&& \partial_t S_i + \partial_j\left( \alpha \sqrt{\gamma} \, T^{j}_{\rm{(hd)} \hspace{1mm} i}\right)
= \frac{1}{2} \alpha \sqrt{\gamma} \, T^{\alpha \beta} g_{\alpha \beta ,i} - G_i,
\label{eq:Mon3pra1} \\
&& \partial_t {\tau}
+ \partial_i \left( {\alpha}^2 \sqrt{\gamma} \, T^{0i}
- {\rho}_{\ast} v^i \right) = s - \alpha^2 \sqrt{\gamma} G^{0},
\label{eq:Ene3pra1} \\
&& \partial_t ({\rho}_{\ast} Y_{\rm e}) + \partial_j\left( {\rho}_{\ast} Y_{\rm e} v^j \right) = - \alpha \sqrt{\gamma} \hspace{0.5mm} \Gamma ,
\label{eq:Lepcon3pra1}\end{aligned}$$ where various variables are defined as follows: $$\begin{aligned}
v^j & \equiv & \frac{u^j}{u^t} , \label{eq:threevelodef} \\
{\rho}_{\ast} & \equiv & \alpha \sqrt{\gamma} \, \rho_0 u^t ,
\label{eq:con1def} \\
S_j & \equiv & \alpha \sqrt{\gamma} \, T^0 \! _j
= {\rho}_{\ast} h u_j ,
\label{eq:con2def} \\
\tau & \equiv & \alpha^2 \sqrt{\gamma} \, T^{00} - {\rho}_{\ast}
= {\rho}_{\ast} \alpha h u^t - \sqrt{\gamma} \, p - {\rho}_{\ast} ,
\label{eq:con3def} \\
s & \equiv & \alpha \sqrt{\gamma} \,
\biggl\{ \left( T^{00} \beta^i \beta^j + 2 T^{0i} \beta^j + T^{ij} \right) K_{ij} \nonumber \\
&& \hspace{5mm} -\left( T^{00} \beta^i + T^{0i} \right) \partial_{i} \alpha \biggr\}, \label{eq:defsourceMom}\end{aligned}$$ (see also Eq. A2 in @2008ApJ...689..391N). In the above equations, $Y_{\rm e}$, $p$, $h$, $g_{\mu \nu}$, $\gamma$ and $K_{ij}$ are the electron fraction, pressure and specific enthalpy of matter, the 4-dimensional metric of spacetime, the determinant of the 3-dimensional metric of space and the extrinsic curvature, respectively.
In this paper, instead of employing these fully GR equations, we adopt their Newtonian approximations, which can be derived by taking the weak-gravitational-field limit, ignoring the time derivative of the gravitational potential and the space derivatives of the 3-dimensional spatial metric (see @2011ApJ...731...80N). The basic equations then reduce, in the spherical coordinates, to $$\begin{aligned}
\partial_{t}{\mbox{\boldmath $Q$}} + \partial_{j}{\mbox{\boldmath $U^{j}$}} = \mbox{\boldmath $W_{h}$} + \mbox{\boldmath $W_{i}$} + \mbox{\boldmath $W_{a}$}, \label{eq:hydroConservativeform_ac}\end{aligned}$$ where each term is given as $$\begin{aligned}
\hspace{0mm} \mbox{\boldmath $Q$} =
\left(
\begin{array}{c}
\sqrt{g} \rho \\
\sqrt{g} \rho v_{r} \\
\sqrt{g} \rho v_{\theta} \\
\sqrt{g} \rho v_{\phi} \\
\sqrt{g} ( e + \frac{1}{2} \rho v^2) \\
\sqrt{g} \rho Y_{e}
\end{array}
\right),\end{aligned}$$ $$\begin{aligned}
\hspace{0mm} \mbox{\boldmath $U^{j}$} =
\left(
\begin{array}{c}
\sqrt{g} \rho v^{j} \\
\sqrt{g} (\rho v_{r} v^{j} + p \delta_r^{j})\\
\sqrt{g} (\rho v_{\theta} v^{j} + p \delta_{\theta}^{j})\\
\sqrt{g} (\rho v_{\phi} v^{j} + p \delta_{\phi}^{j})\\
\sqrt{g} ( e + p + \frac{1}{2} \rho v^2) v^{j} \\
\sqrt{g} \rho Y_{e} v^{j}
\end{array}
\right),\end{aligned}$$ $$\begin{aligned}
\mbox{\boldmath $W_{h}$} =
\left(
\begin{array}{c}
0 \\
\sqrt{g} \rho \Bigl( - \psi_{,r} + r (v^{\theta})^2 + r {\rm sin}^2\theta (v^{\phi})^2 + \dfrac{2p}{r \rho} \Bigr)\\
\sqrt{g} \rho \Bigl( - \psi_{,\theta} r^2 + {\rm sin}\theta {\rm cos}\theta (v^{\phi})^2 + \dfrac{p {\rm cos}\theta }{ \rho {\rm sin}\theta } \Bigr)\\
- \sqrt{g} \rho \psi_{,\phi} \\
- \sqrt{g} \rho v^{j} \psi_{,j} \\
0
\end{array}
\right) \label{eq:Wh},\end{aligned}$$ $$\begin{aligned}
\hspace{0mm} \mbox{\boldmath $W_i$} =
\left(
\begin{array}{c}
0 \\
- \sqrt{g} G^{r} \\
- \sqrt{g} G^{\theta} \\
- \sqrt{g} G^{\phi} \\
- \sqrt{g} G^{t} \\
- \sqrt{g} \Gamma
\end{array}
\right), \end{aligned}$$
$$\begin{aligned}
\hspace{0mm} \mbox{\boldmath $W_a$} =
\left(
\begin{array}{c}
0 \\
\sqrt{g} \rho \dot{\beta}_{r} \\
\sqrt{g} \rho \dot{\beta}_{\theta} \\
\sqrt{g} \rho \dot{\beta}_{\phi} \\
\sqrt{g} \rho v^{j} \dot{\beta}_{j} \\
0
\end{array}
\right),\end{aligned}$$
(see also Eqs. (12)-(16) in @2014ApJS..214...16N). In the above expressions, $\sqrt{g}(=r^2{\rm sin}\theta)$, $\psi$ and $\dot{\beta}_{j}$ denote the volume factor for the spherical coordinates, the Newtonian gravitational potential and the time derivative of the shift vector, respectively. Note that $\mbox{\boldmath $W_a$}$ represents the acceleration of the coordinates.
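As a minimal illustration of the role of $\mbox{\boldmath $W_a$}$, an operator-split application of this fictitious-force source over one time step may be sketched as follows (illustrative Python; the variable names and the splitting itself are our assumptions, and the $\sqrt{g}$ factor is assumed absorbed into the conserved variables):

```python
def add_frame_acceleration(S, E, rho, v, beta_dot, dt):
    """Apply the W_a source over a step dt: each covariant momentum
    density S[i] gains rho * beta_dot_i * dt, and the kinetic-energy
    density E gains rho * v^j beta_dot_j * dt.  S, v and beta_dot are
    3-component sequences of mesh values (scalars here for brevity)."""
    S = [S[i] + rho * beta_dot[i] * dt for i in range(3)]
    E = E + rho * sum(v[i] * beta_dot[i] for i in range(3)) * dt
    return S, E
```

In the actual code these sources are of course evaluated together with $\mbox{\boldmath $W_h$}$ and $\mbox{\boldmath $W_i$}$ inside the hydrodynamics update; the sketch only shows which variables the coordinate acceleration touches and with what signs.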
Validation {#sec:twodsim}
==========
Numerical setup and input physics {#subsec:numericalsetup}
---------------------------------
In this section, we validate our new formulation of the moving-mesh technique by performing 2D axisymmetric Boltzmann-Hydro simulations for toy models of PNS oscillations around, and runaways from, the original position. A thorough investigation of the code performance, including the GR capability, will be reported in a forthcoming paper [@Nagainprep]. As the initial condition of these tests, we employ a configuration of a supernova core at 100 ms after core bounce, obtained with the same code by a 1D spherically symmetric simulation of core collapse, bounce and shock stagnation. We map the resultant 1D configuration onto the 2D grid.
We use an 11.2 $\rm{M}_{\sun}$ progenitor [@2002RvMP...74.1015W]. For the 1D simulation, a non-uniform grid of $N_{r}=384$ points in the radial direction covers the region of $0 \leq r \leq 5000$ km, while the momentum space is discretized with a grid of $N_{\nu}=20$ points in the energy region of $0 \leq \nu \leq 300$ MeV and $N_{\bar{\theta}} = 10$ points covering the polar angles from 0 to $\pi$. For the 2D simulations, on the other hand, we deploy $N_{r}=192$ radial grid points and $N_{\theta} = 32$ angular grid points distributed uniformly over the entire meridian section; for the momentum space we use $N_{\nu}=20$ energy grid points and $N_{\bar{\theta}} (= 4) \times N_{\bar{\phi}} (= 4)$ angular grid points. Although this is admittedly a rather coarse mesh both in space and in momentum space, it is not a serious issue for the current purpose: proof-of-principle code tests. We take into account three species of neutrinos: $\nu_e, \nu_{\bar{e}}$ and $\nu_{x}$, which are electron-type neutrinos, electron-type anti-neutrinos and heavy-lepton neutrinos ($\mu$ and $\tau$ neutrinos and their anti-particles collectively), respectively.
As for the input physics, our current Boltzmann-Hydro code has updated some treatments of microphysics from @2012ApJS..199...17S [@2014ApJS..214...16N]. One of them is the incorporation of the multi-nuclear-species equation of state (EOS) by @2011ApJ...738..178F [@2013ApJ...772...95F]. This tabulated EOS provides us with not only thermodynamic quantities but also the abundances of nuclei with mass numbers up to $A \sim 1000$ in nuclear statistical equilibrium (NSE), which are then employed to obtain the rate of electron captures on heavy nuclei (see below). Incidentally, the EOS also includes information on the abundances of light elements. We are currently studying possible roles of their interactions with neutrinos in the post-bounce phase of CCSNe.
Neutrino-matter interactions have also been improved from @2012ApJS..199...17S. One of the upgrades is the full account of non-isoenergetic scatterings of neutrinos on electrons and positrons. Unlike in the spherically symmetric case, the interaction rate depends on $\bar{\phi}$, which prevents a direct application of the method used in @1993ApJ...410..740M [@2005ApJ...629..922S]. We hence obtain the interaction rate by direct numerical integrations combined with the Chebyshev expansions of polylogarithms [@1970Kolbig]. It is important to note that an implicit treatment of non-isoenergetic scatterings is highly expensive in both computational time and memory. We hence handle the neutrino scatterings on electrons and positrons explicitly. Since these scatterings are rather minor, with reaction rates smaller than those of other reactions such as emissions, absorptions and scatterings on nucleons, this poses no problem.
Another important upgrade is the treatment of electron captures on nuclei, as mentioned earlier. We tabulate the reaction rates based on the results of @2010NuPhA.848..454J and the approximation formulae of @2000NuPhA.673..481L and @2003PhRvL..90x1102L, with the mass fractions of heavy nuclei taken from Furusawa’s EOS.
As shown in @2012ApJ...747...73L, these two updates are critically important for CCSNe. As a matter of fact, the deleptonization would be erroneously suppressed during the infall phase if they were neglected, which would then result in a larger mass of the inner core and a stronger shock wave (see below and also @1993ApJ...410..740M [@2003PhRvL..90x1102L; @2003PhRvL..91t1102H; @2012ApJ...747...73L]).
Figures \[fig:1Dpreb\]–\[fig:variousradi\] display the results of the 1D simulation. Figure \[fig:1Dpreb\] plots the distributions of density, radial velocity, electron fraction, lepton fraction, entropy per baryon and temperature at different times in the pre-bounce phase. We find that the mass of the inner core is somewhat less than 0.6 $\rm{M}_{\sun}$ owing to the unsuppressed deleptonization, which is consistent with other 1D computations (see, e.g., @2012ApJ...747...73L [@2015arXiv150807348S]). The post-bounce counterparts are shown in Figure \[fig:1Dpostb\] for different times, where ${\rm{T}_{\rm{b}}}$ denotes the time after bounce in this figure.
Figure \[fig:lightcurve\] presents the energy fluxes measured in the laboratory frame (i.e., neutrino luminosities) at $r~=~422$ km. Note that the well-known bounce feature, i.e., a slight decrease followed by a quick rise in the luminosity of electron-type neutrinos, reaches this radius at ${\rm{T}_{\rm{b}}} \sim 4$ ms. The prominent neutronization burst of electron-type neutrinos can be clearly seen in the upper panel, while the luminosities of the other species start to rise somewhat later. Note that the production of electron-type anti-neutrinos is initially suppressed by the high electron fractions around the neutrino sphere ($Y_e \sim 0.3$). In fact, the luminosity of heavy-lepton neutrinos rises within $\sim~10$ ms, while that of electron-type anti-neutrinos increases gradually over $\sim~50$ ms. Figure \[fig:aveene\] shows the time evolution of the mean energy for each neutrino species at $r~=~422$ km. The mean energy is defined as $$\begin{aligned}
E_{\rm{mean}} = \frac{\int f \nu^3 d \Omega d \nu}
{\int f \nu^2 d \Omega d \nu}. \label{eq:meanene}\end{aligned}$$
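On a discrete energy grid, the definition above reduces to a ratio of weighted sums over energy bins. The sketch below illustrates this quadrature with a hypothetical isotropic Fermi-Dirac-like distribution on a uniform grid; the actual code uses its own energy mesh and angular grid, and for an isotropic $f$ the solid-angle integrals cancel between numerator and denominator:

```python
import math

# Illustrative quadrature for
#   E_mean = (int f nu^3 dOmega dnu) / (int f nu^2 dOmega dnu).
# Grid and distribution are hypothetical, chosen only to demonstrate the sums.
T = 4.0                                          # assumed temperature-like parameter (MeV)
nus = [0.5 + i for i in range(100)]              # uniform energy grid (MeV), bin width 1
f = [1.0 / (math.exp(nu / T) + 1.0) for nu in nus]

num = sum(fi * nu**3 for fi, nu in zip(f, nus))  # ~ int f nu^3 dnu
den = sum(fi * nu**2 for fi, nu in zip(f, nus))  # ~ int f nu^2 dnu
E_mean = num / den
print(E_mean)                                    # roughly 3.15 * T for this choice of f
```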
We also show the trajectories of some important radii in the post-bounce phase in Figure \[fig:variousradi\]. The shock expands initially but stagnates around $170$ km at ${\rm{T}_{\rm{b}}} \sim 80$ ms, while the gain radius starts to deviate from the shock radius at ${\rm{T}_{\rm{b}}} \sim 20$ ms. The trajectories of the points with densities $\rho = 10^{11}, 10^{12}$ and $10^{13}\,\rm{g/cm}^3$ are also shown in this figure. They serve as a rough guide to the size of the PNS as a function of time. All of these features are in qualitative agreement with those observed in previous studies (see e.g., @2012ApJ...747...73L [@2014ApJ...788...82M]). The data at ${\rm{T}_{\rm{b}}}=100$ ms are used as the initial condition for the subsequent 2D simulations.
As a quick validation of the core part of our code, we show in Figure \[fig:compariSumi\] some results of a comparison with another code: we ran 1D simulations of the $15 M_{\sun}$ progenitor model by @2002RvMP...74.1015W twice, once with the current code and once with the 1D, fully general relativistic (GR) Boltzmann-Hydro code developed by @2005ApJ...629..922S. All input physics are identical between the two computations, but the latter code is fully general relativistic and employs the Lagrangian formulation. In the figure, we show some key quantities at core bounce. In each panel, the red (green) line gives the result of our new code (the code of @2005ApJ...629..922S). Considering the differences just mentioned, we think the two results agree with each other reasonably well.
PNS Oscillation {#subsec:oscillation}
---------------
It is known that the center of a PNS oscillates with velocities of the order of 100 km/s and periods of several tens of milliseconds (see e.g., @2012MNRAS.423.1805N). Mimicking such a situation, we start a 2D simulation by adding a velocity of $\Delta v_z=100$ km/s in the z-direction in the region of $r < 30$ km. This simulation is carried out for 100 ms, which is long enough for the purpose of this study.
Figure \[fig:Oscivelo\] shows the time evolution of the PNS velocity (upper panel), the trajectory of the origin of the moving-mesh (middle panel), and the z-coordinate of the mass center of the PNS on this moving-mesh (lower panel). One can see from the middle panel that the PNS moves in the positive z-direction initially owing to the velocity added at the beginning. It is also clear from the upper panel that the PNS is decelerated and the direction of motion is reversed after a few ms. Note that the PNS moves from its original position by $\sim 1$ km by the time $t \sim 15$ ms, as shown in the middle panel. This is not a small distance, and the subsequent evolution cannot be calculated without the moving-mesh technique. As a matter of fact, we conducted the same simulation on an ordinary non-moving grid and found that it ended in a numerical crash with unphysical matter distributions around the coordinate origin.
At $t \sim 20$ ms the PNS again changes its direction of motion and returns to the origin. Although the period varies in time, the PNS experiences two cycles of oscillation in this simulation. It is also important to note that the moving-mesh traces the motion of the PNS nicely (see the lower panel). This enables a successful Boltzmann-Hydro simulation on the spherical polar grid. We indeed confirmed that all the unphysical features observed in the simulation on the fixed mesh disappear with the moving-mesh technique.
Runaway motions of PNS {#subsec:Kick}
----------------------
In realistic simulations the shock wave expands asymmetrically after successful shock revival and expels the envelope anisotropically. The PNS then undergoes a recoil. In this section, we mimic such runaways of the PNS very crudely. The initial configurations of matter and neutrinos are the same as those used in the previous section. In this case, however, we do not add velocity perturbations. Instead, we continuously add by hand an external acceleration of $10^{11} {\rm cm}/{\rm s^2}$ in the positive z-direction within the region of $r < 30$ km on the moving grid. The simulation was carried out to $t=10$ ms.
Figure \[fig:Kickvelo\] is the counterpart of Figure \[fig:Oscivelo\] for the present case. As expected, the PNS moves continuously in the positive z-direction. It is seen in the top panel that the PNS velocity reaches $\sim 3000$ km/s at the end of the simulation, which is much larger than the realistic kick velocity of a few hundred km/s. In spite of this rather extreme runaway of the PNS, the moving-mesh tracks it very well, as shown in the lower panel of the figure. In fact, the distance between the mass center of the PNS and the origin of the moving-mesh remains less than $10^{-2}$ km, which is close enough to avoid numerical problems. Incidentally, the oscillations that are evident after $t \sim 4$ ms in the lower panel are ascribed to the deformations of the PNS.
A series of snapshots of the entropy distribution in the meridian plane is shown in Figure \[fig:entrocontKick\]. The low-entropy region with deep blue colors in this figure corresponds to the central un-shocked part of the PNS. It is apparent that the PNS moves upward with time. Note that this figure is drawn on the fixed coordinates, which coincide with the moving coordinates initially. In each panel of the same figure, we put two concentric circles, which represent the moving-mesh. It is confirmed again that they trace the PNS closely. In Figure \[fig:NuenumcontKick\], on the other hand, the (number) density contours of electron-type neutrinos are drawn in color. One can see that the neutrino density is slightly non-spherical owing to the deformation of the PNS. It is more important, however, that the neutrinos are comoving with the PNS. This is of course a consequence of neutrino trapping, which occurs in the optically thick region. In the Boltzmann-Hydro simulation, however, this behavior is highly non-trivial and is in fact ensured by the combination of the following two conditions: (1) neutrinos are isotropically distributed in the fluid-rest frame; (2) the neutrino distribution in the O-frame is related accurately to that in the fluid-rest frame by a Lorentz transformation. The result we have just presented is yet another demonstration that our code is working properly.
Last but not least, we mention the conservation of linear momentum in our code. It is well known that it is difficult in general for hydrodynamics codes like ours that adopt curvilinear coordinates to enforce the conservation of linear momentum. Indeed, it is evident from Eqs. (\[eq:Mon3pra1\]) and (\[eq:Wh\]) that these equations cannot be written in conserved form even in the absence of gravity and neutrino interactions. Needless to say, the gravity term, which is not written in conservation form either, also contributes to the violation of momentum (and energy) conservation (see also @2010ApJS..189..104M). Note, however, that as formulated in Section \[subsec:shiftvector\], we do not use the conservation law to evaluate the PNS velocity and its acceleration. Regardless, we quantitatively checked the violation of linear momentum conservation in our code by conducting another 2D purely hydrodynamical simulation (with the same initial conditions as the previous tests but adding a 1$\%$ random density perturbation) for 100 ms. We found that the numerical error is equivalent to $\sim 10$ km/s of PNS kick velocity, which is not negligibly small but still much smaller than the typical velocity of $\sim 100$ km/s. Considering the rather coarse grid employed here and the purpose of this paper, we may conclude that our code performs well.
Summary {#sec:summary}
=======
In this paper, we have presented a novel method to deal with motions of a PNS in spherical polar coordinates. It is based on a moving-mesh technique; as far as the neutrino transport part is concerned, it is essentially equivalent to the general relativistic extension of the special relativistic Boltzmann solver we developed earlier. In fact, the Boltzmann equation is reformulated in the 3+1 formalism of GR, although the GR code thus obtained is applied in this paper only to flat spacetime, coupled with the Newtonian hydrodynamics code and self-gravity module. The shift vector is utilized to specify the movement of the spatial coordinates so that they track the PNS motion approximately. As a matter of fact, without such a technique we encountered a numerical crash with unphysical features emerging at the coordinate origin as the PNS was dislocated from its original position. Since the coordinate origin stays very close to the mass center of the PNS with the moving-mesh technique, we expect that in more realistic simulations the extended Boltzmann-Hydro code will be able to treat the violent oscillations and ultimate runaway of the PNS in the post-bounce phases of CCSNe in spherical polar coordinates.
We have also described in detail the numerical implementations of the GR extensions to our SR Boltzmann code, which was constructed on two energy grids so that it could deal with both the advection and collision terms correctly. It turns out that the extension is rather straightforward thanks to the use of appropriate tetrads. The two energy grids and the transformations between them, which are employed in the SR Boltzmann code, are nicely identified with these tetrads and their Lorentz transformations, respectively.
In Section \[sec:twodsim\], we have validated our method by applying it to two test problems: toy models of PNS oscillation and runaway, which mimic very crudely more realistic post-bounce simulations. We have demonstrated that the code has good tracking capabilities and can follow the evolutions without any of the problems we encountered when the fixed spatial grid was employed. Incidentally, the extended code is currently being applied to realistic 2D simulations of CCSNe, and the results together with a thorough code validation will be reported elsewhere soon. We are also planning to conduct truly GR Boltzmann simulations of neutrino transport in a black hole spacetime. It should be apparent that any (local) gauge conditions can be imposed in addition to the uniform shift we employed in this paper.
The next step is to couple the GR neutrino transport code with a solver of the Einstein equations. Note that the hydrodynamics code is already GR (see @2008ApJ...689..391N [@2009ApJ...696.2026N]), although the Newtonian version was used in this paper. Such an integrated code, once completed, will certainly broaden the scope of application well beyond CCSNe.
We are grateful to A. Juodagalvis for providing the data of electron capture rates on heavy nuclei. H.N. acknowledges M. Shibata, Y. Sekiguchi and H. Okawa for valuable comments and discussions. H.N. also thanks Werner Marcus for proofreading. The numerical computations were performed on the K computer at AICS, FX10 at the Information Technology Center of the University of Tokyo, SR16000 at YITP of Kyoto University, and SR16000 at KEK under the support of its Large Scale Simulation Program (14/15-17, 15/16-08), and at the Research Center for Nuclear Physics (RCNP) of Osaka University. Large-scale storage of numerical data is supported by JLDG constructed over SINET4 of NII. H.N. was supported in part by JSPS Postdoctoral Fellowships for Research Abroad No. 27-348. This work was also supported by Grants-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan (15K05093, 24103006, 24740165, 24244036, 25870099) and the HPCI Strategic Program of MEXT and the K computer at RIKEN (Project ID: hpci 130025, 140211, and 150225).
Boggs, S. E., Harrison, F. A., Miyasaka, H., et al. 2015, Science, 348, 670
Bruenn, S. W., Lentz, E. J., Hix, W. R., et al. 2014, arXiv:1409.5779
Buras, R., Rampp, M., Janka, H.-T., & Kifonidis, K. 2006, , 447, 1049
Buras, R., Janka, H.-T., Rampp, M., & Kifonidis, K. 2006, , 457, 281
Burrows, A., Livne, E., Dessart, L., Ott, C. D., & Murphy, J. 2007, , 655, 416
Couch, S. M., Chatzopoulos, E., Arnett, W. D., & Timmes, F. X. 2015, , 808, L21
Dessart, L., Burrows, A., Livne, E., & Ott, C. D. 2006, , 645, 534
Dolence, J. C., Burrows, A., & Zhang, W. 2015, , 800, 10
Furusawa, S., Yamada, S., Sumiyoshi, K., & Suzuki, H. 2011, , 738, 178
Furusawa, S., Sumiyoshi, K., Yamada, S., & Suzuki, H. 2013, , 772, 95
Furusawa, S., Nagakura, H., Sumiyoshi, K., & Yamada, S. 2013, , 774, 78
Grefenstette, B. W., Harrison, F. A., Boggs, S. E., et al. 2014, , 506, 339
Hanke, F., M[ü]{}ller, B., Wongwathanarat, A., Marek, A., & Janka, H.-T. 2013, , 770, 66
Hix, W. R., Messer, O. E., Mezzacappa, A., et al. 2003, Physical Review Letters, 91, 201102
Janka, H.-T., & Mueller, E. 1994, , 290, 496
Just, O., Obergaulinger, M., & Janka, H.-T. 2015, arXiv:1501.02999
Juodagalvis, A., Langanke, K., Hix, W. R., Mart[í]{}nez-Pinedo, G., & Sampaio, J. M. 2010, Nuclear Physics A, 848, 454
Kageyama, A., & Sato, T. 2004, Geochemistry, Geophysics, Geosystems, 5, Q09005
Kolbig, K., Mignaco, J., & Remiddi, E. 1970, BIT, 10, 38
Kuroda, T., Takiwaki, T., & Kotake, K. 2015, arXiv:1501.06330
Langanke, K., & Mart[í]{}nez-Pinedo, G. 2000, Nuclear Physics A, 673, 481
Langanke, K., Mart[í]{}nez-Pinedo, G., Sampaio, J. M., et al. 2003, Physical Review Letters, 90, 241102
Lentz, E. J., Mezzacappa, A., Messer, O. E. B., et al. 2012, , 747, 73
Lentz, E. J., Bruenn, S. W., Hix, W. R., et al. 2015, , 807, L31
Liebend[ö]{}rfer, M., Whitehouse, S. C., & Fischer, T. 2009, , 698, 1174
Lindquist, R. W. 1966, Annals of Physics, 37, 487
Livne, E., Burrows, A., Walder, R., Lichtenstadt, I., & Thompson, T. A. 2004, , 609, 277
Mezzacappa, A., & Bruenn, S. W. 1993, , 410, 740
Mezzacappa, A., Calder, A. C., Bruenn, S. W., et al. 1998, , 495, 911
Motch, C., Pires, A. M., Haberl, F., Schwope, A., & Zavlin, V. E. 2009, , 497, 423
M[ü]{}ller, B., Janka, H.-T., & Dimmelmeier, H. 2010, , 189, 104
M[ü]{}ller, B., Janka, H.-T., & Marek, A. 2012, , 756, 84
M[ü]{}ller, B., & Janka, H.-T. 2014, , 788, 82
M[ü]{}ller, B., & Janka, H.-T. 2015, , 448, 2141
M[ü]{}ller, B. 2015, , 453, 287
Nagakura, H., & Yamada, S. 2008, , 689, 391
Nagakura, H., & Yamada, S. 2009, , 696, 2026
Nagakura, H., Ito, H., Kiuchi, K., & Yamada, S. 2011, , 731, 80
Nagakura, H., Sumiyoshi, K., & Yamada, S. 2014, , 214, 16
Nagakura et al. in prep
Nordhaus, J., Brandt, T. D., Burrows, A., Livne, E., & Ott, C. D. 2010, , 82, 103016
Nordhaus, J., Brandt, T. D., Burrows, A., & Almgren, A. 2012, , 423, 1805
O’Connor, E., & Couch, S. 2015, arXiv:1511.07443
Ott, C. D., Burrows, A., Dessart, L., & Livne, E. 2008, , 685, 1069
Pan, K.-C., Liebend[ö]{}rfer, M., Hempel, M., & Thielemann, F.-K. 2016, , 817, 72
Scheck, L., Kifonidis, K., Janka, H.-T., & M[ü]{}ller, E. 2006, , 457, 963
Shibata, M., Nagakura, H., Sekiguchi, Y., & Yamada, S. 2014, , 89, 084073
Skinner, M. A., Burrows, A., & Dolence, J. C. 2015, arXiv:1512.00113
Sullivan, C., O’Connor, E., Zegers, R. G. T., Grubb, T., & Austin, S. M. 2015, arXiv:1508.07348
Sumiyoshi, K., Yamada, S., Suzuki, H., et al. 2005, , 629, 922
Sumiyoshi, K., & Yamada, S. 2012, , 199, 17
Suwa, Y., Kotake, K., Takiwaki, T., et al. 2010, , 62, L49
Takiwaki, T., Kotake, K., & Suwa, Y. 2014, , 786, 83
Walder, R., Burrows, A., Ott, C. D., et al. 2005, , 626, 317
Wang, L., & Wheeler, J. C. 2008, , 46, 433
Wongwathanarat, A., Hammer, N. J., & M[ü]{}ller, E. 2010, , 514, A48
Woosley, S. E., Heger, A., & Weaver, T. A. 2002, Reviews of Modern Physics, 74, 1015
Zhang, W., Howell, L., Almgren, A., et al. 2013, , 204, 7
[^1]: This is because the recoil velocity is normally much lower than the sound velocity in PNS.
[^2]: In the Newtonian case, we add terms including the acceleration in the momentum and energy conservation equations. For GR cases, we do not need to modify the basic equations, since the acceleration has already been included through the shift vector. See the text for more details.
[^3]: Other advection terms are calculated on LFG (see @2014ApJS..214...16N for more details). Note also that the energy derivative term disappears in the current study for the moving-mesh in the flat spacetime, since $\omega_{(0)}$ becomes trivially zero.
---
abstract: |
In this work, we compare the two approximations of a path-connected space $X$, by the Ganea spaces $G_n(X)$ and by the realizations $\|\Lambda_\bullet X\|_{n}$ of the truncated simplicial resolutions emerging from the loop-suspension cotriple $\Sigma\Omega$. For a simply connected space $X$, we construct maps\
$\|\Lambda_\bullet X\|_{n-1}\to G_n(X)\to \|\Lambda_\bullet X\|_{n}$ over $X$, up to homotopy. In the case $n=2$, we prove the existence of a map $G_2(X)\to\|\Lambda_\bullet X\|_{1}$ over $X$ (up to homotopy) and conjecture that this map exists for any $n$.
address:
- |
Centro de Matem[á]{}tica\
Universidade do Minho\
Campus de Gualtar\
4710-057 Braga\
Portugal
- |
Mathematisches Institut\
Freie Universität Berlin\
Arnimallee 2–6\
D–14195 Berlin\
Germany
- |
Département de Mathematiques\
UMR 8524\
Université de Lille 1\
59655 Villeneuve d’Ascq Cedex\
France
- |
Centro de Matem[á]{}tica\
Universidade do Minho\
Campus de Gualtar\
4710-057 Braga\
Portugal
author:
- Thomas Kahl
- Hans Scheerer
- Daniel Tanré
- Lucile Vandembroucq
bibliography:
- 'newbiblio.bib'
title: Simplicial resolutions and Ganea fibrations
---
We use the category $\Top$ of well pointed compactly generated spaces having the homotopy type of CW-complexes. We denote by $\Omega$ and $\Sigma$ the classical loop space and (reduced) suspension constructions on $\Top $.
Let $X\in \Top$. First we recall the construction of the Ganea fibrations $G_n(X)\to X$ where $G_n(X)$ has the same homotopy type as the $n$-th stage, $B_n\Omega X$, of the construction of the classifying space of $\Omega X$:
1. the first Ganea fibration, $p_1\colon G_1(X)\rightarrow X$, is the fibration associated to the evaluation map $\ev_X\colon \Sigma\Omega X\rightarrow X$;
2. given the $n$-th fibration $p_n\colon G_n(X)\rightarrow X$, let $F_n(X)$ be its homotopy fiber and let $G_n(X)\cup {\cC}(F_n(X))$ be the mapping cone of the inclusion $F_n(X)\rightarrow G_n(X)$. We now define a map ${p'}_{n+1}\colon G_n(X)\cup {\cC}(F_n(X))\rightarrow X$ which coincides with $p_n$ on $G_n(X)$ and sends the (reduced) cone $\cC(F_n(X))$ to the base point. The $(n+1)$-st Ganea fibration, $p_{n+1}\colon G_{n+1}(X)\rightarrow X$, is the fibration associated to ${p'}_{n+1}$.
3. Denote by $G_\infty(X)$ the direct limit of the canonical maps $G_n(X)\to G_{n+1}(X)$ and by $p_\infty\colon G_\infty(X)\to X$ the map induced by the $p_n$’s.
From a classical theorem of Ganea [@Gan67a], one knows that the fiber of $p_n$ has the homotopy type of an $(n+1)$-fold reduced join of $\Omega X$ with itself. Therefore the maps $p_n$ become more and more highly connected as the integer $n$ grows. As a consequence, if $X$ is path-connected, the map $p_\infty\colon G_\infty(X)\to X$ is a homotopy equivalence and the total spaces $G_n(X)$ constitute approximations of the space $X$.
The previous construction starts with the pair of adjoint functors $\Omega$ and $\Sigma$. From them, we can construct a *simplicial space* $\Lambda_{\bullet} X$, defined by $\Lambda_nX=(\Sigma\Omega )^{n+1}X$ and augmented by $d_0=\ev_X\colon \Sigma\Omega X\rightarrow X$. Forgetting the degeneracies, we obtain a *facial space* (also called a [restricted simplicial space]{} in [@DD77 3.13]). Denote by $\|\Lambda_{\bullet} X\|$ the realization of this facial space (see [@Seg74] or [Section \[sec:facial\]]{}). An adaptation of a proof of Stover (see [@Sto90 Proposition 3.5]) shows that the augmentation $d_0$ induces a map $\|\Lambda_\bullet X\|\to X$ which is a homotopy equivalence. If we consider the successive stages of the realization of the facial space $\Lambda_\bullet X$, we get maps $\|\Lambda_\bullet X\|_n\to X$, which constitute a second sequence of approximations of the space $X$. In this work, we study the relationship between these two sequences of approximations and prove the following results.
\[thm:main\] Let $X\in \Top$ be a simply connected space. Then there is a homotopy commutative diagram $$\xymatrix{
\|\Lambda_\bullet X\|_{n-1}\ar[r]\ar[ddr]&
G_n(X)\ar[r]\ar[dd]_{p_n}&
\|\Lambda_\bullet X\|_{n}\ar[ldd]\\
&&\\
&X&
}$$
The hypothesis of simple connectivity is used only for the map $G_n(X)\to \|\Lambda_\bullet X\|_n$; see [Theorem \[thm:easyway\]]{} and [Theorem \[thm:hardway\]]{}. In the case $n=2$, the situation is better.
\[thm:main2\] Let $X\in\Top$. Then there are homotopy commutative triangles $$\xymatrix{
\|\Lambda_\bullet X\|_{1}\ar@<1ex>[rr]\ar[dr]&&
G_2(X)\ar[ld]^{p_2}\ar[ll]\\
&X&
}$$
We conjecture the existence of maps $\xymatrix@1{\|\Lambda_{\bullet} X\|_{n-1}\ar@<1ex>[r]&
G_n(X)\ar[l]
}$ over $X$ up to homotopy, for any $n$.
This work may also be seen as a comparison of two constructions: an iterative fiber-cofiber process and the realization of progressive truncations of a facial resolution. More generally, for any cotriple, we present an adapted fiber-cofiber construction (see [Definition \[defi:TGanea\]]{}) and ask whether the results obtained in the case of $\Sigma\Omega$ can be extended to this setting.
Finally, we observe that a variation on a theorem of Libman is essential in our argument; see [Theorem \[Libman\]]{}. A proof of this result, inspired by the methods developed by R. Vogt (see [@Vogt73]), is presented in an Appendix.
This program is carried out in Sections 1–8 below, whose headings are self-explanatory.
Facial spaces {#sec:facial}
=============
A *facial object* in a category $\bf C$ is a sequence of objects $X_0, X_1, X_2, \dots$ together with morphisms $d_i: X_n \to
X_{n-1}$, $0 \leq i \leq n$, satisfying the *facial identities* $d_id_j = d_{j-1}d_i$ $(i < j)$. $$\xymatrix{
X_{0} &\ar@<2pt>[l]^{d_1} \ar@<-2pt>[l]_{d_0}X_{1}
&\ar@<4pt>[l]^{d_2} \ar[l]|{d_1}\ar@<-4pt>[l]_{d_0}X_{2} &\cdots
&X_{n-1} &\ar@<4pt>[l]^-{d_n}
\ar@{}[l]|-{:}\ar@<-5pt>[l]_-{d_0}X_{n} &\ar@<4pt>[l]
\ar@{}[l]|-{:}\ar@<-5pt>[l]\cdots
}$$ The morphisms $d_i$ are called *face operators*. We shall use notation like $X_{\bullet}$ to denote facial objects. With the obvious morphisms the facial objects in $\bf C$ form a category which we denote by $d{\bf C}$. An *augmentation* of a facial object $X_{\bullet}$ in a category $\bf C$ is a morphism $d_0: X_0 \to X$ with $d_0 \circ d_0 = d_0 \circ d_1$. The facial object $X_{\bullet}$ together with the augmentation $d_0$ is called a *facial resolution of $X$* and is denoted by $X_{\bullet}
\stackrel{d_0}{\to} X$.\
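The facial identities can be checked concretely in the prototypical model where an $n$-simplex is an $(n+1)$-tuple and $d_i$ deletes the $i$-th entry; the following sketch (the tuple model is ours, purely for illustration) verifies $d_id_j = d_{j-1}d_i$ for $i<j$:

```python
from itertools import product

def d(i, xs):
    """Face operator in the tuple model: delete the i-th entry."""
    return xs[:i] + xs[i + 1:]

# Verify d_i d_j = d_{j-1} d_i for all 0 <= i < j <= n on a sample tuple.
xs = tuple(range(5))            # plays the role of an element of X_4
n = len(xs) - 1
for i, j in product(range(n + 1), repeat=2):
    if i < j:
        assert d(i, d(j, xs)) == d(j - 1, d(i, xs))
print("facial identities hold in the tuple model")
```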
Realization(s) of a facial space
--------------------------------
As usual, $\Delta ^n$ denotes the standard $n$-simplex of $\R^{n+1}$ and the inclusions of faces are denoted by $\delta^i:\Delta ^n\to \Delta^{n+1}$. We consider the point $(0,\dots, 0,1) \in \R^{n+1}$ as the base-point of the standard $n$-simplex $\Delta ^n$. If $X$ and $Y$ are in $\Top$, we denote by $X\rtimes Y$ the half smashed product $X\rtimes Y =X\times Y/*\times Y$.\
A *facial space* is a facial object in $\Top$. The *realization* of a facial space $X_{\bullet}$ is the direct limit $$\|X_{\bullet}\|_{\infty} = \lim \limits_{\longrightarrow} \|X_{\bullet}\|_n$$ where the spaces $\|X_{\bullet}\|_n$ are inductively defined as follows. Set $\|X_{\bullet}\|_0 = X_0$. Suppose we have defined $\|X_{\bullet}\|_{n-1}$ and a map $\chi_{n-1}: X_{n-1}\rtimes \Delta^{n-1} \to \|X_{\bullet}\|_{n-1}$ ($\chi_0$ is the obvious homeomorphism). Then $\|X_{\bullet}\|_{n}$ and $\chi_n$ are defined by the pushout diagram $$\xymatrix{
X_n\rtimes \partial \Delta ^n\ar[r]^{\varphi_n}_{} \ar@{
>->}[d]^{}_{} & \|X_{\bullet}\|_{n-1}\ar@{ >->}[d]^{}_{}
\\
X_n\rtimes \Delta ^n\ar[r]^{}_{\chi_n} & \|X_{\bullet}\|_{n} }$$ where $\varphi_n$ is defined by the following requirements, for any $i\in\left\{0,1,\ldots,n\right\}$, $$\varphi_n\circ (X_n\rtimes \delta^i) = \chi_{n-1}\circ (d_i \rtimes \Delta ^{n-1}):
X_n\rtimes \Delta^{n-1} \to \|X_{\bullet}\|_{n-1}.$$ It is clear that $\varphi_1$ is a well-defined continuous map. For $\varphi_n$ with $n\geq 2$, this is assured by the facial identities $d_id_j = d_{j-1}d_i\; (i < j)$.
We also consider another realization of the facial space $X_{\bullet}$. The *free realization* of $X_{\bullet}$ is the direct limit $$|X_{\bullet}|_{\infty} = \lim \limits_{\longrightarrow} |X_{\bullet}|_n$$ where the spaces $|X_{\bullet}|_n$ are inductively defined as follows. Set $|X_{\bullet}|_0 = X_0$. Suppose we have defined $|X_{\bullet}|_{n-1}$ and a map $\bar \chi_{n-1}: X_{n-1}\times
\Delta^{n-1} \to |X_{\bullet}|_{n-1}$ ($\bar \chi_0$ is the obvious homeomorphism). Then $|X_{\bullet}|_{n}$ and $\bar \chi_n$ are defined by the pushout diagram $$\xymatrix{
X_n\times \partial \Delta ^n\ar[r]^{\bar \varphi_n}_{} \ar@{
>->}[d]^{}_{} & |X_{\bullet}|_{n-1}\ar@{ >->}[d]^{}_{}
\\
X_n\times \Delta ^n\ar[r]^{}_{\bar\chi_n} & |X_{\bullet}|_{n} }$$ where $\bar \varphi_n$ is defined by the following requirements, for any $i\in\left\{0,1,\ldots,n\right\}$, $$\bar \varphi_n\circ (X_n\times \delta^i) = \bar \chi_{n-1}\circ (d_i \times \Delta ^{n-1}):
X_n\times \Delta^{n-1} \to |X_{\bullet}|_{n-1}.$$ Again the facial identities $d_id_j = d_{j-1}d_i\; (i < j)$ assure that $\bar \varphi_n$ is a well-defined continuous map. Since $\bar\chi_{n-1}$ is base-point preserving, so is $\bar \varphi_n$ and hence $\bar\chi_n$.\
We sometimes consider facial spaces with upper indexes $X^{\bullet}$. In such a case, the realizations up to $n$ are denoted by $||X^{\bullet}||^n$ and $|X^{\bullet}|^n$.\
Let $X_{\bullet} \stackrel{d_0}{\to} X$ be a facial resolution of a space $X$. We define a sequence of maps $\|X_{\bullet}\|_n \to X$ as follows. The map $\|X_{\bullet}\|_0 \to X$ is the augmentation. Suppose we have defined $\|X_{\bullet}\|_{n-1} \to X$ such that the following diagram is commutative: $$\xymatrix{
X_{n-1}\rtimes \Delta ^{n-1} \ar[r]^-{\chi_{n-1}}_{} \ar[d]^{}_{\pr}
& \|X_{\bullet}\|_{n-1} \ar [d]^{}_{}
\\
X_{n-1} \ar[r]^{}_{(d_0)^n}
& X,
}$$ where $(d_0)^n$ denotes the $n$-fold composition of the face operator $d_0$. Consider the diagram $$\xymatrix{
X_{n}\rtimes \Delta ^{n-1} \ar[r]^{d_i\rtimes \Delta^{n-1}}_{} \ar[d]^{}_{X_n\rtimes \delta ^i}
& X_{n-1}\rtimes \Delta ^{n-1} \ar [d]^{\chi_{n-1}}
\\
X_{n}\rtimes \partial \Delta ^{n} \ar[r]^{\varphi_{n}}_{}
\ar[d]^{}_{\pr} & \|X_{\bullet}\|_{n-1} \ar [d]^{}_{}
\\
X_{n} \ar[r]^{}_{(d_0)^{n+1}}
& X.
}$$ The upper square is commutative for all $i$ and so is the outer diagram. It follows that the lower square is commutative. We may therefore define $\|X_{\bullet}\|_n \to X$ to be the unique map which extends $\|X_{\bullet}\|_{n-1} \to X$ and which, pre-composed by $\chi _n$, is the composite $\xymatrix@1{X_n\rtimes \Delta ^n\ar[r]^-{\pr}&X_n\ar[rr]^{(d_0)^{n+1}}&&X}$. Similarly, we define a sequence of maps $|X_{\bullet}|_n \to X$. We refer to the maps $\|X_{\bullet}\|_n \to X$ and $|X_{\bullet}|_n \to X$ as the *canonical maps* induced by the facial resolution $X_{\bullet} \to X$. The next statement relates these two realizations; its proof is straightforward.
\[pointedvsfree\] Let $X_{\bullet}$ be a facial space. Then for each $n\in \N$, the canonical map $|X_{\bullet}|_n \to X$ factors through the canonical map $\|X_{\bullet}\|_n \to X$.
Facial resolutions with contraction
-----------------------------------
A *contraction* of a facial resolution $X_{\bullet} \stackrel{d_0}{\to} X$ consists of a sequence of morphisms $s: X_{n-1} \to X_n \quad (X_{-1} = X)$ such that $d_0 \circ s = \id$ and $d_i\circ s = s\circ d_{i-1}$ for $i\geq 1$.
\[quotient\] Let $X_{\bullet}\stackrel{d_0}{\to} X$ be a facial resolution which admits a contraction $s: X_{n-1} \to X_n \quad (X_{-1} = X)$. For any $n\geq 0$, $|X_{\bullet}|_{n}$ can be identified with the quotient space $X_n\times \Delta^n /\sim$ where the relation is given by $$(x,t_0,...,t_k,...,t_n)\sim (sd_kx,0,t_0,...,\hat{t}_k,...,t_n),\quad \text{if } t_k=0.$$ As usual, the expression $\hat{t}_k$ means that $t_k$ is omitted. Under this identification the canonical map $|X_{\bullet}|_{n}\to
X$ is given by $[x,t_0,...,t_k,...,t_n]\mapsto (d_0)^{n+1}(x)$ and the inclusion $|X_{\bullet}|_{n}{\rightarrowtail }|X_{\bullet}|_{n+1}$ is given by $[x,t_0,...,t_k,...,t_n]\mapsto
[sx,0,t_0,...,t_k,...,t_n]$.
We first note that the facial identities together with the contraction properties guarantee that the relation is unambiguously defined if several parameters are zero, and also that the two maps $$\begin{array}{rcl}
X_n\times \Delta^n /\sim & \to & X_{n+1}\times \Delta^{n+1}/\sim\\
\left[x,t_0,...,t_k,...,t_n\right]& \mapsto
&\left[sx,0,t_0,...,t_k,...,t_n\right]
\end{array}$$ and $$\begin{array}{rcl}
X_n\times \Delta^n /\sim & \to & X\\
\left[x,t_0,...,t_k,...,t_n\right]& \mapsto & (d_0)^{n+1}(x)
\end{array}$$ that we will denote by $\iota_n$ and $\varepsilon_n$ respectively are well-defined.
Beginning with $\xi_0=\id$, we next construct a sequence of homeomorphisms $\xi_n:|X_{\bullet}|_{n} \to X_n\times \Delta^n /\sim$ inductively by using the universal property of pushouts in the diagram $$\xymatrix{
X_n\times \partial \Delta ^n \ar[r]^{\bar\varphi_n}_{} \ar@{
>->}[dd]^{}_{}
& |X_{\bullet}|_{n-1}\ar@{ >->}[dd]^{}_{} \ar[rd]^{\xi_{n-1}}\\
&&X_{n-1}\times \Delta^{n-1} /\sim \ar[dd]^{\iota_{n-1}}_{}
\\
X_n\times \Delta ^n\ar[r]^{}^{\bar\chi_n} \ar[rrd]_{q_n}
& |X_{\bullet}|_{n}\ar@{.>}[dr]^{\xi_n} &\\
&&X_n\times \Delta^n /\sim }$$ where $q_n$ is the identification map. If $t_k=0$, the construction up to $n-1$ implies $$\xi_{n-1}\circ
\bar\varphi_n(x,t_0,...,t_n)=q_{n-1}\circ (d_k\times
\Delta^{n-1})(x,t_0,...,\hat{t}_k,...,t_n)=[d_kx,t_0,...,\hat{t}_k,...,t_n].$$ Therefore, we see that the diagram $$\xymatrix{
X_n\times \partial \Delta ^n\ar[rr]^-{\xi_{n-1}\circ\bar\varphi_n}_{}
\ar@{
>->}[d]^{}_{} && X_{n-1}\times \Delta^{n-1} /\sim
\ar[d]^{\iota_{n-1}}_{}
\\
X_n\times \Delta ^n\ar[rr]^{}_{q_n} && X_n\times \Delta^n /\sim }$$ is commutative and, by checking the universal property, that it is a pushout. Thus $\xi_n$ exists and is a homeomorphism. Through this sequence of homeomorphisms, $\iota_n$ corresponds to the inclusion $|X_{\bullet}|_{n}{\rightarrowtail }|X_{\bullet}|_{n+1}$ and $\varepsilon_n$ to the canonical map $|X_{\bullet}|_{n}\to X$.
\[facialres\] Let $X_{\bullet}\stackrel{d_0}{\to} X$ be a facial resolution which admits a natural contraction $s: X_{n-1} \to X_n \quad (X_{-1} = X)$. For any $n\geq 0$, the canonical map $|X_{\bullet}|_n\to X$ admits a (natural) section $\sigma_n:X\to |X_{\bullet}|_n$ and the inclusion $|X_{\bullet}|_{n-1}{\rightarrowtail }|X_{\bullet}|_{n}$ is naturally homotopic to $\sigma_n$ pre-composed by the canonical map: $$\xymatrix{
|X_{\bullet}|_{n-1}\ar[rr]\ar[rd] && |X_{\bullet}|_n\\
& X \ar[ur]_{\sigma_n}& }$$ In particular, if the facial resolution $X_{\bullet} \to \ast$ admits a natural contraction then the inclusions $|X_{\bullet}|_{n-1} {\rightarrowtail }|X_{\bullet}|_{n}$ are naturally homotopically trivial.
Through the identification established in [Proposition \[quotient\]]{}, the section $\sigma_n:X\to |X_{\bullet}|_n$ is given by $$\sigma_n(x)=[(s)^{n+1}(x),0,...,0,1].$$ Using the fact that $$sd_nsd_{n-1}\cdots sd_2sd_1 s= (s)^{n+1}(d_0)^{n},$$ we calculate that the (well-defined) map $H:|X_{\bullet}|_{n-1}\times I\to |X_{\bullet}|_{n-1}$ given by $$H([x,t_0,...,t_{n-1}],u)=[sx,u,(1-u)t_0,...,(1-u)t_{n-1}]$$ is a homotopy between the inclusion and $\sigma_n$ pre-composed by the canonical map $|X_{\bullet}|_{n-1}\to X$.
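The identity $sd_nsd_{n-1}\cdots sd_2sd_1 s= (s)^{n+1}(d_0)^{n}$ used in this proof follows by induction from the contraction relations $d_0\circ s=\id$ and $d_j\circ s=s\circ d_{j-1}$ $(j\geq 1)$; we sketch the elementary verification. Repeated application of $d_j s = s d_{j-1}$ gives $$d_{k+1}\,(s)^{k+1}\;=\;s\,d_{k}\,(s)^{k}\;=\;\cdots\;=\;(s)^{k+1}d_0.$$ The base case is $sd_1s=s(sd_0)=(s)^2d_0$, and if $sd_k\cdots sd_1s=(s)^{k+1}(d_0)^k$, then $$s\,d_{k+1}\left((s)^{k+1}(d_0)^{k}\right)=(s)^{k+2}(d_0)^{k+1},$$ which is the identity at the next stage.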
First part of [Theorem \[thm:main\]]{}: the map $\|\Lambda_{\bullet} X\|_{n-1} \to G_n(X)$
==========================================================================================
Let $X\in\Top$. We consider the facial resolution $\Lambda_{\bullet}(X) \to X$ where $\Lambda_{n}(X) = (\Sigma \Omega)^{n+1} X$, the face operators $d_i : (\Sigma \Omega)^{n+1} X \to (\Sigma \Omega)^n X$ are defined by $d_i = (\Sigma \Omega )^i(\ev_{(\Sigma \Omega)^{n-i}X})$, and the augmentation is $d_0=\ev_X : \Sigma \Omega X \to X$.
\[thm:easyway\] Let $X\in\Top$. For each $n\in\N$, the canonical map $\|\Lambda_\bullet X\|_{n-1}\to X$ factors through the Ganea fibration $G_n(X)\to X$.
The proof uses the next result.
\[lem:Puppetrick\] Given a pushout $$\xymatrix{
\Sigma A\rtimes \partial \Delta^n\ar[rr]\ar@{ >->}[d]^{}_{}&&Y\ar[d]^f\\
\Sigma A\rtimes \Delta^n\ar[rr]&&Y'\\}$$ where the left-hand vertical arrow is a cofibration, there exists a cofiber sequence $\xymatrix@1{\Sigma A\land \partial \Delta^n\ar[r]&Y\ar[r]^f&Y'}$.
With the Puppe trick, we construct a commutative diagram $$\xymatrix{
\Sigma A\vee \left(\Sigma A\wedge \partial \Delta^n\right)\ar@{ >->}[d]&&
\ar[ll]_-{\sim}\left(\Sigma A\rtimes \partial \Delta^n\right)\ar[d]\\
\Sigma A\vee \left(\Sigma A\wedge \Delta^n\right)&&
\ar[ll]^-{\sim}\left(\Sigma A\rtimes \Delta^n\right)\\}$$ from which we obtain a commutative diagram $$\xymatrix{
\Sigma A\vee \left(\Sigma A\wedge \partial \Delta^n\right)\ar@{ >->}[d]\ar[rr]^-{\sim}&&
\left(\Sigma A\rtimes \partial \Delta^n\right)\ar[d]\\
\Sigma A\vee \left(\Sigma A\wedge \Delta^n\right)\ar[rr]_-{\sim}&&
\left(\Sigma A\rtimes \Delta^n\right)\\}$$ because the left-hand vertical arrow is a cofibration. We form now $$\xymatrix{
\Sigma A\wedge \partial \Delta^n\ar[r]\ar@{ >->}[dd]\ar[r]&
\Sigma A\vee \left(\Sigma A\wedge \partial \Delta^n\right)\ar@{ >->}[dd]\ar[rr]^{\sim}&&
\Sigma A\rtimes \partial \Delta^n\ar[ld]\ar[rr]\ar[dd]&&Y\ar[dl]\ar[dd]\\
&&\bullet_1\ar[rr]\ar[rd]^{\sim}&&\bullet_2\ar[rd]^{\sim}&\\
\Sigma A\wedge \Delta^n\ar[r]&\Sigma A\vee \left(\Sigma A\wedge \Delta^n\right)\ar[ru]^{\sim}\ar[rr]^{\sim}&&
\Sigma A\rtimes \Delta^n\ar[rr]&&Y'\\}$$ where $\bullet_1$ and $\bullet_2$ are built by pushout and the left-hand square is a pushout. The map $\bullet_2\rightarrow Y'$ is a weak equivalence because, by the gluing lemma, it is induced between pushouts by the weak equivalence $\bullet_1\rightarrow \Sigma A\rtimes \Delta^n$.
We suppose that $\Phi_{n-2}\colon \|\Lambda_\bullet X\|_{n-2}\to G_{n-1}(X)$ has been constructed over $X$ and observe that the existence of $\Phi_0$ is immediate. We consider the following commutative diagram $$\xymatrix{
(\Sigma\Omega)^{n}(X)\wedge \partial \Delta^{n-1}\ar@{-->}[rr]^-{\hat{\Phi}_{n-2}}\ar[d]_{\tilde{v}_{n-2}}
&&F_{n-1}(X)\ar[d]\\
\|\Lambda_\bullet X\|_{n-2}\ar[rr]^{\Phi_{n-2}}\ar[d]_{v_{n-2}}
\ar@/^1pc/[ddr]^-{\lambda_{n-2}}
&&G_{n-1}(X)\ar[ddl]^{p_{n-1}}\\
\|\Lambda_\bullet X\|_{n-1} \ar@/_1pc/[rd]_{\lambda_{n-1}}&&\\
&X&\\}$$ where the left-hand column is a cofibration sequence by [Lemma \[lem:Puppetrick\]]{}. From the relations $$\begin{aligned}
p_{n-1}\circ \Phi_{n-2}\circ \tilde{v}_{n-2}&=&
\lambda_{n-2}\circ \tilde{v}_{n-2}\\
&=& \lambda_{n-1}\circ v_{n-2}\circ \tilde{v}_{n-2}\simeq\ast,\end{aligned}$$ we deduce a map $\hat{\Phi}_{n-2}\colon (\Sigma\Omega)^{n}(X)\land \partial \Delta^{n-1}\rightarrow F_{n-1}(X)$ making the diagram homotopy commutative. From the definition of $G_{n}(X)$ as a cofiber, this gives a map $\Phi_{n-1}\colon \|\Lambda_\bullet X\|_{n-1}\rightarrow G_{n}(X)$ over $X$.
Instead of the explicit construction above, we can also observe that the cone length of $ \|\Lambda_\bullet X\|_{n-1}$ is less than or equal to $n$ and deduce [Theorem \[thm:easyway\]]{} from basic results on Lusternik-Schnirelmann category, see [@CLOT].
The facial space ${\mathcal G} _{\bullet} (X)$
==============================================
For a space $X$ we denote by $P'X$ the Moore path space and by $\Omega 'X$ the Moore loop space. Path multiplication turns $\Omega
'X$ into a topological monoid. Given a space $X$, we define the facial space ${\mathcal G} _{\bullet} (X)$ by ${\mathcal G} _{n} (X)
= (\Omega 'X)^{n}$ with the face operators $d_i : (\Omega 'X)^{n}
\to (\Omega 'X)^{n-1}$ given by $$d_i(\alpha_1,...,\alpha_n)=\left\{
\begin{array}{lc}
(\alpha_2,...,\alpha_n) & i=0\\
(\alpha_1,...,\alpha_{i-1},\alpha_i\alpha_{i+1},...,\alpha_n) & 0<i<n\\
(\alpha_1,...,\alpha_{n-1}) & i=n.\\
\end{array}\right.$$ *The purpose of this section is to compare the free realization of ${\mathcal G} _{\bullet} (X)$ to the construction of the classifying space of $\Omega 'X$.*
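These face operators are those of the bar construction of the topological monoid $\Omega 'X$. As a sanity check, the following Python sketch, a toy model of our own in which Moore loops are replaced by strings and loop multiplication by concatenation, verifies the facial identities $d_i\circ d_j=d_{j-1}\circ d_i$ for $i<j$:

```python
import itertools

def face(i, x):
    """Face operator d_i on G_n = (monoid)^n, with strings under
    concatenation standing in for the Moore loop space Omega'X:
    d_0 drops the first entry, d_n drops the last, and d_i (0 < i < n)
    multiplies the consecutive entries alpha_i, alpha_{i+1}."""
    n = len(x)
    if i == 0:
        return x[1:]
    if i == n:
        return x[:-1]
    return x[:i-1] + (x[i-1] + x[i],) + x[i+1:]

# check the facial identities d_i d_j = d_{j-1} d_i for all i < j
x = ("a", "b", "c", "d")          # a point of G_4, modelled by 4 "loops"
for i, j in itertools.combinations(range(len(x) + 1), 2):
    assert face(i, face(j, x)) == face(j - 1, face(i, x))
```

The same computation, carried out with genuine Moore loops in place of strings, is exactly the verification that ${\mathcal G}_{\bullet}(X)$ is a facial space.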
We work with the following construction of $B\Omega 'X$. The classifying space $B\Omega' X$ is the orbit space of the contractible $\Omega' X$-space $E\Omega 'X$ which is obtained as the direct limit of a sequence of $\Omega' X$-equivariant cofibrations $E_n\Omega 'X {\rightarrowtail }E_{n+1}\Omega 'X$. The spaces $E_n\Omega 'X$ are inductively defined by $E_0\Omega 'X = \Omega 'X$, $E_{n+1}\Omega 'X = E_n\Omega 'X
\cup_{\theta} (\Omega 'X\times CE_n\Omega 'X)$ where $\theta$ is the action $\Omega 'X\times E_n\Omega 'X \to E_n\Omega 'X$ and $C$ denotes the free (non-reduced) cone construction. The orbit spaces of the $\Omega 'X$-spaces $E_n\Omega ' X$ are denoted by $B_n\Omega 'X$. For each $n\in \N$ this construction yields a cofibration $B_n\Omega 'X {\rightarrowtail }B\Omega 'X$. It is well known that for simply connected spaces this cofibration is equivalent to the $n$th Ganea map $G_n(X) \to X$.
\[cat\] For each $n \in \N$ there is a natural commutative diagram $$\xymatrix{
B_n\Omega 'X\ar[r]^{}_{} \ar[d]^{}_{}
& |{\mathcal G} _{\bullet} (X)|_n\ar [d]^{}_{}
\\
B\Omega 'X\ar[r]^{}_{}
& |{\mathcal G} _{\bullet} (X)|_{\infty}
}$$ in which the bottom horizontal map is a homotopy equivalence.
We obtain the diagram of the statement from a diagram of $\Omega 'X$-spaces by passing to orbit spaces. Consider the facial $\Omega'X$-space $P_{\bullet}(X)$ in which $P _{n} (X)$ is the free $\Omega'X$-space $\Omega' X\times (\Omega 'X)^{n}$ and the face operators $d_i : (\Omega 'X)^{n+1} \to (\Omega 'X)^{n}$ (which are equivariant) are given by $$d_i(\alpha_0,...,\alpha_n)=\left\{
\begin{array}{lc}
(\alpha_0,...,\alpha_{i-1},\alpha_i\alpha_{i+1},...,\alpha_n) & 0\leq i<n\\
(\alpha_0,...,\alpha_{n-1}) & i=n.\\
\end{array}\right.$$ The maps $s\colon P_{n-1}(X)\to P_n(X)$ given by $s(\alpha_0,\ldots,\alpha_{n-1})=(\ast,\alpha_0,\ldots,\alpha_{n-1})$ constitute a natural contraction of the facial resolution $P_\bullet(X)\to\ast$. By [Proposition \[facialres\]]{}, the maps $|P_\bullet(X)|_{n-1}\to |P_\bullet(X)|_n$ are hence naturally homotopically trivial.
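Both the face formulas and this contraction can be checked mechanically. The following Python sketch is a toy model of our own (it replaces $\Omega 'X$ by the monoid of strings under concatenation and the base loop $\ast$ by the empty string); it verifies the contraction identities $d_0\circ s=\id$ and $d_j\circ s=s\circ d_{j-1}$, together with the identity $sd_nsd_{n-1}\cdots sd_1s=(s)^{n+1}(d_0)^{n}$ used in the proof of [Proposition \[facialres\]]{}:

```python
E = ""   # unit of the string monoid, standing in for the base loop *

def d(i, a):
    """Face operator d_i on P_n = (monoid)^{n+1}; the empty tuple ()
    models the point P_{-1} = *.  For 0 <= i < n the entries a_i, a_{i+1}
    are multiplied; for i = n the last entry is dropped."""
    n = len(a) - 1
    if n <= 0:
        return ()                                  # augmentation to the point
    if i < n:
        return a[:i] + (a[i] + a[i+1],) + a[i+2:]
    return a[:-1]

def s(a):
    """Contraction: prepend the base point."""
    return (E,) + a

a = ("x", "y", "z")                                # a point of P_2
assert d(0, s(a)) == a                             # d_0 s = id
for j in range(1, 4):
    assert d(j, s(a)) == s(d(j - 1, a))            # d_j s = s d_{j-1}, j >= 1

def lhs(a, n):                                     # s d_n s d_{n-1} ... s d_1 s
    y = s(a)
    for j in range(1, n + 1):
        y = s(d(j, y))
    return y

def rhs(a, n):                                     # s^{n+1} (d_0)^n
    y = a
    for _ in range(n):
        y = d(0, y)
    for _ in range(n + 1):
        y = s(y)
    return y

b = ("u", "v")                                     # a point of P_1
assert lhs(b, 2) == rhs(b, 2) == (E, E, E)
```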
The construction of the realization of $P_{\bullet}(X)$ yields $\Omega'X$-spaces. We construct a natural commutative diagram of equivariant maps $$\xymatrix{
E_0\Omega 'X \ar@{ >->}[r]^{}_{} \ar[d]_{g_0}_{}
& E_1\Omega 'X\ar@{ >->}[r]^{}_{} \ar[d]^{g_1}_{} & \cdots \ar@{ >->}[r]^{}_{}
& E_n\Omega 'X\ar@{ >->}[r]^{}_{} \ar[d]^{g_n}_{} & \cdots
\\
|P_{\bullet}(X)|_0 \ar@{ >->}[r]^{}_{}
& |P_{\bullet}(X)|_1 \ar@{ >->}[r]^{}_{} & \cdots \ar@{ >->}[r]^{}_{}
& |P_{\bullet}(X)|_n \ar@{ >->}[r]^{}_{} & \cdots
}$$ inductively as follows: The map $g_0$ is the identity $\Omega 'X \stackrel{=}{\to} \Omega 'X$. Suppose that $g_n$ is defined. Since the map $|P_{\bullet}(X)|_n {\rightarrowtail }|P_{\bullet}(X)|_{n+1}$ is naturally homotopically trivial, it factors naturally through the cone $C|P_{\bullet}(X)|_n$. Extend this factorization equivariantly to obtain the following commutative diagram of $\Omega 'X$-spaces: $$\xymatrix{
\Omega 'X\times |P_{\bullet}(X)|_n\ar[r]^{}_{} \ar[d]^{}_{}
& |P_{\bullet}(X)|_n\ar [d]^{}_{}
\\
\Omega 'X\times C|P_{\bullet}(X)|_n\ar[r]^{}_{}
& |P_{\bullet}(X)|_{n+1}.
}$$ Define $g_{n+1}$ to be the composite $$\begin{aligned}
\lefteqn{E_n\Omega 'X\cup_{\Omega 'X\times E_n\Omega 'X}(\Omega 'X\times CE_n\Omega 'X)}\\
&\to& |P_{\bullet}(X)|_n\cup_{\Omega 'X\times |P_{\bullet}(X)|_n}(\Omega 'X\times C|P_{\bullet}(X)|_n)\\
&\to& |P_{\bullet}(X)|_{n+1}.\end{aligned}$$ It is clear that $g_{n+1}$ is natural. In the direct limit we obtain a natural equivariant map $g : E\Omega 'X \to |P_{\bullet}(X)|_{\infty}$. This map is a homotopy equivalence. Indeed, $E\Omega 'X$ is contractible and, since each inclusion $|P_{\bullet}(X)|_n {\rightarrowtail }|P_{\bullet}(X)|_{n+1}$ is homotopically trivial, $|P_{\bullet}(X)|_{\infty}$ is contractible, too. For each $n \in \N$ we therefore obtain the following natural commutative diagram of $\Omega'X$-spaces: $$\xymatrix{
E_n\Omega 'X\ar[r]^{}_{} \ar[d]^{}_{}
& |P _{\bullet} (X)|_n\ar [d]^{}_{}
\\
E\Omega 'X\ar[r]^{\sim}_{}
& |P _{\bullet} (X)|_{\infty}.
}$$ Passing to the orbit spaces, we obtain the diagram of the statement. It follows for instance from [@LSabstract 1.16] that the map $B\Omega 'X \to |{\mathcal G} _{\bullet} (X)|_{\infty}$ is a homotopy equivalence.
Note that the upper horizontal map in the diagram of [Proposition \[cat\]]{} is not a homotopy equivalence in general. Indeed, for $X = *$, $B_1\Omega'X$ is contractible but $|{\mathcal G}_{\bullet}(X)|_1 \simeq S^1$. It can, however, be shown that there also exists a diagram as in [Proposition \[cat\]]{} with the horizontal maps reversed.
The facial resolution $\Omega'\Lambda_{\bullet}X \to \Omega' X$ admits a contraction
====================================================================================
Consider the natural map $\gamma _X \colon \Omega 'X \to \Omega' \Sigma \Omega X$, $\gamma _X(\omega ,t) = (\nu (\omega ,t),t)$ where $\nu (\omega ,t): \R^+ \to \Sigma \Omega X$ is given by $$\nu (\omega ,t)(u) = \left\{ \begin{array}{ll}
\left[\omega _t, \frac{u}{t} \right], & u < t,\\
\left[c_*,0\right], & u \geq t.
\end{array}\right.$$ Here, $c_*$ is the constant path $u \mapsto *$ and $\omega_t \colon I \to X$ is the loop defined by $\omega_t(s) = \omega (ts)$.
The map $\gamma_X$ is continuous.
It suffices to show that the map $\nu ^{\flat} : \Omega 'X \times \R^+ \to \Sigma \Omega X$, $(\omega ,t,u) \mapsto \nu(\omega ,t)(u)$ is continuous. Consider the subspace $W = \{\omega \in X^{\R^+}: \omega (0) = \ast\}$ of $X^{\R^+}$ and the continuous map $\rho : W \times \R^+ \to X^{\R^+}$ given by $$\rho (\omega ,t)(u) = \left\{ \begin{array}{ll}
\omega (u), & u\leq t,\\
\omega (t), & u\geq t.
\end{array}\right.$$ Note that if $(\omega ,t) \in P'X$ then $\rho(\omega ,t) = \omega$. Consider the continuous map $$\phi : W \times \R^+ \times [0,\frac{\pi}{2}] \to \Sigma P'X$$ defined by $$\phi(\omega ,r, \theta) = \left\{ \begin{array}{ll}
\left[\rho(\omega, r\cos \theta), r\cos \theta, \tan \theta \right], & \theta \leq \frac{\pi}{4},\\
\left[c_*, 0, 0 \right], & \theta \geq \frac{\pi}{4}.
\end{array}\right.$$ When $r = 0$, we have $\phi(\omega ,r, \theta) = [c_*,0,0]$ for any $\theta$. Therefore $\phi$ factors through the identification map $$W\times \R^+ \times [0,\frac{\pi}{2}] \to W\times \R^+ \times \R^+, (\omega ,r,\theta) \mapsto (\omega ,r\cos \theta, r\sin \theta)$$ and induces a continuous map $\psi : W \times \R^+ \times \R^+ \to \Sigma P'X$. Explicitly, $$\psi(\omega ,t, u) = \left\{ \begin{array}{ll}
\left[\rho(\omega, t), t, \frac{u}{t} \right], & u < t,\\
\left[c_*, 0, 0 \right], & u \geq t.
\end{array}\right.$$ Consider the continuous map $\xi : P'X \to PX$ defined by $\xi(\omega ,t)(s) = \omega (ts)$. Note that $\xi (\omega, t) = \omega _t$ if $(\omega,t) \in \Omega'X$ and, in particular, that $\xi (c_*,0) = c_*$. The restriction of $\Sigma \xi \circ \psi$ to $\Omega 'X \times \R^+$ factors through the subspace $\Sigma \Omega X$ of $\Sigma PX$ and the continuous map $$\Omega 'X \times \R^+ \to \Sigma \Omega X, (\omega ,t, u) \mapsto (\Sigma \xi \circ \psi) (\omega ,t, u)$$ is exactly $\nu ^{\flat}$.
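The substitution underlying this factorization is elementary: for $t=r\cos\theta$ and $u=r\sin\theta$ with $r>0$ and $\theta\in[0,\frac{\pi}{2}]$, the case $u<t$ corresponds exactly to $\theta<\frac{\pi}{4}$, and $\frac{u}{t}=\tan\theta$. A small numerical sanity check in Python (our own illustration; it carries no topological content):

```python
import math
import random

random.seed(1)
for _ in range(1000):
    r = random.uniform(0.01, 10.0)
    theta = random.uniform(0.0, math.pi / 2)
    if abs(theta - math.pi / 4) < 1e-6:
        continue                 # avoid the boundary case in floating point
    t, u = r * math.cos(theta), r * math.sin(theta)
    # the case distinctions of phi and psi match under the substitution
    assert (u < t) == (theta < math.pi / 4)
    if u < t:
        # on the first branch, psi's parameter u/t equals phi's tan(theta)
        assert abs(u / t - math.tan(theta)) < 1e-9
```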
\[omegacontraction\] The maps $s = \gamma_{(\Sigma \Omega)^nX} : \Omega'(\Sigma \Omega)^nX \to \Omega'(\Sigma \Omega)^{n+1}X$ define a contraction of the facial resolution $\Omega'\Lambda_{\bullet}X \to \Omega 'X$.
We have $(\Omega '(\ev_X)\circ \gamma_X)(\omega ,t) = \Omega '(\ev_X)(\nu (\omega ,t),t) = (\beta (\omega ,t),t)$ where $$\beta (\omega ,t)(u) = \left\{ \begin{array}{ll}
\omega _t(\frac{u}{t}) = \omega (u), & u < t,\\
\ast = \omega (u), & u \geq t.
\end{array}\right.$$ Hence $(\Omega '(\ev_X)\circ \gamma_X) = \id_{\Omega'X}$.
In the same way one has $(\Omega '(\ev_{(\Sigma \Omega)^nX})\circ \gamma_{(\Sigma \Omega)^nX}) = \id_{\Omega'(\Sigma \Omega)^nX}$. This shows the relation $d_0 \circ s = \id$. It remains to check that $d_j \circ s = s\circ d_{j-1}$, for $j \geq 1$. For $(\omega ,t) \in \Omega '(\Sigma \Omega)^{n}X$ we have $(d_j \circ s)(\omega ,t) = (\Omega'(\Sigma \Omega)^j(\ev_{(\Sigma \Omega)^{n-j}X})\circ \gamma_{(\Sigma \Omega )^nX})(\omega ,t) = (\sigma (\omega ,t),t)$ where $$\sigma (\omega ,t)(u) = \left\{ \begin{array}{ll}
(\Sigma \Omega)^j(\ev_{(\Sigma \Omega)^{n-j}X})\left[\omega _t, \frac{u}{t}\right] = \left[(\Sigma \Omega)^{j-1}(\ev_{(\Sigma \Omega)^{n-j}X})\circ \omega _t, \frac{u}{t}\right], & \!\!\!\!u < t,\\
(\Sigma \Omega)^j(\ev_{(\Sigma \Omega)^{n-j}X})\left[c_*,0\right] = \left[c_*,0\right], &
\!\!\!\! u \geq t.
\end{array}\right.$$ On the other hand, $(s\circ d_{j-1})(\omega ,t) = (\gamma_{(\Sigma \Omega )^{n-1}X}\circ \Omega'(\Sigma \Omega)^{j-1}(\ev_{(\Sigma \Omega)^{n-j}X}))(\omega ,t) = (\tau (\omega ,t),t)$ where $$\tau (\omega ,t)(u) = \left\{ \begin{array}{ll}
\left[((\Sigma \Omega)^{j-1}(\ev_{(\Sigma \Omega)^{n-j}X})\circ \omega )_t, \frac{u}{t}\right], & u < t,\\
\left[c_*,0\right], & u \geq t.
\end{array}\right.$$ This shows that $d_j \circ s = s\circ d_{j-1}$ $(j \geq 1)$.
Second part of [Theorem \[thm:main\]]{}: the map $G_n(X)\to \|\Lambda_{\bullet}X\|_n$
======================================================================================
A *bifacial space* is a facial object in the category $d{\bf Top}$ of facial spaces. We will use notations like $Z_{\bullet}^{\bullet}$ to denote bifacial spaces and refer to the upper index as the column index and to the lower index as the row index. In this way, a bifacial space can be represented by a diagram of the following type:
$$\xymatrix{
\vdots \ar@<4pt>[d]^-{\del_{n+1}} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
&
\vdots \ar@<4pt>[d]^-{\del_{n+1}} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
&
\vdots \ar@<4pt>[d]^-{\del_{n+1}} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
&&
\vdots \ar@<4pt>[d]^-{\del_{n+1}} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
&
\vdots \ar@<4pt>[d]^-{\del_{n+1}} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
&\\
Z_{n}^{0 }
\ar@<4pt>[d]^-{\del_n} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
&\ar@<2pt>[l]^{d_1} \ar@<-2pt>[l]_{d_0} Z_{n}^{1}
\ar@<4pt>[d]^-{\del_n} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
&\ar@<4pt>[l]^{d_2} \ar[l]|{d_1}\ar@<-4pt>[l]_{d_0} Z_{n}^{2}
\ar@<4pt>[d]^-{\del_n} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
&\cdots
&Z_{n}^{p-1}\ar@<4pt>[d]^-{\del_n} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
&\ar@<4pt>[l]^-{d_p} \ar@{}[l]|-{:}\ar@<-5pt>[l]_-{d_0}Z_{n}^{p}
\ar@<4pt>[d]^-{\del_n} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
&\cdots\\
\vdots \ar@<4pt>[d]^-{\del_2} \ar[d]\ar@<-4pt>[d]_-{\del_0}
&\vdots \ar@<4pt>[d]^-{\del_2} \ar[d]\ar@<-4pt>[d]_-{\del_0}
&\vdots \ar@<4pt>[d]^-{\del_2} \ar[d]\ar@<-4pt>[d]_-{\del_0}
&\cdots
&\vdots \ar@<4pt>[d]^-{\del_2} \ar[d]\ar@<-4pt>[d]_-{\del_0}
&\vdots \ar@<4pt>[d]^-{\del_2} \ar[d]\ar@<-4pt>[d]_-{\del_0}
&\cdots\\
Z_{1}^{0 }
\ar@<2pt>[d]^-{\del_1} \ar@<-2pt>[d]_-{\del_0}
&\ar@<2pt>[l]^{d_1} \ar@<-2pt>[l]_{d_0}Z_{1}^{1}
\ar@<2pt>[d]^-{\del_1} \ar@<-2pt>[d]_-{\del_0}
&\ar@<4pt>[l]^{d_2} \ar[l]|{d_1}\ar@<-4pt>[l]_{d_0}Z_{1}^{2}
\ar@<2pt>[d]^-{\del_1} \ar@<-2pt>[d]_-{\del_0}
&\cdots
&Z_{1}^{p-1}\ar@<2pt>[d]^-{\del_1} \ar@<-2pt>[d]_-{\del_0}
&\ar@<4pt>[l]^-{d_p} \ar@{}[l]|-{:}\ar@<-5pt>[l]_-{d_0}Z_{1}^{p}
\ar@<2pt>[d]^-{\del_1} \ar@<-2pt>[d]_-{\del_0}
&\cdots\\
Z_{0}^{0 }
&\ar@<2pt>[l]^{d_1} \ar@<-2pt>[l]_{d_0}Z_{0}^{1}
&\ar@<4pt>[l]^{d_2} \ar[l]|{d_1}\ar@<-4pt>[l]_{d_0}Z_{0}^{2}
&\cdots
&Z_{0}^{p-1}
&\ar@<4pt>[l]^-{d_p} \ar@{}[l]|-{:}\ar@<-5pt>[l]_-{d_0}Z_{0}^{p}
&\cdots
}$$ As in this diagram we shall reserve the notation $\del_i$ for the face operators of a column facial space and the notation $d_i$ for the face operators of a row facial space. For any $k$, $|Z^k_{\bullet}|_{m}$ (resp. $|Z_k^{\bullet}|^{m}$) is the realization up to $m$ of the $k$th column (resp. $k$th row) and $|Z^{\bullet}_{\bullet}|_{m}$ (resp. $|Z_{\bullet}^{\bullet}|^{m}$) is the facial space obtained by realizing each column (resp. each row) up to $m$.\
The construction of the map $G_n(X)\to \|\Lambda_{\bullet}X\|_n$ relies heavily on the following result which is analogous to a theorem of A. Libman [@LibmanII]. As A. Libman has pointed out to the authors, this result can be derived from [@LibmanII] (private communication). For the convenience of the reader, we include, in an appendix, an independent proof of the particular case we need.
\[Libman\] Consider a facial space $Z_{\bullet}^{-1}$ and a facial resolution $Z_{\bullet}^{\bullet} \stackrel{d_0}{\to} Z_{\bullet}^{-1}$ such that each row $Z_{k}^{\bullet} \stackrel{d_0}{\to} Z_{k}^{-1}$ admits a contraction. Then, for any $n$, there exists a not necessarily base-point preserving continuous map $|Z_{\bullet}^{-1}|_n \to ||Z_{\bullet}^{\bullet}|_n|^n$ which is a section up to free homotopy of the canonical map $||Z_{\bullet}^{\bullet}|_n|^n\to |Z_{\bullet}^{-1}|_n$.
The second part of [Theorem \[thm:main\]]{} can be stated as follows.
\[thm:hardway\] Let $X\in \Top$ be a simply connected space. For each $n \in \N$ the $n$th Ganea map $G_n(X) \to X$ factors up to (pointed) homotopy through the canonical map $\|\Lambda _{\bullet}X\|_n \to X$.
Consider the column facial space $Z_{\bullet}^{-1} = {\mathcal G}_{\bullet}(X)$ and the facial resolution $Z_{\bullet}^{-1} \leftarrow Z_{\bullet}^{\bullet} $ where $Z_{i}^{j} = {\mathcal G}_{i}(\Lambda_{j}X)$. Each row facial resolution $$Z_{i}^{-1} =
{\mathcal G}_{i}(X) \leftarrow Z_{i}^{\bullet} = {\mathcal G}_{i}(\Lambda_{\bullet}X)$$ admits a contraction. For $i = 0$ this is clear since ${\mathcal G}_{0}(\Lambda_{\bullet}X) = \ast$. For $i > 0$ we have ${\mathcal G}_{i}(\Lambda_{\bullet}X) = (\Omega '\Lambda_{\bullet}X)^i$ and, since, by [Proposition \[omegacontraction\]]{}, the facial resolution $\Omega 'X \leftarrow \Omega'\Lambda_{\bullet}X$ admits a contraction, so does its $i$th power.
For $n \in \N$ consider the commutative diagram $$\xymatrix{
B_n\Omega 'X\ar[r]^{}_{} \ar[d]^{}_{} &
|{\mathcal G}_{\bullet}(X)|_n \ar[d] &
||{\mathcal G}_{\bullet}(\Lambda_{\bullet}X)|_n|^n \ar[l] \ar[d]
\\
B\Omega 'X\ar[r]^{}_{} &
|{\mathcal G}_{\bullet}(X)|_{\infty} &
||{\mathcal G}_{\bullet}(\Lambda_{\bullet}X)|_{\infty}|^n \ar[l]
}$$ in which the left-hand square is the natural square of [Proposition \[cat\]]{}. Recall that the lower left horizontal map is a homotopy equivalence. Since $X$ is simply connected, $X$ is naturally weakly equivalent to $B\Omega 'X$ and hence to $|{\mathcal G}_{\bullet}(X)|_{\infty}$. It follows that the map $||{\mathcal G}_{\bullet}(\Lambda_{\bullet}X)|_{\infty}|^n \to |{\mathcal G}_{\bullet}(X)|_{\infty}$ is weakly equivalent to the map $|\Lambda_{\bullet}X|_n \to X$. Since this last map factors through the map $\|\Lambda_{\bullet}X\|_n \to X$ and since, by [Theorem \[Libman\]]{}, the upper right horizontal map of the diagram above admits a free homotopy section, we obtain a diagram $$\xymatrix{
B_n\Omega 'X\ar[r]^{}_{} \ar[d]^{}_{}
& \|\Lambda_{\bullet}X\|_n\ar [d]^{}_{}
\\
B\Omega 'X\ar[r]^{f}_{}
& X
}$$ which is commutative up to free homotopy and in which $f$ is a (pointed) homotopy equivalence. Since the left hand vertical map is equivalent to the Ganea map $G_n(X) \to X$, there exists a diagram $$\xymatrix{
G_n(X) \ar[r]^{}_{} \ar[d]^{}_{}
& \|\Lambda_{\bullet}X\|_n\ar [d]^{}_{}
\\
X\ar[r]^{g}_{}
& X
}$$ which is commutative up to free homotopy and in which $g$ is a (pointed) homotopy equivalence. This implies that the Ganea map $G_n(X) \to X$ factors up to free homotopy through the canonical map $\|\Lambda _{\bullet}X\|_n \to X$. Since $X$ is simply connected and $\|{\Lambda}_{\bullet}X\|_n$ is connected, the Ganea map $G_n(X) \to X$ also factors up to pointed homotopy through the canonical map $\|\Lambda _{\bullet}X\|_n \to X$.
Proof of [Theorem \[thm:main2\]]{}
==================================
Recall the homotopy fiber sequence $$\xymatrix @1{\Omega X\ast \Omega X\ar[r]^-h&\Sigma\Omega X\ar[r]^-{d_0}&X
}$$ where $h$ is the Hopf map. This sequence is natural in $X$ and the space $G_2(X)$ is equivalent to the pushout of $\xymatrix @1{\cC(\Omega X\ast \Omega X)&\Omega X\ast\Omega X\ar[l]\ar[r]&\Sigma\Omega X
}$, where $\cC(Y)$ denotes the (reduced) cone over a space $Y$. We use the following diagram $$\xymatrix@=10pt{
(2)&\cC(\Omega X\ast \Omega X)&&
\cC(\Omega\Sigma\Omega X\ast \Omega\Sigma\Omega X)\ar[ll]_{d_0}&
\cC(\Omega\left(\Sigma\Omega\right)^2X\ast \Omega\left(\Sigma\Omega\right)^2X)
\ar@<2pt>[l]^-{d_1} \ar@<-2pt>[l]_-{d_0}&\\
(1)&\Omega X\ast \Omega X\ar[u]\ar[d]_h&&
\Omega\Sigma\Omega X\ast \Omega\Sigma\Omega X\ar[u]\ar[ll]_{d_0}\ar[d]&
\Omega\left(\Sigma\Omega\right)^2X\ast \Omega\left(\Sigma\Omega\right)^2X\ar[u]\ar[d]
\ar@<2pt>[l]^-{d_1} \ar@<-2pt>[l]_-{d_0}&\\
(0)&\Sigma\Omega X\ar[dd]_(.4){d_0}&&
\left(\Sigma\Omega\right)^2 X\ar[ll]_{d_0}\ar[dd]_(.4){d_0}&
\left(\Sigma\Omega\right)^3 X\ar[dd]_(.4){d_0}
\ar@<2pt>[l]^-{d_1} \ar@<-2pt>[l]_-{d_0}&\\
&&\ar@{--}[uuu]\ar@{--}[rrr]&&&\\
(-1)&X&&\Sigma\Omega X\ar[ll]_{d_0}&
\left(\Sigma\Omega\right)^2X
\ar@<2pt>[l]^-{d_1} \ar@<-2pt>[l]_-{d_0}&\\
}$$ We observe that
- the image of Line (-1) by $\Omega$ has a contraction in the obvious sense;
- Line (0) is the image of Line (-1) by $\Sigma\Omega$; therefore Line (0) admits a contraction;
- the face operators of Line (1) are the maps $\Omega d_i\ast\Omega d_i$, where the $d_i$ are the face operators of Line (-1); thus Line (1) admits a contraction;
- Line (2) admits a contraction induced by the previous one.
From the expression of the Hopf map $h\colon \Omega X\ast\Omega X\to \Sigma\Omega X$, $h([\alpha, t,\beta])=[\alpha^{-1}\beta,t]$, we observe that the map $H\colon(\Omega X\ast\Omega X)\times [0,1]\to X$, defined by $H([\alpha,t,\beta],s)=(\alpha^{-1}\beta)(st)$, induces a natural extension of $d_0\circ h$ to $\cC(\Omega X\ast \Omega X)$. Therefore, we can complete the diagram by maps from Line (2) to Line (-1) which are compatible with the face operators.\
Denote by $\tilde{G}$ the homotopy colimit of the framed part of the diagram. We have a commutative square: $$\xymatrix{
G_2(X)\ar[d]&\tilde{G}\ar[l]\ar[d]\\
X&\|\Lambda_\bullet X\|_1\ar[l]
}$$ [Lemma \[lem:petitlibman\]]{} provides a homotopy section of the map $\tilde{G}\to G_2(X)$. Thus we obtain a map $$G_2(X)\to \|\Lambda_\bullet X\|_1$$ up to homotopy over $X$.
\[lem:petitlibman\] We consider the following diagram in $\Top$, satisfying $d_0\circ d_0=d_0\circ d_1$ and the obvious commutativity conditions.$$\xymatrix{
&\ar@{--}[rrr]\ar@{--}[dddd]
&&&\ar@{--}[dddd]\\
A_{-1}&&
A_0\ar[ll]_(.4){d_0}&
A_1
\ar@<2pt>[l]^-{d_1} \ar@<-2pt>[l]_-{d_0}&\\
B_{-1}\ar[u]_{\alpha_{-1}}\ar[d]^{\beta_{-1}}&&
B_0\ar[ll]_(.4){d_0}\ar[u]_{\alpha_{0}}\ar[d]^{\beta_{0}}&
B_1\ar[u]_{\alpha_{1}}\ar[d]^{\beta_{1}}
\ar@<2pt>[l]^-{d_1} \ar@<-2pt>[l]_-{d_0}&\\
C_{-1}&&
C_0\ar[ll]_(.4){d_0}&
C_1
\ar@<2pt>[l]^-{d_1} \ar@<-2pt>[l]_-{d_0}&\\
&\ar@{--}[rrr]&&&
}$$ Let $\tilde{G}$ be the homotopy colimit of the framed part and $G_{-1}$ be the homotopy colimit of the first column. We denote by $\tilde{d}\colon \tilde{G}\to G_{-1}$ the map induced by $d_0$. If the lines of the previous diagram admit contractions in the obvious sense, then the map $\tilde{d}$ has a (pointed) homotopy section.
This is a special case of a dual of a result of Libman in [@LibmanII]. It is not covered by the proof of the last section, but this situation is simple and we furnish an ad-hoc argument for it.
First we construct maps $f\colon A_{-1}\to \|A_\bullet \|_1$, $g\colon B_{-1}\to \|B_\bullet \|_1$ and $k\colon C_{-1}\to \|C_\bullet \|_1$ such that $ \|\alpha_\bullet \|_1\circ g\simeq f\circ \alpha_{-1}\text{ and }
k\circ \beta_{-1}\simeq \|\beta_\bullet \|_1\circ g$. With the same techniques as in [Proposition \[quotient\]]{}, it is clear that $\|A_\bullet\|_1$ is homeomorphic to the quotient of $A_1\rtimes \Delta^1$ by the relation $(a,t_0,t_1)\sim (sd_ia,0,1)$ if $t_i=0$. So, we define $f$, $g$ and $k$ by $$f(a)=[s_As_A(a),0,1], g(b)=[s_Bs_B(b),0,1]\text{ and }
k(c)=[s_Cs_C(c),0,1].$$ A computation gives: $$\begin{aligned}
\|\alpha_\bullet \|_1\circ g(b)&=&[\alpha_1s_Bs_B(b),0,1]\\
&=& [s_Ad_0\alpha_1s_Bs_B(b),0,1]\\
&=&[s_A\alpha_0d_0s_Bs_B(b),0,1]\\
&=& [s_A\alpha_0s_B(b),0,1]\\
f\circ \alpha_{-1}(b)&=&
[s_As_A\alpha_{-1}(b),0,1]\\
&=&
[s_As_Ad_0\alpha_0s_B(b),0,1]\\
&=&[s_Ad_1s_A\alpha_0s_B(b),0,1]\\
[s_A\alpha_0s_B(b),1,0],\end{aligned}$$ the last equality coming from our construction of $ \|A_\bullet \|_1$. These two points, $ \|\alpha_\bullet \|_1\circ g(b)$ and $f\circ \alpha_{-1}(b)$, are canonically joined by a path that reduces to a point if $b=*$. The same argument gives the analogous result for $k$. We observe now that these homotopies give a map between the two mapping cylinders which is a section up to pointed homotopy.
Open questions
==============
The main open question raised by these results concerns the existence of maps over $X$, up to homotopy, $G_n(X)\to \|\Lambda_\bullet X\|_{n-1}$ for any $n$. This question is related to the Lusternik-Schnirelmann category (LS-category for short) $\cat X$ of a topological space $X$. Recall that $\cat X\leq n$ if and only if the Ganea fibration $G_n(X)\to X$ admits a section. The truncated resolutions give rise to a new homotopy invariant $\ell_{\Sigma\Omega}(X)$ defined in a similar way as follows: $$\ell_{\Sigma\Omega}(X)\leq n \text{ if the map } \|\Lambda_\bullet X\|_{n-1}\to X \text{ admits a homotopy section.}$$ From [Theorem \[thm:main\]]{} and [Theorem \[thm:main2\]]{}, we know that this new invariant coincides with the LS-category for spaces of LS-category less than or equal to 2 and satisfies $$\cat X \leq \ell_{\Sigma\Omega}(X)\leq 1+ \cat X.$$ Thanks to the result in dimension 2, $ \ell_{\Sigma\Omega}(X)$ does not coincide with the cone length. We conjecture its equality with the LS-category and the existence of maps $G_n(X)\to \|\Lambda_\bullet X\|_{n-1}$ over $X$ up to homotopy.
We now extend our study by considering a cotriple $T$. Recall that a cotriple $(T,\eta,\varepsilon)$ on $\Top$ is a functor $T :
\Top\rightarrow
\Top$ together with two natural transformations $\eta_X\colon T(X)\rightarrow X$ and $\varepsilon_X\colon T(X)\rightarrow T^2(X)$ such that:\
$\varepsilon_{T(X)}\circ \varepsilon_X=T(\varepsilon_X)\circ \varepsilon_X$ and $\eta_{T(X)}\circ \varepsilon_X=T(\eta_X)\circ \varepsilon_X=\id_{T(X)}$.
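These are the usual comonad (cotriple) axioms: coassociativity and the two counit laws. As a purely illustrative sanity check, independent of the topological cotriples considered in this paper, the following Python sketch verifies them for the nonempty-list ("suffixes") cotriple:

```python
def T(f):
    """Action of the cotriple on morphisms: apply f entrywise."""
    return lambda xs: [f(x) for x in xs]

def eta(xs):
    """Counit eta_X : T(X) -> X, here: take the head of a nonempty list."""
    return xs[0]

def eps(xs):
    """Comultiplication eps_X : T(X) -> T(T(X)), here: the nonempty suffixes."""
    return [xs[i:] for i in range(len(xs))]

xs = [1, 2, 3, 4]
# coassociativity: eps_{T(X)} o eps_X = T(eps_X) o eps_X
assert eps(eps(xs)) == T(eps)(eps(xs))
# counit laws: eta_{T(X)} o eps_X = T(eta_X) o eps_X = id_{T(X)}
assert eta(eps(xs)) == xs
assert T(eta)(eps(xs)) == xs
```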
It is well known that $T$ gives a simplicial space $\Lambda^T_\bullet X$ defined by $\Lambda^T_n X =T^{n+1}(X)$. From it, we deduce a facial space and the truncated realizations $\|\Lambda^T_\bullet X\|_n$. If $T$ satisfies $T(*)\sim *$, takes its values in suspensions, and the facial resolution $\Omega'(\Lambda^T_\bullet X) \to \Omega' X$ admits a contraction, a careful reading of the proofs in this work shows that we get the same conclusions as in [Theorem \[thm:main\]]{} and [Theorem \[thm:main2\]]{} with the Ganea spaces $G_n(X)$ and the realizations $\|\Lambda^T_\bullet X\|_i$.
We could also use a construction of the Ganea spaces adapted to the cotriple $T$ as follows.
\[defi:TGanea\] Let $T$ be a cotriple and $X$ be a space. The *$n$th Ganea fibration associated to $T$ and $X$* is defined inductively by:
– $p_1^T\colon G_1^T(X)\rightarrow X$ is the fibration associated to $\eta_X\colon T(X)
\rightarrow X$,
– if $p^T_n\colon G_n^T(X)\rightarrow X$ is defined, we denote by $F_n^T(X)$ its homotopy fiber and build a map ${p'}_{n+1}^T\colon G_n^T(X)\cup {\cC}(T(F_n^T(X)))\rightarrow X$ as $p^T_n$ on $G_n^T(X)$ and sending the cone $\cC(T(F_n^T(X)))$ to the base point. The fibration $p_{n+1}^T$ is the fibration associated to ${p'}_{n+1}^T$.
The results of this paper and the questions above have their analogues in this setting. New approximations of spaces arise from the truncated realizations $\|\Lambda^T_\bullet X\|_i$ and from the adapted fiber-cofiber constructions. One natural problem is to look for a comparison between them. These questions can also be stated in terms of LS-category. For instance, does the Stover resolution (see [@Sto90]) of a space by wedges of spheres give the $s$-category defined in [@Sc-Ta99b]?
Appendix: Proof of [Theorem \[Libman\]]{}
=========================================
The purpose of this appendix is to give a proof of [Theorem \[Libman\]]{}. This proof is contained in the [Subsection \[proof\]]{} below and uses the constructions and notation of the following subsection.
$n$-facial spaces and $n$-rectifiable maps
------------------------------------------
Let $n\geq 0$ be an integer. A facial space $X_{\bullet}$ is an *$n$-facial space* if, for any $k\geq n+1$, $X_k=*$. To any facial space $Y_{\bullet}$, we can associate an $n$-facial space $T^n_{\bullet}(Y)$ by setting $T^n_{k}(Y)=Y_k$ if $k\leq n$ and $T^n_{k}(Y)=*$ if $k\geq n+1$. Obviously, for any $k\leq n$, we have $|T^n_{\bullet}(Y)|_k=|Y_{\bullet}|_k$.\
Let $Y_{\bullet}$ be a facial space with face operators $\del_i:Y_k\to Y_{k-1}$. We associate to $Y_{\bullet}$ two $n$-facial spaces $I^n_{\bullet}(Y)$ and $J^n_{\bullet}(Y)$ and morphisms $\eta,\zeta,\pi,\overline{\pi}$ which induce homotopy equivalences between the realizations up to $n$ and such that the following diagram is commutative: $$\xymatrix{
T^n_{\bullet}(Y)
\ar[r]^{\eta}\ar[rd]_{\id}&I^n_{\bullet}(Y)\ar[d]_{{\pi}} &
J^n_{\bullet}(Y) \ar[l]_{\zeta} \ar[ld]^{\overline{\pi}}\\
&T^n_{\bullet}(Y).& }$$ For any integer $k\geq 1$ we denote by $\del_{\underline{k}}$ the set $\{\del_0,...,\del_k\}$ of the $(k+1)$ face operators $\del_i:Y_k\to Y_{k-1}$ and, for any integer $l\geq
k$, we set $\del_{\underline{k}\,:\,\underline{l}}:=
\del_{\underline{k}}\times \del_{\underline{k+1}}\times...\times \del_{\underline{l}}$.\
#### **The $n$-facial space $J^n_{\bullet}(Y)$.**
For $0\leq k\leq n$, consider the space: $$\left(Y_k\times \Delta^0\right) \coprod \coprod_{m=1}^{n-k}\left(\del_{\underline{k+1}\,:\,\underline{k+m}}\times Y_{k+m}\times \Delta^m\right).$$ An element of this space will be written $(\del_{i_1},...,\del_{i_m},y,t_0,...,t_m)$ with the convention $(\del_{i_1},...,\del_{i_m},y,t_0,...,t_m)=(y,1)$ if $m=0$. Set $$J^n_{k}(Y):=\left(\left(Y_k\times \Delta^0\right) \coprod \coprod_{m=1}^{n-k}\left(\del_{\underline{k+1}\,:\,\underline{k+m}}\times Y_{k+m}\times \Delta^m\right)\right)/\sim$$ where the relations are given by $$(\del_{i_1},...,\del_{i_m},y,t_0,...,t_m)\sim (\del_{i_1},...,\del_{i_{m-1}},\del_{i_m}y,t_0,...,t_{m-1}), \quad \mbox{if } t_m=0,$$ and $$(\del_{i_1},...,\del_{i_p},\del_{i_{p+1}},...\del_{i_m},y,t_0,...,t_m)\sim
(\del_{i_1},...,\del_{i_{p+1}-1},\del_{i_{p}},...\del_{i_m},y,t_0,...,t_m),$$ if $t_p=0$ and $i_p<i_{p+1}$.
Together with the face operators $J{\del}_i:J^n_{k}(Y)\to
J^n_{k-1}(Y)$, $0\leq i\leq k$, defined by $$J{\del}_i(\del_{i_1},...,\del_{i_m},y,t_0,...,t_m)=(\del_i,\del_{i_1},...,\del_{i_m},y,0,t_0,...,t_m),$$ $J^n_{\bullet}(Y)$ is an $n$-facial space.\
#### **The $n$-facial space $I^n_{\bullet}(Y)$.**
For $0\leq k\leq n$, we now consider the space: $$\left(Y_k\times \Delta^1\right) \coprod \coprod_{m=1}^{n-k}\left(\del_{\underline{k+1}\,:\,\underline{k+m}}\times Y_{k+m}\times \Delta^{m+1}\right).$$ We write the elements of this space as $(\del_{i_1},...,\del_{i_m},y,t_0,...,t_{m+1})$, with the convention $(\del_{i_1},...,\del_{i_m},y,t_0,...,t_{m+1})=(y,t_0,t_1)$ if $m=0$. The space $I^n_k(Y)$ is defined to be the quotient $$I^n_k(Y):=\left(\left(Y_k\times \Delta^1\right) \coprod \coprod_{m=1}^{n-k}\left(\del_{\underline{k+1}\,:\,\underline{k+m}}
\times Y_{k+m}\times \Delta^{m+1}\right)\right)/\sim$$ with respect to the relations $$(\del_{i_1},...,\del_{i_m},y,t_0,...,t_{m+1})\sim (\del_{i_1},...,\del_{i_{m-1}},\del_{i_m}y,t_0,...,t_m), \quad \mbox{if } t_{m+1}=0,$$ and $$(\del_{i_1},...,\del_{i_p},\del_{i_{p+1}},...\del_{i_m},y,t_0,...,t_{m+1})\sim
(\del_{i_1},...,\del_{i_{p+1}-1},\del_{i_{p}},...\del_{i_m},y,t_0,...,t_{m+1}),$$ if $t_{p+1}=0$ and $i_p<i_{p+1}$.
Together with the face operators $I{\del}_i:I^n_k(Y)\to
I^n_{k-1}(Y)$, $0\leq i\leq k$, defined by $$I{\del}_i(\del_{i_1},...,\del_{i_m},y,t_0,t_1,...,t_{m+1})=(\del_i,\del_{i_1},...,\del_{i_m},y,t_0,0,t_1,...,t_{m+1}),$$ $I^n_{\bullet}(Y)$ is an $n$-facial space.\
#### **The morphisms $\eta,\zeta,\pi,\overline{\pi}$.**
The facial maps $\eta: T^n_{\bullet}(Y)\to I^n_{\bullet}(Y)$, $\zeta: J^n_{\bullet}(Y)\to I^n_{\bullet}(Y)$, $\pi:I^n_{\bullet}(Y)
\to T^n_{\bullet}(Y)$ and $\overline{\pi}:J^n_{\bullet}(Y)\to
T^n_{\bullet}(Y)$ are respectively defined (for $k\leq n$) by: $$\begin{array}{l}
\eta_k(y)=(y,1,0),\\[.2cm]
\zeta_k(\del_{i_1},...,\del_{i_m},y,t_0,...,t_m)=(\del_{i_1},...,\del_{i_m},y,0,t_0,...,t_m),\\[.2cm]
\pi_k(\del_{i_1},...,\del_{i_m},y,t_0,...,t_{m+1})=\del_{i_1}\cdots\del_{i_m}y \quad \mbox{and} \quad \pi_k(y,t_0,t_1)=y,\\[.2cm]
\overline{\pi}_k=\pi_k\circ \zeta_k.
\end{array}$$
We have $\pi_k\circ \eta_k=\id$ so that the following diagram is commutative: $$\xymatrix{
T^n_{\bullet}(Y)
\ar[r]^{\eta}\ar[rd]_{\id}&I^n_{\bullet}(Y)\ar[d]_{{\pi}} &
J^n_{\bullet}(Y) \ar[l]_{\zeta} \ar[ld]^{\overline{\pi}}\\
&T^n_{\bullet}(Y).& }$$ In order to see that these morphisms induce homotopy equivalences between the realizations up to $n$, it suffices to check that, for any $k$ with $0\leq k\leq n$, the maps $\eta_k,\zeta_k,\pi_k,\overline{\pi}_k$ are homotopy equivalences. By the commutativity of the diagram above, it is enough to check this for the maps $\pi_k$ and $\overline{\pi}_k$. These two maps admit sections: we have already seen that $\pi_k\circ \eta_k=\id$ and, on the other hand, the map $\varphi_k: T^n_{k}(Y) \to
J^n_{k}(Y)$ given by $\varphi_k(y)=(y,1)$ (which does not commute with the face operators) satisfies $\overline{\pi}_k\circ
\varphi_k=\id$. The conclusion then follows from the fact that the two homotopies $$\begin{array}{rcl}
H_k:I^n_k(Y)\times I&\to &I^n_k(Y)\\
((\del_{i_1},...,\del_{i_m},y,t_0,...,t_{m+1}),u)&\mapsto&
(\del_{i_1},...,\del_{i_m},y,u+(1-u)t_0,\\
&&\hskip 2.7cm (1-u)t_1,...,(1-u)t_{m+1})
\\[.2cm]
\overline{H}_k:J^n_{k}(Y)\times I&\to &J^n_{k}(Y)\\
((\del_{i_1},...,\del_{i_m},y,t_0,...,t_{m}),u)&\mapsto&
(\del_{i_1},...,\del_{i_m},y,u+(1-u)t_0,\\
&&\hskip 2.7cm (1-u)t_1,...,(1-u)t_{m})
\end{array}$$ satisfy $H_k(-,0)=\id$, $H_k(-,1)=\eta_k\circ \pi_k$ and $\overline{H}_k(-,0)=\id$, $\overline{H}_k(-,1)=\varphi_k\circ \overline{\pi}_k$.\
#### **$n$-rectifiable map.**
We write $\varphi:T^n_{\bullet}(Y)\dasharrow J^n_{\bullet}(Y)$ to denote the collection of maps $\varphi_k: T^n_k(Y) \to J^n_k(Y)$ given by $\varphi_k(y)=(y,1)$. Recall that $\varphi$ is not a morphism of facial spaces since it does not satisfy the usual rules of commutation with the face operators. In the same way we write $\psi:Y_{\bullet}\dasharrow
Z_{\bullet}$ for a collection of maps $\psi_k:Y_{k}\to Z_{k}$ which need not satisfy the usual rules of commutation with the face operators, and we say that $\psi$ is an *$n$-rectifiable map* if there exists a morphism of facial spaces $\overline{\psi}:J^n_{\bullet}(Y)\to T^n_{\bullet}(Z)$ such that $\overline{\psi}_k\circ\varphi_k=\psi_k$ for any $k\leq
n$. So, an $n$-rectifiable map $\psi:Y_{\bullet}\dasharrow
Z_{\bullet}$ induces a map between the realizations up to $n$ of the facial spaces $Y_{\bullet}$ and $Z_{\bullet}$.\
Proof of [Theorem \[Libman\]]{} {#proof}
-------------------------------
Let $Z_{\bullet}^{\bullet} \stackrel{d_0}{\to} Z_{\bullet}^{-1}$ be a facial resolution of a facial space $Z_{\bullet}^{-1}$ such that each row $Z_{k}^{\bullet} \stackrel{d_0}{\to} Z_{k}^{-1}$ admits a contraction and let $n\geq 0$. We first note that the realization of $Z_{\bullet}^{\bullet}$ up to $p$ along the rows and up to $n$ along the columns leads to two canonical maps: $$||Z_{\bullet}^{\bullet}|^p|_ n\to |Z_{\bullet}^{-1}|_n \qquad ||Z_{\bullet}^{\bullet}|_n|^ p\to |Z_{\bullet}^{-1}|_n.$$ Induction on $p$ and standard colimit arguments show that these two maps are equal (up to homeomorphism). Here we prove that $||Z_{\bullet}^{\bullet}|^p|_ n\to |Z_{\bullet}^{-1}|_n$ admits a homotopy section.\
For any $k$, we denote by $s_k$ the contraction of the $k$th row $$\xymatrix{
Z_{k}^{-1 } & \ar[l]_{d_0} Z_{k}^{0 } &\ar@<2pt>[l]^{d_1}
\ar@<-2pt>[l]_{d_0}Z_{k}^{1} &\ar@<4pt>[l]^{d_2}
\ar[l]|{d_1}\ar@<-4pt>[l]_{d_0}Z_{k}^{2} &\cdots &Z_{k}^{n-1}
&\ar@<4pt>[l]^-{d_n} \ar@{}[l]|-{:}\ar@<-4pt>[l]_-{d_0}Z_{k}^{n}
}$$ and, in order to simplify the notation we will write $L_k$ for the realization up to $n$ of this facial space. That is, $L_k=|Z_k^{\bullet}|^n$. Recall, from [Proposition \[quotient\]]{}, that the existence of the contraction permits the following description of $L_k$: $$L_k=Z_k^n\times \Delta^n /\sim$$ where the relation is given by $$(z,t_0,...,t_i,...,t_n)\sim (s_kd_iz,0,t_0,...,\hat{t}_i,...,t_n)\quad \mbox{if } t_i=0.$$ With respect to this description, the canonical map $L_k\to
Z_k^{-1}$ is given by\
$[z,t_0,...,t_i,...,t_n]\mapsto d_0^{n+1}z$ and is denoted by $\varepsilon_n$ (without reference to $k$).\
Realizing all the rows, we obtain a facial map: $$\xymatrix{
\vdots \ar@<4pt>[d]^-{\del_{n+1}} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}&
\vdots \ar@<4pt>[d]^-{\del_{n+1}} \ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0}
\\
Z_n^{-1}\ar@<4pt>[d]^-{\del_n}
\ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0} & L_n\ar@<4pt>[d]^-{\del_n}
\ar@{}[d]|-{..}\ar@<-4pt>[d]_-{\del_0} \ar[l]_{\varepsilon_n}
\\
\vdots \ar@<4pt>[d]^-{\del_2} \ar[d] \ar@<-4pt>[d]_-{\del_0}
&\vdots \ar@<4pt>[d]^-{\del_2} \ar[d]\ar@<-4pt>[d]_-{\del_0} \\
Z_1^{-1}\ar@<2pt>[d]^-{\del_1} \ar@<-2pt>[d]_-{\del_0}
&L_1\ar@<2pt>[d]^-{\del_1} \ar@<-2pt>[d]_-{\del_0} \ar[l]_{\varepsilon_n}\\
Z_0^{-1} &L_0 \ar[l]_{\varepsilon_n} }$$
The face operators $\del_i:L_k\to L_{k-1}$ are given by $\del_i[z,t_0,...,t_n]=[\del_i z,t_0,...,t_n]$. Our aim is thus to show that the map obtained after realization (still denoted by $\varepsilon_n$) $$\xymatrix{
|Z_{\bullet}^{-1}|_n& |L_{\bullet}|_n \ar[l]_{\varepsilon_n} }$$ admits a section up to homotopy.
For each $k$, the map $\varepsilon_n:L_k\to Z_k^{-1}$ admits a (strict) section given by $z\mapsto [s_k^{n+1}z,0,0,...,0,1]$, which we denote by $\psi_k$. The collection $\psi$ of these maps does not define a facial map since the contractions $s_k$ are not required to commute with the face operators $\del_i$ of the columns. The key point is that $\psi:Z_{\bullet}^{-1}\dasharrow L_{\bullet}$ is an $n$-rectifiable map. We can indeed consider, for each $k\leq n$, the (well-defined) map $\overline{\psi}_k:J_k^n({Z^{-1}})\to L_k$ given by: $$\overline{\psi}_k(\del_{i_1},...,\del_{i_m},z,t_0,...,t_m)=
[s_k^{n+1-m}\del_{i_1}s_{k+1}\del_{i_2}s_{k+2}...\del_{i_m}s_{k+m}
z,0,...,0,t_0,...,t_m].$$ A straightforward calculation shows that the maps $\overline{\psi}_k$ commute with the face operators $\del_i$, so that the collection $\overline{\psi}$ is a facial map. This morphism also satisfies $\overline{\psi}_k\circ \varphi_k=\psi_k$ for any $k\leq n$ (which implies that $\psi$ is an $n$-rectifiable map) and $\varepsilon_n\overline{\psi}=\overline{\pi}$. We hence have the following commutative diagram:
$$\xymatrix{
T^n_{\bullet}(Z^{-1})
\ar[r]^{\eta}\ar[rd]_{\id}&I^n_{\bullet}(Z^{-1})\ar[d]_{{\pi}} &
J^n_{\bullet}(Z^{-1}) \ar[l]_{\zeta} \ar[ld]_{\overline{\pi}}\ar[r]^{\overline{\psi}}&T^n_{\bullet}(L)\ar[lld]^{\varepsilon_n}\\
&T^n_{\bullet}(Z^{-1}).& }$$
Since the morphisms $\eta$, $\zeta$, $\pi$ and $\overline{\pi}$ induce homotopy equivalences between the realizations up to $n$, we get the following situation after realization: $$\xymatrix{
|T^n_{\bullet}(Z^{-1})|_n
\ar[r]^{\sim}\ar[rd]_{\id}&|I^n_{\bullet}(Z^{-1})|_n\ar[d]_{{\sim}} &
|J^n_{\bullet}(Z^{-1})|_n \ar[l]_{\sim} \ar[ld]_{\sim}\ar[r]^{\overline{\psi}}&|T^n_{\bullet}(L)|_n\ar[lld]^{\varepsilon_n}\\
&|T^n_{\bullet}(Z^{-1})|_n.& }$$ Since $|T^n_{\bullet}(Z^{-1})|_n=|Z_{\bullet}^{-1}|_n$ and $|T^n_{\bullet}(L)|_n=|L_{\bullet}|_n$, we obtain that the map $|L_{\bullet}|_n\to|Z_{\bullet}^{-1}|_n $ admits a homotopy section. $\Box$
---
abstract: |
We show that the complete symmetric digraph $K_{2m}^\ast$ admits a resolvable decomposition into directed cycles of length $m$ for all odd $m$, $5 \le m \le 49$. Consequently, $K_{n}^\ast$ admits a resolvable decomposition into directed cycles of length $m$ for all $n \equiv 0 \pmod{2m}$ and odd $m$, $5 \le m \le 49$.
[*Keywords:*]{} Directed Oberwolfach Problem; complete symmetric digraph; resolvable directed cycle decomposition; Mendelsohn design
author:
- |
Andrea Burgess\
[University of New Brunswick]{}\
\
Nevena Francetić\
[University of Ottawa]{}\
\
Mateja Šajna[^1]\
[University of Ottawa]{}
title: |
On the directed Oberwolfach Problem\
with equal cycle lengths: the odd case
---
Introduction
============
In this paper, we are concerned with the problem of decomposing the complete symmetric digraph $K_n^\ast$ into spanning subdigraphs, each a vertex-disjoint union of directed cycles of length $m$. Thus, we are interested in the following problem.
\[prob\] Determine the necessary and sufficient conditions on $m$ and $n$ for the complete symmetric digraph $K_n^\ast$ to admit a resolvable decomposition into directed $m$-cycles.
In the design-theoretic literature, such decompositions have also been called Mendelsohn designs [@ColDin]. Problem \[prob\] can also be viewed as the directed version of the well-known Oberwolfach Problem with uniform cycle lengths, which was completely solved in [@AlsHag; @AlsSch].
It is easily seen that $K_n^\ast$ admits a resolvable decomposition into directed $m$-cycles only if $m$ divides $n$, and this condition is obviously sufficient if $m=2$. Problem \[prob\] has also been solved previously for $m=3$ [@BerGerSot] and for $m=4$ [@BenZha; @AdaBry]: the necessary conditions are sufficient except for $(m,n)=(3,6)$ and $(4,4)$. More recently, two of the present authors showed the following.
[@BurSaj]\[the:BurSaj\] Let $m$ and $n$ be integers with $5 \le m \le n$. Then the following hold.
1. Let $m$ be even, or $m$ and $n$ be both odd. Then there exists a resolvable decomposition of $K_n^\ast$ into directed $m$-cycles if and only if $m$ divides $n$ and $(m,n)\ne(6,6)$.
2. If there exists a resolvable decomposition of $K_{2m}^\ast$ into directed $m$-cycles, then there exists a resolvable decomposition of $K_n^\ast$ into directed $m$-cycles whenever $n \equiv 0 \pmod{2m}$.
In the same paper, we also posed the following conjecture.
[@BurSaj]\[conj\] Let $m$ be a positive odd integer. Then $K_{2 m}^\ast$ admits a resolvable directed $m$-cycle decomposition if and only if $m\ge 5$.
Observe that proving Conjecture \[conj\] (which appears to be difficult) would complete the solution to Problem \[prob\]. In this paper, we confirm the above conjecture for all $m \le 49$. Thus, we prove the following result.
\[the:main\] Let $m$ be an odd integer, $5 \le m \le 49$. Then $K_{2m}^\ast$ admits a resolvable decomposition into directed $m$-cycles.
Except for the smallest case $m=5$, the above theorem is proved using a general construction that is complemented with a computational result. We expect that with more computing power, this approach can be used to extend our result to even larger values of $m$.
Theorems \[the:BurSaj\] and \[the:main\] immediately yield the following.
Let $m$ be an odd integer, $5 \le m \le 49$. Then $K_n^\ast$ admits a resolvable decomposition into directed $m$-cycles whenever $n \equiv 0 \pmod{2m}$.
Preliminaries
=============
In this paper, the term [*digraph*]{} will mean a directed graph with no loops or multiple arcs. The symbol $K_n^\ast$ denotes the [*complete symmetric digraph*]{} of order $n$; that is, the digraph with $n$ vertices, and with arcs $(u,v)$ and $(v,u)$ for each pair of distinct vertices $u$ and $v$.
For a digraph $D=(V,A)$, a subset $V' \subseteq V$ of its vertex set, and subset $A'\subseteq A$ of its arc set, the symbols $D[V']$ and $D-A'$ will denote the subdigraph of $D$ induced by $V'$, and the subdigraph obtained from $D$ by deleting all arcs in $A'$, respectively. If $D$ is a spanning subdigraph of the complete symmetric digraph $K_n^\ast$ and $A' \subseteq A(K_n^\ast)-A(D)$, then $D+A'$ will denote the digraph $(V(D),A(D) \cup A')$.
A [*decomposition*]{} of a digraph $D$ is a collection $\{ H_1, H_2, \ldots, H_k \}$ of subdigraphs of $D$ whose arc sets partition the arc set of $D$. If each of the digraphs $H_i$ is isomorphic to a digraph $H$, then $\{ H_1, H_2, \ldots, H_k \}$ is called an [*$H$-decomposition*]{} of the digraph $D$.
A [*resolution class*]{} (or [*parallel class*]{}) of a decomposition ${\cal D}=\{ H_1, H_2, \ldots, H_k \}$ of $D$ is a subset $\{ H_{i_1}, H_{i_2}, \ldots, H_{i_t} \}$ of $\cal D$ with the property that the vertex sets of the digraphs $H_{i_1}, H_{i_2}, \ldots, H_{i_t} $ partition the vertex set of $D$. A decomposition is called [*resolvable*]{} if it can be partitioned into resolution classes.
By $\vec{C}_m$ we shall denote the directed cycle of length $m$. The terms $\vec{C}_m$-decomposition and resolvable $\vec{C}_m$-decomposition will be abbreviated as $\vec{C}_m$-D and R$\vec{C}_m$-D, respectively.
For a positive integer $m$ and $S \subseteq {\mathbb{Z}}_m^\ast$, the digraph with vertex set ${\mathbb{Z}}_m$ and arc set $\{ (i,i+d): i\in {\mathbb{Z}}_m, d \in S \}$, denoted ${{\rm Circ}}(m;S)$, is called the [*directed circulant of order $m$ with connection set $S$*]{}. The following result, a direct corollary of [@BerFavMah], will be an important ingredient in our constructions.
[@BerFavMah]\[lem:BerFavMah\] Let $m$ be a positive integer and $S \subseteq {\mathbb{Z}}_m^*$. Assume $S$ can be partitioned into sets of the form
- $\{ d \}$ such that $\gcd(d,m)=1$ and
- $\{\pm d_i, \pm d_j\}$ such that $\gcd(d_i,d_j,m)=1$.
Then the directed circulant ${{\rm Circ}}(m;S)$ can be decomposed into directed $m$-cycles.
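The singleton case of this partition can be seen directly: a single difference $d$ with $\gcd(d,m)=1$ already traces one directed $m$-cycle. The following Python sketch (our own check, not from the source) confirms this for a few small $m$:

```python
from math import gcd

def difference_cycle(m, d):
    """Follow the arcs (i, i+d) of Circ(m; {d}) starting at 0;
    return the list of vertices visited before returning to 0."""
    cycle, v = [0], d % m
    while v != 0:
        cycle.append(v)
        v = (v + d) % m
    return cycle

# When gcd(d, m) = 1, the single difference d yields one directed m-cycle,
# i.e. the walk visits every vertex of Z_m exactly once.
for m in (7, 9, 11):
    for d in range(1, m):
        if gcd(d, m) == 1:
            assert sorted(difference_cycle(m, d)) == list(range(m))
```

When $\gcd(d,m)>1$ the same walk closes up after only $m/\gcd(d,m)$ steps, which is why the quadruples $\{\pm d_i,\pm d_j\}$ in the lemma only require $\gcd(d_i,d_j,m)=1$.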
Results
=======
\[lem:5-10\] There exists a R$\vec{C}_{5}$-D of $K_{10}^\ast$.
Label the vertices of $K_{10}^\ast$ by $x_0,x_1,\ldots,x_9$. It can be verified that the following resolution classes (obtained by a computer search) form a R$\vec{C}_{5}$-D of $K_{10}^\ast$. $$\begin{aligned}
R_0 &=& \{ x_0 x_1 x_2 x_3 x_4 x_0, x_5 x_6 x_7 x_8 x_9 x_5\} \\
R_1 &=& \{ x_0 x_2 x_1 x_3 x_5 x_0, x_4 x_6 x_8 x_7 x_9 x_4\} \\
R_2 &=& \{ x_0 x_3 x_1 x_4 x_2 x_0, x_5 x_7 x_6 x_9 x_8 x_5\} \\
R_3 &=& \{ x_0 x_4 x_1 x_5 x_8 x_0, x_2 x_6 x_3 x_9 x_7 x_2\} \\
R_4 &=& \{ x_0 x_5 x_2 x_8 x_3 x_0, x_1 x_7 x_4 x_9 x_6 x_1\} \\
R_5 &=& \{ x_0 x_6 x_2 x_5 x_9 x_0, x_1 x_8 x_4 x_3 x_7 x_1\} \\
R_6 &=& \{ x_0 x_7 x_3 x_8 x_6 x_0, x_1 x_9 x_2 x_4 x_5 x_1\} \\
R_7 &=& \{ x_0 x_8 x_2 x_9 x_1 x_0, x_3 x_6 x_4 x_7 x_5 x_3\} \\
R_8 &=& \{ x_0 x_9 x_3 x_2 x_7 x_0, x_1 x_6 x_5 x_4 x_8 x_1\} \\\end{aligned}$$
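The verification is mechanical. The following Python sketch (our encoding: vertex $x_i$ is the integer $i$) checks that each class partitions the ten vertices and that the eighteen cycles together use every arc of $K_{10}^\ast$ exactly once:

```python
# Resolution classes R_0, ..., R_8 from the proof, with x_i encoded as i.
CLASSES = [
    [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]],
    [[0, 2, 1, 3, 5], [4, 6, 8, 7, 9]],
    [[0, 3, 1, 4, 2], [5, 7, 6, 9, 8]],
    [[0, 4, 1, 5, 8], [2, 6, 3, 9, 7]],
    [[0, 5, 2, 8, 3], [1, 7, 4, 9, 6]],
    [[0, 6, 2, 5, 9], [1, 8, 4, 3, 7]],
    [[0, 7, 3, 8, 6], [1, 9, 2, 4, 5]],
    [[0, 8, 2, 9, 1], [3, 6, 4, 7, 5]],
    [[0, 9, 3, 2, 7], [1, 6, 5, 4, 8]],
]

def arcs(cycle):
    """Arcs of a directed cycle given as a list of vertices."""
    return [(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))]

# Each class is a resolution class: its two cycles partition the vertex set.
for R in CLASSES:
    assert sorted(R[0] + R[1]) == list(range(10))

# The 18 cycles use each of the 90 arcs of K_10^* exactly once.
all_arcs = [a for R in CLASSES for C in R for a in arcs(C)]
assert len(all_arcs) == 90
assert set(all_arcs) == {(u, v) for u in range(10) for v in range(10) if u != v}
```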
The rest of the proof of Theorem \[the:main\] is divided into two main cases, $m \not\equiv 0 \pmod 3$, which is dealt with in Proposition \[pro:<>0\], and $m \equiv 0 \pmod 3$, which is considered in Proposition \[pro:=0\], as well as two small cases, $m=11$ and $m=9$, which require a modification of the general approach. All of these cases, however, have the following construction in common.
\[cons\]
Let $m \ge 5$ be an odd integer, and write $m=2k+1$. Let the vertex set of $D=K_{2m}^\ast$ be $X \cup Y$, where $X=\{x_0,x_1,\ldots,x_{2k} \}$ and $Y=\{ y_0,y_1,\ldots,y_{2k} \}$. We shall call arcs of the form $(x_i, x_{i+d})$ and $(y_i, y_{i+d})$ arcs of [*pure left*]{} and [*pure right difference*]{} $d$, respectively, and arcs of the form $(x_i, y_{i+d})$ and $(y_i, x_{i+d})$ arcs of [*mixed difference*]{} $d$. All subscripts will be evaluated modulo $m=2k+1$.
Start by defining directed $m$-cycles $$C_0=x_0 y_0 x_1 y_1 x_2 y_2 \ldots x_k x_0 \qquad \mbox{ and } \qquad
C_0'=y_k x_{k+1} y_{k+1} \ldots y_{2k} y_k.$$ For $i \in {\mathbb{Z}}_{m}$, obtain $C_i$ and $C_i'$ from $C_0$ and $C_0'$, respectively, by adding $i$ to the subscripts of the vertices in $X$, and $2i$ to the subscripts of the vertices in $Y$, and form resolutions classes $$R_i= \{ C_i, C_i' \}, \qquad \mbox{ for } i \in {\mathbb{Z}}_{m}.$$ Observe that $R_0,\ldots,R_{m-1}$ use up all arcs of pure left difference $k+1$, all arcs of pure right difference $k+1$, and all arcs of mixed differences except for the arcs $$(x_{k+i}, y_{k+2i}) \quad \mbox{ and } \quad (y_{2k+2i}, x_{i}) \quad \mbox{ for all } i \in {\mathbb{Z}}_{m}. \eqno{(\ast)}$$ [ ]{}
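The bookkeeping in Construction \[cons\] can be checked mechanically. The following Python sketch (our encoding: vertices are pairs `('x', i)` and `('y', i)`) rebuilds the classes $R_i$ and confirms both the partition property and the description ($\ast$) of the leftover mixed arcs for several small odd $m$:

```python
def construction_classes(m):
    """Resolution classes R_0, ..., R_{m-1} of Construction [cons];
    the vertex x_i is encoded as ('x', i) and y_i as ('y', i)."""
    k = (m - 1) // 2
    C0 = [v for i in range(k) for v in (('x', i), ('y', i))] + [('x', k)]
    C0p = [('y', k)] + [v for i in range(k + 1, 2 * k + 1)
                        for v in (('x', i), ('y', i))]

    def shift(C, i):
        # add i to X-subscripts and 2i to Y-subscripts, mod m
        return [(t, (j + (i if t == 'x' else 2 * i)) % m) for (t, j) in C]

    return [(shift(C0, i), shift(C0p, i)) for i in range(m)]

def arcs(cycle):
    return {(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))}

for m in (5, 7, 9, 11):
    k, used = (m - 1) // 2, set()
    for C, Cp in construction_classes(m):
        # each R_i = {C_i, C_i'} partitions the 2m vertices
        assert sorted(C + Cp) == sorted((t, i) for t in 'xy' for i in range(m))
        used |= arcs(C) | arcs(Cp)
    mixed = {(('x', i), ('y', j)) for i in range(m) for j in range(m)} | \
            {(('y', i), ('x', j)) for i in range(m) for j in range(m)}
    # the unused arcs of mixed differences are exactly those listed in (*)
    assert mixed - used == \
        {(('x', (k + i) % m), ('y', (k + 2 * i) % m)) for i in range(m)} | \
        {(('y', (2 * k + 2 * i) % m), ('x', i)) for i in range(m)}
```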
Next, we examine the case $m=11$, which requires a modified construction, but serves as a good introduction to the general approach in the case $m \not\equiv 0 \pmod{3}$ that will be described in Proposition \[pro:<>0\].
\[lem:11\] There exists a R$\vec{C}_{11}$-D of $K_{22}^\ast$.
With $m=11$, adopt the notation and define resolution classes $R_0,\ldots,R_{10}$ as in Construction \[cons\]. It can be verified that the 22 leftover arcs of mixed differences in ($\ast$) form a directed 22-cycle $$C=x_5 y_5 \ldots x_5.$$ We decompose $C$ into the following directed paths: $$\begin{aligned}
P_1 = &x_5 y_5 \ldots x_6, &
\qquad P_2 = x_6 y_7,\\
P_3 = &y_7 x_4, &
\qquad P_4 = x_4 y_3,\\
P_5 = &y_3 \ldots y_2, &
\qquad P_6 = y_2 x_7, \\
P_7 = &x_7 y_9, &
\qquad P_8 = y_9 x_5,\end{aligned}$$ where $P_1$ and $P_5$ are of length 10 and 6, respectively. Use the $P_i$ for $i$ odd to form the resolution class $$R_{11}=\{ P_1 x_6 x_5, P_3 x_4 x_7 P_7 y_9 y_3 P_5 y_2 y_7 \}.$$ We shall use the $P_i$ for $i$ even in the next resolution class. Notice that in $D[Y]$ we have used all arcs of pure right difference 6 and two arcs, namely $(y_9, y_3)$ and $(y_2, y_7)$, of pure right difference 5. The remaining arcs of pure right difference 5 form a directed $(y_3,y_2)$-path $Q_1'$ of length 2, and a directed $(y_7,y_9)$-path $Q_2'$ of length 7. If we can find two vertex-disjoint directed paths in $D[X]$, namely an $(x_7,x_4)$-path $Q_1$ of length 7 and an $(x_5,x_6)$-path $Q_2$ of length 2, then the next resolution class will be $$R_{12}=\{ P_2Q_2'P_8Q_2, P_4Q_1'P_6Q_1 \}.$$ What will then remain of $D[Y]$ is a ${{\rm Circ}}(11;\{\pm 1,\pm 2,\pm 3, \pm 4 \})$, which admits a $\vec{C}_{11}$-D by Lemma \[lem:BerFavMah\]. It thus suffices to appropriately decompose the remaining subdigraph of $D[X]$. In particular, it suffices to find a set of differences $S \subseteq {\mathbb{Z}}_{11}^\ast$ such that
[($X_1$)]{} $6 \not\in S$, as pure left difference 6 has already been used;
[($X_2$)]{} $3, 10 \in S$, as only arcs $(x_6,x_5)$ and $(x_4,x_7)$ of these pure left differences have already been used;
[($X_3$)]{} ${{\rm Circ}}(11;{\mathbb{Z}}_{11}^\ast-S-\{6\})$ admits a decomposition into directed 11-cycles; and
[($X_4$)]{} ${{\rm Circ}}(11;S)-\{(6,5),(4,7)\}$ admits a decomposition into directed 11-cycles, and vertex-disjoint directed paths: a $(5,6)$-path of length 2 and a $(7,4)$-path of length 7.
Such a set $S$ was found using a computer search. The set $S$, as well as a suitable decomposition, is shown in the appendix.
\[pro:<>0\] Let $m$ be an odd integer such that $m \not\equiv 0 \pmod{3}$, $m \ge 7$, and $m \ne 11$. Let $k=\frac{m-1}{2}$, and define parameters $d,s_i',t_i',s_i,t_i$ (for $i=1,2$) as indicated below.
Parameter $\setminus$ Case $k \equiv 0 \pmod{4}$ $k \equiv 1 \pmod{4}$ $k \equiv 2 \pmod{4}$ $k \equiv 3 \pmod{4}$
---------------------------- ----------------------- ----------------------- ----------------------- -----------------------
$d$ $(7k+8)/{4}$ $(5k+7)/{4}$ $({3k+6})/{4}$ $({k+5})/{4}$
$s_1'$ ${k}/{4}$ $({3k+1})/{4}$ $({5k+2})/{4}$ $({7k+3})/{4}$
$s_2'$ $({3k+4})/{4}$ $({k+3})/{4}$ $({7k+6})/{4}$ $({5k+5})/{4}$
$t_2'$ $({k-2})/{2}$ $({3k-1})/{2}$ $({k-2})/{2}$ $({3k-1})/{2}$
$t_1$ $({3k+2})/{2}$ $({k+1})/{2}$ $({3k+2})/{2}$ $({k+1})/{2}$
In addition, let $t_1'=s_2=k$, $s_1=2k-1$, and $t_2=t_2'$.
Then $\gcd(d,m)=1$, and hence for each $i=1,2$, there exists a unique $r_i \in {\mathbb{Z}}_m$ such that $s_i'+r_i d = t_i'$ (in ${\mathbb{Z}}_m$). Furthermore, define $a_i=(t_i,s_i)$ and $d_i^Y=s_i-t_i$ (in ${\mathbb{Z}}_m$).
Now assume there exists a set $S \subseteq {\mathbb{Z}}_m^\ast$ such that:
[($Y_1$)]{} $k+1 \not\in S$;
[($Y_2$)]{} $d_1^Y, d_2^Y \in S$;
[($Y_3$)]{} ${{\rm Circ}}(m;{\mathbb{Z}}_m^\ast-(S \cup \{ k+1 \}))$ admits a $\vec{C}_m$-D; and
[($Y_4$)]{} ${{\rm Circ}}(m;S)-\{ a_1,a_2 \}$ admits a decomposition into directed $m$-cycles and two vertex-disjoint directed paths: an $(s_1,t_1)$-path of length $r_1$ and an $(s_2,t_2)$-path of length $r_2$.
Then $K_{2m}^\ast$ admits a R$\vec{C}_m$-D.
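Before turning to the proof, the purely arithmetic claims about these parameters, namely $\gcd(d,m)=1$ and the relation $d=s_1'-t_2'=s_2'-t_1'$ used later for the linking arcs, can be checked mechanically. A Python sketch, with the table transcribed by us:

```python
from math import gcd

def parameters(m):
    """d, s_1', s_2', t_1', t_2' from the table in Proposition [pro:<>0]
    (our transcription), with t_1' = k."""
    k = (m - 1) // 2
    d   = {0: 7*k + 8, 1: 5*k + 7, 2: 3*k + 6, 3: k + 5}[k % 4] // 4
    s1p = {0: k,       1: 3*k + 1, 2: 5*k + 2, 3: 7*k + 3}[k % 4] // 4
    s2p = {0: 3*k + 4, 1: k + 3,   2: 7*k + 6, 3: 5*k + 5}[k % 4] // 4
    t2p = {0: k - 2,   1: 3*k - 1, 2: k - 2,   3: 3*k - 1}[k % 4] // 2
    return d, s1p, s2p, k, t2p

for m in range(7, 50, 2):
    if m % 3 == 0 or m == 11:
        continue
    d, s1p, s2p, t1p, t2p = parameters(m)
    assert gcd(d, m) == 1                      # so the r_i are well defined
    # d is the common pure left difference of the two linking arcs:
    assert (s1p - t2p) % m == d % m and (s2p - t1p) % m == d % m
    r1 = ((t1p - s1p) * pow(d, -1, m)) % m     # s_1' + r_1 d = t_1' in Z_m
    r2 = ((t2p - s2p) * pow(d, -1, m)) % m
    assert (s1p + r1 * d) % m == t1p % m and (s2p + r2 * d) % m == t2p % m
```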
Adopt the notation and define resolution classes $R_0,\ldots,R_{m-1}$ as in Construction \[cons\]. It can be verified that, since $m \not\equiv 0 \pmod{3}$, the $2m$ leftover arcs of mixed differences in ($\ast$) form a directed $2m$-cycle $$C=y_k \ldots x_k y_k.$$ Write $C=P_1 P_2 \ldots P_8$ as a concatenation of directed paths such that $P_1$ is of length $m-1$, $P_5$ is of length $m-5$, and the rest are of length 1. It can be shown, for each congruence class of $k$ modulo 4, that the paths are $$\begin{aligned}
P_1 = &y_{s_2} \ldots y_{t_2}, &
\qquad P_2 = y_{t_2} x_{s_1'},\\
P_3 = &x_{s_1'} y_{t_1}, &
\qquad P_4 = y_{t_1} x_{s_2'},\\
P_5 = &x_{s_2'} \ldots x_{t_2'}, &
\qquad P_6 = x_{t_2'} y_{s_1}, \\
P_7 = &y_{s_1} x_{t_1'}, &
\qquad P_8 = x_{t_1'} y_{s_2},\end{aligned}$$ where the parameters $s_i,t_i,s_i',t_i'$ (for $i=1,2$) are as defined in the statement of the proposition. We use the $P_i$ for $i$ odd, together with 4 linking arcs (two of pure left and two of pure right difference), to form the resolution class
$$R_{m}=\{ P_1 y_{t_2} y_{s_2}, P_5 x_{t_2'} x_{s_1'} P_3 y_{t_1} y_{s_1} P_7 x_{t_1'} x_{s_2'} \}.$$ The linking arcs are: $$(x_{t_2'}, x_{s_1'}) \mbox{ and } (x_{t_1'}, x_{s_2'}) \mbox{ of pure left difference } d=s_1'-t_2'=s_2'-t_1',$$ $$a_1=(y_{t_1}, y_{s_1}) \mbox{ of pure right difference } d_1^Y=s_1-t_1, \mbox{ and}$$ $$a_2=(y_{t_2}, y_{s_2}) \mbox{ of pure right difference } d_2^Y=s_2-t_2,$$ with $d, d_1^Y, d_2^Y$ as defined in the statement of the proposition. Since $m\ne 11$, observe that none of these pure differences are equal to $k+1$ (which has already been used in $R_0, \ldots, R_{m-1}$).
The $P_i$ for $i$ even will be used in the next resolution class as follows. Since $m \not\equiv 0 \pmod{3}$, it can be shown that $\gcd(2k+1,d)=1$. Therefore, the arcs of pure left difference $d$ form a directed $m$-cycle, and in particular, those that have not been used in $R_m$ form a directed $(x_{s_1'}, x_{t_1'})$-path $Q_1'$ of length $r_1$ and a directed $(x_{s_2'}, x_{t_2'})$-path $Q_2'$ of length $r_2$, where $r_1$ and $r_2$ are as defined in the statement of the proposition.
Now let $S$ be a subset of ${\mathbb{Z}}_m^\ast$ satisfying Conditions ($Y_1$)–($Y_4$) of the proposition, and let $Q_1$ and $Q_2$ be the corresponding vertex-disjoint directed $(y_{s_1},y_{t_1})$-path of length $r_1$ and $(y_{s_2},y_{t_2})$-path of length $r_2$, respectively. We then let the next resolution class be $$R_{m+1}= \{ P_2Q_1'P_8Q_2, P_4Q_2'P_6Q_1 \}.$$ All arcs of mixed differences have now been used in resolution classes $R_0,\ldots,R_{m+1}$. In $D[X]$, we have also used up all arcs of differences $k+1$ and $d$. Since $\gcd(2k+1,k+1)=\gcd(2k+1,d)=1$, Lemma \[lem:BerFavMah\] now guarantees that the remaining subdigraph of $D[X]$ admits a $\vec{C}_m$-D.
In $D[Y]$, however, we have used up:
- all arcs of difference $k+1$;
- arcs $a_1$ and $a_2$ of differences $d_1^Y$ and $d_2^Y$, respectively; and
- arcs used in the directed paths $Q_1$ and $Q_2$.
Assumptions ($Y_1$)–($Y_4$) now guarantee that the remaining subdigraph of $D[Y]$ admits a $\vec{C}_m$-D. Finally, the directed $m$-cycles from the remaining subdigraphs of $D[X]$ and $D[Y]$ can be arranged into resolution classes that complete our R$\vec{C}_m$-D of $K_{2m}^\ast$.
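The cycle structure of the leftover arcs ($\ast$), invoked at the start of this proof and again in the case $m \equiv 0 \pmod{3}$ treated next, can be tested directly. The following Python sketch (our encoding of the arcs in ($\ast$)) traces the leftover arcs and checks that they form a single directed $2m$-cycle when $3 \nmid m$ and three directed $\frac{2m}{3}$-cycles when $3 \mid m$:

```python
def leftover_cycles(m):
    """Decompose the 2m leftover mixed-difference arcs (*) of
    Construction [cons] into directed cycles."""
    k = (m - 1) // 2
    succ = {}
    for i in range(m):
        succ[('x', (k + i) % m)] = ('y', (k + 2 * i) % m)    # (x_{k+i}, y_{k+2i})
        succ[('y', (2 * k + 2 * i) % m)] = ('x', i)          # (y_{2k+2i}, x_i)
    cycles, seen = [], set()
    for start in succ:
        if start in seen:
            continue
        cyc, v = [], start
        while v not in seen:
            seen.add(v)
            cyc.append(v)
            v = succ[v]
        cycles.append(cyc)
    return cycles

# One directed 2m-cycle when m is not divisible by 3,
# three directed (2m/3)-cycles when it is.
for m in (5, 7, 11, 13):
    assert [len(c) for c in leftover_cycles(m)] == [2 * m]
for m in (9, 15, 21):
    assert sorted(len(c) for c in leftover_cycles(m)) == [2 * m // 3] * 3
```

For $m=9$ the three 6-cycles produced this way are exactly the cycles $C_{(1)}, C_{(2)}, C_{(3)}$ written out in the proof of Lemma \[lem:9\], up to rotation.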
We now turn our attention to the case $m \equiv 0 \pmod{3}$. As before, a small case ($m=9$) requires a modified construction and will also serve as an introduction to the general approach.
\[lem:9\] There exists a R$\vec{C}_{9}$-D of $K_{18}^\ast$.
Adopt the notation and construction of resolution classes $R_0,\ldots,R_8$ from Construction \[cons\]. The $18$ leftover arcs of mixed differences from ($\ast$) now form three directed $6$-cycles, which we write as a concatenation of directed paths of length $2$ and linking arcs as follows: $$\begin{aligned}
C_{(1)} &=& x_{0} y_{5} x_3 y_2 x_6 y_8 x_{0}= P_{1}^X x_{3} y_{2} P_{1}^Y y_{8}x_0, \\
C_{(2)} &=& x_{1} y_{7} x_4 y_4 x_7 y_1 x_{1}=P_{2}^X x_{4} y_{4} P_{2}^Y y_{1}x_1, \\
C_{(3)} &=& x_{2} y_{0} x_5 y_6 x_8 y_{3} x_{2}=P_{3}^X x_{5} y_{6} P_{3}^Y y_{3}x_2.\end{aligned}$$
We use the directed paths $P_{i}^X, P_{i}^Y$ (for $i=1,2,3$), together with 6 linking arcs of pure differences, to form the resolution class $R_{9}$: $$R_9=\{ P_{1}^X x_{3} x_1 P_{2}^X x_{4} x_2 P_{3}^X x_{5} x_0, P_{1}^Y y_{8}y_{6} P_{3}^Y y_{3}y_{4} P_{2}^Y y_{1}y_{2} \}.$$ We have thus used the following linking arcs: $$\begin{aligned}
b^X_1 &=& (x_{3},x_1) \quad \mbox{ of pure left difference } d_1^X=7, \\
b^X_2 &=& (x_{4},x_2) \quad \mbox{ of pure left difference } d_1^X=7, \\
b^X_3 &=& (x_{5},x_0) \quad \mbox{ of pure left difference } d_2^X=4, \\
b^Y_1 &=& (y_{1},y_{2}) \quad \mbox{ of pure right difference } d_1^Y=1, \\
b^Y_2 &=& (y_{3},y_{4}) \quad \mbox{ of pure right difference } d_1^Y=1, \\
b^Y_3 &=& (y_{8},y_{6}) \quad \mbox{ of pure right difference } d_2^Y=7.\end{aligned}$$ Note that none of these differences are equal to $5$, which has been used in $R_0,\ldots,R_{8}$.
We have now used up all arcs of mixed differences except for the arcs $(x_3,y_2),(x_4,y_4)$, $(x_5,y_6)$ and arcs $(y_{8},x_0), (y_1,x_1),(y_3,x_2)$.
To form the resolution class $R_{10}$, we want to find three vertex-disjoint directed paths with sources $x_0, x_1, x_2$ and terminals $x_{3}, x_{4}, x_{5}$ using some of the remaining arcs in $D[X]$, and three vertex-disjoint directed paths with sources $y_2,y_4,y_6$ and terminals $y_{8}, y_1, y_3$ using some of the remaining arcs in $D[Y]$; these paths, together with all the remaining arcs of mixed differences, will form two vertex-disjoint directed $9$-cycles. In particular, we can define $$R_{10}=\{ Q_1'x_3y_2Q_1y_3x_2Q_2'x_4y_4Q_2y_1x_1, Q_3'x_5y_6Q_3y_8x_0 \}$$ as long as we have suitable directed paths $$\begin{aligned}
Q_1': && (x_1,x_3) \mbox{-path of length } 1, \\
Q_2': && (x_2,x_4) \mbox{-path of length } 1, \\
Q_3': && (x_0,x_5) \mbox{-path of length } 4, \\
Q_1: && (y_2,y_3) \mbox{-path of length } 1, \\
Q_2: && (y_4,y_1) \mbox{-path of length } 2, \mbox{ and} \\
Q_3: && (y_6,y_8) \mbox{-path of length } 3\end{aligned}$$ that use only hitherto unused arcs of pure differences. More precisely, it suffices to find sets $S^X, S^Y \subseteq {\mathbb{Z}}_9^\ast$ such that the following hold.
[($X_1$)]{} $5 \not\in S^X$, as pure left difference 5 has already been used;
[($X_2$)]{} $4,7 \in S^X$, as arcs $(x_3,x_1),(x_4,x_2),(x_5,x_0)$ have already been used;
[($X_3$)]{} ${{\rm Circ}}(9;{\mathbb{Z}}_{9}^\ast-S^X-\{5\})$ admits a decomposition into directed 9-cycles; and
[($X_4$)]{} ${{\rm Circ}}(9;S^X)-\{(3,1),(4,2),(5,0)\}$ admits a decomposition into directed 9-cycles and pairwise vertex-disjoint directed paths: a $(1,3)$-path of length 1, a $(2,4)$-path of length 1, and a $(0,5)$-path of length 4;
[($Y_1$)]{} $5 \not\in S^Y$, as pure right difference 5 has already been used;
[($Y_2$)]{} $1,7 \in S^Y$, as arcs $(y_1,y_2),(y_3,y_4),(y_8,y_6)$ have already been used;
[($Y_3$)]{} ${{\rm Circ}}(9;{\mathbb{Z}}_{9}^\ast-S^Y-\{5\})$ admits a decomposition into directed 9-cycles; and
[($Y_4$)]{} ${{\rm Circ}}(9;S^Y)-\{(1,2),(3,4),(8,6)\}$ admits a decomposition into directed 9-cycles and pairwise vertex-disjoint directed paths: a $(2,3)$-path of length 1, a $(4,1)$-path of length 2, and a $(6,8)$-path of length 3.
Such sets $S^X$ and $S^Y$ were found using a computer search. These sets, as well as suitable decompositions, are shown in the appendix.
\[pro:=0\] Let $m$ be an odd integer such that $m \equiv 0 \pmod{3}$, $m \ge 15$. Let $k=\frac{m-1}{2}$, and define parameters $s_1$ and $t_1$ as indicated in the table below.
Parameter $\setminus$ Case $k \equiv 0 \pmod{4}$ $k \equiv 1 \pmod{4}$ $k \equiv 2 \pmod{4}$ $k \equiv 3 \pmod{4}$
---------------------------- ----------------------- ----------------------- ----------------------- -----------------------
$s_1$ ${k}/{2}$ $({3k+1})/{2}$ ${k}/{2}$ $({3k+1})/{2}$
$t_1$ ${3k}/{4}$ $({k-1})/{4}$ $({7k+2})/{4}$ $({5k+1})/{4}$
In addition, for $i=1,2$, let $s_{1+i}=s_1+2i$ and $t_{1+i}=t_1+i$ (all evaluated in ${\mathbb{Z}}_m$).
Furthermore, define arcs: $$\begin{aligned}
b_1^X =(t_1,1), \qquad & \qquad b_1^Y =(1,s_1), \qquad & \qquad c_1 =(t_1,0),\\
b_2^X =(t_2,2), \qquad & \qquad b_2^Y =(3,s_2), \qquad & \qquad c_2 =(t_2,1),\\
b_3^X =(t_3,0), \qquad & \qquad \;\; b_3^Y =(-1,s_3), \qquad & \qquad c_3 =(t_3,2).\end{aligned}$$ Now assume there exist sets $S^X,S^Y \subseteq {\mathbb{Z}}_m^\ast$ such that:
[($X_1$)]{} $k+1,-t_1 \not\in S^X$;
[($X_2$)]{} $1-t_1, -2-t_1 \in S^X$;
[($X_3$)]{} ${{\rm Circ}}(m;{\mathbb{Z}}_m^\ast-(S^X \cup \{ k+1,-t_1 \}))$ admits a $\vec{C}_m$-D;
[($X_4$)]{} ${{\rm Circ}}(m;S^X)-\{ b_1^X,b_2^X,b_3^X \}+\{c_1,c_2,c_3 \}$ admits a $\vec{C}_m$-D;
[($Y_1$)]{} $k+1 \not\in S^Y$;
[($Y_2$)]{} $s_1-1, s_1+5 \in S^Y$;
[($Y_3$)]{} ${{\rm Circ}}(m;{\mathbb{Z}}_m^\ast-(S^Y \cup \{ k+1 \}))$ admits a $\vec{C}_m$-D; and
[($Y_4$)]{} ${{\rm Circ}}(m;S^Y)-\{ b_1^Y,b_2^Y,b_3^Y \}$ admits a decomposition into directed $m$-cycles and three pairwise vertex-disjoint directed paths: an $(s_1,-1)$-path of length $\frac{2m}{3}-1$, an $(s_2,3)$-path of some length $q \in \{ 1,\ldots,\frac{m}{3}-3 \}$, and an $(s_3,1)$-path of length $\frac{m}{3}-2-q$.
Then $K_{2m}^\ast$ admits a R$\vec{C}_m$-D.
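As in the previous case, the arithmetic fact about $t_1$ invoked in the proof below, namely $\gcd(m,t_1)=3$, can be checked mechanically; a Python sketch with our transcription of the table:

```python
from math import gcd

def t1(m):
    """t_1 from the table in Proposition [pro:=0] (our transcription)."""
    k = (m - 1) // 2
    return {0: 3*k, 1: k - 1, 2: 7*k + 2, 3: 5*k + 1}[k % 4] // 4

# gcd(m, t_1) = 3, as used in the proof to build the (m/3 - 1)-paths Q_i'.
for m in range(15, 50, 6):          # odd m ≡ 0 (mod 3): 15, 21, 27, ...
    assert gcd(m, t1(m)) == 3
```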
Adopt the notation and construction of resolution classes $R_0,\ldots,R_{m-1}$ from Construction \[cons\]. It can be verified that, since $m \equiv 0 \pmod{3}$, the $2m$ remaining arcs of mixed differences in ($\ast$) form three directed $\frac{2m}{3}$-cycles, which we write as a concatenation of directed paths of length $\frac{m}{3}-1$ and linking arcs as follows: $$\begin{aligned}
C_{(1)} &=& x_{0} y_{k+1} \ldots y_{-1} x_{0}= P_{1}^X x_{t_1} y_{s_1} P_{1}^Y y_{-1}x_0, \\
C_{(2)} &=& x_{1} y_{k+3} \ldots y_{1} x_{1}=P_{2}^X x_{t_2} y_{s_2} P_{2}^Y y_{1}x_1, \\
C_{(3)} &=& x_{2} y_{k+5} \ldots y_{3} x_{2}=P_{3}^X x_{t_3} y_{s_3} P_{3}^Y y_{3}x_2.\end{aligned}$$ It can be verified that, for each congruence class of $k$ modulo 4, the parameters $s_i,t_i$ (for $i=1,2,3$) have values as defined in the statement of the proposition.
We use the directed paths $P_{i}^X, P_{i}^Y$ (for $i=1,2,3$), together with 6 linking arcs of pure differences, to form the resolution class $R_{m}$: $$R_m=\{ P_{1}^X x_{t_1} x_1 P_{2}^X x_{t_2} x_2 P_{3}^X x_{t_3} x_0, P_{1}^Y y_{-1}y_{s_3} P_{3}^Y y_{3}y_{s_2} P_{2}^Y y_{1}y_{s_1} \}.$$ We have thus used the following linking arcs: $$\begin{aligned}
b^X_1 &=& (x_{t_1},x_1) \quad \mbox{ of pure left difference } d_1^X=1-t_1, \\
b^X_2 &=& (x_{t_2},x_2) \quad \mbox{ of pure left difference } d_1^X=1-t_1, \\
b^X_3 &=& (x_{t_3},x_0) \quad \mbox{ of pure left difference } d_2^X=-2-t_1, \\
b^Y_1 &=& (y_{1},y_{s_1}) \quad \mbox{ of pure right difference } d_1^Y=s_1-1, \\
b^Y_2 &=& (y_{3},y_{s_2}) \quad \mbox{ of pure right difference } d_1^Y=s_1-1, \\
b^Y_3 &=& (y_{-1},y_{s_3}) \quad \mbox{ of pure right difference } d_2^Y=s_1+5.\end{aligned}$$ Note that, in all cases, none of these differences are equal to $k+1$.
We have now used up all arcs of mixed differences except for the arcs $(x_{t_i}, y_{s_i})$ for $i=1,2,3$, and arcs $(y_{-1},x_0), (y_1,x_1),(y_3,x_2)$.
To form the resolution class $R_{m+1}$, we want to find three vertex-disjoint directed paths of appropriate lengths with sources $x_0, x_1, x_2$ and terminals $x_{t_1}, x_{t_2}, x_{t_3}$ using some of the remaining arcs in $D[X]$, and three vertex-disjoint directed paths with sources $y_{s_1},y_{s_2},y_{s_3}$ and terminals $y_{-1}, y_1, y_3$ using some of the remaining arcs in $D[Y]$; these paths, together with all the remaining arcs of mixed differences, will form two vertex-disjoint directed $m$-cycles.
It can be shown in each case that $\gcd(m,t_1)=3$, so the following are indeed directed $(\frac{m}{3}-1)$-paths in $D[X]$ with the required sources and terminals: $$\begin{aligned}
Q_1' &=& x_0 x_{-t_1} x_{-2t_1} \ldots x_{t_1}, \\
Q_2' &=& x_1 x_{1-t_1} x_{1-2t_1} \ldots x_{t_2}, \mbox{ and} \\
Q_3' &=& x_2 x_{2-t_1} x_{2-2t_1} \ldots x_{t_3}.\end{aligned}$$ Observe that these paths use all arcs of difference $d^X=-t_1$ except for arcs $c_1=(x_{t_1}, x_0)$, $c_2=(x_{t_2}, x_1)$, and $c_3=(x_{t_3}, x_2)$.
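The structure of these paths is easy to check mechanically. The following sketch uses hypothetical sample values $m=15$, $t_1=6$ (chosen only so that $\gcd(m,t_1)=3$; they are not parameter values from the proposition) to verify that the three sequences are pairwise vertex-disjoint, jointly cover ${\mathbb{Z}}_m$, and terminate at $x_{t_1}$, $x_{t_1+1}$, $x_{t_1+2}$, which is what the displayed paths force for $t_2$ and $t_3$:

```python
from math import gcd

def radial_paths(m, t1):
    """Directed paths Q_i' = x_j, x_{j-t1}, x_{j-2*t1}, ... of length m/3 - 1,
    starting at x_0, x_1, x_2 and using arcs of difference -t1 in Z_m."""
    assert m % 3 == 0 and gcd(m, t1) == 3
    L = m // 3  # vertices per path, i.e. length m/3 - 1
    return [[(j - i * t1) % m for i in range(L)] for j in (0, 1, 2)]

# hypothetical sample values, chosen only so that gcd(m, t1) = 3
m, t1 = 15, 6
paths = radial_paths(m, t1)
# pairwise vertex-disjoint and jointly covering Z_m ...
assert sorted(v for p in paths for v in p) == list(range(m))
# ... with terminals x_{t1}, x_{t1+1}, x_{t1+2}, i.e. x_{t_1}, x_{t_2}, x_{t_3}
assert [p[-1] for p in paths] == [t1, t1 + 1, t1 + 2]
```

The covering property reflects the fact that the cosets $\{0,1,2\}+\langle t_1\rangle$ partition ${\mathbb{Z}}_m$ when $\gcd(m,t_1)=3$.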
Now let $S^X,S^Y \subseteq {\mathbb{Z}}_m^\ast$ be two sets satisfying Assumptions ($X_1$)–($X_4$),($Y_1$)–($Y_4$) of the proposition. Furthermore, let $Q_1,Q_2,Q_3$ be the pairwise vertex-disjoint directed paths in $D[Y]$ whose existence is assured by Condition ($Y_4$), so that $$Q_1 \mbox{ is a directed } (y_{s_1},y_{-1}) \mbox{-path of length } \textstyle{\frac{2m}{3}-1},$$ $$Q_2 \mbox{ is a directed } (y_{s_2},y_3) \mbox{-path of length } q, \mbox{ for some } \textstyle{q \in \{ 1,\ldots,\frac{m}{3}-3 \}}, \mbox{ and}$$ $$Q_3 \mbox{ is a directed } (y_{s_3},y_1) \mbox{-path of length } \textstyle{\frac{m}{3}-2-q}.$$ We may then define our next resolution class as $$R_{m+1}=\{ Q_1'x_{t_1}y_{s_1} Q_1y_{-1}x_0,
Q_2'x_{t_2}y_{s_2}Q_2y_{3}x_2Q_3'x_{t_3}y_{s_3}Q_3y_{1}x_1 \}.$$ Now, all arcs of mixed differences have been used in resolution classes $R_1,\ldots,R_{m+1}$. In addition, we have also used up the following arcs in $D[X]$:
- all arcs of difference $k+1$;
- arcs $b_i^X$, for $i=1,2,3$ (of differences $1-t_1$ and $-2-t_1$); and
- all arcs of difference $-t_1$ except $c_i$, for $i=1,2,3$.
Assumptions ($X_1$)–($X_4$) now guarantee that the remaining subdigraph of $D[X]$ admits a $\vec{C}_m$-D. In $D[Y]$, however, we have used up:
- all arcs of difference $k+1$;
- arcs $b_i^Y$, for $i=1,2,3$ (of differences $s_1-1$ and $s_1+5$); and
- arcs used in the directed paths $Q_i$, for $i=1,2,3$.
Assumptions ($Y_1$)–($Y_4$) now guarantee that the remaining subdigraph of $D[Y]$ admits a $\vec{C}_m$-D. The directed $m$-cycles from the remaining subdigraphs of $D[X]$ and $D[Y]$ can be arranged into resolution classes that complete our R$\vec{C}_m$-D of $K_{2m}^\ast$.
\[the:main\]. Let $m$ be an odd integer, $5 \le m \le 49$. Then $K_{2m}^\ast$ admits a R$\vec{C}_m$-D by Lemma \[lem:5-10\] if $m=5$, by Lemma \[lem:9\] if $m=9$, and by Lemma \[lem:11\] if $m=11$. The computational results in Appendix A show that the conditions of Proposition \[pro:<>0\] hold for all odd $m$, $7 \le m \le 49$, $m \not\equiv 0 \pmod{3}$, $m \ne 11$; hence $K_{2m}^\ast$ admits a R$\vec{C}_m$-D for all such $m$. Finally, Appendix B shows that the conditions of Proposition \[pro:=0\] hold for all odd $m$, $15 \le m \le 45$, $m \equiv 0 \pmod{3}$; hence $K_{2m}^\ast$ admits a R$\vec{C}_m$-D for all such $m$ as well. Therefore, the statement holds for all odd $m$, $5 \le m \le 49$.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The authors gratefully acknowledge support by the Natural Sciences and Engineering Research Council of Canada. Special thanks to Patrick Niesink and Ryan Murray, who wrote most of the code used to obtain the computational results, and to Aras Erzurumluoğlu for careful proofreading of the manuscript.
[99]{}
P. Adams, D. Bryant, Resolvable directed cycle systems of all indices for cycle length 3 and 4, unpublished.
B. Alspach, R. Häggkvist, Some observations on the Oberwolfach problem, [*J. Graph Theory*]{} [**9**]{} (1985), 177–187.
B. Alspach, P. J. Schellenberg, D. R. Stinson, D. Wagner, The Oberwolfach problem and factors of uniform odd length cycles, [*J. Combin. Theory Ser. A*]{} [**52**]{} (1989), 20–43.
F. E. Bennett, X. Zhang, Resolvable Mendelsohn designs with block size $4$, [*Aequationes Math.*]{} [**40**]{} (1990), 248–260.
J.-C. Bermond, O. Favaron, M. Mahéo, Hamiltonian decomposition of Cayley graphs of degree 4, [*J. Combin. Theory Ser. B*]{} [**46**]{} (1989), 142–153.
J.-C. Bermond, A. Germa, D. Sotteau, Resolvable decomposition of $K_n^\ast$, [*J. Combin. Theory Ser. A*]{} [**26**]{} (1979), 179–185.
A. Burgess, M. Šajna, On the directed Oberwolfach problem with equal cycle lengths, [*Electron. J. Combin.*]{} [**21**]{} (2014), P1.15 (14 pages).
C. J. Colbourn, J. H. Dinitz (editors), [*Handbook of combinatorial designs,*]{} Chapman and Hall/CRC, Boca Raton, FL, 2007.
Computational results — Case $m \not\equiv 0 \pmod{3}$
======================================================
For each value of $m$ we give a set $S \subseteq {\mathbb{Z}}_m^\ast$ satisfying Conditions ($Y_1$) – ($Y_4$) of Proposition \[pro:<>0\] (if $m \ne 11$), or Conditions ($X_1$) – ($X_4$) from the proof of Lemma \[lem:11\] (if $m=11$). The required differences appear in bold type. In addition, we give a desired decomposition into directed $m$-cycles $C_i$ and vertex-disjoint directed paths $Q_1$ and $Q_2$. If $m$ is not prime, we also give a partition of ${\mathbb{Z}}_m^\ast-(S \cup \{ \frac{m+1}{2} \})$ satisfying the assumptions of Lemma \[lem:BerFavMah\].
- $m=7$\
$S =\{ 2,\mathbf{3,6} \}$\
$Q_1 = (5, 0, 2)$\
$Q_2 = (3, 6, 1, 4)$\
$C_1 =(0,3,5,4,6,2,1,0)$\
$C_2 =(0,6,5,1,3,2,4,0)$
- $m=11$\
$S=\{ \mathbf{3},4,9, \mathbf{10} \}$\
$Q_1=(7, 10, 9, 2, 0, 3, 1, 4)$\
$Q_2=(5, 8, 6)$\
$C_1=(0, 10, 2, 6, 9, 8, 1, 5, 4, 3, 7, 0)$\
$C_2=(0, 4, 8, 7, 6, 10, 3, 2, 5, 9, 1, 0)$\
$C_3=(0, 9, 7, 5, 3, 6, 4, 2, 1, 10, 8, 0)$
- $m=13$\
$S=\{ \mathbf{1}, 2, 3, \mathbf{4} \}$\
$Q_1=(11, 1, 5, 7, 10)$\
$Q_2=(6, 9, 0, 3, 4, 8, 12, 2)$\
$C_1=(0, 1, 2, 4, 5, 6, 8, 9, 10, 12, 3, 7, 11, 0)$\
$C_2=(0, 4, 7, 8, 11, 2, 5, 9, 12, 1, 3, 6, 10, 0)$\
$C_3=(0, 2, 3, 5, 8, 10, 1, 4, 6, 7, 9, 11, 12, 0)$
- $m=17$\
$S=\{ 1,\mathbf{2}, 3, \mathbf{5} \}$\
$Q_1=(15, 16, 1, 4, 7, 9, 14, 2, 5, 6, 11, 13)$\
$Q_2=(8, 10, 12, 0, 3)$\
$C_1=(0, 2, 4, 6, 8, 9, 12, 13, 14, 15, 1, 3, 5, 7, 10, 11, 16, 0)$\
$C_2=(0, 5, 8, 11, 14, 16, 2, 3, 4, 9, 10, 13, 1, 6, 7, 12, 15, 0)$\
$C_3=(0, 1, 2, 7, 8, 13, 16, 4, 5, 10, 15, 3, 6, 9, 11, 12, 14, 0)$
- $m=19$\
$S=\{ 2,\mathbf{12}, \mathbf{15}\}$\
$Q_1=(17, 0, 15, 11, 7, 3, 5)$\
$Q_2=(9, 2, 4, 6, 8, 10, 12, 14, 16, 18, 1, 13)$\
$C_1=(0, 12, 8, 4, 16, 9, 5, 1, 3, 18, 11, 13, 15, 17, 10, 6, 2, 14, 7, 0)$\
$C_2=(0, 2, 17, 13, 6, 18, 14, 10, 3, 15, 8, 1, 16, 12, 5, 7, 9, 11, 4, 0)$
- $m=23$\
$S=\{ 1,2,\mathbf{15, 18} \}$\
$Q_1=(21, 22, 17, 9, 1, 19, 20, 12, 7, 8, 10, 5, 0, 2, 4, 6)$\
$Q_2=(11, 3, 18, 13, 14, 15, 16)$\
$C_1=(0, 15, 7, 22, 14, 9, 4, 19, 11, 6, 1, 2, 3, 5, 20, 21, 16, 17, 18, 10, 12, 13, 8, 0)$\
$C_2=(0, 18, 19, 14, 16, 8, 9, 10, 11, 13, 15, 17, 12, 4, 5, 6, 7, 2, 20, 22, 1, 3, 21, 0)$\
$C_3=(0, 1, 16, 18, 20, 15, 10, 2, 17, 19, 21, 13, 5, 7, 9, 11, 12, 14, 6, 8, 3, 4, 22, 0)$
- $m=25$\
$S=\{ 1, 2, \mathbf{4, 7}\} $\
$Q_1=(23, 2, 6, 10, 14, 15, 16, 17, 19)$\
$Q_2=(12, 13, 20, 21, 0, 7, 8, 9, 11, 18, 22, 24, 1, 3, 4, 5)$\
$C_1=(0, 4, 8, 12, 16, 20, 24, 6, 7, 11, 15, 19, 1, 2, 3, 10, 17, 21, 22, 23, 5, 9, 13, 14, 18, 0)$\
$C_2=(0, 1, 5, 6, 8, 15, 22, 4, 11, 13, 17, 24, 3, 7, 14, 16, 18, 20, 2, 9, 10, 12, 19, 21, 23, 0)$\
$C_3=(0, 2, 4, 6, 13, 15, 17, 18, 19, 20, 22, 1, 8, 10, 11, 12, 14, 21, 3, 5, 7, 9, 16, 23, 24, 0)$\
Partition contains: $\{\pm 3,\pm 5\}$, $\{\pm 6,\pm 10\}$, and $\{ e \}$ for each remaining difference $e$
- $m=29$\
$S=\{ 1, 2, \mathbf{5, 8}\} $\
$Q_1=(27, 28, 7, 8, 9, 11, 16, 21, 23, 25, 26, 5, 10, 12, 13, 15, 17, 18, 20, 22)$\
$Q_2=(14, 19, 24, 0, 1, 2, 3, 4, 6)$\
$C_1=(0, 5, 13, 18, 23, 28, 4, 9, 14, 22, 1, 6, 7, 15, 20, 25, 27, 3, 8, 16, 24, 26, 2, 10, 11, 12, 17,$ $ 19, 21, 0)$\
$C_2=(0, 8, 10, 15, 23, 2, 7, 9, 17, 22, 24, 25, 4, 12, 20, 21, 26, 28, 1, 3, 5, 6, 11, 13, 14, 16, 18,$ $ 19, 27, 0)$\
$C_3=(0, 2, 4, 5, 7, 12, 14, 15, 16, 17, 25, 1, 9, 10, 18, 26, 27, 6, 8, 13, 21, 22, 23, 24, 3, 11, 19,$ $ 20, 28, 0)$
- $m=31$\
$S=\{1, \mathbf{21, 24} \}$\
$Q_1=(29, 19, 9, 30, 23, 13, 14, 4, 28, 18, 8)$\
$Q_2=(15, 16, 17, 10, 11, 12, 5, 6, 7, 0, 1, 2, 3, 24, 25, 26, 27, 20, 21, 22)$\
$C_1=(0, 21, 11, 4, 5, 26, 16, 9, 10, 3, 27, 17, 7, 28, 29, 22, 12, 2, 23, 24, 14, 15, 8, 1, 25, 18, 19,$ $ 20, 13, 6, 30, 0)$\
$C_2=(0, 24, 17, 18, 11, 1, 22, 23, 16, 6, 27, 28, 21, 14, 7, 8, 9, 2, 26, 19, 12, 13, 3, 4, 25, 15, 5,$ $ 29, 30, 20, 10, 0)$
- $m=35$\
$S=\{ 1, \mathbf{24, 27} \}$\
$Q_1=(33, 22, 14, 3, 27, 16, 5, 32, 21, 13, 2, 29, 18, 10, 34, 26, 15, 7, 8, 0, 1, 28, 20, 9)$\
$Q_2=(17, 6, 30, 19, 11, 12, 4, 31, 23, 24, 25)$\
$C_1=(0, 24, 16, 8, 9, 1, 25, 26, 27, 28, 17, 18, 19, 20, 12, 13, 14, 6, 7, 31, 32, 33, 34, 23, 15, 4,$ $ 5, 29, 21, 10, 2, 3, 30, 22, 11, 0)$\
$C_2=(0, 27, 19, 8, 32, 24, 13, 5, 6, 33, 25, 14, 15, 16, 17, 9, 10, 11, 3, 4, 28, 29, 30, 31, 20, 21,$ $ 22, 23, 12, 1, 2, 26, 18, 7, 34, 0)$\
Partition contains: $\{\pm 5,\pm 7\}$, $\{\pm 10,\pm 14\}$, $\{\pm 15,\pm 2\}$, and $\{ e \}$ for each remaining difference $e$
- $m=37$\
$S=\{ 1, \mathbf{7, 10} \}$\
$Q_1=(35, 36, 0, 1, 11, 12, 13, 14, 15, 25, 26, 27, 28)$\
$Q_2=(18, 19, 29, 2, 9, 10, 20, 21, 22, 23, 30, 3, 4, 5, 6, 16, 17, 24, 31, 32, 33, 34, 7, 8)$\
$C_1=(0, 7, 14, 21, 28, 1, 8, 15, 22, 29, 36, 9, 16, 26, 33, 6, 13, 23, 24, 34, 35, 5, 12, 19, 20, 30,$ $31, 4, 11, 18, 25, 32, 2, 3, 10, 17, 27, 0)$\
$C_2=(0, 10, 11, 21, 31, 1, 2, 12, 22, 32, 5, 15, 16, 23, 33, 3, 13, 20, 27, 34, 4, 14, 24, 25, 35, 8, 9,$ $ 19, 26, 36, 6, 7, 17, 18, 28, 29, 30, 0)$
- $m=41$\
$S=\{ 1, \mathbf{8, 11} \}$\
$Q_1=(39, 6, 14, 15, 23, 24, 32, 40, 7, 18, 26, 34, 1, 2, 10, 11, 19, 27, 35, 36, 3, 4, 12, 13, 21, 22,$ $ 30, 31)$\
$Q_2=(20, 28, 29, 37, 38, 5, 16, 17, 25, 33, 0, 8, 9)$\
$C_1=(0, 11, 12, 23, 31, 1, 9, 17, 28, 36, 6, 7, 8, 19, 20, 21, 32, 2, 3, 14, 22, 33, 34, 35, 5, 13, 24,$ $ 25, 26, 37, 4, 15, 16, 27, 38, 39, 40, 10, 18, 29, 30, 0)$\
$C_2=(0, 1, 12, 20, 31, 32, 33, 3, 11, 22, 23, 34, 4, 5, 6, 17, 18, 19, 30, 38, 8, 16, 24, 35, 2, 13, 14,$ $ 25,36, 37, 7, 15, 26, 27, 28, 39, 9, 10, 21, 29, 40, 0)$
- $m=43$\
$S=\{ 1, \mathbf{30, 33} \}$\
$Q_1=(41, 28, 15, 2, 32, 19, 6, 36, 23, 10, 0, 33, 34, 24, 11)$\
$Q_2=(21, 22, 12, 13, 3, 4, 5, 35, 25, 26, 16, 17, 18, 8, 9, 42, 29, 30, 20, 7, 37, 38, 39, 40, 27, 14,$ $ 1, 31)$\
$C_1=(0, 30, 31, 32, 33, 20, 10, 11, 1, 34, 21, 8, 38, 28, 18, 19, 9, 39, 29, 16, 3, 36, 26, 27, 17, 4,$ $ 37, 24, 25, 12, 2, 35, 22, 23, 13, 14, 15, 5, 6, 7, 40, 41, 42, 0)$\
$C_2=(0, 1, 2, 3, 33, 23, 24, 14, 4, 34, 35, 36, 37, 27, 28, 29, 19, 20, 21, 11, 12, 42, 32, 22, 9, 10,$ $ 40, 30, 17, 7, 8, 41, 31, 18, 5, 38, 25, 15, 16, 6, 39, 26, 13, 0)$
- $m=47$\
$S=\{ 1, \mathbf{33, 36} \}$\
$Q_1=(45, 46, 35, 24, 13, 2, 3, 4, 40, 29, 18, 19, 5, 41, 27, 16, 17, 6, 7, 43, 32, 33, 22, 8, 44, 30,$ $31, 20, 21, 10, 11, 12)$\
$Q_2=(23, 9, 42, 28, 14, 0, 36, 37, 38, 39, 25, 26, 15, 1, 34)$\
$C_1=(0, 33, 34, 20, 6, 42, 43, 29, 15, 16, 2, 35, 36, 22, 23, 12, 1, 37, 26, 27, 13, 14, 3, 39, 28, 17,$ $ 18, 4, 5, 38, 24, 25, 11, 44, 45, 31, 32, 21, 7, 40, 41, 30, 19, 8, 9, 10, 46, 0)$\
$C_2=(0, 1, 2, 38, 27, 28, 29, 30, 16, 5, 6, 39, 40, 26, 12, 13, 46, 32, 18, 7, 8, 41, 42, 31, 17, 3, 36,$ $ 25, 14, 15, 4, 37, 23, 24, 10, 43, 44, 33, 19, 20, 9, 45, 34, 35, 21, 22, 11, 0)$
- $m=49$\
$S=\{ 2, \mathbf{10, 13} \}$\
$Q_1=(47, 8, 18, 28, 38, 48, 9, 19, 21, 31, 44, 46, 10, 23, 25, 35, 37)$\
$Q_2=(24, 34, 36, 0, 2, 12, 22, 32, 45, 6, 16, 26, 39, 41, 5, 7, 20, 33, 43, 4, 14, 27, 29, 42, 3, 13,$ $15, 17, 30, 40, 1, 11)$\
$C_1=(0, 10, 20, 30, 43, 7, 9, 22, 35, 45, 47, 11, 13, 23, 36, 38, 40, 4, 17, 19, 32, 42, 6, 8, 21, 34,$ $ 44, 5, 18, 31, 33, 46, 48, 12, 14, 24, 26, 28, 41, 2, 15, 25, 27, 37, 1, 3, 16, 29, 39, 0)$\
$C_2=(0, 13, 26, 36, 46, 7, 17, 27, 40, 42, 44, 8, 10, 12, 25, 38, 2, 4, 6, 19, 29, 31, 41, 43, 45, 9,$ $11, 21, 23, 33, 35, 48, 1, 14, 16, 18, 20, 22, 24, 37, 39, 3, 5, 15, 28, 30, 32, 34, 47, 0)$\
Partition contains: $\{\pm 7,\pm 1\}$, $\{\pm 14,\pm 3\}$, $\{\pm 21,\pm 4\}$, and $\{ e \}$ for each remaining difference $e$
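The entries above lend themselves to mechanical verification. As a minimal illustration, the following sketch checks the $m=7$ data structurally (it does not verify Conditions ($Y_1$)–($Y_4$) themselves, only that the listed sequences are directed $m$-cycles and vertex-disjoint directed paths):

```python
# structural sanity check of the m = 7 entry above
m = 7
Q1, Q2 = (5, 0, 2), (3, 6, 1, 4)
C1 = (0, 3, 5, 4, 6, 2, 1, 0)
C2 = (0, 6, 5, 1, 3, 2, 4, 0)

def is_directed_m_cycle(C, m):
    # a closed walk on Z_m that visits every vertex exactly once
    return C[0] == C[-1] and len(C) == m + 1 and sorted(C[:-1]) == list(range(m))

assert is_directed_m_cycle(C1, m) and is_directed_m_cycle(C2, m)
assert set(Q1).isdisjoint(Q2)   # the two directed paths are vertex-disjoint
```

The same check applies verbatim to each of the remaining entries by substituting the listed cycles and paths.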
Computational results — Case $m \equiv 0 \pmod{3}$
==================================================
For each value of $m$ we give sets $S^X,S^Y \subseteq {\mathbb{Z}}_m^\ast$ satisfying Conditions ($X_1$) – ($X_4$), ($Y_1$) – ($Y_4$) of Proposition \[pro:=0\] (if $m \ge 15$), or from the proof of Lemma \[lem:9\] (if $m=9$). The required differences appear in bold type. In addition, we give a desired decomposition of a subgraph of $D[X]$ into directed $m$-cycles $C_i'$ and (for $m=9$ only) pairwise vertex-disjoint directed paths $Q_i'$, and a desired decomposition of a subgraph of $D[Y]$ into directed $m$-cycles $C_i$ and pairwise vertex-disjoint directed paths $Q_i$. We also give partitions of ${\mathbb{Z}}_m^\ast-(S^X \cup \{ \frac{m+1}{2} \})$ and ${\mathbb{Z}}_m^\ast-(S^Y \cup \{ \frac{m+1}{2} \})$ satisfying the assumptions of Lemma \[lem:BerFavMah\].
- $m=9$\
$S^X=\{ 1,2,3,\mathbf{4}, 6,\mathbf{7} \}$\
$Q_1'=(1, 3)$\
$Q_2'=(2, 4)$\
$Q_3'=(0, 6, 7, 8, 5)$\
$C_1'=(0, 4, 8, 3, 7, 5, 6, 1, 2, 0)$\
$C_2'=(0, 7, 2, 6, 8, 1, 4, 5, 3, 0)$\
$C_3'=(0, 3, 6, 4, 7, 1, 5, 2, 8, 0)$\
$C_4'=(0, 1, 8, 2, 3, 5, 7, 4, 6, 0)$\
$C_5'=(0, 2, 5, 8, 6, 3, 4, 1, 7, 0)$\
Partition contains: $\{ 8 \}$
$S^Y=\{ \mathbf{1}, 3,4,6,\mathbf{7},8 \}$\
$Q_1=(2, 3)$\
$Q_2=(4, 7, 1)$\
$Q_3=(6, 5, 0, 8)$\
$C_1=(0, 1, 8, 2, 5, 6, 7, 4, 3, 0)$\
$C_2=(0, 7, 8, 5, 3, 1, 4, 2, 6, 0)$\
$C_3=(0, 3, 6, 4, 1, 7, 5, 2, 8, 0)$\
$C_4=(0, 6, 1, 5, 4, 8, 3, 7, 2, 0)$\
$C_5=(0, 4, 5, 8, 7, 6, 3, 2, 1, 0)$\
Partition contains: $\{ 2 \}$
- $m=15$\
$S^X=\{ \mathbf{4, 7}, 9\}$\
$C_1'=(0, 4, 13, 7, 11, 5, 9, 3, 12, 1, 10, 14, 8, 2, 6, 0)$\
$C_2'=(0, 7, 1, 8, 12, 6, 13, 5, 14, 3, 10, 4, 11, 2, 9, 0)$\
$C_3'=(0, 9, 13, 2, 11, 3, 7, 14, 6, 10, 1, 5, 12, 4, 8, 0)$\
Partition contains: $\{\pm 3,\pm 5\}$, and $\{ e \}$ for each remaining difference $e$
$S^Y=\{ \mathbf{1}, 5, 6, 9, \mathbf{10} \}$\
$Q_1=(11, 6, 7, 12, 2, 8, 9, 4, 5, 14)$\
$Q_2=(13, 3)$\
$Q_3=(0, 10, 1)$\
$C_1=(0, 1, 2, 12, 7, 13, 8, 3, 4, 14, 9, 10, 11, 5, 6, 0)$\
$C_2=(0, 5, 11, 2, 7, 1, 6, 12, 3, 9, 14, 8, 13, 4, 10, 0)$\
$C_3=(0, 6, 11, 1, 7, 8, 2, 3, 12, 13, 14, 5, 10, 4, 9, 0)$\
$C_4=(0, 9, 3, 8, 14, 4, 13, 7, 2, 11, 12, 6, 1, 10, 5, 0)$\
Partition contains: $\{\pm 3,\pm 2\}$, and $\{ e \}$ for each remaining difference $e$
- $m=21$\
$S^X=\{ \mathbf{1, 4}, 18\}$\
$C_1'=(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 19, 16, 20, 17, 18, 0)$\
$C_2'=(0, 4, 8, 12, 9, 6, 10, 14, 18, 19, 1, 5, 2, 20, 3, 7, 11, 15, 16, 13, 17, 0)$\
$C_3'=(0, 18, 15, 12, 16, 17, 14, 11, 8, 5, 9, 13, 10, 7, 4, 1, 19, 20, 2, 6, 3, 0)$\
Partition contains: $\{\pm 6,\pm 7\}$, $\{\pm 9,\pm 2\}$, and $\{ e \}$ for each remaining difference $e$
$S^Y=\{ 3,\mathbf{4}, \mathbf{10}, 13, 18\}$\
$Q_1=(5, 15, 19, 2, 12, 16, 13, 17, 0, 4, 8, 18, 10, 20)$\
$Q_2=(7, 11, 14, 6, 3)$\
$Q_3=(9, 1)$\
$C_1=(0, 10, 14, 18, 1, 11, 15, 4, 7, 17, 6, 19, 8, 12, 9, 13, 16, 5, 2, 20, 3, 0)$\
$C_2=(0, 13, 2, 15, 12, 4, 1, 14, 11, 3, 6, 10, 7, 20, 17, 9, 19, 16, 8, 5, 18, 0)$\
$C_3=(0, 3, 16, 19, 1, 4, 14, 17, 20, 2, 6, 9, 12, 15, 18, 7, 10, 13, 5, 8, 11, 0)$\
$C_4=(0, 18, 15, 7, 4, 17, 14, 3, 13, 10, 2, 5, 9, 6, 16, 20, 12, 1, 19, 11, 8, 0)$\
Partition contains: $\{\pm 6,\pm 7\}$, $\{\pm 9,\pm 2\}$, and $\{ e \}$ for each remaining difference $e$
- $m=27$\
$S^X=\{ 3, \mathbf{22, 25} \}$\
$C_1'=(0, 25, 23, 21, 19, 17, 15, 13, 11, 6, 4, 1, 26, 2, 24, 22, 20, 18, 16, 14, 9, 12, 7, 10, 5, 8, 3, 0)$\
$C_2'=(0, 22, 25, 20, 15, 18, 13, 16, 19, 14, 17, 12, 10, 8, 11, 9, 4, 7, 2, 5, 3, 6, 1, 23, 26, 21, 24, 0)$\
$C_3'=(0, 3, 25, 1, 4, 26, 24, 19, 22, 17, 20, 23, 18, 21, 16, 11, 14, 12, 15, 10, 13, 8, 6, 9, 7, 5, 2, 0)$\
Partition contains: $\{\pm 6,\pm 1\}$, $\{\pm 9,\pm 4\}$, $\{\pm 12,\pm 7\}$, and $\{ e \}$ for each remaining difference $e$
$S^Y=\{ 3,4,\mathbf{19}, 24, \mathbf{25} \}$\
$Q_1=(20, 12, 4, 2, 5, 9, 13, 17, 21, 19, 16, 8, 11, 15, 18, 10, 7, 26)$\
$Q_2=(22, 14, 6, 25, 23, 0, 3)$\
$Q_3=(24, 1)$\
$C_1=(0, 19, 11, 3, 1, 26, 18, 16, 14, 12, 15, 7, 4, 23, 20, 24, 22, 25, 17, 9, 6, 10, 2, 21, 13, 5, 8, 0)$\
$C_2=(0, 25, 2, 6, 4, 7, 11, 8, 12, 9, 1, 5, 24, 16, 13, 10, 14, 17, 20, 23, 21, 18, 15, 19, 22, 26, 3, 0)$\
$C_3=(0, 4, 1, 25, 22, 19, 17, 14, 11, 9, 12, 10, 8, 6, 3, 7, 5, 2, 26, 23, 15, 13, 16, 20, 18, 21, 24, 0)$\
$C_4=(0, 24, 21, 25, 1, 4, 8, 5, 3, 6, 9, 7, 10, 13, 11, 14, 18, 22, 20, 17, 15, 12, 16, 19, 23, 26, 2, 0)$\
Partition contains: $\{\pm 6,\pm 1\}$, $\{\pm 9,\pm 5\}$, $\{\pm 12,\pm 7\}$, and $\{ e \}$ for each remaining difference $e$
- $m=33$\
$S^X=\{ 11, 12, \mathbf{19, 22}\}$\
$C_1'=(0, 19, 5, 24, 10, 29, 15, 1, 20, 6, 28, 14, 25, 11, 30, 8, 27, 16, 2, 13, 32, 21, 7, 18, 4, 26, 12,$ $ 23, 9, 31, 17, 3, 22, 0)$\
$C_2'=(0, 22, 1, 12, 31, 20, 9, 28, 6, 17, 29, 18, 7, 19, 8, 30, 16, 5, 27, 13, 24, 3, 25, 14, 26, 15, 4,$ $ 23, 2, 21, 10, 32, 11, 0)$\
$C_3'=(0, 11, 22, 8, 19, 30, 9, 20, 31, 10, 21, 32, 18, 29, 7, 26, 4, 15, 27, 5, 16, 28, 17, 6, 25, 3, 14,$ $ 2, 24, 13, 1, 23, 12, 0)$\
$C_4'=(0, 12, 24, 2, 14, 3, 15, 26, 5, 17, 28, 7, 29, 8, 20, 32, 10, 22, 11, 23, 1, 13, 25, 4, 16, 27, 6,$ $ 18, 30, 19, 31, 9, 21, 0)$\
Partition contains: $\{\pm 3,\pm 1\}$, $\{\pm 6,\pm 2\}$, $\{\pm 9,\pm 4\}$, $\{\pm 15,\pm 5\}$, and $\{ e \}$ for each remaining difference $e$
$S^Y=\{ 1, \mathbf{7, 13}, 26 \}$\
$Q_1=(8, 21, 14, 27, 28, 2, 15, 16, 29, 9, 22, 23, 24, 17, 30, 4, 11, 18, 25, 5, 31, 32)$\
$Q_2=(12, 19, 20, 13, 26, 6, 7, 0, 1)$\
$Q_3=(10, 3)$\
$C_1=(0, 7, 20, 27, 1, 14, 21, 28, 8, 15, 22, 29, 30, 23, 16, 9, 2, 3, 4, 17, 10, 11, 24, 31, 5, 12, 13,$ $ 6, 32, 25, 18, 19, 26, 0)$\
$C_2=(0, 13, 14, 7, 8, 1, 2, 9, 10, 17, 18, 11, 12, 25, 26, 27, 20, 21, 22, 15, 28, 29, 3, 16, 23, 30,$ $ 31, 24, 4, 5, 6, 19, 32, 0)$\
$C_3=(0, 26, 19, 12, 5, 18, 31, 11, 4, 30, 10, 23, 3, 29, 22, 2, 28, 21, 1, 27, 7, 14, 15, 8, 9, 16, 17,$ $ 24, 25, 32, 6, 13, 20, 0)$\
Partition contains: $\{\pm 3,\pm 11\}$, $\{\pm 6,\pm 2\}$, $\{\pm 9,\pm 4\}$, $\{\pm 12,\pm 5\}$, $\{\pm 15,\pm 8\}$, and $\{ e \}$ for each remaining difference $e$
- $m=39$\
$S^X=\{ \mathbf{13, 16}, 24, 26\}$\
$C_1'=(0, 13, 26, 3, 16, 29, 6, 19, 32, 9, 22, 35, 12, 25, 38, 15, 28, 2, 18, 31, 5, 21, 34, 8, 24, 37,$ $ 11, 27, 1, 14, 30, 4, 17, 33, 7, 20, 36, 10, 23, 0)$\
$C_2'=(0, 16, 32, 6, 22, 38, 12, 28, 15, 2, 26, 13, 29, 3, 19, 35, 9, 25, 1, 17, 30, 7, 23, 10, 36, 21,$ $ 37, 14, 27, 4, 20, 33, 18, 5, 31, 8, 34, 11, 24, 0)$\
$C_3'=(0, 26, 11, 37, 24, 9, 35, 22, 7, 33, 20, 5, 18, 3, 29, 14, 38, 25, 10, 34, 19, 4, 30, 15, 31, 16,$ $ 1, 27, 12, 36, 23, 8, 21, 6, 32, 17, 2, 28, 13, 0)$\
$C_4'=(0, 24, 11, 35, 20, 7, 31, 18, 34, 21, 8, 32, 19, 6, 30, 17, 4, 28, 5, 29, 16, 3, 27, 14, 1, 25, 12,$ $ 38, 23, 36, 13, 37, 22, 9, 33, 10, 26, 2, 15, 0)$\
Partition contains: $\{\pm 3,\pm 1\}$, $\{\pm 6,\pm 2\}$, $\{\pm 9,\pm 4\}$, $\{\pm 12,\pm 5\}$, $\{\pm 18,\pm 7\}$, and $\{ e \}$ for each remaining difference $e$
$S^Y=\{ 2,7, \mathbf{28, 34} \}$\
$Q_1=(29, 18, 7, 14, 16, 23, 25, 27, 22, 17, 12, 19, 21, 10, 5, 0, 28, 35, 24, 13, 2, 30, 32, 34, 36,$ $38)$\
$Q_2=(31, 20, 9, 37, 26, 15, 4, 11, 6, 8, 3)$\
$Q_3=(33, 1)$\
$C_1=(0, 34, 23, 12, 1, 35, 3, 37, 5, 7, 2, 36, 4, 32, 21, 16, 18, 25, 20, 15, 17, 6, 13, 8, 10, 38, 27,$ $ 29, 31, 33, 22, 24, 26, 28, 30, 19, 14, 9, 11, 0)$\
$C_2=(0, 7, 9, 16, 11, 13, 15, 10, 17, 19, 8, 36, 25, 32, 27, 34, 2, 4, 38, 6, 1, 3, 5, 12, 14, 21, 28,$ $23, 18, 20, 22, 29, 24, 31, 26, 33, 35, 30, 37, 0)$\
$C_3=(0, 2, 9, 4, 6, 34, 29, 36, 31, 38, 1, 8, 15, 22, 11, 18, 13, 20, 27, 16, 5, 33, 28, 17, 24, 19, 26,$ $ 21, 23, 30, 25, 14, 3, 10, 12, 7, 35, 37, 32, 0)$\
Partition contains: $\{\pm 3,\pm 13\}$, $\{\pm 6,\pm 1\}$, $\{\pm 9,\pm 4\}$, $\{\pm 12,\pm 8\}$, $\{\pm 15,\pm 10\}$, $\{\pm 18,\pm 14\}$, and $\{ e \}$ for each remaining difference $e$
- $m=45$\
$S^X=\{ \mathbf{4, 7}, 39\}$\
$C_1'=(0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 43, 5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 2, 6, 10, 14, 18,$ $ 22, 26, 30, 34, 38, 42, 1, 40, 44, 3, 7, 11, 15, 19, 23, 27, 31, 35, 39, 0)$\
$C_2'=(0, 7, 1, 5, 12, 19, 26, 33, 27, 21, 28, 35, 42, 4, 11, 18, 25, 32, 39, 43, 2, 9, 16, 23, 30, 37,$ $44, 6, 13, 20, 14, 8, 15, 22, 29, 36, 40, 34, 41, 3, 10, 17, 24, 31, 38, 0)$\
$C_3'=(0, 39, 33, 40, 1, 8, 2, 41, 35, 29, 23, 17, 11, 5, 44, 38, 32, 26, 20, 27, 34, 28, 22, 16, 10, 4,$ $ 43, 37, 31, 25, 19, 13, 7, 14, 21, 15, 9, 3, 42, 36, 30, 24, 18, 12, 6, 0)$\
Partition contains: $\{\pm 3,\pm 5\}$, $\{\pm 9,\pm 10\}$, $\{\pm 12,\pm 20\}$, $\{\pm 15,\pm 1\}$, $\{\pm 18,\pm 2\}$, $\{\pm 21,\pm 8\}$, and $\{ e \}$ for each remaining difference $e$
$S^Y=\{ \mathbf{10, 16}, 31, 35\}$\
$Q_1=(11, 21, 31, 2, 12, 22, 38, 9, 19, 29, 39, 4, 14, 24, 40, 30, 20, 10, 0, 35, 25, 41, 6, 37, 27, 17,$ $ 7, 42, 28, 44)$\
$Q_2=(13, 23, 33, 43, 8, 18, 34, 5, 36, 26, 16, 32, 3)$\
$Q_3=(15, 1)$\
$C_1=(0, 10, 20, 30, 40, 5, 21, 37, 23, 13, 3, 38, 28, 18, 4, 39, 29, 15, 31, 41, 12, 2, 33, 19, 35, 6,$ $ 16, 26, 36, 1, 32, 22, 8, 24, 34, 44, 9, 25, 11, 42, 7, 17, 27, 43, 14, 0)$\
$C_2=(0, 16, 6, 22, 32, 42, 13, 29, 19, 9, 44, 30, 1, 17, 3, 34, 20, 36, 7, 38, 24, 10, 41, 31, 21, 11,$ $ 27, 37, 2, 18, 8, 43, 33, 23, 39, 25, 15, 5, 40, 26, 12, 28, 14, 4, 35, 0)$\
$C_3=(0, 31, 17, 33, 4, 20, 6, 41, 27, 13, 44, 34, 24, 14, 30, 16, 2, 37, 8, 39, 10, 26, 42, 32, 18, 28,$ $ 38, 3, 19, 5, 15, 25, 35, 21, 7, 23, 9, 40, 11, 1, 36, 22, 12, 43, 29, 0)$\
Partition contains: $\{\pm 3,\pm 5\}$, $\{\pm 6,\pm 20\}$, $\{\pm 9,\pm 1\}$, $\{\pm 12,\pm 2\}$, $\{\pm 15,\pm 4\}$, $\{\pm 18,\pm 7\}$, $\{\pm 21,\pm 8\}$, and $\{ e \}$ for each remaining difference $e$
[^1]: Corresponding author. Email: msajna@uottawa.ca. Mailing address: Department of Mathematics and Statistics, University of Ottawa, 585 King Edward Avenue, Ottawa, ON, K1N 6N5, Canada.
---
abstract: |
Let $\mathbf H^3$ be the hyperbolic space identified with the unit ball $\mathbf{B}^3 = \{x\in \mathbf{R}^3: |x| < 1\}$ with the Poincaré metric $d_h$, and assume that ${\mathcal{A}}(x_0,p,q):=\{x: p<d_h(x,x_0)< q\}\subset \mathbf H^3$ is a hyperbolic annulus with inner and outer radii $0<p<q<\infty$. We prove that if there exists a proper hyperbolic harmonic mapping between the annuli ${\mathcal{A}}(x_0,a,b)$ and ${\mathcal{A}}(y_0,\alpha,\beta)$ in the hyperbolic space $\mathbf
H^3$, then $\beta/\alpha>1+\psi(a,b)$, where $\psi$ is a positive function.
address: ' Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put b.b. 81000 Podgorica, Montenegro'
author:
- David Kalaj
title: 'On J. C. C. Nitsche’s type inequality for hyperbolic space $\mathbf{H}^3$'
---
2000 Mathematics Subject Classification: Primary 58E20.
Introduction
============
Background and statement of the main result
-------------------------------------------
In this paper, by $\mathbf{A}(a,b)$ we denote the annulus $\{x\in\mathbf{R}^n:a<|x|<b\}$ in the euclidean space $\mathbf{R}^n$, $n\ge 2$. The unit ball is defined by $\mathbf{B}^n = \{x\in \mathbf{R}^n: |x| < 1\}$ and the unit sphere is defined by ${S}^{n-1} = \{x\in \mathbf{R}^n: |x| = 1\}$ (here $x=(x_1,\dots,x_n)$ and $|x|=\sqrt{\sum_{i=1}^n x_i^2}$). Fifty years ago, J. C. C. Nitsche [@n], studying minimal surfaces and inspired by radial harmonic mappings between annuli, asked whether the existence of a euclidean harmonic mapping between the circular annuli $\mathbf{A}(a,1)$ and $\mathbf{A}(\alpha,1)$ in $\mathbf{R}^2$ is equivalent to the simple inequality $$\label{nit}\alpha\le
\frac{2a}{1+a^2}.$$ This question was recently answered in the affirmative by Iwaniec, Kovalev and Onninen in [@conj]. The Nitsche conjecture is deeply rooted in the theory of doubly connected minimal surfaces. Partial results had been obtained previously by Lyzzaik [@Al], Weitsman [@weit] and the author [@israel]. On the other hand, [@h] and [@kalaj] treat the same problem for harmonic mappings with respect to the hyperbolic metric in the two-dimensional hyperbolic space and the Riemann metric on the two-dimensional Riemann sphere, respectively. In [@kalaj; @jmaa] the author treated the three-dimensional case and obtained an inequality for euclidean harmonic mappings between annuli in $\mathbf{R}^3$. The $n$-dimensional generalization of the conjectured inequality is $$\alpha\le \frac{2a}{n-1+a^n}$$ and is inspired by the radial harmonic mappings $$\label{f}f(x)=\left(\frac{1-a^{n-1}\alpha}{1-a^n}+\frac{a^{n-1}\alpha-a^n}{(1-a^n)|x|^n}\right)x$$ between the annuli $\mathbf{A}(a,1)$ and $\mathbf{A}(\alpha,1)$ (cf. [@jmaa]). The latter conjectured inequality remains an open problem for $n\ge 3$.
The existence of harmonic mappings between certain annuli in the two-dimensional euclidean space is deeply related to the existence of minimizers of the Dirichlet integral, without boundary data, for differentiable mappings between annuli. Further, it is shown in [@ar] that the minimizer of the Dirichlet integral w.r.t. the euclidean metric over certain deformations between the annuli $\mathbf{A}(a,1)$ and $\mathbf{A}(\alpha,1)$ is a homeomorphism if and only if inequality \eqref{nit} is satisfied. In this case the minimizer is the harmonic diffeomorphism given by \eqref{f} (with $n=2$). See also [@dist; @gaven] for some generalizations of the previous problem to radial metrics. In the multidimensional setting (when $n\ge 3$), the minimization problem for the Dirichlet energy without boundary data is essentially different from the case $n=2$. It seems that in this case the minimization of the $n$-energy is more appropriate. Then the appropriate Euler-Lagrange equation reads as an $n$-harmonic equation. In [@io] Iwaniec and Onninen formulated a J. C. C. Nitsche type inequality for $n$-harmonic mappings and showed that under this inequality the absolute minimizers of the Dirichlet energy are radial $n$-harmonic mappings. One of the advantages of $n$-harmonic mappings for $n\ge 3$ is that they are invariant under Möbius transformations of the space. The class of $n$-harmonic mappings shares this property with hyperbolic harmonic mappings. Moreover, hyperbolic harmonic mappings are invariant under Möbius transformations of the domain as well as of the image domain.
In this paper we consider hyperbolic harmonic mappings between certain subsets of the hyperbolic space $\mathbf{H}^3$. Li and Tam in [@invent; @anals] established the existence and regularity of proper hyperbolic harmonic mappings from $\mathbf{H}^n$ onto $\mathbf{H}^m$ satisfying certain conditions on the ideal boundary $\mathbf{S}^{n-1}$.
The purpose of this paper is to study a J. C. C. Nitsche type problem for harmonic mappings between domains of the hyperbolic space $\mathbf{H}^3$.
In this paper we prove the following theorem:
\[peopeo\] Let ${\mathbf A}=\mathbf{A}(a,b),$ ${\mathbf A}'=\mathbf{A}(\alpha,\beta)\subset \mathbf{B}^3$, $0<a<b<1$, $0<\alpha<\beta< 1$ be spherical annuli endowed with the hyperbolic metric of the unit ball. If there exists a proper hyperbolic harmonic mapping $u$ of $\mathbf{A}$ onto $\mathbf{A}'$ then $$\label{ence}\frac{(1 - \alpha^2)^2}{4 \alpha (1 + \alpha^2)}\log\left[\frac{1+\beta}{1+\alpha}\frac{1-\alpha}{1-\beta}\right]
\geq \left(-1 + \frac{a}{b} +
\log\frac{b}{a}\right):\left(1+\log\frac{1-a^2}{1-b^2}\right).$$
Recall that a mapping $f : X \to Y$ between two topological spaces is proper if and only if the preimage of every compact set in $Y$ is compact in $X$.
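Both sides of \eqref{ence} are elementary expressions and can be evaluated directly; the colon in the display denotes division. The sketch below (illustrative parameter values only, not values from the text) shows how the inequality rules out target annuli that are too thin: with $a=0.2$, $b=0.8$, the pair $\alpha=0.5$, $\beta=0.51$ violates \eqref{ence}.

```python
from math import log

def lhs(alpha, beta):
    # left-hand side of the inequality in the theorem
    return (1 - alpha**2)**2 / (4 * alpha * (1 + alpha**2)) * \
        log((1 + beta) / (1 + alpha) * (1 - alpha) / (1 - beta))

def rhs(a, b):
    # right-hand side; the colon in the display denotes division
    return (-1 + a / b + log(b / a)) / (1 + log((1 - a**2) / (1 - b**2)))

# a thin target annulus fails the necessary condition, so no proper
# hyperbolic harmonic mapping of A(0.2, 0.8) onto A(0.5, 0.51) can exist
assert lhs(0.5, 0.51) < rhs(0.2, 0.8)
```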
Hyperbolic harmonic mappings
----------------------------
In general, if the metrics of two non-compact manifolds $M^m$ and $N^n$ are given locally by $$ds^2_M =
\sum_{i,j=1}^m g_{ij} dx^idx^j$$ and $$ds^2_N =
\sum_{\alpha,\beta=1}^n h_{\alpha\beta} du^\alpha du^\beta$$ respectively, then the energy-density function of a $C^1$ map $u: M\to N$ is defined by $$e(u)(x) = \sum_{i,j=1}^m\sum_{\alpha,\beta=1}^n
g^{ij}(x)h_{\alpha\beta}(u(x))\frac{\partial u^\alpha}{\partial
x^i}\frac{\partial u^\beta}{\partial x^j},$$ and the total energy of $u$ is given by $$E(u) =\int_Me(u)(x)dx.$$ The harmonic-map equation from $M$ into $N$, which is the Euler-Lagrange equation for critical points of the total energy functional, can be written as $$\Delta u^\alpha(x) =- \sum_{i,j=1}^m\sum_{\beta,\gamma=1}^n
\Gamma^{\alpha}_{\beta\gamma}(u(x))g^{ij}(x)\frac{\partial
u^\beta}{\partial x^i}\frac{\partial u^\gamma}{\partial x^j},$$ for all $1 \le \alpha \le n$, where the $\Gamma^{\alpha}_{\beta\gamma}$ are the Christoffel symbols of $N$. Here $\Delta$ is the Laplace-Beltrami operator on $M^m$. We refer to the monograph [@sy] for some important properties of harmonic mappings.
The hyperbolic space $\mathbf{H}^n$ is identified with $\mathbf{B}^n=\{x\in \mathbf R^n: |x|<1\}$ with the Poincaré metric tensor given by $$ds^2_{\mathbf{H}}(x)=\frac{4|dx|^2}{(1-|x|^2)^2}$$ which in polar coordinates can be written as $$ds^2_{\mathbf{H}}(x)=4\frac{d\rho^2+\rho^2\sum_{i,j=2}^ng_{ij}d\eta^id\eta^j}{(1-\rho^2)^2}$$ where $\rho^2=|x|^2=\sum_{i=1}^n x_i^2$ and $\sum_{i,j=2}^n
g_{ij}d\eta^i d\eta^j$ is the standard metric tensor on the unit sphere $S^{n-1}$.
Polar coordinates are natural to use when working with annuli. We associate with any point $\,x\in
\mathbf{B}^n_\circ\,$ a pair of polar coordinates $$(r,\omega )\,\in (0,1)\times \mathbf{S}^{n-1} \;\sim \;
\mathbf B^n_\circ$$ where $r = |x|\,$ is referred to as the radial distance and $\,\omega = \frac{x}{|x|} \,$ as the spherical coordinate of $\,x\,$. Obviously $\,x =r\, \omega\,$ and the volume element in polar coordinates reads as $\textrm dV(x)\,=\, r^{n-1}\, d r\, d
\mathcal{H}^{n-1}(\omega)$, where $\mathcal{H}^{n-1}$ is the $(n-1)$-dimensional Hausdorff surface measure.
If the polar coordinates of a point $x\in \mathbf{B}^n$ are $(r,\omega)$, then the geodesic polar coordinates are $(2\tanh^{-1}(|x|),\omega)$, and they will be of crucial importance for our approach. In [@invent] Li and Tam computed the coefficients of the tension field of a mapping $$u(x)=r(x) \Theta(x)=r(x)(\theta^1(x),\dots,
\theta^n(x)):\mathbf{X}\to \mathbf{B}^n, \ \ \mathbf{X}\subset
\mathbf{B}^m$$ in polar coordinates as follows $$\begin{split}\label{lita}
\tau(u)^1&=\frac{(1-\rho^2)^2}{4}\Delta_0r\\&\ \ \ \
+\frac{1}{4}\bigg(2(m-2)(1-\rho^2)\rho\frac{\partial r}{\partial
\rho}\\&\ \ \ \ \ +
\frac{r(1-\rho^2)^2(2|\nabla_0r|^2-(1+r^2)\sum_{p,q=2}^nh_{pq}\left<\nabla_0\theta^p,\nabla_0\theta^q\right>)}{1-r^2}\bigg)\end{split}$$ and $$\begin{split}\label{litapo}
\tau(u)^s&=\frac{(1-\rho^2)^2}{2}\left(\Delta_0
\theta^s+\sum_{p,q=2}^n\tilde\Gamma^s_{pq}\left<\nabla_0\theta^p,\nabla_0\theta^q\right>\right)\\&+
\frac{1-\rho^2}{2}\left((m-2)\rho\frac{\partial\theta^s}{\partial
\rho}+\frac{(1+r^2)(1-\rho^2)\left<\nabla_0r,\nabla_0\theta^s\right>}{r(1-r^2)}\right)\end{split}$$ for $s\ge 2$. Here $\Delta_0$ denotes the euclidean Laplacian, $\nabla_0$ denotes the euclidean gradient, and the $\tilde\Gamma^s_{pq}$ denote the Christoffel symbols with respect to the standard metric tensor of $S^{n-1}$. The mapping $u$ is harmonic if $\tau(u)^s=0$ for $1\le s\le n$. We will use only relation \eqref{lita}. This in particular means that the results of this paper can be formulated in a slightly more general setting.
The set of isometries of the hyperbolic space is a subgroup of the group of all Möbius transformations of the extended space $\overline{\mathbf{R}}^n$ onto itself, denoted by $\mathbf{Conf}(\mathbf{B}^n)=\mathbf{Isom}(\mathbf{H}^n)$. We refer to Ahlfors’ book [@al] for a detailed survey of this class of important mappings. We recall that the hyperbolic metric $d_h(x,y)$ of the unit ball $\mathbf{B}^n$ is defined by $$\tanh \frac{d_h(x,y)}{2}= \frac{|x-y|}{[x,y]},$$ where $[x,y]^2:=1+|x|^2|y|^2-2\left<x,y\right>$. In particular $d_h(0,x)=2\tanh^{-1}(|x|)$. Since harmonicity is an isometric invariant, we obtain the following proposition.
\[eko\] If $u:\mathbf{X}\to \mathbf{Y}$ is a harmonic mapping between the domains $\mathbf{X}$ and $\mathbf{Y}$ of the unit ball and $f,g\in
\mathbf{Conf}(\mathbf{B}^n)$, then $f\circ u\circ g$ is a harmonic mapping between $g(\mathbf{X})$ and $f(\mathbf{Y})$.
Let $x_0\in \mathbf{H}^3$ and assume that $0<p<q<\infty$. Then the set $\mathcal{A}(x_0,p,q):=\{x\in \mathbf{H}^3: p<d_h(x,x_0)<q\}$ is called a hyperbolic annulus. Moreover $\mathcal{A}(0,p,q)=\mathbf{A}(p',q')$, where $p'=\tanh\frac{p}{2}$ and $q'=\tanh\frac{q}{2}$. Having in mind this observation, together with Proposition \[eko\], we obtain the following reformulation of Theorem \[peopeo\].
\[peo\] Let ${\mathcal A}={\mathcal A}(x_0,a',b'),$ ${\mathcal A'}={\mathcal A}(y_0,\alpha',\beta')
\subset \mathbf{B}^3$, $0<a'<b'<\infty$, $0<\alpha'<\beta'< \infty$, be spherical annuli endowed with the hyperbolic metric of the unit ball. If there exists a proper hyperbolic harmonic mapping $u$ of $\mathcal{A}$ onto $\mathcal{A}'$, then $$\label{beprim}\frac{\beta'}{\alpha'}\ge1+
\frac{\sinh(2\alpha')}{\alpha'}\frac{\log[\coth\frac{a'}{2}
\tanh\frac{b'}{2}]+\coth\frac{b'}{2}
\tanh\frac{a'}{2}-1}{1+2
\log[\cosh\frac{b'}{2}\,\mathrm{sech}\,\frac{a'}{2}]}.$$
Since $\frac{\sinh(2\alpha')}{\alpha'}> 2$, from , we obtain the following $$\frac{\alpha'}{\beta'}< \varphi(a',b')<1,
\text{ (cf. \eqref{nit}, for $n=2$ and the euclidean metric) }.$$
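Both facts are easy to check numerically; the following sketch evaluates the right-hand side of \eqref{beprim} for sample parameters (the function name is our own, and $\varphi$ itself is not computed here):

```python
import math

def nitsche_lower_bound(a, b, alpha):
    """Right-hand side of the theorem's inequality for beta'/alpha';
    the arguments a, b, alpha stand for a', b', alpha'."""
    num = (math.log((1.0 / math.tanh(a / 2)) * math.tanh(b / 2))
           + (1.0 / math.tanh(b / 2)) * math.tanh(a / 2) - 1.0)
    den = 1.0 + 2.0 * math.log(math.cosh(b / 2) / math.cosh(a / 2))
    return 1.0 + (math.sinh(2 * alpha) / alpha) * num / den

# sinh(2t)/t > 2 for every t > 0 (spot check)
assert all(math.sinh(2 * t) / t > 2 for t in (0.01, 0.5, 1.0, 3.0))

# the lower bound for beta'/alpha' is strictly larger than 1 when a' < b'
assert nitsche_lower_bound(1.0, 2.0, 0.5) > 1.0
```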
Preliminary results
===================
For a matrix $A=\{a_{ij}\}_{i,j=1}^n$ we define the Hilbert-Schmidt norm and the geometric (operator) norm as follows: $$\|A\|_2=\sqrt{\sum_{i,j=1}^n
a_{ij}^2}\text{ and }\|A\|=\sup\{|A x|: |x|=1\},$$ respectively. Let $\lambda_1^2\le \dots \le \lambda_n^2$ be the eigenvalues of the matrix $A^T A$. Then $$\|A\|_2=\sqrt{\sum_{i=1}^n \lambda_{i}^2}\text{
and }\|A\|=\lambda_n$$ and $$\mathrm{det} A=\prod_{k=1}^n
\lambda_k.$$ We say that $A$ is $K$-quasiconformal, where $K\ge 1$, if $\lambda_n\le K \lambda_1$.
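For a diagonal matrix all of these quantities are explicit in the diagonal entries, which gives a convenient numerical illustration of the definitions (the helper below is ours):

```python
import math

def diag_matrix_data(d):
    """For a diagonal matrix A = diag(d) the eigenvalues of A^T A are d_i^2,
    so the norms and the quasiconformality constant are explicit."""
    lams = sorted(abs(x) for x in d)          # lambda_1 <= ... <= lambda_n
    hs = math.sqrt(sum(l * l for l in lams))  # ||A||_2 (Hilbert-Schmidt)
    op = lams[-1]                             # ||A|| = lambda_n
    det = 1.0
    for x in d:
        det *= x                              # det A = product of entries
    K = lams[-1] / lams[0] if lams[0] > 0 else math.inf
    return hs, op, det, K

hs, op, det, K = diag_matrix_data([1.0, 2.0, 2.0])  # A(x) = (x1, 2x2, 2x3)
assert abs(hs - 3.0) < 1e-12          # sqrt(1 + 4 + 4) = 3
assert op == 2.0 and det == 4.0 and K == 2.0
```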
\[extra\] The following sharp inequality holds: $$\label{main}|Ax_1\times \dots \times A x_{n-1}|\le
\left[\frac{K^2}{1+(n-1)K^2}\right]^{(n-1)/2}
\|A\|_2^{n-1}|x_1\times\dots\times x_{n-1}|,$$ where $1\le K\le \infty $ is the constant of quasiconformality of $A$. If $A$ is an orthogonal transformation, then in we have equality with $K=1$. If $A$ is singular, then $K=\infty$ and we make use of the convention $\frac{K^2}{1+(n-1)K^2}=\frac{1}{n-1}$.
Assume first that $A$ is a nonsingular matrix. Then $A$ is $K$-quasiconformal for some $K<\infty$. Let $x_i=\sum_{j=1}^n
x_{ij} e_j$, $i=1,\dots, n-1$. Then, by multilinearity, $$A x_1\times \dots\times Ax_{n-1}=\sum_{\sigma}
x_{1,\sigma_1}\dots x_{n-1,\sigma_{n-1}}\,
Ae_{\sigma_1}\times \dots\times Ae_{\sigma_{n-1}},$$ where the sum is taken over all injections $\sigma:\{1,\dots,n-1\}\to \{1,\dots,n\}$. It follows that $$\label{adj}A x_1\times \dots\times Ax_{n-1} =\tilde A( x_1\times
\dots\times x_{n-1}).$$ Here $\tilde A$ is the adjugate of $A$, which for a nonsingular matrix $A$ satisfies the relation $\tilde A=\det A \cdot (A^{-1})^T$. As $A$ is $K$-quasiconformal, $\tilde A$ is $K$-quasiconformal as well. Let $\lambda_1^2\le \dots
\le \lambda_n^2$ be the eigenvalues of the matrix $A^T A$. $A$ is $K$-quasiconformal if and only if $$\label{lam}\frac{\lambda_n}{\lambda_1}\le K.$$ From $\tilde A = \det A
\cdot (A^{-1})^T$, it follows that $$\tilde \lambda_k=\det A\cdot \frac{1}{\lambda_k} \text{ and }
\tilde\lambda_n\le \tilde\lambda_{n-1}\le \dots\le \tilde\lambda_1$$ and consequently $$\frac{\tilde \lambda_1}{\tilde\lambda_n}\le K.$$ From we obtain $$\label{adj1}|A x_1\times
\dots\times Ax_{n-1}| \le\|\tilde A \| \cdot |x_1\times \dots\times
x_{n-1}|.$$ Furthermore $$\label{adj2}\|\tilde A \|=\tilde \lambda _1=\frac{\det
A}{\lambda_1}=\prod_{k=2}^n \lambda_k.$$ On the other hand $$\label{A}\|A\|_2=\sqrt{\sum_{k=1}^n
\lambda_k^2}.$$ From the arithmetic-geometric mean (AM-GM) inequality we have $$\label{but}\begin{split}\frac{\prod_{k=2}^n
\lambda_k}{\left(\sqrt{\sum_{k=1}^n
\lambda_k^2}\right)^{n-1}}&\le\frac{1}{(n-1)^{(n-1)/2}}
\frac{\left(\sqrt{\sum_{k=2}^n
\lambda_k^2}\right)^{n-1}}{\left(\sqrt{\sum_{k=1}^n
\lambda_k^2}\right)^{n-1}}\\&=
\left(\frac{B}{(n-1)(B+\lambda_1^2)}\right)^{(n-1)/2},\end{split}$$ where $B=\sum_{k=2}^n \lambda_k^2$. Since $\lambda_k$ is an increasing sequence, from we have $$\label{ine}\lambda_1^2\ge \lambda^2_k/K^2, \ \ \ k=2,\dots, n.$$ Summing the inequalities we obtain $$\label{inea}\lambda^2_1\ge
\frac{B}{(n-1)K^2}.$$ From , , and we obtain $$\frac{\|\tilde
A\|}{\|A\|_2^{n-1}}\le
\left[\frac{K^2}{1+(n-1)K^2}\right]^{(n-1)/2}.$$ This, in view of , completes the proof of the inequality of the lemma. To show the sharpness of the inequality, take $A(x)=(x_1,Kx_2,\dots, Kx_n)$. Then $A$ is $K$-quasiconformal. Moreover $$|Ae_2\times \dots \times
A e_{n}|=K^{n-1}$$ and $$\left[\frac{K^2}{1+(n-1)K^2}\right]^{(n-1)/2}
\|A\|_2^{n-1}|e_2\times\dots\times e_{n}|=K^{n-1}.$$ Since the set of singular matrices is a nowhere dense, closed subset of $M_{n\times n}$, for a singular matrix $A$ there exists a sequence of positive real numbers $\epsilon_k$ converging to zero such that $A_k=A+\epsilon_k I$ is a nonsingular matrix, where $I$ is the identity matrix. Moreover, the constants of quasiconformality $K_k$ of $A_k$ tend to $\infty$. By applying the previous proof to $A_k$ and letting $k\to\infty$, we obtain the inequality for $K=\infty$. The inequality is attained for $A(x)=(0,x_2,\dots,x_n)$.
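The sharpness computation can be replayed numerically for $n=3$: with $A(x)=(x_1,Kx_2,Kx_3)$ one has equality in \eqref{main}, and the inequality can be spot-checked for generic vectors (an illustration only):

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def norm(u):
    return math.sqrt(sum(c * c for c in u))

K = 3.0
def A(x):                              # the extremal map of the lemma, n = 3
    return (x[0], K * x[1], K * x[2])

hs2 = 1.0 + 2.0 * K * K                # ||A||_2^2
factor = K * K / (1.0 + 2.0 * K * K)   # [K^2/(1+(n-1)K^2)]^{(n-1)/2} for n = 3

# equality case: x_1 = e_2, x_2 = e_3 gives both sides equal to K^2
lhs = norm(cross(A((0, 1, 0)), A((0, 0, 1))))
rhs = factor * hs2 * norm(cross((0, 1, 0), (0, 0, 1)))
assert abs(lhs - K * K) < 1e-12 and abs(lhs - rhs) < 1e-12

# the inequality for generic pairs of vectors
for x1, x2 in [((1, 2, 0.5), (0.3, -1, 2)), ((1, 0, 0), (0, 1, 1))]:
    assert norm(cross(A(x1), A(x2))) <= factor * hs2 * norm(cross(x1, x2)) + 1e-12
```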
\[onep\] Let $u$ be a $C^1$ surjection between the spherical rings $\mathbf{A}(a,b)$ and $\mathbf{A}(\alpha,\beta)$, and let $\Theta=(\theta^1,\dots,\theta^n)={u}/{|u|}$. Let $P^{n-1}$ be a closed $(n-1)$-dimensional hypersurface that separates the components of the complement $\mathbf{A}^C(a, b)$. Then $$\label{inequ}\int_{P^{n-1}}\|D\Theta\|_2^{n-1}d\mathcal{H}^{n-1}\geq
(n-1)^{\frac{n-1}{2}}\omega_{n-1},$$ and $$\label{inequsec}
\int_{\mathbf{A}(a,b)}\|D\Theta\|_2^{n-1}dV\geq
(n-1)^{\frac{n-1}{2}}(b-a)\omega_{n-1},$$ where $\omega_{n-1}$ denotes the measure of $S^{n-1}$ and $D\Theta=\{\theta^j_{x_i}\}_{i,j=1}^n$ is the differential matrix of $\Theta$. Moreover, $d\mathcal{H}^{n-1}$ is the $(n-1)$-dimensional Hausdorff surface measure and $dV$ is the volume element.
Let $K^{n-1}$ be an $(n-1)$-dimensional rectangle and let $g:K^{n-1}\to P^{n-1}$ be a parametrization of $P^{n-1}$. Then the function $\Theta \circ g$ is a differentiable surjection from $K^{n-1}$ onto the unit sphere $S^{n-1}$, and we have $$\int_{K^{n-1}}D_{\Theta\circ g}dV\geq \omega_{n-1}$$ (cf. [@israel p. 245]). According to Lemma \[extra\] (for $K=\infty$), we obtain $$\begin{split}{D_{\Theta\circ g}(x)}&={\left|D\Theta(g(x))\frac{\partial
g(x)}{\partial x_1}\times \dots \times D\Theta(g(x))\frac{\partial
g(x)}{\partial x_{n-1}}\right|}\\&\leq (n-1)^{\frac{1-n}{2}}
\|D\Theta(g(x))\|_2^{n-1}{D_g(x)}.\end{split}$$ Hence we obtain $$(n-1)^{\frac{n-1}{2}}\omega_{n-1}\leq
\int_{K^{n-1}}\|D\Theta(g(x))\|_2^{n-1}D_g(x)dV(x)=
\int_{P^{n-1}}\|D\Theta(\zeta)\|_2^{n-1}d\mathcal{H}^{n-1}(\zeta).$$ Thus we have proved (\[inequ\]). It follows that $$\int_{\mathbf{A}(a,b)}\|D\Theta\|_2^{n-1}dV=
\int_{a}^{b}\left(\int_{S^{n-1}(0,t)}\|D\Theta\|_2^{n-1}\,d\mathcal{H}^{n-1}\right)\,\mathrm{d}t\geq
(n-1)^{\frac{n-1}{2}}(b-a)\omega_{n-1}.$$ This completes the proof of the proposition.
The proof of the main result
========================
One of the key formulas follows from the following lemma.
Let $u(x)=r(x) \Theta(x):\mathbf{A}\to \mathbf{A}'$ be a hyperbolic harmonic mapping between the domains $\mathbf{A}$ and $\mathbf{A}'$ of the hyperbolic space $\mathbf{B}^n$ and assume that $R(x)=2\tanh^{-1}(r(x))$ and $\rho=|x|$. Then
$$\label{hyplap}\Delta_0 R+\frac{2(n-2)\rho}{(1-\rho^2)}\frac{\partial R}{\partial
\rho} =\frac{\sinh(2R)}{2}\|D \Theta\|_2^2,$$
where $\Delta_0$ and $\|D\Theta\|_2$ are the euclidean Laplacian and the Hilbert-Schmidt norm of the differential matrix, respectively.
Since $u$ is harmonic, from the equation $\tau(u)^1=0$, where $\tau(u)^1$ is defined in , we obtain $$\Delta_0 r+\left(2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial r}{\partial \rho} +
\frac{r(2|\nabla_0r|^2-(1+r^2)\sum_{p,q=2}^nh_{pq}\left<\nabla_0\theta^p,\nabla_0\theta^q\right>)}{1-r^2}\right)=0.$$ Let $g(q)=\tanh(q/2)$. Then $$\Delta_0 r= g''(R(x))|\nabla_0 R|^2 +
g'(R(x))\Delta_0 R$$ and $$|\nabla_0 r|^2=(g')^2|\nabla_0 R|^2.$$ Since $$\frac{2g}{1-g^2}=\sinh(R)=-\frac{g''}{{g'}^2}$$ it follows that $$\frac{1}{2} \mathrm{sech}^2\frac{R}{2}\Delta_0 R+(n-2) \mathrm{sech}^2\frac{R}{2}\frac{\rho}{1-\rho^2}\frac{\partial R}{\partial \rho}
=\frac{r((1+r^2)\sum_{p,q=2}^nh_{pq}\left<\nabla_0\theta^p,\nabla_0\theta^q\right>)}{1-r^2},$$ i.e. $$\Delta_0 R+2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial R}{\partial \rho}
=\frac{\sinh(2R)}{2}\sum_{p,q=2}^nh_{pq}\left<\nabla_0\theta^p,\nabla_0\theta^q\right>,$$ which can be written as $$\Delta_0 R+2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial R}{\partial \rho}
=\frac{\sinh(2R)}{2}\|D\Theta\|_2^2.$$
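The identities for $g(q)=\tanh(q/2)$ used above, namely $\frac{2g}{1-g^2}=\sinh R=-g''/(g')^2$, are easily confirmed numerically:

```python
import math

for R in (0.3, 1.0, 2.5):
    g = math.tanh(R / 2)
    gp = 0.5 / math.cosh(R / 2) ** 2                          # g'(R)
    gpp = -0.5 * math.tanh(R / 2) / math.cosh(R / 2) ** 2     # g''(R)
    assert abs(2 * g / (1 - g * g) - math.sinh(R)) < 1e-12
    assert abs(-gpp / gp ** 2 - math.sinh(R)) < 1e-9
```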
Let $\alpha'=2\tanh^{-1}\alpha$ and $\beta'=2\tanh^{-1}\beta$. Let $\varphi_k :[\alpha',\beta'] \to [\alpha',\beta'] $ be a sequence of nondecreasing functions, constant in some small neighborhood of $\alpha'$, for example in $[\alpha',\alpha'+(\beta'-\alpha')/k]$, and satisfying the following conditions: $$\label{posdif}
0\leq\varphi'_k(R)\to 1 \ \text{and} \ 0\leq\varphi''_k(R)\to 0 \
\text{as} \ k\to \infty$$ $ \text{for every} \ R \in [\alpha',\beta'].$ (See [@israel] for an example of such a sequence.) Let $R_k$ be a function defined on $\{x:a<|x|< b \}$ by $R_k(x)=\varphi_{k}(R(x))$. Then $$\label{ariu}\Delta_0 R_k(x)=\varphi_k''(R(x))|\nabla
R(x)|^2+\varphi_k'(R(x))\Delta_0 R(x).$$ Therefore $$\begin{split}\Delta_0 R_k+2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial
R_k}{\partial \rho}&=\varphi_k''(R(x))|\nabla
R(x)|^2+\varphi_k'(R(x))\Delta_0
R(x)\\&+\varphi_k'(R(x))2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial
R}{\partial \rho}\\&=\varphi_k''(R(x))|\nabla
R(x)|^2+\varphi_k'(R(x))\frac{\sinh(2R)}{2}\|D\Theta\|_2^2.\end{split}$$ Thus $$\label{deltak}\Delta_0
R_k+2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial R_k}{\partial \rho}\ge
0$$ for every $k$. By and (\[posdif\]) it follows at once that $$\Delta_0 R_k(x)\to \Delta_0 R(x) \ \text{as} \ k\to \infty$$ for every $x\in \mathbf{A}(a,b)$. Similarly we obtain $$\frac{\partial R_k}{\partial \rho}(x)\to \frac{\partial
R}{\partial \rho}(x) \ \text{as} \ k\to \infty$$ uniformly on $\{\zeta:
|\zeta|=s\}$ for every $s\in (a,b)$. By applying Green’s formula for $R_k$ on $\{x:a \leq|x|\leq s \}$, we obtain $$\int_{|\zeta|=s}\frac{\partial R_k}{\partial
\rho}\,d\mathcal{H}^{n-1}(\zeta)- \int_{|\zeta|=a}\frac{\partial
R_k}{\partial \rho}\,d\mathcal{H}^{n-1}(\zeta) =\int_{a\leq|x|\leq s
}\Delta_0 R_k \, dV(x).$$ Since the function $R_k$ is constant in some neighborhood of the sphere $|\zeta|=a$, it follows that for $a<s<b$ and large enough $k$ $$\int_{|\zeta|=s}\frac{\partial R_k}{\partial \rho}\,d\mathcal{H}^{n-1}(\zeta)=
\int_{a \leq|x|\leq s}\Delta_0 R_k \, dV(x).$$ Therefore $$\begin{split}\int_{|\zeta|=s}\frac{\partial R_k}{\partial
\rho}\,d\mathcal{H}^{n-1}(\zeta)&+\int_{a\leq|x|\leq s}2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial
R_k}{\partial \rho}dV(x)\\&=\int_{a \leq|x|\leq
s}\left[\Delta_0 R_k+2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial
R_k}{\partial \rho} \right]\, dV(x).\end{split}$$ Further for $\omega\in S^{n-1}$ $$\begin{split}\lim_{k\to\infty}\int_a^s\frac{\rho^{n}}{1-\rho^2}&\frac{\partial
R_k}{\partial \rho}(\rho\omega)\, d\rho\\&=
\lim_{k\to\infty}\left[\frac{s^{n}R_k(s\omega)}{1-s^2}-\frac{a^{n}R_k(a
\omega)}{1-a^2}-\int_{a}^s\frac{\rho^{n-1} (n + 2 \rho^2 - n
\rho^2)}{(1 - \rho^2)^2}R_k(\rho \omega)d\rho\right]
\\&\le \frac{s^{n}}{1-s^2}\beta'-\frac{a^{n}}{1-a^2}\alpha'-\int_{a}^s
\frac{\rho^{n-1} (n + 2 \rho^2 - n \rho^2)}{(1 - \rho^2)^2}\alpha'
d\rho\\&= \frac{s^{n}}{1-s^2}(\beta'-\alpha').\end{split}$$ Therefore $$\label{drilip}\begin{split}\int_{|\zeta|=s}\frac{\partial R}{\partial \rho}\,d\mathcal{H}^{n-1}(\zeta) &+
2(n-2)\int_{S^{n-1}}\frac{s^{n}}{1-s^2}(\beta'-\alpha')
d\mathcal{H}^{n-1}\\&\ge \limsup_{k\to\infty}\int_{a \leq|x|\leq
s}\bigg[\Delta_0 R_k+2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial
R_k}{\partial \rho} \bigg]\, dV(x).
\end{split}$$ By applying Fatou’s lemma, having in mind and using , letting $k \to \infty$, we obtain $$\label{drili}\begin{split}\limsup_{k\to\infty}\int_{a \leq|x|\leq s}\bigg[\Delta_0
R_k&+2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial R_k}{\partial \rho}
\bigg]\, dV(x)\\&\ge \int_{a \leq|x|\leq s}\bigg[\Delta_0
R+2(n-2)(1-\rho^2)^{-1}\rho\frac{\partial R}{\partial \rho} \bigg]\,
dV(x)\\&=
\int_{S^{n-1}}\int_a^s\rho^{n-1}\frac{\sinh(2R)}{2}\|\nabla
\Theta\|_2^2 d\rho d\mathcal{H}^{n-1}.\end{split}$$ From and we obtain $$\label{drilica}\begin{split}\int_{|\zeta|=s}\frac{\partial R}{\partial \rho}\,d\mathcal{H}^{n-1}(\zeta) &+
2(n-2)\int_{S^{n-1}}\frac{s^{n}}{1-s^2}(\beta'-\alpha')
d\mathcal{H}^{n-1}\\&\ge
\int_{S^{n-1}}\int_a^s\rho^{n-1}\frac{\sinh(2R)}{2}\|\nabla
\Theta\|_2^2 d\rho d\mathcal{H}^{n-1}.
\end{split}$$ It follows that $$\begin{split}s^{n-1}\frac{\partial }{\partial s} \int_{|\zeta|=1}R(s\zeta)
\,d\mathcal{H}^{n-1}(\zeta)&+\int_{S^{n-1}}\frac{2(n-2)s^{n}}{1-s^2}(\beta'-\alpha')
d\mathcal{H}^{n-1}\\&\geq
\int_{S^{n-1}}\int_a^s\rho^{n-1}\frac{\sinh(2R)}{2}\|\nabla
\Theta\|_2^2 d\rho d\mathcal{H}^{n-1}\end{split}$$ i.e. $$\begin{split}s^{n-1}\frac{\partial
}{\partial s} \int_{|\zeta|=1}R(s\zeta)
\,d\mathcal{H}^{n-1}(\zeta)&+2(n-2)\omega_{n-1}\frac{s^{n}}{1-s^2}(\beta'-\alpha')
\\&\geq \int_{S^{n-1}}\int_a^s\rho^{n-1}\frac{\sinh(2R)}{2}\|\nabla
\Theta\|_2^2 d\rho d\mathcal{H}^{n-1}\end{split}$$ or what is the same $$\begin{split}\frac{\partial }{\partial s} \int_{|\zeta|=1}R(s\zeta)
\,d\mathcal{H}^{n-1}(\zeta)&+2(n-2)\omega_{n-1}\frac{s}{1-s^2}(\beta'-\alpha')
\\&\geq
s^{1-n}\int_{S^{n-1}}\int_a^s\rho^{n-1}\frac{\sinh(2R)}{2}\|\nabla
\Theta\|_2^2 d\rho d\mathcal{H}^{n-1}.\end{split}$$ Integrating the previous expression w.r.t $s$ on $[a,b]$ we obtain $$\label{pomo}\begin{split}\omega_{n-1}(\beta'-\alpha')&+(n-2)\omega_{n-1}\log\frac{1-a^2}{1-b^2}(\beta'-\alpha')
\\&\geq \int_a^b
s^{1-n}\int_{S^{n-1}}\int_a^s\rho^{n-1}\frac{\sinh(2R)}{2}\|\nabla
\Theta\|_2^2 d\rho d\mathcal{H}^{n-1} ds\\&\ge
\frac{\sinh(2\alpha')}{2}\int_a^b
s^{1-n}\int_{S^{n-1}}\int_a^s\rho^{n-1}\|\nabla \Theta\|_2^2 d\rho
d\mathcal{H}^{n-1} ds.\end{split}$$ Now we put $n=3$, which implies this simple fact $n-1=2$. Combining now with Proposition \[onep\] we obtain $$(\beta'-\alpha')+(n-2)\log\frac{1-a^2}{1-b^2}(\beta'-\alpha')
\geq {\sinh(2\alpha')}\int_a^b s^{-2} (s-a)ds$$ and therefore $$\label{pola}(\beta'-\alpha')\left(1+\log\frac{1-a^2}{1-b^2}\right) \geq
{\sinh(2\alpha')}\left(-1 + \frac{a}{b} +
\log\frac{b}{a}\right).$$ By using the formulas $a'=2\tanh^{-1}a$ and $b'=2\tanh^{-1}b$, we obtain , for $x_0=0$ and $y_0=0$. The general case follows from Proposition \[eko\]. By using the following formulas $$2\tanh^{-1}t=\log\frac{1+t}{1-t}\text{ and }
\sinh(4\tanh^{-1}t)=\frac{4 t (1 + t^2)}{(1 - t^2)^2},$$ we obtain . This completes the proof.
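The two elementary identities invoked in the last step can be checked directly:

```python
import math

for t in (0.1, 0.5, 0.9):
    # 2 artanh(t) = log((1+t)/(1-t))
    assert abs(2 * math.atanh(t) - math.log((1 + t) / (1 - t))) < 1e-12
    # sinh(4 artanh(t)) = 4t(1+t^2)/(1-t^2)^2
    assert abs(math.sinh(4 * math.atanh(t))
               - 4 * t * (1 + t * t) / (1 - t * t) ** 2) < 1e-9
```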
Assume that $$u(x)=r(\rho) \frac{x}{|x|}$$ is a hyperbolic harmonic mapping. Then from , taking $\rho=e^t$, we obtain that $r(\rho)=\tanh\frac{y(t)}{2}$, where $y$ is a solution of the differential equation $$y''+(2-n)\coth(t) y'=\frac{(n-1)\sinh(2y)}{2}.$$ Two of many possible solutions of the previous differential equation are $y_{+}(t)=2\tanh^{-1}(e^{t})$ and $y_{-}(t)=2\tanh^{-1}(e^{-t})$. The function $y_+$ produces the identity mapping $u_+(x)=x$, while the function $y_{-}$ produces the inversion $u_{-}(x)=x/|x|^2$. Both are hyperbolic harmonic mappings, but the second one maps the complement of the unit ball onto the unit ball. Notice that both the unit ball and its complement, with appropriate metrics, can be identified with the hyperbolic space $\mathbf{H}^n$.
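One can also confirm numerically, for $n=3$, that $y_-(t)=2\tanh^{-1}(e^{-t})$ solves the radial equation, e.g. via a finite-difference residual (step size and tolerance are our choices):

```python
import math

def y_minus(t):
    """The solution y_-(t) = 2 artanh(e^{-t}), defined for t > 0."""
    return 2.0 * math.atanh(math.exp(-t))

def residual(t, n=3, h=1e-4):
    """Finite-difference residual of y'' + (2-n) coth(t) y' = (n-1) sinh(2y)/2."""
    y0, yp, ym = y_minus(t), y_minus(t + h), y_minus(t - h)
    d1 = (yp - ym) / (2 * h)                  # central difference for y'
    d2 = (yp - 2 * y0 + ym) / (h * h)         # central difference for y''
    return d2 + (2 - n) * (math.cosh(t) / math.sinh(t)) * d1 \
           - (n - 1) * math.sinh(2 * y0) / 2.0

for t in (0.5, 1.0, 2.0):
    assert abs(residual(t)) < 1e-5
```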
[99]{}
Ordway Professorship Lectures in Mathematics. University of Minnesota, School of Mathematics, Minneapolis, Minn., 1981. ii+150 pp.
Arch. Ration. Mech. Anal. [**195**]{} (2010), no. 3, 899–921.
: [*Remarks on the geometric behavior of harmonic maps between surfaces. Chow, Ben (ed.) et al., Elliptic and parabolic methods in geometry.*]{} Proceedings of a workshop, Minneapolis, MN, USA, May 23–27, 1994. Wellesley, MA: A K Peters. 57-66 (1996).
J. *$n$-Harmonic Mappings Between Annuli: The Art of Integrating Free Lagrangians,* to appear in Mem. Amer. Math. Soc.
*The Nitsche conjecture*, J. Amer. Math. Soc. 24 (2011), no. 2, 345-373. :[*On the univalent solution of PDE $\Delta u=f$ between spherical annuli.*]{} J. Math. Anal. Appl. [**327**]{} (2007), no. 1, 1–11. : [*On the Nitsche conjecture for harmonic mappings in ${\Bbb R}\sp 2$ and ${\Bbb R}\sp 3$.*]{} Israel J. Math. [**150**]{} (2005), 241–251.
: *Deformations of annuli on Riemann surfaces with smallest mean distortion.* ArXiv:1005.5269.
: [*Harmonic maps between annuli on Riemann surfaces.*]{} Israel J. Math. **182**, (2011), 123-147. (arXiv:1003.2744).
[Martin, G., McKubre-Jordens, M.]{} *Deformations with smallest weighted Lp average distortion and Nitsche type phenomena.* Journal of the London Mathematical Society, (in press).
Li, P., Tam, L.-F.: *The heat equation and harmonic maps of complete manifolds.* Invent. Math. 105, No.1, 1-46 (1991).
Li, P., Tam, L.-F.: *Uniqueness and regularity of proper harmonic maps.* Ann. Math. (2) 137, No.1, 167-201 (1993).
J. London Math. soc., (2) [**64**]{} (2001), pp. 369-384.
: [*On the modulus of doubly connected regions under harmonic mappings*]{}, Amer. Math. Monthly [**69**]{} (1962), 781-782.
Conference Proceedings and Lecture Notes in Geometry and Topology, II. International Press, Cambridge, MA, 1997. vi+394 pp.
Lecture Notes in Mathematics, 1319. Springer-Verlag, Berlin, 1988. xx+209 pp.
Israel J. Math. [**124**]{}, 327–331 (2001)
---
abstract: 'In order to be practically useful, quantum cryptography must not only provide a guarantee of secrecy, but it must provide this guarantee with a useful, sufficiently large throughput value. The standard result of generalized privacy amplification yields an upper bound only on the [*average value*]{} of the mutual information available to an eavesdropper. Unfortunately this result by itself is inadequate for cryptographic applications. A naive application of the standard result leads one to [*incorrectly*]{} conclude that an acceptable upper bound on the mutual information has been achieved. It is the [*pointwise value*]{} of the bound on the mutual information, associated with the use of some specific hash function, that corresponds to actual implementations. We provide a fully rigorous mathematical derivation that shows how to obtain a cryptographically acceptable upper bound on the actual, pointwise value of the mutual information. Unlike the bound on the average mutual information, the value of the upper bound on the pointwise mutual information and the number of bits by which the secret key is compressed are specified by two different parameters, and the actual realization of the bound in the pointwise case is necessarily associated with a specific failure probability. The constraints amongst these parameters, and the effect of their values on the system throughput, have not been previously analyzed. We show that the necessary shortening of the key dictated by the cryptographically correct, pointwise bound, can still produce viable throughput rates that will be useful in practice.'
author:
- |
G. Gilbert,$^{\dag}$ M. Hamrick$^{\ddag}$ and F.J. Thayer$^{\ast}$\
[*The MITRE Corporation, McLean, Virginia 22102, USA*]{}
title: |
Privacy Amplification in Quantum Key Distribution:\
Pointwise Bound [*versus*]{} Average Bound
---
Introduction
============
Quantum cryptography has been heralded as providing an important advance in secret communications because it provides a guarantee that the amount of mutual information available to an eavesdropper can unconditionally be made arbitrarily small. Any [*practical*]{} realization of quantum key distribution that consists only of sifting, error correction and authentication will allow some information leakage, thus necessitating privacy amplification. Of course, one might contemplate carrying out privacy amplification after executing a classical key distribution protocol. In the absence of any assumed [*conditions*]{} on the capability of an eavesdropper, it is not possible to deduce a provable upper bound on the leaked information in the classical case, so that the subsequent implementation of privacy amplification would produce nothing, [*i.e.,*]{} the “input" to the privacy amplification algorithm cannot be bounded, and as a result neither can the “output." In the case of quantum key distribution, however, the leaked information associated with that string which is the input to the privacy amplification algorithm can be bounded, and this can be done in the absence of any assumptions about the capability of an eavesdropper. This bound is not good enough for cryptography, however. Nevertheless, this bound on the input allows one to prove a bound on the output of privacy amplification, so that one deduces a final, unconditional upper bound on the mutual information available to an eavesdropper. Moreover this bound can be made arbitrarily small, and hence good enough for cryptography, at the cost of suitably shortening the final string. [*Except that, as usually presented, this is not exactly true.*]{} The above understanding is usually presented in connection with the standard result of generalized privacy amplification given in [@BBCM], which applies only to the [*average*]{} value of the mutual information.
The average is taken with respect to a set of elements, namely, the $\mathrm{universal}_2$ class of hash functions introduced by Carter and Wegman [@WC]. The actual implementation of privacy amplification, however, will be executed by software and hardware that selects a [*particular*]{} hash function. The bound on the average value of the mutual information does not apply to this situation: it does not directly measure the amount of mutual information available to an eavesdropper in practical quantum cryptography.
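For concreteness, a toy strongly $\mathrm{universal}_2$ family in the Carter-Wegman spirit is the family of affine maps $h_{a,b}(x)=(ax+b)\bmod p$ over $\mathbf{Z}_p$; exhaustive enumeration for a small prime confirms that the collision probability for distinct inputs is exactly $1/|\mathcal{Y}|$ (this toy family is our illustration and is not tied to any particular QKD implementation):

```python
# Affine hash family over Z_p: h_{a,b}(x) = (a*x + b) mod p, with (a, b)
# chosen uniformly.  For x != x', a collision forces a*(x - x') = 0 mod p,
# i.e. a = 0, which happens for exactly p of the p^2 family members.
p = 7
family = [(a, b) for a in range(p) for b in range(p)]

def h(ab, x):
    a, b = ab
    return (a * x + b) % p

for x in range(p):
    for xp in range(p):
        if x == xp:
            continue
        collisions = sum(1 for ab in family if h(ab, x) == h(ab, xp))
        # collision probability is exactly 1/p = 1/|Y|
        assert collisions * p == len(family)
```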
In this paper we calculate cryptographically acceptable pointwise bounds on the mutual information which can be achieved while still maintaining sufficiently high throughput rates. In contrast to a direct application of the privacy amplification result of [@BBCM], we must also consider and bound a probability of choosing an unsuitable hash function and relate this to cryptographic properties of the protocol and the throughput rate. The relation between average bounds and pointwise bounds of random variables is not new and follows from elementary probability theory, as was also noticed in [@lutkenhaus-practical].
Privacy Amplification
=====================
In ideal circumstances, the outcome of a $k$-bit key-exchange protocol is a $k$-bit key shared between Alice and Bob which is kept secret from Eve. Perfect secrecy means that from Eve’s perspective the shared key is chosen uniformly from the space of $k$-bit keys. In practice, one can only expect Eve’s probability distribution for the shared key to be close to uniform in the sense that its Shannon entropy is close to its largest possible value $k$. Moreover, because quantum key-exchange protocols implemented in practice [*inevitably*]{} leak information to Eve, Eve’s distribution of the key is too far from uniform to be usable for cryptographic purposes. Privacy amplification is the process of obtaining a nearly uniformly distributed key in a keyspace of smaller bitsize.
We review the standard assumptions of the underlying probability model of [@BBCM]: $\Omega$ is the underlying sample space with probability measure $\mathbf{P}$. Expectation of a real random variable $X$ with respect to $\mathbf{P}$ is denoted $\mathbf{E} X$. $W$ is a random variable with key material known jointly to Alice and Bob and $V$ is a random variable with Eve’s information about $W$. $W$ takes values in some finite keyspace $\mathcal{W}$. The distribution of $W$ is the function $\mathbf{P}_{\mathcal{W}}(w) = \mathbf{P}(W = w)$ for $w \in
\mathcal{W}$. Eve’s distribution having observed a value $v$ of $V$ is the conditional probability $\mathbf{P}_{\mathcal{W}}|_{V = v}(w) =
\mathbf{P}(W = w | V = v)$ on $\mathcal{W}$. In the discussion that follows, $v$ is fixed, and accordingly we denote Eve’s distribution of Alice and Bob’s shared key given $v$ by $\mathbf{P}_{\mathrm{Eve}}$. ${\operatorname{H}}$ and ${\operatorname{R}}$ denote the Shannon and Rényi entropies of random variables defined on $\mathcal{W}$ relative to $\mathbf{P}_{\mathrm{Eve}}$.
Suppose $\mathcal{Y}$ is a keyspace. If $\alpha$ is a positive real number, a mapping $\gamma: \mathcal{W} \rightarrow
\mathcal{Y}$ is an $\alpha$ strong uniformizer for Eve’s distribution iff ${\operatorname{H}}(\gamma) = -\sum_{y \in \mathcal{Y}} \mathbf{P}_{\mathrm{Eve}}(\gamma^{-1}(y))
\log_2 \mathbf{P}_{\mathrm{Eve}}(\gamma^{-1}(y)) \geq \log_2 {|\mathcal{Y}|} -
\alpha$.
If $\gamma$ is an $\alpha$ strong uniformizer, then we obtain a bound on the mutual information between Eve’s data $V$ and the image of the hash transformation $Y$ as follows:
$$\label{E:alphastrong}
I(Y,V) = H(Y) - H(Y|V) \leq \log_2{|\mathcal{Y}|} - {\operatorname{H}}(\gamma) \leq \alpha~.$$
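A toy computation illustrates the definition: if Eve's induced distribution on a small keyspace $\mathcal{Y}$ is nearly uniform, then $\gamma$ is an $\alpha$ strong uniformizer for a tiny $\alpha$, and \eqref{E:alphastrong} bounds the mutual information by that same $\alpha$ (the numbers below are ours):

```python
import math

def shannon(ps):
    """Shannon entropy (in bits) of a probability vector."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

# Toy keyspace of 4 hashed keys; Eve's induced distribution on Y:
p_eve = [0.26, 0.25, 0.25, 0.24]
alpha = math.log2(4) - shannon(p_eve)   # gamma is an alpha strong uniformizer
# near-uniform distribution => tiny bound on the mutual information
assert 0 < alpha < 1e-3
```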
Let $\Gamma$ be a random variable with values in $\mathcal{Y}^\mathcal{W}$ (the space of functions $\mathcal{W} \rightarrow \mathcal{Y}$) which is conditionally independent of $W$ given $V = v$, i.e., $
\mathbf{P}(\Gamma = \gamma \mbox{ and }
W = w | {V=v}) = \mathbf{P}(\Gamma = \gamma | {V=v}) \, \mathbf{P}( W = w |
{V=v}).
$ $\Gamma$ is an $\alpha > 0$ average uniformizer for Eve’s distribution iff $$\mathbf{E}( {\operatorname{H}} \Gamma) \geq \log_2
{|\mathcal{Y}|} - \alpha \,
$$ where $ {\operatorname{H}} \Gamma = {\operatorname{H}} \Gamma(z) = {\operatorname{H}}(\Gamma(z))$.
If $\Gamma$ is an $\alpha$ average uniformizer, the bound is on the mutual information averaged over the set $\Gamma$:
$$\label{E:alphaaverage}
I(Y,\Gamma V) = H(Y) - H(Y|\Gamma V) \leq \log_2{|\mathcal{Y}|} - \mathbf{E}( {\operatorname{H}}
\Gamma) \leq \alpha~.$$
Uniformizers are produced stochastically. Notice that by the conditional stochastic independence assumption, $z$ can be assumed to vary independently of $w \in \mathcal{W}$ with the law $\mathbf{P}_{\mathrm{Eve}}$.
\[P:strongresult\] Suppose $\Gamma$ is an $\alpha$ average uniformizer. Then for every $\beta
> 0$, $\Gamma(\omega)$ is a $\beta$ strong uniformizer for $\omega$ outside a set of probability at most $\frac{\alpha}{\beta}$.
[Proof.]{} Note that for any $\gamma:\mathcal{W} \rightarrow
\mathcal{Y}$, ${\operatorname{H}}\gamma$ is at most $\log_2 {|\mathcal{Y}|}$. Thus $\log_2 {|\mathcal{Y}|} - {\operatorname{H}}\Gamma$ is a nonnegative random variable. Applying Markov’s inequality to $\log_2
{|\mathcal{Y}|} - {\operatorname{H}}\Gamma$, it follows that for every $\beta>0$, $$\begin{aligned}
\mathbf{P}\bigl( \log_2 {|\mathcal{Y}|} - \beta \geq
{\operatorname{H}}\Gamma\bigr) & \leq &
\frac{1}{\beta} \mathbf{E}(\log_2 {|\mathcal{Y}|} -
{\operatorname{H}}\Gamma) \\
& = & \frac{1}{\beta} \bigl( \log_2 {|\mathcal{Y}|} - \mathbf{E}
({\operatorname{H}}\Gamma) \bigr) \\ & \leq & \frac{1}{\beta} \alpha.\end{aligned}$$
The random variable $\Gamma$ is strongly $\mathrm{universal}_2$ iff for all $x \neq x' \in \mathcal{W}$, $$\mathbf{P}\{z: \Gamma(z)(x) = \Gamma(z)(x')\} \leq
\frac{1}{{|\mathcal{Y}|}}.$$ The following is the main result of [@BBCM]:
[**(BBCM Privacy Amplification)**]{}. \[P:averageresult\] Suppose $\Gamma$ is a strongly $\mathrm{universal}_2$ family of mappings $\mathcal{W} \rightarrow \mathcal{Y}$ conditionally independent of $W$. Then $\Gamma$ is a $\frac{2^{\log_2 {|\mathcal{Y}|} - {\operatorname{R}}(W)}}{\ln 2}$ average uniformizer for $W$.
Practical Results
=================
We will refer to the inequality that provides the upper bound on the average value of the mutual information as the [*average privacy amplification bound*]{}, or APA, and we will refer to the inequality that provides the upper bound on the actual, or pointwise, mutual information as the [*pointwise privacy amplification bound*]{}, or PPA.
In carrying out privacy amplification we must shorten the key by the number of bits of information that have potentially been leaked to the eavesdropper [@GH_large]. Having taken that into account, we denote by $g$ the additional number of bits by which the key length will be further shortened to assure sufficient secrecy, [*i.e.*]{}, the additional bit subtraction amount, and we refer to $g$ as the [*privacy amplification subtraction parameter*]{}. With this definition of $g$, Bennett [*et al.*]{} [@BBCM] show as a corollary of \[P:averageresult\] that the set of Carter-Wegman hash functions is a $2^{-g}/\ln 2$ average uniformizer. We thus have for the APA bound on $\langle I\rangle$, the average value of the mutual information, the inequality
$$\label{APA}
\langle I\rangle\equiv I(Y,\Gamma V)\leq {2^{-g}\over\ln 2}~.$$
In the case of APA the quantity $g$ plays a dual role: in addition to representing the number of additional subtraction bits, for the APA case $g$ also directly determines the upper bound on the average of the mutual information.
In the case of PPA we again employ the symbol $g$ to denote the number of subtraction bits, as above for APA, but the upper bound on the pointwise mutual information is now given in terms of a different quantity $\gp$, which we refer to as the [*pointwise bound parameter*]{}. Also in the case of PPA we need the parameter $\gpp$, which we refer to as the [*pointwise probability parameter*]{}, in terms of which we may define the failure probability $P_f$. This definition is motivated by \[P:strongresult\], from which we find that the Carter-Wegman hash functions are $2^{-\gp}/\ln 2$ strong uniformizers except on a set of probability
$$P_f\equiv {2^{-g}\over\ln 2}{\Big /}{2^{-\gp}\over\ln 2}~.$$
We therefore define the pointwise probability parameter as
$$\label{CE}
\gpp \equiv g - \gp ~.$$
Thus the quantities $g$, $\gp$ and $\gpp$ are not all independent, and are constrained by equation \[CE\]. In terms of these parameters we have for the PPA bound on $I$, the actual value of the mutual information, the inequality
$$\label{PPA}
I\equiv I(Y,V)\leq {2^{-\gp}\over\ln 2}={2^{-\left(g-\gpp\right)}\over\ln 2}$$
where the associated failure probability $P_f$ is given by
$$\label{FP}
P_f=2^{-\gpp}~.$$
The failure probability is not even a defined quantity in the APA case, but it plays a crucial role in the PPA case. Thus, the bound on the pointwise mutual information is directly determined by the value of the parameter $\gp$, with respect to which one finds a tradeoff between $g$, the number of additional compression bits by which the key is shortened, and $\gpp$, the negative logarithm of the corresponding failure probability.
Application of Pointwise Bound
==============================
Operationally, it will usually be the case in practice that end-users of quantum key distribution systems will be first and foremost constrained to ensure that a given upper bound on the pointwise mutual information available to the enemy is realized.
To appreciate the significance of the distinction between the PPA and APA results, we will consider an illustrative example that shows how reliance on the APA bound can lead to complete compromise of cryptographic security. We begin with the APA case. As noted above, in the case of APA the privacy amplification subtraction parameter, which we will now denote by $g_{APA}$ to emphasize the nature of the bound, directly specifies both the upper bound on $\langle I\rangle$ and also the number of bits by which the key needs to be shortened to achieve this bound. Without loss of generality we take the value of the privacy amplification subtraction parameter to be given by $g_{APA}=30$, which means that, in addition to the compression by the number of bits of information that were estimated to have been leaked, the final length of the key will be further shortened by an additional 30 bits. This results in an upper bound on the average mutual information given by $\langle I\rangle\leq 2^{-30}/\ln 2\simeq 1.34\times 10^{-9}$, which we take as the performance requirement for this example. While this might appear to be an acceptable bound, the fact that it applies only to the average of the mutual information of course means that it is not the quantity we require.
We turn to the PPA case, with respect to which we will now refer to the privacy amplification subtraction parameter as $g_{PPA}$. In order to discuss the PPA bound we must select appropriate values amongst $g_{PPA}$, $\gp$ and $\gpp$. In the APA case discussed above, the bound on the (average) mutual information and the number of subtraction bits are both specified by the same parameter $g_{APA}$. In the PPA case, the number of subtraction bits and the parameter that specifies the bound on the (pointwise) mutual information are not the same. To achieve the same value for the upper bound on $I$ as we discussed for the upper bound on $\langle I\rangle$ above, we must select $\gp=30$ as the value of the pointwise bound parameter. From eq.(\[PPA\]) this indeed yields the required inequality $I\leq 2^{-30}/\ln2\simeq 1.34\times 10^{-9}$. However, with respect to this requirement on the value of the mutual information, [*i.e.*]{}, the required final amount of cryptographic secrecy, there is a denumerable set (since bits are discrete) of different amounts of compression of the key that may be selected, each associated with a corresponding failure probability, $P_f$, in the form of ordered pairs $\left(g_{PPA},\gpp\right)$ that satisfy the constraint $g_{PPA}=\gp+\gpp$ ([*cf*]{} eq.(\[CE\])).
Our starting point was the secrecy performance requirement that must be satisfied. On the basis of the APA analysis above, one might conclude that in order to achieve the required secrecy performance constraint it is sufficient to shorten the key by 30 bits. In the PPA case, however, satisfying the same performance requirement [*and*]{} shortening the key by only 30 bits means choosing identical values for the privacy amplification subtraction parameter ($g_{PPA}=30$) and the pointwise bound parameter ($\gp=30$). We note from eq.(\[CE\]) that in the case of the PPA bound, $g_{PPA}$ and $\gp$ coincide only when $\gpp=0$, which corresponds to 100% failure probability on the upper bound. This is clearly cryptographically useless!
This example emphasizes the importance of assuring a sufficiently small failure probability in addition to a sufficiently small upper bound on the mutual information. As we see from the above example, the APA result provides no information about the correct number of subtraction bits that are required in order to achieve a specified upper bound on the pointwise mutual information with a suitable failure probability, for which it is essential to use the PPA result instead. In Figure 1 we have plotted the failure probability as a function of the upper bound on the mutual information, for a family of choices of $g_{PPA}$ values. Returning to the example discussed above for the APA bound, we see that if we need to achieve an upper bound on $I$ of about $10^{-9}$, we may do so with a failure probability of about (coincidentally) $10^{-9}$, at the cost of shortening the final key by 60 bits: the secrecy is dictated by the pointwise bound parameter value of $\gp=30$, which is effected by choosing $g_{PPA}=60$, corresponding to $P_f\simeq 10^{-9}$. Smaller upper bounds can obviously be obtained, with suitable values of the failure probability, at the cost of further shortening of the key.
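The parameter tradeoff described above can be made concrete with a short numerical sketch (the function name is illustrative; we assume, per the definitions above, that the pointwise bound is $I\leq 2^{-\gp}/\ln 2$ and that $\gpp$ is the negative base-2 logarithm of the failure probability, so $P_f = 2^{-\gpp}$):

```python
import math

def ppa_split(g_prime, g_double_prime):
    """Given the pointwise bound parameter g' and the failure-probability
    parameter g'', return (g_PPA, bound on I, failure probability P_f)
    under the constraint g_PPA = g' + g''."""
    g_ppa = g_prime + g_double_prime            # total subtraction bits
    i_bound = 2.0 ** (-g_prime) / math.log(2)   # I <= 2^{-g'} / ln 2
    p_fail = 2.0 ** (-g_double_prime)           # P_f = 2^{-g''}
    return g_ppa, i_bound, p_fail

# The example above: g' = g'' = 30 gives 60 subtraction bits,
# I <= ~1.34e-9 and P_f ~ 1e-9.
g_ppa, i_bound, p_fail = ppa_split(30, 30)

# The naive APA-style choice g_PPA = 30 forces g'' = 0, i.e. P_f = 1:
_, _, p_fail_naive = ppa_split(30, 0)
```

The second call reproduces the failure mode discussed above: demanding $\gp=30$ while subtracting only 30 bits leaves no margin for the failure probability.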
In Figure 2 we plot the throughput of secret Vernam cipher material in bits per second, as a function of bit cell period, for the two bit subtraction amounts $g_{PPA}=30$ and $g_{PPA}=60$. The example chosen is a representative scenario for applied quantum cryptography. In calculating the rate we follow the method described in reference [@GH_large]. We assume the use of an attenuated, pulsed laser, with Alice located on a low earth orbit satellite at an altitude of 300 kilometers and Bob located at mean sea level, with the various system parameters corresponding to those for Scenario ([*i*]{}) in Section 5.3.2 in [@GH_large], except that here the source of the quantum bits operates at a pulse repetition frequency (PRF) of 1 MHz, and we specifically assume that the enemy does not have the capability to make use of prior shared entanglement in conducting eavesdropping attacks. We see that the additional cost incurred in subtracting the amount required to achieve the required mutual information bound and failure probability reduces the throughput rate by an amount that is likely to be acceptable for most purposes. For instance, for a source PRF of 1 MHz we find that the throughput rate with a value of $g_{PPA}=30$ is 5614 bits per second. With a subtraction amount of $g_{PPA}=60$ the throughput rate drops to 5563 bits per second [@blocksize].
Conclusions
===========
The significance and proper implementation of privacy amplification in quantum cryptography are clarified by our analysis. By itself the bound on the average value of the mutual information presented in [@BBCM] does not allow one to determine the values of parameters required to bound the actual, pointwise value of the mutual information. Those parameters must satisfy a constraint, which in turn implies a constraint on the final throughput of secret key material. We have rigorously derived the cryptographically meaningful upper bound on the pointwise mutual information associated with the use of some specific privacy amplification hash function, and shown that the corresponding requirements on the shortening of the key still allow viable throughput values.
[1]{}
C. H. Bennett, G. Brassard, C. Crépeau, and U. Maurer, “Generalized Privacy Amplification," IEEE Trans. Inf. Th. [**41**]{}, 1915 (1995).
J. L. Carter and M. N. Wegman, “Universal classes of hash functions," J. Comp. Syst. Sciences [**18**]{}, 143 (1979).
G. Gilbert and M. Hamrick, “Practical Quantum Cryptography: A Comprehensive Analysis (Part One)," [*arXiv e-print*]{} quant-ph/0009027 (2000).
N. Lütkenhaus, “Estimates for practical quantum cryptography," Phys. Rev. [**A59**]{}, 3301-3319 (1999). The effect on the viability of throughput rates caused by changing the number of subtraction bits associated with replacing the average bound with the pointwise bound is not analyzed in [@lutkenhaus-practical], and the tradeoffs between the security parameters that define the pointwise bound are not numerically studied. Also, the complete loss of cryptographic security that is caused by naive application of the result given in [@BBCM] is not presented in [@lutkenhaus-practical]. (See Section IV of the present paper.)
The difference between the two throughput values is about 50 bits per second, because an additional 30 bits are subtracted per processing block, and in the example presented there are about 1.6 blocks per second. See reference [@GH_large] for a discussion of processing block size.
---
abstract: 'Many computer vision problems require optimization of binary non-submodular energies. We propose a general optimization framework based on [*local submodular approximations*]{} (LSA). Unlike standard LP relaxation methods that linearize the whole energy globally, our approach iteratively approximates the energies locally. On the other hand, unlike standard local optimization methods (gradient descent or projection techniques) we use non-linear submodular approximations and optimize them without leaving the domain of integer solutions. We discuss two specific LSA algorithms based on [*trust region*]{} and [*auxiliary function*]{} principles, LSA-TR and LSA-AUX. These methods obtain state-of-the-art results on a wide range of applications outperforming many standard techniques such as LBP, QPBO, and TRWS. While our paper is focused on pairwise energies, our ideas extend to higher-order problems. The code is available online [^1].'
author:
- |
Lena Gorelick Yuri Boykov Olga Veksler\
Computer Science Department\
University of Western Ontario
- |
Ismail Ben Ayed\
GE Healthcare\
- |
Andrew Delong\
Elect. & Comp. Engineering\
University of Toronto
bibliography:
- 'arXiv\_lsa2.bib'
title: 'Submodularization for Quadratic Pseudo-Boolean Optimization'
---
Introduction {#sec:intro}
============
We address a general class of binary pairwise non-submodular energies, which are widely used in applications like segmentation, stereo, inpainting, deconvolution, and many others. Without loss of generality, the corresponding binary energies can be transformed into the form[^2] $$\label{eq:en}
E(S) = S^TU + S^T M S, \;\;\;\;\;\;S\in \{0,1\}^{\Omega}$$ where $S=\{s_p\,|\,p\in\Omega\}$ is a vector of binary indicator variables defined on pixels $p\in\Omega$, vector $U=\{u_p\in{\cal R}\,|\,p\in\Omega\}$ represents unary potentials, and symmetric matrix $M=\{m_{pq}\in{\cal R}\,|\,p,q\in\Omega\}$ represents pairwise potentials. Note that in many practical applications matrix $M$ is sparse since elements $m_{pq}=0$ for all non-interacting pairs of pixels. We seek solutions to the following integer quadratic optimization problem $$\label{eq:iqp}
\min_{S\in \{0,1\}^{\Omega}} E(S).$$ When energy is [*submodular*]{}, $m_{pq}\leq 0\;\;\forall (p,q)$, a globally optimal solution for can be found in low-order polynomial time using graph cuts [@Boros01pseudo-booleanoptimization]. The general non-submodular case of problem is NP-hard.
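For concreteness, a minimal sketch of the energy in matrix form, together with a brute-force minimizer that is feasible only for tiny $|\Omega|$ (the general problem being NP-hard, exhaustive search is used here purely for illustration; names are ours, not from the paper):

```python
import numpy as np

def energy(S, U, M):
    """Pairwise binary energy E(S) = S^T U + S^T M S."""
    S = np.asarray(S, dtype=float)
    return float(S @ U + S @ M @ S)

def brute_force_min(U, M):
    """Exhaustive search over {0,1}^n; tractable only for very small n."""
    n = len(U)
    best = min((np.array([(c >> i) & 1 for i in range(n)], dtype=float)
                for c in range(2 ** n)),
               key=lambda S: energy(S, U, M))
    return best, energy(best, U, M)

# A 2-pixel submodular example (m_pq <= 0): attraction makes (1,1) optimal.
U = np.array([-1.0, -1.0])
M = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
S_opt, E_opt = brute_force_min(U, M)  # S_opt = [1, 1], E_opt = -4.0
```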
Standard linearization methods
------------------------------
Integer quadratic programming is a well-known challenging optimization problem with extensive literature in the combinatorial optimization community, see [@lazimy:82; @goemans:95; @Boros01pseudo-booleanoptimization]. It often appears in computer vision, where it can be addressed with many methods including spectral and semi-definite programming relaxations, see [@olsson:CVIU08; @Keuchel-et-al-02a].
\(a) global linearization \(b) local linearization
Methods for solving based on LP relaxations, QPBO [@rother-et-al-cvpr-2007] and TRWS [@GTRWS:arXiv12], are considered among the most powerful in computer vision [@kappes-2013]. They approach integer quadratic problem by [*global linearization*]{} of the objective function at a cost of introducing a large number of additional variables and linear constraints. These methods attempt to optimize the relaxed LP or its dual. However, the integer solution can differ from the relaxed solution circled in Fig.\[fig:overview\](a). This is a well-known [*integrality gap*]{} problem. Most heuristics for extracting an integer solution from the relaxed solution have no [*a priori*]{} quality guarantees.
Our work is more closely related to [*local linearization*]{} techniques for approximating , parallel ICM, IPFP [@NIPS09LeordeanuHS09], and other similar methods [@NIPS10BrendelTodorovic]. Parallel ICM iteratively linearizes energy $E(S)$ around current solution $S_0$ using Taylor expansion and makes a step by computing an integer minimizer $S_{int}$ of the corresponding linear approximation, see Fig.\[fig:overview\](b). However, similarly to Newton’s methods, this approach often gets stuck in bad local minima by making too large steps regardless of the quality of the approximation. IPFP attempts to escape such minima by reducing the step size. It explores the continuous line between integer minimizer $S_{int}$ and current solution $S_0$ and finds optimal relaxed solution $S_{rlx}$ with respect to the original quadratic energy. Similarly to the global linearization methods, see Fig.\[fig:overview\](a), such continuous solutions give no quality guarantees with respect to the original integer problem .
Overview of submodularization
-----------------------------
Linearization has been a popular approximation approach to integer quadratic problem -, but it often requires relaxation, leading to the integrality gap problem. We propose a different approximation approach, which we refer to as [*submodularization*]{}. The main idea is to use submodular approximations of energy . We propose several approximation schemes that keep submodular terms in and linearize non-submodular potentials in different ways, leading to very different optimization algorithms. Standard [*truncation*]{} of non-submodular pairwise terms[^3] and some existing techniques for high-order energies [@FTR:cvpr13; @Bilmes2005; @rother:cvpr06; @AuxCut:cvpr13] can be seen as [*submodularization*]{} examples, as discussed later. A common property of submodularization methods is that they compute globally optimal integer solutions of the approximation and do not need to leave the domain of discrete solutions, avoiding integrality gaps. Submodularization can be seen as a generalization of local linearization methods since it uses more accurate higher-order approximations.
One way to linearize non-submodular terms in is to compute their Taylor expansion around current solution $S_0$. This Taylor approach is similar to IPFP [@NIPS09LeordeanuHS09], except that IPFP linearizes all terms including submodular ones. In contrast to IPFP, our overall approximation of $E(S)$ at $S_0$ is not linear; it belongs to a more general class of submodular functions. Such non-linear approximations are more accurate while still permitting efficient optimization in the integer domain.
We also propose a different mechanism for controlling the step size. Instead of exploring relaxed solutions on continuous interval $[S_0,S_{int}]$ in Fig.\[fig:overview\]b, we compute integer intermediate solutions $S$ by minimizing local submodular approximation over $\{0,1\}^\Omega$ under additional distance constraints $||S-S_0||<d$. Thus, our approach avoids integrality gap issues. For example, even the linear approximation model in Fig.\[fig:overview\]b can produce solution $S^*$ if the Hamming distance constraint $||S-S_0||\leq 1$ is imposed. This local submodularization approach to - fits a general [*trust region framework*]{} [@fletcher:87; @TRreview:Yuan; @olsson:CVIU08; @FTR:cvpr13] and we refer to it as LSA-TR.
Our paper also proposes a different local submodularization approach to - based on the general [*auxiliary function*]{} framework [@Lange2000; @Bilmes2005; @AuxCut:cvpr13][^4]. Instead of Taylor expansion, non-submodular terms in $E(S)$ are approximated by linear upper bounds specific to current solution $S_0$. Combining them with submodular terms in $E(S)$ gives a submodular upper-bound approximation, a.k.a. an [*auxiliary function*]{}, for $E(S)$ that can be globally minimized within integer solutions. This approach does not require to control the step sizes as the global minimizer of an auxiliary function is guaranteed to decrease the original energy $E(S)$. Throughout the paper we refer to this type of local submodular approximation approach as LSA-AUX.
Some auxiliary functions were previously proposed in the context of high-order energies [@Bilmes2005; @AuxCut:cvpr13]. For example, [@Bilmes2005] divided the energy into submodular and supermodular parts and replaced the latter with a certain permutation-based linear upper bound. The corresponding auxiliary function allows polynomial-time solvers. However, experiments in [@rother:cvpr06] (Sec. 3.2) demonstrated limited accuracy of the permutation-based bounds [@Bilmes2005] on high-order segmentation problems. Recently, Jensen's inequality was used in [@AuxCut:cvpr13] to derive linear upper bounds for several important classes of high-order terms that gave practically useful approximation results. Our LSA-AUX method is the first to apply the auxiliary function approach to arbitrary (non-submodular) pairwise energies. We discuss all possible linear upper bounds for pairwise terms and study several specific cases. One of them corresponds to the permutation bounds [@Bilmes2005] and is denoted by LSA-AUX-P.
Recently both [*trust region*]{} [@fletcher:87; @TRreview:Yuan; @olsson:CVIU08] and [*auxiliary function*]{} [@Lange2000] frameworks proved to work well for optimization of energies with high-order regional terms [@FTR:cvpr13; @AuxCut:cvpr13]. They derive specific linear [@FTR:cvpr13] or upper bound [@AuxCut:cvpr13] approximations for non-linear cardinality potentials, KL and other distances between segment and target appearance models. To the best of our knowledge, we are the first to develop trust region and auxiliary function methods for integer quadratic optimization problems -.
[**Our contributions**]{} can be summarized as follows:
A general [*submodularization*]{} framework for solving integer quadratic optimization problems - based on [*local submodular approximations*]{} (LSA). Unlike global linearization methods, LSA constructs an approximation model without additional variables. Unlike local linearization methods, LSA uses a more accurate approximation functional.
In contrast to the majority of standard approximation methods, LSA avoids integrality gap issue by working strictly within the domain of discrete solutions.
State-of-the-art results on a wide range of applications. Our LSA algorithms outperform QPBO, LBP, IPFP, TRWS, its latest variant SRMP, and other standard techniques for -.
Description of LSA Algorithms {#sec:overview}
=============================
In this section we discuss our framework in detail. Section \[sec:tr\] derives local submodular approximations and describes how to incorporate them in the trust region framework. Section \[sec:aux\] briefly reviews auxiliary function framework and shows how to derive local auxiliary bounds.
\(a) supermodular potential $\alpha\cdot xy$ \(b) “Taylor” based local linearizations \(c) Upper-bound linearization
LSA-TR {#sec:tr}
------
Trust region methods are a class of iterative optimization algorithms. In each iteration, an approximate model of the optimization problem is constructed near the current solution $S_0$. The model is only accurate within a small region around the current solution called “trust region”. The approximate model is then globally optimized within the trust region to obtain a candidate solution. This step is called [*trust region sub-problem*]{}. The size of the trust region is adjusted in each iteration based on the quality of the current approximation. For a detailed review of trust region framework see [@TRreview:Yuan].
Below we provide details for our trust region approach to the binary pairwise energy optimization (see pseudo-code in Algorithm \[alg:TR\]). The goal is to minimize $E(S)$ in . This energy can be decomposed into submodular and supermodular parts $E(S)=E^{sub}(S) + E^{sup}(S)$ such that $$\begin{aligned}
E^{sub}(S) & = & S^T U + S^T M^- S \\
E^{sup}(S) & = & S^T M^+ S\end{aligned}$$ where matrix $M^-$ with negative elements $m^-_{pq} \leq 0$ represents the set of submodular pairwise potentials and matrix $M^+$ with positive elements $m^+_{pq}\geq 0$ represents supermodular potentials. Given the current solution $S_t$ energy $E(S)$ can be approximated by submodular function $$\label{eq:eqTR}
E_t(S) = E^{sub}(S) + S^T U_t + const$$ where $U_t = 2M^+ S_t$. The last two terms in are the first-order Taylor expansion of supermodular part $E^{sup}(S)$.
While the use of Taylor expansion may seem strange in the context of functions of integer variables, Figure \[fig:approx\](a,b) illustrates its geometric motivation. Consider individual pairwise supermodular potentials $f(x,y)$ in $$E^{sup}(S) = \sum_{pq} m^+_{pq}\cdot s_p s_q = \sum_{pq} f_{pq}(s_p,s_q).$$ Coincidentally, Taylor expansion of each relaxed supermodular potential $f(x,y)=\alpha\cdot xy$ produces a linear approximation (planes in b) that agrees with $f$ at three out of four possible discrete configurations (points A,B,C,D).
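The three-of-four agreement can be checked directly; the sketch below (illustrative, not from the paper) takes the relaxed Taylor plane of $f(x,y)=\alpha\cdot xy$ at each corner and counts its agreements with $f$ over the four binary configurations:

```python
def taylor_plane(alpha, x0, y0):
    """First-order Taylor expansion of f(x,y) = alpha*x*y around (x0, y0):
    f(x0,y0) + alpha*y0*(x - x0) + alpha*x0*(y - y0)."""
    return lambda x, y: alpha * (y0 * x + x0 * y - x0 * y0)

alpha = 2.5
agreements = []
for x0 in (0, 1):
    for y0 in (0, 1):
        plane = taylor_plane(alpha, x0, y0)
        agreements.append(sum(plane(x, y) == alpha * x * y
                              for x in (0, 1) for y in (0, 1)))
# Every expansion point yields agreement at exactly 3 of the 4 corners.
print(agreements)  # [3, 3, 3, 3]
```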
The standard trust region sub-problem is to minimize the approximation $E_t$ within the region defined by step size $d_t$ $$\label{eq:constrained}
S^*= \underset{||S-S_t||<d_t}{\operatorname{argmin}} E_t(S).$$ Hamming, $L_2$, and other useful metrics $||S-S_t||$ can be represented by a sum of unary potentials [@pdecut-eccv06]. However, optimization problem is NP-hard even for unary metrics[^5]. One can solve Lagrangian dual of by iterative sequence of graph cuts as in [@kahl:13], but the corresponding duality gap may be large and the optimum for is not guaranteed.
Instead of we use a simpler formulation of the trust region subproblem proposed in [@FTR:cvpr13]. It is based on unconstrained optimization of submodular Lagrangian $$\label{eq:lagrangian}
L_t(S) = E_t(S) + \lambda_t\cdot||S-S_t||$$ where parameter $\lambda_t$ controls the trust region size indirectly. Each iteration of LSA-TR solves for some fixed $\lambda_t$ and adaptively changes $\lambda_t$ for the next iteration (Alg.\[alg:TR\] line \[line:tau2\]), as motivated by empirical inverse proportionality relation between $\lambda_t$ and $d_t$ discussed in [@FTR:cvpr13].
Once a candidate solution $S^*$ is obtained, the quality of the approximation is measured using the ratio between the actual and predicted reduction in energy. Based on this ratio, the solution is updated in line \[line:tau1\] and the step size (or $\lambda$) is adjusted in line \[line:tau2\]. It is common to set the parameter $\tau_1$ in line \[line:tau1\] to zero, meaning that any candidate solution that decreases the actual energy is accepted. The parameter $\tau_2$ in line \[line:tau2\] is usually set to 0.25 [@TRreview:Yuan]. A reduction ratio above this value indicates a good approximation model and allows an increase of the trust region size.
[**Initialize** $t=0$, $S_0$, $\lambda_0$\
**Repeat**\
**//Solve Trust Region Sub-Problem**\
$S^* \longleftarrow \operatorname{argmin}_{S\in\{0,1\}^\Omega} L_t(S)$ // as defined in \[line:TRsolve\]\
$P=E_t(S_t)-E_t(S^*)$ //predicted reduction in energy\
$R=E(S_t)-E(S^*)$ //actual reduction in energy\
**//Update current solution**\
$S_{t+1} \longleftarrow
\left\{
\begin{array}{ll}
S^* & \mbox{if } R/P>\tau_1 \\
S_t & \mbox{otherwise}
\end{array}
\right.$ \[line:tau1\]\
**//Adjust the trust region**\
$\lambda_{t+1} \longleftarrow
\left\{
\begin{array}{ll}
\lambda_t / \alpha & \mbox{if } R/P>\tau_2 \\
\lambda_t \cdot \alpha & \mbox{otherwise}
\end{array}
\right.$ \[line:tau2\]\
**Until Convergence** ]{}
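A toy, self-contained sketch of the LSA-TR loop follows. Assumptions made here for illustration: exhaustive search stands in for the graph-cut solver of the submodular Lagrangian (so only tiny $n$ is feasible), Hamming distance is used, $\tau_1=0$, $\tau_2=0.25$, and the initial solution assigns everything to the foreground:

```python
import numpy as np

def lsa_tr(U, M, lam=1.0, alpha=2.0, max_iter=100):
    """Toy LSA-TR sketch for E(S) = S^T U + S^T M S (symmetric M)."""
    n = len(U)
    Mneg, Mpos = np.minimum(M, 0.0), np.maximum(M, 0.0)
    def E(S):    return float(S @ U + S @ M @ S)     # full energy
    def Esub(S): return float(S @ U + S @ Mneg @ S)  # submodular part
    all_S = [np.array([(c >> i) & 1 for i in range(n)], dtype=float)
             for c in range(2 ** n)]
    S_t = np.ones(n)                                 # init: all foreground
    for _ in range(max_iter):
        U_t = 2.0 * (Mpos @ S_t)                     # Taylor term for E^sup
        const = -float(S_t @ Mpos @ S_t)
        def E_t(S): return Esub(S) + float(S @ U_t) + const
        def L_t(S): return E_t(S) + lam * float(np.abs(S - S_t).sum())
        S_star = min(all_S, key=L_t)                 # trust region sub-problem
        if np.array_equal(S_star, S_t):
            break                                    # no move at current lam
        P = E_t(S_t) - E_t(S_star)                   # predicted reduction
        R = E(S_t) - E(S_star)                       # actual reduction
        if R > 0:                                    # tau_1 = 0
            S_t = S_star
        lam = lam / alpha if P > 0 and R / P > 0.25 else lam * alpha
    return S_t, E(S_t)
```

For example, a 3-pixel instance with one supermodular pair (`M[0,1] > 0`) and one submodular pair is solved exactly by this sketch, with the energy decreasing monotonically across accepted steps.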
LSA-AUX {#sec:aux}
-------
Bound optimization techniques are a class of iterative optimization algorithms that construct and optimize upper bounds, a.k.a. [*auxiliary functions*]{}, for energy $E$. It is assumed that these bounds are easier to optimize than the original energy $E$. Given a current solution $S_t$, the function $A_t(S)$ is an auxiliary function of $E$ if it satisfies the following conditions:
$$\begin{aligned}
E(S) \, &\leq \, A_t(S) \label{general-second} \\
E(S_t) &= A_t(S_t) \label{general-third} \end{aligned}$$
\[Eq:Auxiliary\_function\_conditions\]
To approximate minimization of $E$, one can iteratively minimize a sequence of auxiliary functions: $$S_{t+1} = \arg \min_{S} \; A_t(S) \, , \quad t=1, 2, \dots
\label{Eq:Iterative_bound_minimization}$$ Using , , and , it is straightforward to prove that the solutions in correspond to a sequence of decreasing energy values $E(S_t)$. Namely, $$E(S_{t+1}) \leq A_t(S_{t+1}) \leq A_t(S_t) = E(S_t). \nonumber$$
The main challenge in the bound optimization approach is designing an appropriate auxiliary function satisfying conditions and . However, in the case of the integer quadratic optimization problem -, it is fairly straightforward to design an upper bound for the non-submodular energy $E(S)=E^{sub}(S) + E^{sup}(S)$. As in Sec.\[sec:tr\], we do not need to approximate the submodular part $E^{sub}$, and we can easily find a linear upper bound for $E^{sup}$ as follows.
Similarly to Sec.\[sec:tr\], consider supermodular pairwise potentials $f(x,y)=\alpha\cdot xy$ for individual pairs of neighboring pixels according to $$\label{eq:sum}
E^{sup}(S) = \sum_{pq} m^+_{pq}\cdot s_p s_q = \sum_{pq} f_{pq}(s_p,s_q)$$ where each $f_{pq}$ is defined by scalar $\alpha = m^+_{pq} >0$. As shown in Figure \[fig:approx\](b,c), each pairwise potential $f$ can be bound above by linear function $u(x,y)$ $$f(x,y)\leq u(x,y) := v\cdot x+w\cdot y$$ for some positive scalars $v$ and $w$. Assuming current solution $(x,y)=(x^t,y^t)$, the table below specifies linear upper bounds (planes) for four possible discrete configurations
$(x^t, y^t)$ upper bound $u(x,y)$ plane in Fig.\[fig:approx\](b,c)
-------------- ----------------------------------------- ----------------------------------
(0,0) $\frac{\alpha}{2}x + \frac{\alpha}{2}y$ purple
(0,1) $\alpha x$ green
(1,0) $\alpha y$ orange
(1,1) $\frac{\alpha}{2}x + \frac{\alpha}{2}y$ purple
As is clear from Fig.\[fig:approx\](b,c), there are many other possible linear upper bounds for pairwise terms $f$. Interestingly, the “permutation” approach to high-order supermodular terms in [@Bilmes2005] reduces to linear upper bounds for $f(x,y)$ where each configuration (0,0) or (1,1) selects either the orange or the green plane randomly (depending on a permutation). Our tests showed inferior performance of such bounds for pairwise energies. The upper bounds using purple planes for (0,0) and (1,1), as in the table, work better in practice.
Summing upper bounds for all pairwise potentials $f_{pq}$ in using linear terms in this table gives an overall linear upper bound for supermodular part of energy $$\label{eq:supAux}
E^{sup}(S)\leq S^T U_t$$ where vector $U_t = \{u^t_p|p \in \Omega \}$ consists of elements $$u^t_p = \sum_{q} m^+_{pq}\left(1+s^t_q - s^t_p\right)$$ and $S_t = \{s^t_p|p\in \Omega\}$ is the current solution configuration for all pixels. Defining our auxiliary function as $$\label{eq:auxiliary_bound}
A_t(S) := S^T U_t + E^{sub}(S)$$ and using inequality we satisfy condition $$E(S) = E^{sup}(S) + E^{sub}(S) \leq A_t(S).$$ Since $S_t^T U_t = E^{sup}(S_t)$, our auxiliary function also satisfies condition $$E(S_t) = E^{sup}(S_t) + E^{sub}(S_t) = A_t(S_t).$$ Function $A_t(S)$ is submodular. Thus, we can globally optimize it in each iteration, guaranteeing energy decrease.
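The two auxiliary-function conditions can be verified numerically on small random instances. The sketch below (illustrative; the helper name is ours) builds the per-pixel coefficients $u^t_p=\sum_q m^+_{pq}(1+s^t_q-s^t_p)$ read off the per-pair planes in the table, and checks the bound and tightness conditions exhaustively over $\{0,1\}^n$:

```python
import itertools
import numpy as np

def aux_bound_check(U, M, S_t):
    """Build the LSA-AUX linear bound U_t for E^sup at S_t (ordered pair
    (p,q) contributes m+_{pq} * (1 + s^t_q - s^t_p) to u^t_p) and verify
    the auxiliary-function conditions exhaustively over {0,1}^n.
    Returns (is_upper_bound, is_tight_at_St)."""
    n = len(U)
    Mneg, Mpos = np.minimum(M, 0.0), np.maximum(M, 0.0)
    def E(S):    return float(S @ U + S @ M @ S)
    def Esub(S): return float(S @ U + S @ Mneg @ S)
    U_t = (Mpos * (1.0 + S_t[None, :] - S_t[:, None])).sum(axis=1)
    def A_t(S): return float(S @ U_t) + Esub(S)
    is_upper = all(A_t(np.array(b)) >= E(np.array(b)) - 1e-9
                   for b in itertools.product([0.0, 1.0], repeat=n))
    is_tight = abs(A_t(S_t) - E(S_t)) < 1e-9
    return is_upper, is_tight

rng = np.random.default_rng(7)
M = rng.normal(size=(5, 5)); M = (M + M.T) / 2.0
np.fill_diagonal(M, 0.0)
ok = all(aux_bound_check(rng.normal(size=5), M,
                         rng.integers(0, 2, size=5).astype(float)) == (True, True)
         for _ in range(20))
```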
Applications {#sec:applications}
============
Below we apply our method in several applications such as binary deconvolution, segmentation with repulsion, curvature regularization and inpainting. We report results for both LSA-TR and LSA-AUX frameworks and compare to existing state of the art methods such as QPBO [@rother-et-al-cvpr-2007], LBP [@pearl-1982], IPFP [@NIPS09LeordeanuHS09], TRWS and SRMP [@GTRWS:arXiv12] in terms of energy and running time[^6]. For the sake of completeness, and to demonstrate the advantage of non-linear submodular approximations over linear approximations, we also compare to a version of LSA-TR where both submodular and supermodular terms are linearized, denoted by LSA-TR-L. In the following experiments, all local approximation methods, IPFP, LSA-AUX, LSA-AUX-P, LSA-TR, LSA-TR-L are initialized with the entire domain assigned to the foreground. All global linearization methods, TRWS, SRMP and LBP, are run for 50, 100, 1000 and 5000 iterations. For QPBO results, unlabeled pixels are shown in gray color. Running time is shown in log-scale for clarity.
Binary Deconvolution
--------------------
Figure \[fig:deblur\] (top-left) shows a binary image after convolution with a uniform $3\times3$ filter and addition of Gaussian noise ($\sigma=0.05$). The goal of binary deconvolution is to recover the original binary image, and the energy is defined as $$\label{eq:deblur}
E(S) = \sum_{p\in\Omega} (I_p - \frac{1}{9}\sum_{q\in {\cal N}_p}s_q)^2$$
Here $ {\cal N}_p$ denotes the $3 \times 3$ neighborhood window around pixel $p$ and all pairwise interactions are supermodular. We did not use length regularization, since it would make the energy easier to optimize. Fig. \[fig:deblur\] demonstrates the performance of our approach (LSA-TR/LSA-AUX) and compares to standard optimization methods such as QPBO, LBP, IPFP, TRWS and SRMP. In this case LSA-TR-L and LSA-TR are identical since energy has no submodular pairwise terms. The bottom of Fig. \[fig:deblur\] shows the mean energy as a function of noise level $\sigma$. For each experiment the results are averaged over ten instances of random noise. The mean time is reported for the experiments with $\sigma=0.05$.
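A minimal sketch of this energy (the helper name is ours; boundary pixels are wrapped circularly via `np.roll` for brevity, an assumption the paper does not specify):

```python
import numpy as np

def deconv_energy(S, I):
    """Deconvolution energy: sum_p (I_p - (1/9) sum_{q in N_p} s_q)^2 with
    a uniform 3x3 window N_p. Boundaries wrap around (an assumption)."""
    blur = np.zeros_like(I, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blur += np.roll(np.roll(S.astype(float), dy, axis=0), dx, axis=1)
    return float(((I - blur / 9.0) ** 2).sum())

# A perfect binary reconstruction of a constant image has zero energy:
print(deconv_energy(np.ones((6, 6)), np.ones((6, 6))))   # 0.0
```

Expanding the square shows that every pair of pixels sharing a window receives a positive pairwise coefficient ($+2/81$), consistent with the observation above that all pairwise interactions are supermodular.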
Segmentation with Repulsion
---------------------------
In this section we consider segmentation with attraction and repulsion pairwise potentials. Adding repulsion is similar to correlation clustering [@Bansal02correlationclustering], where data points either attract or repulse each other. Using negative repulsion in segmentation can avoid the bias of the submodular length regularizer toward [*short-cutting*]{}, whereby elongated structures are shortened to avoid a high length penalty. Figure \[fig:repulsion\] (top-left) shows an example of an angiogram image with elongated structures. We use a 16-neighborhood system and the pairwise potentials are defined as follows: $$\omega(p,q) = \frac{-\Delta (p,q)+c }{\mbox{dist(p,q)}}.$$ Here $\mbox{dist(p,q)}$ denotes the distance between image pixels $p$ and $q$ and $\Delta (p,q)$ is the difference in their respective intensities (see pairwise potentials in Fig. \[fig:repulsion\], bottom-left). The constant $c$ is chosen so that neighboring pixels with similar intensities attract and repulse otherwise. Being supermodular, repulsion potentials make the segmentation energy more difficult to optimize, but they make it possible to extract thin elongated structures. To demonstrate the usefulness of “repulsion” potentials we also show segmentation results with graph cuts a la Boykov-Jolly [@BJ:ICCV01] where negative pairwise potentials were removed/truncated (top-right).
Curvature
---------
Below we apply our optimization method to curvature regularization. We focus on the curvature model proposed in [@elzehiry2010fast]. The model is defined in terms of a 4-neighborhood system and accounts for 90-degree angles. In combination with appearance terms, the model yields a discrete binary energy that has both submodular and non-submodular pairwise potentials. Originally, the authors of [@elzehiry2010fast] proposed using QPBO for optimization of the curvature regularizer. We show that our method significantly outperforms QPBO and other state-of-the-art optimization techniques, especially with large regularizer weights.
First we deliberately choose a toy example (white circle on a black background, see Fig. \[fig:LeoCurvature\_Circle\]) where we know what an optimal solution should look like. When using a 4-neighborhood system, as the weight of the curvature regularizer increases, the solution should minimize the number of 90-degree angles (corners) while maximizing the appearance terms. Therefore, when the weight of the curvature regularizer is high, the solution should look more like a square than a circle. Consider the segmentation results in Fig. \[fig:LeoCurvature\_Circle\]. With low curvature weight, i.e., $\lambda_{curv} = 0.1$, all compared methods perform equally well (see Fig. \[fig:LeoCurvature\_Circle\] top row). In this case appearance data terms are strong compared to the non-submodular pairwise terms. However, when we increase the curvature weight and set $\lambda_{curv} = 0.5$ or $2$, there is a significant difference between the optimization methods both in terms of the energy and the resulting solutions (see Fig. \[fig:LeoCurvature\_Circle\] middle and bottom).
Next, we selected an angiogram image example from [@elzehiry2010fast] and evaluated the performance[^7] of the optimization methods with two values of the regularizer weight, $\lambda_{curv}=19$ and $\lambda_{curv}=21$ (see Fig. \[fig:LeoCurvature\_Stenosis\]). Although the weight $\lambda$ did not change significantly, the quality of the segmentation deteriorated for all global linearization methods, namely QPBO, TRWS and LBP. The proposed methods LSA-TR and LSA-AUX appear to be robust with respect to the weight of the supermodular part of the energy.
Chinese Characters Inpainting
-----------------------------
Below we consider the task of in-painting in binary images of Chinese characters, *dtf-chinesechar* [@kappes-2013]. We used a set of pre-trained unary and pairwise potentials provided by the authors with the dataset. While each pixel variable has only two possible labels, the topology of the resulting graph and the non-submodularity of its pairwise potentials make this problem challenging. Figure \[fig:chinese\_examples\] shows two examples of inpainting. Table \[tab:chinese\] reports the performance of our LSA-TR and LSA-AUX methods on this problem and compares them to other standard optimization methods reported in [@kappes-2013], as well as to [*truncation*]{} of non-submodular terms. LSA-TR is ranked second, but runs three orders of magnitude faster.
[|p[1.7cm]{}|p[1.5cm]{}|p[1.3cm]{}|c|c|]{}
Alg. Name & Mean Runtime & Mean Energy & \#best/100 & Rank\
MCBC&2053.89 sec &-49550.1&85&1\
BPS (LBP)$^*$ &72.85 sec &-49537.08&18&3\
ILP&3580.93 sec &-49536.59&8&6\
QPBO&0.16 sec &-49501.95&0&8\
SA&NaN sec &-49533.08&13&4\
TRWS&0.13 sec &-49496.84&2&7\
TRWS-LF&2106.94 sec &-49519.44&11&5\
Truncation& 0.06 sec&-16089.2&0&8\
LSA-AUX&0.30 sec&-49515.95&0& 8\
LSA-AUX-P&0.16 sec&-49516.63&0& 8\
LSA-TR&0.21 sec& -49547.61&35&2\

: Chinese characters in-painting database [@kappes-2013]. We tested three methods (at the bottom) and compared with other techniques (above) reported in [@kappes-2013]. \* - To the best of our knowledge, BPS in [@kappes-2013] is the basic sequential version of loopy belief-propagation without damping that we simply call LBP in this paper. \[tab:chinese\]
Conclusions and Future Work {#sec:conclusion}
===========================
There are additional applications (beyond the scope of this paper) that can benefit from efficient optimization of binary non-submodular pairwise energies. For instance, our experiments show that our approach can improve non-submodular $\alpha$-expansion and fusion moves for multilabel energies. Moreover, while our paper focuses on pairwise interactions, our approach naturally extends to high-order potentials that appear in computer vision problems such as curvature regularization, convexity shape priors, visibility and silhouette consistency in multi-view reconstruction. In the companion paper [@curvatureFTR] we apply our method to the optimization of a new, highly accurate curvature regularization model. The model yields an energy with triple-clique interactions, and our method achieves state-of-the-art performance.
Acknowledgments {#acknowledgments .unnumbered}
===============
We greatly thank V. Kolmogorov and our anonymous reviewers for their thorough feedback. We also thank the Canadian granting agency NSERC for its continued support.
[^1]: <http://vision.csd.uwo.ca/code/>
[^2]: Note that such transformations are up to a constant.
[^3]: Truncation is known to give low quality results, e.g. Fig.\[fig:repulsion\], Tab.\[tab:chinese\].
[^4]: [*Auxiliary functions*]{} are also called [*surrogate functions*]{} or [*upper-bounds*]{}. The corresponding approximate optimization technique is also known as the [*majorize-minimize*]{} principle [@Lange2000].
[^5]: By a reduction to the [*balanced cut*]{} problem.
[^6]: We used [*http://pub.ist.ac.at/$\sim$vnk/software.html*]{} code for SRMP and [*www.robots.ox.ac.uk/$\sim$ojw*]{} code for QPBO, TRWS, and LBP. The corresponding version of LBP is sequential without damping.
[^7]: For QPBO, we only run QPBO-I and do not use other post-processing heuristics as suggested in [@elzehiry2010fast], since the number of unlabeled pixels might be significant when the regularization is strong.
|
---
abstract: 'Two-dimensional kinematics of the central region of M83 (NGC5236) were obtained through three-dimensional NIR spectroscopy with the Gemini South telescope. The spatial region covered by the integral field unit ($\sim 5\arcsec\times13\arcsec$ or $\sim90\times240\,$pc) was centered approximately at the center of the bulge isophotes and oriented SE-NW. The Pa${\beta}$ emission at half-arcsecond resolution clearly reveals spider-like diagrams around three centers, indicating the presence of extended masses, which we describe in terms of Satoh distributions. One of the mass concentrations is identified as the optical nucleus (ON), another as the center of the bulge isophotes, similar to the CO kinematical center (KC), and the third as a condensation hidden at optical wavelengths (HN), coincident with the largest lobe in 10$\,\mu$m emission. We run numerical simulations that take into account ON, KC and HN and four more clusters, representing the star-forming arc at the SW of the optical nucleus. We show that ON, KC and HN suffer strong evaporation and merge in 10-50 Myr. The star-forming arc is scattered in less than one orbital period, also falling into the center. Simulations also show that tidal stripping boosts the external shell of the condensations to their escape velocity. This fact might lead to an overestimation of the mass of the condensations in kinematical observations with spatial resolution smaller than the condensations’ apparent sizes. Additionally, the two ILR resonances embracing the chain of HII regions, claimed by different authors, might not exist, given the similarity of the masses of the different components and the fast dynamical evolution of the central 300pc of M83.'
author:
- 'Irapuan Rodrigues, Horacio Dottori, Rubén J. Díaz, María P. Agüero and Damián Mast'
title: Kinematics and Modeling of the Inner Region of M83
---
Introduction
============
In spite of the peculiar structure of its circumnuclear region, M83 presents a well-defined bulge that can be fitted by de Vaucouleurs’ law in an annular region between radii of $\approx 10\arcsec$ and $40\arcsec$, corresponding to $\approx 180$ and $720$ pc. The dramatic inward behavior begins approximately at a radius of 180-190pc, where the main dust lanes associated with the galaxy bar spiral into a couple of (J-K) rings that, according to @1998AJ....116.2834E, might coincide with two inner Lindblad resonances. These rings embrace an arc of about twenty star-forming condensations comparable to 30Doradus in the Large Magellanic Cloud [@1993BAAS...25..840H], clustered in four main knots [@1998AJ....116.2834E Figure 4]. The center of this arc is $\approx 6.7\arcsec$ to the SW of the condensation known as the [*optical*]{} or [*visible nucleus*]{} ([**ON**]{}) of NGC5236. ON marks the center of the @1998AJ....116.2834E inner nuclear ring, but not that of the outer nuclear ring, which is shifted $2.5\arcsec$ to the SW of the optical nucleus. K-band spectroscopy suggests a dynamical center coincident with the center of symmetry of the K-band external isophotes ([**KC**]{}), which is shifted $4\arcsec$ to the SW of the optical center [see Figure3 in @2006AJ....131.1394M and our Figure \[fig:blowup\]]. This dynamical center is also suggested, although with smaller spatial resolution, by 2-D CO spectroscopy [@2004ApJ...616L..59S]. The region of Thatte’s (2000) isophotal study coincides with that analyzed by @1981ApJ...243..716J, and is dominated by the bulge light.
{width="90.00000%"}
The disk-like appearance of the CO kinematics interior to 300pc [@2004ApJ...616L..59S] indicates that the bar perturbed disk survives well inside the galaxy bulge. How deep it extends into the nucleus is a question discussed in this paper. The presence of the hidden condensation ([**HN**]{}) [@2006ApJ...652.1122D], which is larger and more massive than the optical nucleus, raises doubts about the true nature and the role of the optical nucleus. Furthermore, we also cast doubts on the existence of a double ILR resonance embracing the arc of HII regions.
{width="90.00000%"}
To better understand the richness of phenomena in the central region of M83 we performed 3D near-infrared spectroscopy at subarcsecond resolution using the GEMINI+CIRPASS configuration [@2006ApJ...652.1122D]. We observed a region including the ON, KC and HN. Our study is complemented by N-body simulations to test the stability of the M83 central region, including the star-forming arc. In Section \[sec:obs\] we present our observations; in Section \[sec:kin\] we discuss the central kinematics and the three most important condensations, which present differentiated kinematics. Numerical simulations are presented in Section \[sec:simul\], and in Section \[sec:conclu\] we present our conclusions.
[cccccc]{} & RA & Dec. & R$_{max}$ & V$_{peak}$ & $\sigma$\
& (J2000) & (J2000) & \[pc\] & \[kms$^{-1}$\] & \[kms$^{-1}$\]\
KC & $13^h37^m0.56^s$ & $-29^{\circ}\ 51\arcmin\ 56.9\arcsec $ & $65\pm5$ &$68\pm8$ & $82\pm10$\
ON & $13^h37^m0.98^s$ & $-29^{\circ}\ 51\arcmin\ 55.5\arcsec $ & $8\pm1$ & $46\pm9$ & $110\pm10$\
HN & $13^h37^m0.46^s$ & $-29^{\circ}\ 51\arcmin\ 53.6\arcsec $ & $45\pm8$ &$58\pm11$ & $96\pm10$\
Observations {#sec:obs}
============
We used the [*Cambridge IR Panoramic Survey Spectrograph*]{} [@2000SPIE.4008.1193P; @2004SPIE.5492.1135P] installed on the GEMINI South telescope in March 2003. These observations, performed in queue mode, were taken with an integral field unit (IFU) sampling of $0.36\arcsec$ (6.4pc) with an array of size $\approx 5\arcsec \times 13\arcsec$. The array of 490 hexagonal doublet lenses attached to fibers provides an area filling factor of nearly 100%. The IFU was oriented at PA$=120^{\circ}$ and centered slightly to the NW of the kinematical center according to the @2002BAAA...45...74M position. In Figure \[fig:blowup\], a sketch of the detector position is shown. The field was exposed for 45 minutes and covers a spectral range between $1.2-1.4\,\micron$, including Pa$\beta\,1.3\,\micron$ and \[FeII\]$\,1.26\,\micron$, with a spectral resolution of approximately 3200. The seeing was about 0.5$^{\prime\prime}$; therefore, the focal plane was slightly sub-sampled by the configuration constraint.
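As a consistency check on the quoted plate scale (0.36 arcsec per lens corresponding to 6.4 pc), the angular-to-physical conversion fixes the assumed distance to M83. A small sketch; the distance value is inferred here, not stated in the text:

```python
import math

ARCSEC_IN_RAD = math.pi / (180.0 * 3600.0)

def pc_per_arcsec(distance_pc):
    """Projected physical scale of 1 arcsec at a given distance
    (small-angle approximation)."""
    return distance_pc * ARCSEC_IN_RAD

# Distance implied by the quoted sampling (0.36 arcsec = 6.4 pc):
implied_distance_pc = 6.4 / 0.36 / ARCSEC_IN_RAD   # roughly 3.7 Mpc
```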
The data were reduced using IRAF (distributed by the NOAO), ADHOC (2D kinematics analysis software developed by Marseille’s Observatory) and SAO (spectra processing software developed by the Special Astrophysical Observatory, Russia). More details on the reduction are presented in @2006NewAR..49..547D and the general technique is thoroughly discussed by @1999ApJ...512..623D and @2002BAAA...45...74M.
2-D kinematics of M83 central region at less than 10pc spatial resolution {#sec:kin}
=========================================================================
In Figure \[fig:blowup\], we present the composite JHK 2MASS image of M83. The inset shows a blow-up of the central 400$\times$400pc, where we mark the positions of ON, KC and HN. The astrometric positions of the condensations are given in Table \[tab:kindat\] [the astrometry is discussed in @2006NewAR..49..547D]. In Figure \[fig:velmap\], we show the brightness distribution at the Pa$\beta$ continuum, the Pa$\beta$ radial isovelocity contours, and the Pa$\beta$ velocity dispersion map. The velocity dispersion is shown as the line FWHM (Table \[tab:disks\]). In spite of the complex kinematics shown by the 2-D radial velocity map, we can distinguish ordered radial velocities in the form of spider diagrams around ON, KC and HN. As is well known, a spider diagram indicates disk-like motions. The sizes of the disks around the three condensations, their maximum radial velocities along the line of the nodes, and their velocity dispersions at their centers of symmetry are presented in Table \[tab:kindat\].
The ionized gas kinematics around KC {#sec:kc}
------------------------------------
![Central part of the @2004ApJ...616L..59S CO (2–1) velocity map (brown contours) superimposed on the Pa${\beta}$ isovelocity maps. []{data-label="fig:zerovel"}](figure03){width="45.00000%"}
KC is the most extended of the three disks. Its center coincides, up to the margin of error, with the center of symmetry of the bulge isophotes, which in turn coincides with that of the CO kinematical center [@2004ApJ...616L..59S]. We recall that the GEMINI spatial resolution is at least twenty times higher than that of Sakamoto’s CO observation.
The major axis of the spider diagram is at P.A.$=120^{\circ}$. From the spider diagram major-to-minor axis ratio, we derive an inclination of $68^{\circ} \pm 8
^{\circ}$ for the gaseous disk. We have to be cautious at this point because the minor axis velocity field seems to be perturbed. The same situation was faced by @2004ApJ...616L..59S, who also quote two highly discrepant possible values for the inclination of the 300pc CO disk. These authors finally chose the inclination angle of the large scale galactic disk based on kinematical reasons. In our case, the best two-dimensional velocity model fitting to the observed velocity field is attained for an inclination angle of $25^{\circ} \pm
8^{\circ}$ (see Section \[sec:obs-mod\]), which coincides, up to the margin of error, with the small inclination of the large scale M83 disk. A small angle also brings the innermost rotation curves into better agreement with that of Sakamoto’s 300pc disk (see Fig.3). The inflow of matter along the bar into the central region might be mimicking the disks’ high inclination. The Pa$\beta$ disk around the KC and Sakamoto’s CO disk present different orientations (see Figure \[fig:zerovel\]), probably pointing to a phenomenon of transition between $x_2$- and $x_1$-like orbits, as quoted by @2004ApJ...616L..59S, or to a strong perturbation of the central structure, as discussed by @2006ApJ...652.1122D.
The spider diagram (see Figure \[fig:velmap\]) can be traced up to R$_{max}\approx$65pc but it is strongly perturbed to the E at the receding extreme, and to the W at the approaching extreme. The KC rotation curve was obtained from approximately 60 radial velocity points distributed along the whole disk. From this dataset we obtained 13 mean rotation velocities between 4pc and 65pc. Each velocity point entering the mean was weighted by $w(\alpha)=\,|cos(\alpha)|$, where $\alpha$ is the position angle of the line joining the point to the disk center with respect to the line of the nodes. The disk around KC can be described in terms of a Satoh-like spheroid with an effective radius of R$_{KC}=38 \pm 8$pc and a mass inside R$_{max}$ of M$_{KC}\approx (60.0\pm 20) \times 10^6$M$_\odot$ (for a disk inclination of $25^{\circ}$). We note that an inclination of $68^{\circ}$ for the disk around KC would give a mass of $\approx (17\pm 2) \times 10^6$M$_\odot$, similar to that deduced spectroscopically from the $^{12}$CO bandhead velocity broadening.
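The weighted binning described above can be sketched as follows. This is a synthetic illustration under our own assumptions (the `binned_rotation_curve` helper and the minor-axis cut are ours, not the authors' pipeline):

```python
import numpy as np

def binned_rotation_curve(r, v_los, alpha, inclination_deg, bin_edges):
    """Deproject line-of-sight velocities to circular speed and average
    them in radial bins, weighting each point by w = |cos(alpha)|, with
    alpha the angle from the line of nodes.  Points near the minor axis
    are both down-weighted and numerically unstable, so we drop them."""
    sin_i = np.sin(np.radians(inclination_deg))
    w = np.abs(np.cos(alpha))
    keep = w > 0.2                      # our cut, not from the text
    v_rot = np.abs(v_los[keep] / (sin_i * np.cos(alpha[keep])))
    r_k, w_k = r[keep], w[keep]
    centers, means = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (r_k >= lo) & (r_k < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            means.append(np.average(v_rot[sel], weights=w_k[sel]))
    return np.array(centers), np.array(means)
```

For a synthetic flat rotation curve the binned means recover the input circular speed exactly, which is the sanity check one would run before applying this to noisy data.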
The velocity dispersion in the non-resolved central part is of the order of 80kms$^{-1}$ (see middle panel in Figure \[fig:velmap\]). Assuming that this velocity dispersion is due to a central mass concentration, namely a putative BH, we determined the BH mass upper limit, M$_{BH}$, by adding Keplerian rotation curves convolved with a 9 pc Gaussian to that of the Satoh disk, and constraining the resulting Satoh+Kepler rotation curve with the central velocity dispersion and the errors of the measured velocity points. The result of this procedure is shown in Figure \[fig:bhrot\] for M$_{BH}=0.2, 1.0$ and $2.0\times10^6$M$_{\odot}$. The best fit is quoted in Table \[tab:disks\].
This analysis points to KC as the true nucleus of M83.
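The Satoh+Kepler construction used for the BH mass limit can be sketched as below. Two stated assumptions: we stand in a Plummer curve for the Satoh component, and we treat the 9 pc of the Gaussian as its FWHM (the text does not specify which width is meant):

```python
import numpy as np

G_PC = 4.30091e-3  # G in pc * (km/s)^2 / Msun

def v_kepler(r_pc, m_bh):
    """Keplerian circular speed (km/s) around a point mass m_bh (Msun)."""
    return np.sqrt(G_PC * m_bh / r_pc)

def v_plummer(r_pc, m, a):
    """Plummer-sphere circular speed, a stand-in for the Satoh component."""
    return np.sqrt(G_PC * m * r_pc**2 / (r_pc**2 + a**2)**1.5)

def beam_smeared(v, r, fwhm_pc=9.0):
    """Crude Gaussian smoothing along radius to mimic beam convolution."""
    sigma = fwhm_pc / 2.3548
    w = np.exp(-0.5 * ((r[:, None] - r[None, :]) / sigma) ** 2)
    return (w * v[None, :]).sum(axis=1) / w.sum(axis=1)

r = np.linspace(1.0, 65.0, 65)
v_disk = v_plummer(r, 60.0e6, 38.0)   # KC mass and radius from the text
v_total = beam_smeared(np.sqrt(v_disk**2 + v_kepler(r, 1.0e6)**2), r)
```

The components add in quadrature because circular speeds combine through the summed radial forces; the smeared curve is then compared against the measured points and the central dispersion to bracket M$_{BH}$.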
[llll]{} & KC & ON & HN\
PA $[^{\circ}]$ & 120 & 120 & 64\
i $[^{\circ}]$ & $25.0\pm8$ & $60\pm10$ & $62\pm8$\
R$_{eff}$ \[pc\] & $38.0\pm4$ & $8\pm1$ & $33\pm3$\
$M_K$ \[$10^6 M_{\odot}$\] & $60.0\pm20$ & $4\pm2$ & $20\pm7$\
$M_{BH}$ \[$10^6 M_{\odot}$\] & 0.2 – 1.0 & $\le 1.0$ & 0.2 – 1.0\
{width="48.00000%"} {width="48.00000%"}
{width="48.00000%"} {width="48.00000%"}
The ionized gas kinematics around HN {#sev:hn}
------------------------------------
Located at R$=7.8\arcsec \pm 0.7\arcsec$ to the NW of KC (Fig. 2), the spider diagram of the ionized gas around HN presents a P.A.$=64^{\circ}$. The form of the spider diagram (open legs) points to a more concentrated and more homogeneous distribution than that of the KC. It is probably a cannibalized dwarf satellite of M83, as discussed by @2006NewAR..49..547D. To derive the rotation curve as for KC, we used a distribution of more than 50 points along the whole disk, which provided 13 mean rotation velocities up to R$=50$pc. In Tables \[tab:kindat\] and \[tab:disks\] we present the kinematical data for HN and the derived Satoh disk parameters. The mass of HN was also derived from the rotation curve, as well as that of the putative Black Hole at its center (see Table \[tab:disks\]), through the velocity dispersion, following the procedure previously outlined for KC.
The ionized gas kinematics around ON {#sec:on}
------------------------------------
ON is traditionally referred to as the off-centered optical nucleus of M83. @2004ApJ...616L..59S proposed that it could be a cannibalized satellite.
From photometric data, @1998AJ....116.2834E derived a mass of $4 \times 10^{6}$ M$_\odot$ and of $2.5 \times 10^{6}$ M$_\odot$. From the $^{12}$CO band-head 2.293$\mu m$ velocity dispersion, a kinematical mass of $1.3 \times 10^7$ M$_\odot$ within 5.4 pc was derived, assuming that the optical nucleus is virialized. The mass of ON derived photometrically disagrees by a factor of 4 to 5 with that derived spectroscopically, probably indicating a system with a non-virialized periphery, as discussed in Section \[sec:simul\] in connection with the many-body interaction. The disk around ON is the smallest of the three disks discussed in this paper. The ON kinematical center is slightly shifted to the SW of the ON continuum peak at $1.3\,\mu m$, as can be seen in Figure \[fig:velmap\]. The velocity gradient across ON is determined from data near the spatial resolution limit. We are probably dealing with problems of oversampling. The rotation curve with mean errors is shown in Figure \[fig:rot\]. The mass of ON was also derived from the rotation curve, as well as that of the putative Black Hole at its center (see Table \[tab:disks\]), using the velocity dispersion and the procedure previously outlined for KC. Our determination of the mass agrees very well with that obtained from photometric data.
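The virialized-nucleus estimate quoted above amounts to $M \sim k\,\sigma^2 R / G$. A one-line check, where the order-unity prefactor $k$ is our assumption (it depends on the density profile):

```python
G_PC = 4.30091e-3  # G in pc * (km/s)^2 / Msun

def virial_mass(sigma_kms, r_pc, k=1.0):
    """Virial mass estimate M ~ k * sigma^2 * R / G; the order-unity
    prefactor k depends on the assumed density profile."""
    return k * sigma_kms**2 * r_pc / G_PC

m_on = virial_mass(110.0, 5.4)   # sigma for ON (Table 1), R from the text
```

With $k=1$ this gives $\approx 1.5\times10^7$ M$_\odot$, the same order as the quoted kinematical mass of $1.3\times10^7$ M$_\odot$.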
Fitting the whole central radial velocity field of the gas {#sec:obs-mod}
----------------------------------------------------------
The data in Tables \[tab:kindat\] and \[tab:disks\] allow us to generate a composed radial velocity map model (Figure \[fig:obs-mod\]a), to be compared with the observed one. The fitting of KC was the most critical due to the difficulty in the determination of its disk inclination angle (\[sec:kc\]). Several fits were made, keeping constant the inclination angles of the disks around ON and HN and allowing the one associated with KC to vary from $25^{\circ}$ to $60^{\circ}$. We verified that the disk-like residual around KC decreases dramatically when approaching smaller inclinations. Finally, the smallest residuals are obtained for an inclination of $25^{\circ}\pm 5^{\circ}$. The final map is shown in Figure \[fig:obs-mod\]b. This inclination is in good agreement with the inclination of the large scale M83 disk and that of the inner 300pc, derived from CO kinematics [@2004ApJ...616L..59S], as can be seen in Figure \[fig:rot\]. This again points to a continuity of the galactic disk kinematics from a kiloparsec scale to the smaller scales studied in this paper, although variations in the position angles of the lines of nodes are observed at 300pc and at scales smaller than 100pc as well (see Figure \[fig:zerovel\]).
Figure \[fig:obs-mod\]b shows that around KC, ON and HN the residuals are mainly at the noise level, indicating that the interpretation of the observed velocity map as resulting from the superposition of three disks is, from a qualitative point of view, essentially correct. Nevertheless, deviations from rotation point to a more complex situation and indicate that the mass estimations based on rotation curves are only approximate. As an example, we detect anomalous blueshifted kinematics at the East extreme (upper right) above ON, and redshifted kinematics to the West of HN. These regions coincide with the high velocity dispersion zones in Figure \[fig:velmap\]. These might indicate regions of inflow of gas along the bar falling into the central region.
The fit of the rotation curve, from the scale of tens to hundreds of parsecs (Fig.3), does not seem to confirm the existence of double ILR, as claimed by @1998AJ....116.2834E and @2004ApJ...616L..59S.
(a) (b)\
![(a) Velocity map model obtained by superposing the disks around KC, HN and ON. (b) Residual velocity map resulting from the subtraction of the modeled velocity map from the observed one.[]{data-label="fig:obs-mod"}](figure06a "fig:"){width="23.00000%"} ![(a) Velocity map model obtained by superposing the disks around KC, HN and ON. (b) Residual velocity map resulting from the subtraction of the modeled velocity map from the observed one.[]{data-label="fig:obs-mod"}](figure06b "fig:"){width="22.46000%"}
{width="30.00000%"} {width="30.00000%"} {width="30.00000%"}
N-body simulations {#sec:simul}
==================
In order to understand the dynamical evolution of the M83 central region, we performed N-body simulations of a system composed of KC, HN, ON and four condensations representing the star-forming arc. Hernquist’s models were used for KC, HN and ON [@1993ApJS...86..389H]. We included stellar and gaseous components. The models were constrained by the bodies’ rotation curves. Figure \[fig:modrotcurve\] shows the model rotation curves adjusted in order to fit the observational data. The model parameters are quoted in Table \[tab:models\]. The four condensations on the star forming arc were represented by Plummer’s models, with 1000 particles each, with positions and masses corresponding to regions 2 to 5 in Figure8 of @1998AJ....116.2834E, where: region 2 has M$_2 = 1.3 \times 10^6$ M$_\odot$; regions 3 and 4 have M$_3 =$ M$_4 = 1.4 \times 10^6$ M$_\odot$; and region 5 has M$_5 = 1.8 \times 10^6$ M$_\odot$. All Plummer models have a core radius of 1.5pc and a cutoff radius of 15pc.
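Sampling one of the Plummer condensations with the quoted core and cutoff radii can be done by inverting the enclosed-mass fraction. This is the standard inverse-CDF technique, not necessarily the initial-conditions generator the authors used:

```python
import numpy as np

def sample_plummer(n, a=1.5, r_cut=15.0, seed=0):
    """Draw n positions from a Plummer sphere with core radius a and hard
    cutoff r_cut (values in pc, from the text), by inverting the enclosed
    mass fraction M(<r)/M = r^3 / (r^2 + a^2)^(3/2)."""
    rng = np.random.default_rng(seed)
    u_max = r_cut**3 / (r_cut**2 + a**2) ** 1.5   # mass fraction inside r_cut
    u = np.maximum(rng.uniform(0.0, u_max, n), 1e-12)
    r = a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)
    cos_t = rng.uniform(-1.0, 1.0, n)             # isotropic directions
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.column_stack([r * sin_t * np.cos(phi),
                            r * sin_t * np.sin(phi),
                            r * cos_t])
```

Inverting $u = r^3/(r^2+a^2)^{3/2}$ gives $r = a/\sqrt{u^{-2/3}-1}$; restricting $u$ to the mass fraction inside $r_{cut}$ enforces the hard cutoff.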
A large set of simulations was run in order to refine the models and orbital parameters used in the final simulation. The orbital parameters were chosen considering that M83 rotates clockwise. Inclinations and position angles are as in Table 2. Here we suppose that clusters 2 to 5 on the star-forming arc, as well as ON, are in circular clockwise (prograde) orbits in the plane of the external disk of M83. According to [@2006ApJ...652.1122D], HN is the remnant of a captured galaxy that triggered the star formation along the arc of HII regions. The form of the arc and its position with respect to HN lead us to adopt a circular counter-clockwise orbit for HN.
The simulations were performed with Gadget2 [@2005MNRAS.364.1105S]. A total of 91,552 particles was used, and the simulations were run for 300 Myr, starting at the present time configuration. We considered starting at a previous epoch, before the formation of the arc of HII regions, but the complexity of the environment at the M83 central region and the number of parameters required to qualitatively take into account all possible configurations in the past would be a matter for a paper especially devoted to that study.
The simulation (see Figure \[fig:sim\]) shows that the galaxy nucleus (KC), the optical nucleus (ON) and the hidden nucleus (HN) would form a single massive core in about 16 Myr. All Plummer models (clusters 2 to 5) fall into the nucleus in 130 Myr.
[l c c c]{} &[**KC model**]{} &[**HN model**]{} &[**ON model**]{}\
Number of particles in disk & 16384 & 8192 & 4096\
Disk mass & 15.0 & 13.0 & 3.5\
Disk radial scale length & 17.0 & 13.0 & 4.5\
Disk vertical scale thickness & 1.7 & 0.8 & 0.4\
Reference radius R$_{ref}$ & 42.5 & 30 & 8.0\
Toomre Q at R$_{ref}$ & 1.5 & 1.5 & 1.5\
Number of particles in gas disk & 16384 & 8192 & 4096\
Gas disk mass & 1.5 & 1.5 & 0.35\
Gas disk radial scale length & 17.0 & 13.0 & 4.5\
Gas disk vertical scale thickness & 1.7 & 0.8 & 0.4\
Toomre Q at R$_{ref}$ & 1.5 & 1.5 & 1.5\
Number of particles in bulge & 512 & 512 & 512\
Bulge mass & 1.3 & 0.5 & 0.3\
Bulge radial scale length & 3.4 & 4.0 & 1.0\
Number of particles in spherical component & 16384 & 8192 & 4096\
Halo mass & 150.0 & 40.0 & 7.0\
Halo cutoff radius & 200.0 & 80.0 & 45.0\
Halo core radius & 17.0 & 25 & 4.5\
[**Note:**]{} Simulations were done in a system of units with G=1. Model units scale to physical ones such that: length unit is 1pc, velocity unit is 65.58kms$^{-1}$, mass unit is $1 \times 10^{6} \mathrm{M}_\odot$ and time unit is 14909.92yr.
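The quoted unit system is internally consistent: with $G=1$, choosing 1 pc and $10^6\,\mathrm{M}_\odot$ as length and mass units fixes the velocity and time units. A quick numerical check:

```python
import math

G_SI = 6.674e-11           # m^3 kg^-1 s^-2
PC = 3.0857e16             # m
MSUN = 1.989e30            # kg
YR = 3.156e7               # s

L = 1.0 * PC               # length unit
M = 1.0e6 * MSUN           # mass unit
V = math.sqrt(G_SI * M / L)   # velocity unit implied by G = 1
T = L / V                     # time unit implied by G = 1

print(V / 1e3, T / YR)  # approx. 65.6 km/s and 1.49e4 yr, as stated
```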
Considering the range of uncertainties in the orbit determination, we can state that this massive core would finally settle as the new nucleus of M83 in a few tens of Myr, implying a net growth of the central galactic mass. Furthermore, the whole star formation and nuclei merging event would last about the time of a global galactic revolution (about 150 Myr at the radius of 5 kpc).
Simulations also show that tidal stripping of the condensations boosts the velocity field of the external shell to escape velocity, which, if mistaken for the dispersion velocity of systems in equilibrium, might lead to an overestimation of their masses. This seems to be the case for the disagreement between the photometric and kinematic masses derived for ON.
The circumnuclear HII regions arc is far from being a stable system. In fact, it will spread out in an orbital time and be swallowed by the newly refurbished nucleus in a period slightly longer than the time of merging of ON, KC and HN. Therefore, the claimed existence of two inner Lindblad resonances is difficult to sustain.
{width="32.00000%"} {width="32.00000%"} {width="32.00000%"} {width="32.00000%"} {width="32.00000%"} {width="32.00000%"}
Conclusion {#sec:conclu}
==========
We performed CIRPASS 2D spectroscopy at Pa$\beta$ and its continuum with the GEMINI-S telescope. Pa$\beta$ spectroscopy allows for penetration of the dust in the direction of KC and HN, and for determination of the ionized gas kinematics in the central $\approx 5\arcsec\times13\arcsec$ embracing ON, KC and HN. The kinematics can mainly be explained in terms of three disks around each one of the condensations, KC, HN and ON. Perturbations are visible when the three-disk model is subtracted from the observations. The line of the nodes of a tens of parsecs-scale disk around KC is rotated with respect to that of the hundreds of parsecs-scale CO one, which in turn is rotated with regard to the line of the nodes of the outer galactic disk. They are most probably coplanar up to the margin of error. Nevertheless, a relative warp of the galactic disk inwards of up to $40^{\circ}$ can not be ruled out.
Besides the mass of the condensations KC, HN and ON, we inferred upper mass limits for the putative BH that could be associated to them.
Our simulation shows that the three nuclei, KC, HN and ON, will merge in a few tens of Myr. The disruption of the circumnuclear HII regions arc in an orbital time shows that even if ILR do exist, this is not enough to ensure the stability of the HII regions’ orbits.
We are witnessing a dramatic change in the central region of M83, the nearest galaxy with strong nuclear star formation, which may in a few Myr change the mass at its kinematical center by a factor of 2, as well as the global aspect at a hundred-parsec scale. Although HN was interpreted as a captured dwarf satellite, the metamorphosis of the M83 central region is independent of the origin of HN.
I.R. and H.D. acknowledge support from CNPq (Brazil), I.R. also acknowledge support from FAPESP (Brazil). R.D., M.A., and D.M. acknowledge support of CONICET grant PIP 5697. We acknowledge support from Secyt-Capes Agreement, project 035/07. We acknowledge the Instituto Nacional de Pesquisas Espaciais (INPE/MCT, Brazil) for providing computer time for part of the simulations. Some simulations were also run at IF-UFRGS, Brazil. The Gemini Observatory is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: NSF (USA), STFC (United Kingdom), NRC (Canada), ARC (Australia), MINCYT (Argentina), CNPq (Brazil) and CONICYT (Chile). This publication makes use of data products from the 2-MASS (Two Micron All Sky Survey), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the NASA and the NSF.
|
---
abstract: 'Gamma ray bursts (GRBs) are thought to originate from highly relativistic jets. The fireball model predicts internal shocks in the jets, causing magnetic field to be amplified & particles to be accelerated. We model the effects of an asymmetric density configuration for an internal plasma collision in a quasi-parallel magnetic field. We measured electron acceleration & found that a tenuous population of electrons is accelerated to Lorentz factors of $\sim$ 300 - close to energy equipartition with ions. We found that the filaments did not remain static, but were deflected by the Lorentz force & rolled up into small vortices, which themselves merge to form a larger current vortex.'
address:
- 'Dublin Institute for Advanced Studies, Ireland, gmurphy@cp.dias.ie'
- ' Linköping University, SE-60174 Norrköping, Sweden, Mark.E.Dieckmann@itn.liu.se'
- 'Dublin Institute for Advanced Studies, ld@cp.dias.ie'
author:
- 'GARETH C. MURPHY'
- 'MARK E. DIECKMANN'
- 'LUKE O’C DRURY'
title: 'FIELD AMPLIFICATION, VORTEX FORMATION, & ELECTRON ACCELERATION IN A PLASMA PROTOSHOCK: EFFECT OF ASYMMETRIC DENSITY PROFILE '
---
Introduction
============
Prompt emissions of the ultrarelativistic GRBs are probably the most energetic radiative events in the universe. Since the first observations of GRBs, thousands have been detected. They all share a common signature of energetic non-thermal radiation attributed to relativistic motion of electrons in strong magnetic fields. The fireball model predicts internal shocks in the jets, causing magnetic field to be amplified & particles to be accelerated [@Meszaros:1992xq; @Spitkovsky:2008rm]. The details of the underlying physical mechanism that causes the prompt emissions are still unknown. In particular, how magnetic field is spontaneously generated, & how electrons are energised to relativistic speeds & injected into the Fermi mechanism, is under debate. It is therefore relevant to use numerical simulations to probe the behavior of magnetized shocks, to see if a robust mechanism to self-consistently amplify the field may be found. Current models focus on exploiting the symmetry of uniform-density plasma cloud collisions; in this work, we extend the survey to unequal-density plasma clouds.
[[Two plasma clouds, each consisting of ions and electrons with the mass ratio $m_i / m_e = 250$, collide at position $x=0$; Figure \[f1\] shows the simulation setup. Initial electron and ion number densities of the dense cloud both equal $n_1$ and those of the tenuous cloud equal $n_2 = n_1 / 10$. The velocity vectors of both clouds are antiparallel and aligned with $x$. The modulus $v_b$ of each cloud's velocity in the box frame gives the collision speed $v_c = 2v_b / (1+v_b^2/c^2) = 0.9c$. The dense cloud propagates to increasing values of $x$. All species have an initial temperature of 131 keV. The magnetic energy to baryonic energy ratio, $\sigma$, is 0.2 (0.02) in the tenuous (dense) cloud. The modulus of the convection electric field is $|E_{0y}| = v_b B_{0z}/c$. Both $\nabla \cdot B =0 $ and $\nabla \cdot E =0$ at $t=0$. The simulation box size $L_x \times L_y = 656 \lambda_i \times 6 \lambda_i$ is resolved by $2.8 \cdot 10^4 \times 256$ square grid cells. The dense (tenuous) plasma is resolved by 100 (50) particles per cell per species. ]{}]{}
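For reference, the box-frame speed $v_b$ implied by this collision speed follows from the relativistic velocity-addition formula; a quick consistency check in Python (units of $c=1$; this snippet is ours, not part of the simulation code):

```python
import math

def beam_speed(v_c):
    """Solve v_c = 2 v_b / (1 + v_b**2) for v_b (units of c).

    This is the relativistic addition of two equal and antiparallel
    cloud velocities; the physical root satisfies 0 < v_b < 1.
    """
    # v_c v_b**2 - 2 v_b + v_c = 0, so v_b = (1 - sqrt(1 - v_c**2)) / v_c
    return (1.0 - math.sqrt(1.0 - v_c**2)) / v_c

v_b = beam_speed(0.9)
gamma_b = 1.0 / math.sqrt(1.0 - v_b**2)
print(f"v_b = {v_b:.4f} c, gamma_b = {gamma_b:.4f}")  # each cloud moves at ~0.627 c
```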
![Left: Simulation setup. Right: formation of current vortex at collision boundary. \[f1\]](Murphy_1_f1 "fig:"){width="5cm"} [![Left: Simulation setup. Right: formation of current vortex at collision boundary. \[f1\]](Murphy_1_f2 "fig:"){width="7cm"}]{}
![Upper panel: specific $x$-momentum vs. displacement phase space for the ion clouds. Lower panel: Lorentz factor vs. displacement phase space for the electron species. \[f2\]](Murphy_1_f3){width="0.87\linewidth"}
![Upper panel: specific $x$-momentum vs. displacement phase space for the ion clouds. Lower panel: Lorentz factor vs. displacement phase space for the electron species. \[f2\]](Murphy_1_f4){width="0.87\linewidth"}
Results
=======
[**[Vortex formation in plasma shocks:]{}**]{} We carried out a large-scale, long-term 2D relativistic PIC simulation of a plasma collision. The peak Lorentz factor of the electrons is determined, along with the orientation and the strength of the magnetic field at the cloud collision boundary. The magnetic field component orthogonal to the initial plasma flow direction is amplified to values that exceed those expected from shock compression by over an order of magnitude. The forming shock is quasi-perpendicular due to this amplification, caused by a current sheet which develops in response to the differing deflection of the upstream electrons and ions incident on the magnetised shock transition layer. [[As the upstream plasma impacts on the strong magnetic field, electrons are deflected away from their original flow direction, while the ion reaction is much weaker. Current flows in the $(y,z)$ plane, which amplifies the magnetic field. The electrons fall behind the ions, because their velocity along $x$ is reduced. An $E_x > 0$ builds up, which tries to restore quasi-neutrality. Incoming upstream electrons are dragged by it across the magnetic field and accelerated to relativistic speeds. The distribution of electrons has a non-thermal tail with a power-law index $\sim 2$.]{}]{} A magnetic field structure resembling the cross section of a flux tube grows self-consistently in the current sheet of the shock transition layer (Fig. \[f1\]). Plasma filamentation develops behind the shock front, as well as signatures of orthogonal magnetic field striping, indicative of the filamentation instability. These magnetic fields convect away from the shock boundary and their energy density exceeds by far the thermal pressure of the plasma. Localized magnetic bubbles form. Energy equipartition between the ion, electron and magnetic energy is obtained at the shock transition layer. The electronic radiation can provide a seed photon population that can be energized by secondary processes (e.g. inverse Compton). We measured electron acceleration and found that a tenuous population of electrons is accelerated to Lorentz factors of $\sim$ 200, close to energy equipartition with the ions.
We found that the filaments did not remain static, but were deflected by the Lorentz force and rolled up into small vortices, which themselves merge to form a larger current vortex. In order to check the validity of the 2D approximation we also carried out a short-term simulation in 3D and found evidence of filamentation.
[**[Long term 1D simulations:]{}**]{} We carried out long-term simulations in 1D in order to verify that the acceleration and field amplification process is robust and to predict the SSC emission. In 1D, the simulation timescale was increased to 919 $\omega_p^{-1}$. In the 1D simulation, a reverse shock forms, which was not achieved in 2D due to the shorter timescales. The reverse shock is visible at $x=350$ in Fig. \[f1\]. Two smaller discontinuities are seen in the ion phase space plot at $x=400,500$.
[**[Synthetic observations:]{}**]{} In order to analyze the data, a model of the emission from the hot spot was created using the Compton Sphere Suite [@Georg2007]. In this one-zone steady-state model of secondary emission processes, a sphere of a given radius is permeated by a magnetic field. Photons are produced by relativistic electrons via the synchrotron process and are inverse-Compton scattered by the same electron population to $\gamma$-ray energies. The model assumes that synchrotron losses dominate over SSC losses. The Thomson scattering cross section is used. The values from the numerical simulation are used, but must be scaled up from the reduced mass ratio to the true value. In Fig. \[ssc\] we plot the derived spectrum. In future work we shall include inverse-Compton losses on the external photon field.
[**[Conclusions:]{}**]{} In 2D simulations the filamentation instability grows despite the high temperatures and the quasi-parallel background magnetic field (which tends to suppress transverse motion). The quasi-parallel field is rotated locally into a perpendicular field at the forward shock transition layer. Longer-term 1D simulations show similar magnetic field amplification and electron acceleration, while also evolving for sufficient time to see a reverse shock forming. The synthetic observations show emission in the MeV range.
![Predicted synchrotron self-Compton emission, assuming Doppler factor of 450, magnetic field of 20 nT, maximum Lorentz factor of 1500 using the Compton sphere suite.\[ssc\]](Murphy_1_f5){width="0.29\linewidth"}
Acknowledgments {#acknowledgments .unnumbered}
===============
GM and LD were funded by SFI RFP/ 08/PHY 1694. We thank H. Ruhl for use of the Plasma Simulation Code (PSC) and ICHEC for computing facilities and support. GM acknowledges HPC resources (Tier-0) provided by PRACE on Jugene, based in Germany.
[0]{} , P. & [Rees]{}, M. J. 1992, MNRAS, 257, P29
, A. 2008, ApJ, 682, L5
, G. C., [Dieckmann]{}, M. E., Bret, A.,& [Drury]{}, L.O’C. 2010[[a]{}]{}, A&A, 524, A84
, G. C., [Dieckmann]{}, M. E., & [Drury]{}, L.O’C. 2010[[b]{}]{}, IEEE Trans Plasma Sci., 38, 2985
Georganopoulos, M., Kazanas, D., Perlman, E., Wingert,B., Graff, P. & Castro, R. The Compton Sphere, 2007. http://jca.umbc.edu/ markos/cs/index.html
---
abstract: 'The Korkine-Zolotareff (KZ) reduction has been used in communications and cryptography. In this paper, we modify a very recent KZ reduction algorithm proposed by Zhang et al., resulting in a new algorithm, which can be much faster and more numerically reliable, especially when the basis matrix is ill conditioned.'
author:
-
- '[^1]'
bibliography:
- 'ref.bib'
title: A Modified KZ Reduction Algorithm
---
Lattice reduction, SVP, LLL reduction, KZ reduction, numerical stability.
Introduction
============
For any full column rank matrix $\A\in \mathbb{R}^{m\times n}$, the lattice $\mathcal{L}(\A)$ generated by $\A$ is defined by $$\label{e:latticeA}
\mathcal{L}(\A)=\{\A\z \;|\; \z\in \mathbb{Z}^n\}.$$ The columns of $\A$ form a basis of $\mathcal{L}(\A)$. For any $n\geq2$, $\mathcal{L}(\A)$ has infinitely many bases, and any two of them are connected by a unimodular matrix $\Z$, i.e., $\Z \in \mathbb{Z}^{n\times n}$ and $\det(\Z)=\pm1$. Specifically, for each given lattice basis matrix $\A\in \mathbb{R}^{m\times n}$, $\A\Z$ is also a basis matrix of $\mathcal{L}(\A)$ if and only if $\Z$ is unimodular; see, e.g., [@AgrEVZ02].
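To make the unimodular-transformation statement concrete, here is a small numerical illustration (a toy example of ours, not from the paper): multiplying a basis matrix by a unimodular $\Z$ yields another basis of the same lattice.

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.0, 2.0]])      # a basis matrix of the lattice L(A)
Z = np.array([[1, 3],
              [0, 1]])          # integer entries, det(Z) = 1: unimodular
B = A @ Z                       # then B is another basis matrix of L(A)

# Each basis expresses the other through an integer matrix, so both
# generate exactly the same set of lattice points:
M = np.linalg.solve(A, B)       # equals Z (integer)
W = np.linalg.solve(B, A)       # equals Z^{-1} (integer, because det(Z) = +-1)
print(np.allclose(M, np.round(M)) and np.allclose(W, np.round(W)))   # -> True
```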
The process of selecting a good basis for a given lattice, according to some criterion, is called lattice reduction. In many applications, it is advantageous if the basis vectors are short and close to orthogonal [@AgrEVZ02]. For more than a century, lattice reduction has been investigated by many people, and several types of reduction have been proposed, including the KZ reduction [@KZ73], the Minkowski reduction [@Min96], the LLL reduction [@LenLL82] and Seysen’s reduction [@Sey93], among others.
Lattice reduction plays an important role in many research areas, such as cryptography (see, e.g., [@HanPS11]), communications (see, e.g., [@AgrEVZ02; @WubSJM11]) and GPS (see, e.g., [@Teu96]), where the closest vector problem (CVP) and/or the shortest vector problem (SVP) need to be solved: $$\label{e:ILS}
\min_{\z\,\in\,\mathbb{Z}^n}\|\y-\A\z\|_2^2,$$ $$\label{e:SVP}
\min_{\z\,\in\,\mathbb{Z}^n\backslash \{\0\}}\|\A\z\|_2^2.$$
The most commonly used lattice reduction is the LLL reduction, which can be computed in polynomial time under some conditions and has some nice properties; see, e.g., [@ChaWX13] for some recent results. In some communication applications, one needs to solve a sequence of CVPs in which the $\y$’s are different but the $\A$’s are identical. In this case, instead of the LLL reduction, one usually uses the KZ reduction [@KZ73], since sphere decoding for solving these CVPs then becomes more efficient, although the KZ reduction costs more than the LLL reduction.
There are various KZ reduction algorithms, see, e.g., [@Hel85], [@Kan87], [@Sch87], [@AgrEVZ02]. Very recently, another KZ reduction algorithm was proposed in [@ZhaQW12]. Like in [@AgrEVZ02], the LLL-aided Schnorr-Euchner search strategy [@SchE94] is used to solve the $n-1$ SVPs in [@ZhaQW12]. But instead of using Kannan’s basis expansion method used in [@Kan87] and [@AgrEVZ02], it uses a new basis expansion method which is more efficient.
In this paper, we will propose a new KZ reduction algorithm, which improves the basis expansion method proposed in [@ZhaQW12]. Like [@ZhaQW12], we assume floating point arithmetic with fixed precision is used in the computation. Numerical results indicate that the modified algorithm can be much faster and more numerically reliable. The rest of the paper is organized as follows. In section \[s:reduction\], we introduce the LLL and KZ reductions. In section \[s:KZ\], we introduce our modified KZ reduction algorithm. Some simulation results are given in section \[s:sim\] to show the efficiency and numerical reliability of our new algorithm. Finally, we summarize this paper in section \[s:sum\].
In this paper, boldface lowercase letters denote column vectors and boldface uppercase letters denote matrices. For a matrix $\A$, let $a_{ij}$ be its $(i,j)$ element and $\A_{i:j,k:\ell}$ be the submatrix containing elements with row indices from $i$ to $j$ and column indices from $k$ to $\ell$. Denote $\e_1=[1,0,\ldots, 0]^T$, whose dimension depends on the context.
LLL and KZ Reductions {#s:reduction}
=====================
Assume that $\A$ has the QR factorization $$\label{e:QR}
\A=[\Q_1, \Q_2]\bmx \R\\ \0\emx,$$
where $[\underset{n}{\Q_1}, \underset{m-n}{\Q_2}]\in \Rmbm$ is orthogonal and $\R\in \Rnbn$ is upper triangular. After the QR factorization of $\A$, the LLL reduction [@LenLL82] reduces the matrix $\R$ in to $\bbR$ through the QRZ factorization: \[e:QRZ\] \^T = , where $\bbQ \in \mathbb{R}^{n\times n}$ is orthogonal, $\Z\in \mathbb{Z}^{n\times n}$ is unimodular and $\bbR\in \mathbb{R}^{n\times n}$ is upper triangular and satisfies the following conditions: $$\begin{aligned}
&|\br_{ik}|\leq\frac{1}{2} |\br_{ii}|, \quad i=1, 2, \ldots, k-1 \label{e:criteria1} \\
&\delta\, \br_{k-1,k-1}^2 \leq \br_{k-1,k}^2+ \br_{kk}^2,\quad k=2, 3, \ldots, n \label{e:criteria2}\end{aligned}$$ where $\delta$ is a constant satisfying $1/4 < \delta \leq 1$. The matrix $\A\Z$ is said to be LLL reduced. Equations and are referred to as the size-reduced condition and the Lovász condition, respectively.
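The two conditions above are easy to test numerically. The following sketch (our own helper, assuming a real upper-triangular input, not code from the paper) checks whether a matrix $\R$ is LLL reduced:

```python
import numpy as np

def is_lll_reduced(R, delta=0.75):
    """Check the size-reduced and Lovasz conditions on an upper-triangular R."""
    n = R.shape[1]
    d = np.abs(np.diag(R))
    for k in range(1, n):
        # size-reduced condition: |r_ik| <= |r_ii| / 2 for i < k
        if np.any(np.abs(R[:k, k]) > 0.5 * d[:k] + 1e-12):
            return False
        # Lovasz condition: delta * r_{k-1,k-1}^2 <= r_{k-1,k}^2 + r_{kk}^2
        if delta * R[k - 1, k - 1]**2 > R[k - 1, k]**2 + R[k, k]**2 + 1e-12:
            return False
    return True

R = np.array([[1.0, 0.5],
              [0.0, 0.9]])
print(is_lll_reduced(R, delta=1.0))   # -> True
```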
Similarly, after the QR factorization of $\A$, the KZ reduction reduces the matrix $\R$ in to $\bbR$ in , where $\bbR$ satisfies and $$\begin{aligned}
\label{e:criteria2KZ}
%|\br_{ii}|\leq \lambda_1(\bbR_{i:n,i:n}), \quad i=1, 2, \ldots, n,
| \br_{ii}| =\min_{\x\,\in\,\mathbb{Z}^{n-i+1}\backslash \{\0\}}\|\bbR_{i:n,i:n}\x\|_2, \ \
i=1,\ldots, n.\end{aligned}$$ The matrix $\A\Z$ is said to be KZ reduced. Note that if a matrix is KZ reduced, it must be LLL reduced for $\delta=1$.
A modified KZ reduction algorithm {#s:KZ}
==================================
In this section, we first introduce the KZ reduction algorithm given in [@ZhaQW12], then propose a modified algorithm.
The KZ Reduction Algorithm in [@ZhaQW12]
----------------------------------------
From the definition of the KZ reduction, the reduced matrix $\bbR$ satisfies both and . If the QRZ factorization in gives $\bbR$ satisfying , then we can easily apply size reductions to $\bbR$ such that holds. Thus, in the following, we will only show how to obtain $\bbR$ such that holds.
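The size-reduction step mentioned here is standard; a minimal sketch (ours, not the authors' Matlab code) of how it updates the upper-triangular $\R$ and the accumulated unimodular $\Z$:

```python
import numpy as np

def size_reduce(R, Z):
    """Apply size reductions so that |r_ik| <= |r_ii| / 2 for all i < k.

    R is upper triangular and Z accumulates the unimodular column
    operations; both are modified in place.  (A standard textbook
    routine, written here as a sketch.)
    """
    n = R.shape[1]
    for k in range(1, n):
        for i in range(k - 1, -1, -1):
            mu = round(R[i, k] / R[i, i])
            if mu != 0:
                R[:i + 1, k] -= mu * R[:i + 1, i]
                Z[:, k] -= mu * Z[:, i]

R = np.array([[2.0, 3.0],
              [0.0, 1.0]])
Z = np.eye(2)
size_reduce(R, Z)
print(R[0, 1])   # now |r_12| <= |r_11| / 2
```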
The algorithm needs $n-1$ steps. Suppose that at the end of step $k-1$, one has found an orthogonal matrix $\Q^{(k-1)} \in \Rbb^{n\times n}$, a unimodular matrix $\Z^{(k-1)}\in \Zbb^{n\times n}$ and an upper triangular $\R^{(k-1)}\in \Rbb^{n\times n}$ such that $$\begin{aligned}
\label{e:recursionk-1}
(\Q^{(k-1)})^T\R\Z^{(k-1)}=\R^{(k-1)}\end{aligned}$$ where, for $ i=1,\ldots, k-1$, $$\label{e:diagk-1}
|r^{(k-1)}_{ii}|= \min_{\x\,\in\,\mathbb{Z}^{n-i+1}\backslash \{\0\}}\| \R^{(k-1)}_{i:n,i:n} \x\|_2.$$ At step $k$, like [@AgrEVZ02], [@ZhaQW12] uses the LLL-aided Schnorr-Euchner search strategy [@SchE94] to solve the SVP: $$\begin{aligned}
\label{e:SVPk}
\x^{(k)}=\arg \min_{\x\,\in \mathbb{Z}^{n-k+1}\setminus \{\0\}}\|\R^{(k-1)}_{k:n,k:n}\x\|_2^2.\end{aligned}$$ Then, unlike other KZ reduction algorithms, [@ZhaQW12] finds the unimodular matrix by expanding $\R^{(k-1)}_{k:n,k:n}\x^{(k)}$ to a basis for the lattice $\{\R^{(k-1)}_{k:n,k:n}\x: \x\in \mathbb{Z}^{n-k+1}\}$. Specifically, [@ZhaQW12] first constructs a unimodular matrix $\widetilde{\Z}^{(k)}\in \Zbb^{(n-k+1)\times (n-k+1)}$ whose first column is $\x^{(k)}$, i.e., $$\label{e:zkxk}
\widetilde{\Z}^{(k)} \e_1 = \x^{(k)},$$ and then finds an orthogonal matrix $\widetilde{\Q}^{(k)}$ to bring $\R^{(k-1)}_{k:n,k:n}\widetilde{\Z}^{(k)}$ back to an upper triangular matrix $\widetilde{\R}^{(k)}$, i.e., $$\begin{aligned}
\label{e:recursionk-12}
(\widetilde{\Q}^{(k)})^T\R^{(k-1)}_{k:n,k:n}\widetilde{\Z}^{(k)}=\widetilde{\R}^{(k)}.\end{aligned}$$
Based on and , we define $$\begin{aligned}
\label{e:Qk}
\Q^{(k)}&=\Q^{(k-1)}\bsmx \I_{k-1}&\0\\ \0&\widetilde{\Q}^{(k)}\esmx,\\
\label{e:Rk}
\R^{(k)}&=\bsmx \R^{(k-1)}_{1:k-1, 1:k-1}&\R^{(k-1)}_{1:k-1, k:n}\widetilde{\Z}^{(k)}\\ \0&\widetilde{\R}^{(k)}\esmx,\\
\Z^{(k)}&=\Z^{(k-1)}\bsmx \I_{k-1}&\0\\ \0&\widetilde{\Z}^{(k)}\esmx.
\label{e:Zk}\end{aligned}$$ Here $\Q^{(k)}$ is orthogonal, $\R^{(k)}$ is upper triangular and $\Z^{(k)}$ is unimodular. Then, combining and , we obtain $$\begin{aligned}
\label{e:recursionk}
(\Q^{(k)})^T\R\Z^{(k)}=\R^{(k)}.\end{aligned}$$
At the end of step $n-1$, we get $\R^{(n-1)}$, which is just $\bbR$ in . In the following we explain why holds.
From and , it is easy to verify that for $i=1, \ldots, k$, $$\begin{aligned}
\label{e:rink}
\R_{i:n,i:n}^{(k)} = \bsmx \I_{k-i}&\0\\ \0& \widetilde{\Q}^{(k)}\esmx^T \R_{i:n,i:n}^{(k-1)} \bsmx \I_{k-i}&\0\\ \0&\widetilde{\Z}^{(k)}\esmx.\end{aligned}$$ Then, from and , for $i=1,\ldots, k-1$, $$\begin{aligned}
|r^{(k)}_{ii}|
&=|r^{(k-1)}_{ii}| = \min_{\x\,\in\,\mathbb{Z}^{n-i+1}\backslash \{\0\}}\| \R^{(k-1)}_{i:n,i:n} \x\|_2 \\
%&= \min_{\x\,\in\,\mathbb{Z}^{n-i+1}\backslash \{\0\}}\| \bar{\Q}^{(k)} \R^{(k)}_{i:n,i:n}(\bar{\Z}^{(k)})^{-1} \x\|_2 \\
&=\min_{\z\,\in\,\mathbb{Z}^{n-i+1}\backslash \{\0\}}\| \R^{(k)}_{i:n,i:n}\z\|_2\end{aligned}$$ where $\z=\bsmx \I_{k-i}&\0\\ \0&\widetilde{\Z}^{(k)}\esmx^{-1}\x$. From , , and , $$\begin{aligned}
|r^{(k)}_{kk}|&=\|\widetilde{\R}^{(k)}\e_1\| =\|(\widetilde{\Q}^{(k)})^T\R^{(k-1)}_{k:n,k:n}\widetilde{\Z}^{(k)}\e_1\| \nonumber \\
& = \|\R^{(k-1)}_{k:n,k:n}\x^{(k)}\|
= \min_{\x\,\in \mathbb{Z}^{n-k+1}\setminus \{\0\}}\|\R^{(k-1)}_{k:n,k:n}\x\|_2\nonumber \\
& = \min_{\x\,\in \mathbb{Z}^{n-k+1}\setminus \{\0\}}\| \widetilde{\R}^{(k)} (\widetilde{\Z}^{(k)})^{-1}\x \|_2 \nonumber \\
& = \min_{\z\,\in \mathbb{Z}^{n-k+1}\setminus \{\0\}}\|\R^{(k)}_{k:n,k:n}\z\|_2.
\label{e:diagkk}\end{aligned}$$ Thus holds when $k-1$ changes to $k$. Then, with $\bbR=\R^{(n-1)}$, we can conclude holds.
In the following, we introduce the process of obtaining the unimodular matrix $\widetilde{\Z}^{(k)}$ in proposed in [@ZhaQW12]. (There are some other methods to find $\widetilde{\Z}^{(k)}$, see, e.g., [@New72 pp.13].) Suppose that $\z=[p,q]^T\in \mathbb{Z}^2$ and $\gcd(p,q)=d$; then there exist two integers $a$ and $b$ such that $ap+bq=d$. Obviously, $$\begin{aligned}
\label{e:U}
\U=\bmx p/d &-b\\ q/d &a \emx\end{aligned}$$ is unimodular and it is easy to verify that $\U^{-1} \z = d \,\e_1$. From , we can conclude that $$\gcd(x^{(k)}_1, x^{(k)}_2, \ldots, x^{(k)}_k)=1.$$ After getting $\x^{(k)}$, $\widetilde{\Z}^{(k)}$ can be obtained by applying a sequence of 2 by 2 unimodular transformations of the form to transform $\x^{(k)}$ to $\e_1$, i.e., $(\widetilde{\Z}^{(k)})^{-1} \x^{(k)}=\e_1$ (see ). Specifically they eliminate the entries of $ \x^{(k)}$ from the last one to the second one. The resulting algorithm for finding $\widetilde{\Z}^{(k)}$ is described by Algorithm \[a:expansion\] and the corresponding KZ reduction algorithm is described by Algorithm \[a:KZ\].
find $d=\gcd(x_{i}, x_{i+1})$ and integers $a$ and $b$ such that $ax_{i}+bx_{i+1}=d$; set $\U=\bmx x_{i}/d &-b\\x_{i+1}/d &a\\\emx$; $x_{i}=d$; $\Z_{1:n,i+k-1:i+k}=\Z_{1:n,i+k-1:i+k}\U$; $\R_{1:i+k,i+k-1:i+k}=\R_{1:i+k,i+k-1:i+k}\U$; find a 2 by 2 Givens rotation $\G$ such that: $$\G\bmx r_{i+k-1,i+k-1}\\r_{i+k,i+k-1}\\\emx=\bmx \times\\0\\\emx;$$ $\R_{i+k-1:i+k,i+k-1:n}=\G\R_{i+k-1:i+k,i+k-1:n}$;
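The elimination in Algorithm \[a:expansion\] can be sketched as follows (an illustrative Python version of ours that only builds the unimodular matrix, omitting the $\R$ and $\Z$ updates and the Givens rotations):

```python
import numpy as np

def ext_gcd(p, q):
    """Extended Euclid: return (g, a, b) with a*p + b*q == g == gcd(p, q); p, q >= 0."""
    old_r, r, old_a, a, old_b, b = p, q, 1, 0, 0, 1
    while r != 0:
        t = old_r // r
        old_r, r = r, old_r - t * r
        old_a, a = a, old_a - t * a
        old_b, b = b, old_b - t * b
    return old_r, old_a, old_b

def expansion_unimodular(x):
    """Return a unimodular Zt (integer, |det| = 1) whose first column is x.

    Sketch of the 2-by-2 elimination (entries of x removed from the last
    down to the second); assumes the gcd of the entries of x is 1.
    """
    x = [int(v) for v in x]
    n = len(x)
    Zt = np.eye(n, dtype=np.int64)
    for i in range(n - 2, -1, -1):
        p, q = x[i], x[i + 1]
        if q == 0:                     # U would be the identity: nothing to do
            continue
        g, a, b = ext_gcd(abs(p), abs(q))
        a, b = (a if p >= 0 else -a), (b if q >= 0 else -b)
        U = np.array([[p // g, -b],    # det(U) = (a*p + b*q)/g = 1
                      [q // g,  a]], dtype=np.int64)
        Zt[:, i:i + 2] = Zt[:, i:i + 2] @ U
        x[i], x[i + 1] = g, 0
    return Zt

Zt = expansion_unimodular([4, 6, 9])
print(Zt[:, 0])                        # -> [4 6 9]
```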
compute the QR factorization of $\A$, see ; set $\Z=\I$; solve $\min_{\x\,\in \mathbb{Z}^{n-k+1}\setminus \{\0\}}\|\R_{k:n,k:n}\x\|_2^2$ by the LLL-aided Schnorr-Euchner search strategy; apply Algorithm \[a:expansion\] to update $\R$ and $\Z$; perform size reductions on $\R$ and update $\Z$
Here we make a remark. Algorithm \[a:KZ\] does not show how to form and update $\Q$, as it may not be needed in applications. If an application indeed needs $\Q$, then we can obtain it by the QR factorization of $\A\Z$ after obtaining $\Z$. This would be more efficient.
Proposed KZ Reduction Algorithm
--------------------------------
In this subsection, we modify Algorithms \[a:expansion\] and \[a:KZ\] to get a new KZ reduction algorithm, which can be much faster and more numerically reliable.
First, we make an observation on Algorithm \[a:KZ\] and make a simple modification. At step $k$, if $\x^{(k)}=\pm\,\e_1$ (see ), then, obviously, the basis expansion algorithm, i.e., Algorithm \[a:expansion\] is not needed and we can move to step $k+1$. Later we will come back to this issue again.
In the following, we will make some major modifications. But before doing it, we introduce the following basic fact, which can be found in the literature: For any two integers $p$ and $q$, the time complexity of finding two integers $a$ and $b$ such that $ap+bq=d\equiv \gcd(p,q)$ by the extended Euclid algorithm is bounded by $\bigO(\log_2(\min\{|p|,|q|\}))$ if fixed precision is used.
In Algorithm \[a:KZ\], after finding $\x^{(k)}$ (see ), Algorithm \[a:expansion\] is used to expand $\R^{(k-1)}_{k:n,k:n}\x^{(k)}$ to a basis for the lattice $\{\R^{(k-1)}_{k:n,k:n}\x: \x\in \mathbb{Z}^{n-k+1}\}$. There are some serious drawbacks with this approach. Sometimes, especially when $\A$ is ill-conditioned, some of the entries of $\x^{(k)}$ may be very large such that they are beyond the range of consecutive integers in a floating point system (i.e., integer overflow occurs), very likely resulting in wrong results. Even if integer overflow does not occur in storing $\x^{(k)}$, large $\x^{(k)}$ may still cause problems. One problem is that the computational time of the extended Euclid algorithm will be long according to its complexity result we just mentioned before. The second problem is that updating $\Z$ and $\R$ in lines 4 and 5 of Algorithm \[a:expansion\] may cause numerical issues. Large $x_i$ and $x_{i+1}$ are likely to produce large elements in $\U$. As a result, integer overflow may occur in updating $\Z$, and large rounding errors are likely to occur in updating $\R$. Finally, $\R$ is likely to become more ill-conditioned after the updating, making the search process for solving SVPs in later steps expensive.
In order to deal with the large $\x^{(k)}$ issue, we look at line 4 in Algorithm \[a:KZ\], which uses the LLL-aided Schnorr-Euchner search strategy to solve the SVP. Specifically, at step $k$, to solve , the LLL reduction algorithm is applied to $\R^{(k-1)}_{k:n,k:n}$: $$\label{e:QRZk2}
(\widehat{\Q}^{(k)})^T\R^{(k-1)}_{k:n,k:n}\widehat{\Z}^{(k)}=\widehat{\R}^{(k-1)},$$ where $\widehat{\Q}^{(k)}\in \mathbb{R}^{(n-k+1)\times(n-k+1)}$ is orthogonal, $\widehat{\Z}^{(k)}\in \mathbb{Z}^{(n-k+1)\times(n-k+1)}$ is unimodular and $\widehat{\R}^{(k-1)}$ is LLL-reduced. Then, one solves the reduced SVP by the Schnorr-Euchner search strategy: $$\begin{aligned}
\label{e:SVPk2}
\z^{(k)}=\arg \min_{\z\,\in \mathbb{Z}^{n-k+1}\setminus \{\0\}}\|\widehat{\R}^{(k-1)}\z\|_2^2.\end{aligned}$$ The solution of the original SVP is $\x^{(k)}=\widehat{\Z}^{(k)}\z^{(k)}$.
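For tiny dimensions, the output of such a search can be sanity-checked against exhaustive enumeration; the following brute-force SVP solver (our illustrative stand-in for the Schnorr-Euchner search, usable only at toy sizes) does this:

```python
import itertools
import numpy as np

def svp_bruteforce(R, bound=3):
    """Exhaustive SVP over z in {-bound, ..., bound}^n minus the origin.

    Illustrative stand-in for the Schnorr-Euchner search; only sensible
    for tiny dimensions and moderate entries.
    """
    n = R.shape[1]
    best_z, best_v = None, np.inf
    for z in itertools.product(range(-bound, bound + 1), repeat=n):
        if any(z):
            v = np.linalg.norm(R @ np.array(z, dtype=float))
            if v < best_v:
                best_z, best_v = np.array(z), v
    return best_z, best_v

R = np.array([[2.0, 1.9],
              [0.0, 0.5]])
z, v = svp_bruteforce(R)
print(z, v)   # shortest vector is z = +-[1, -1]^T with norm sqrt(0.26)
```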
Instead of expanding $\R^{(k-1)}_{k:n,k:n}\x^{(k)}$ as done in Algorithm \[a:KZ\], we propose to expand $\widehat{\R}^{(k-1)}\z^{(k)}$ to a basis for the lattice $\{\widehat{\R}^{(k-1)}\z: \z\in \mathbb{Z}^{n-k+1}\}$. Thus, before doing the expansion, we update $\Q^{(k)}, \R^{(k)}$ and $\Z^{(k)}$ by using the LLL reduction : $$\begin{aligned}
\label{e:Qk2}
\check{\Q}^{(k)}&=\Q^{(k-1)}\bsmx \I_{k-1}&\0\\ \0&\widehat{\Q}^{(k)}\esmx,\\
\label{e:Rk2}
\check{\R}^{(k)}&=\bsmx \R^{(k-1)}_{1:k-1, 1:k-1}&\R^{(k-1)}_{1:k-1, k:n}\widehat{\Z}^{(k)}\\ \0&\widehat{\R}^{(k-1)}\esmx,\\
\check{\Z}^{(k)}&=\Z^{(k-1)}\bsmx \I_{k-1}&\0\\ \0&\widehat{\Z}^{(k)}\esmx.
\label{e:Zk2}\end{aligned}$$ Now we do the expansion. We construct a unimodular matrix $\widetilde{\Z}^{(k)}\in \Zbb^{(n-k+1)\times (n-k+1)}$ whose first column is $\z^{(k)}$, and find an orthogonal matrix $\widetilde{\Q}^{(k)}$ to bring $\widehat{\R}^{(k-1)}\widetilde{\Z}^{(k)}$ back to an upper triangular matrix $\widetilde{\R}^{(k)}$ (cf. ): $$\label{e:recursionk-13}
(\widetilde{\Q}^{(k)})^T\widehat{\R}^{(k-1)}\widetilde{\Z}^{(k)}=\widetilde{\R}^{(k)}.$$ Then, we update $\check{\Q}^{(k)}$, $\check{\R}^{(k)}$ and $\check{\Z}^{(k)}$ as follows (cf. –): $$\begin{aligned}
\label{e:Qk3}
\Q^{(k)}&=\check{\Q}^{(k-1)}\bsmx \I_{k-1}&\0\\ \0&\widetilde{\Q}^{(k)} \esmx,\\
\label{e:Rk3}
\R^{(k)}&=\bsmx \check{\R}^{(k-1)}_{1:k-1, 1:k-1}&\check{\R}^{(k-1)}_{1:k-1, k:n}\widetilde{\Z}^{(k)} \\ \0&\widetilde{\R}^{(k)}\esmx,\\
\Z^{(k)}&=\check{\Z}^{(k-1)}\bsmx \I_{k-1}&\0\\ \0&\widetilde{\Z}^{(k)} \esmx
\label{e:Zk3}\end{aligned}$$ and we obtain the QRZ factorization of $\R$ in the same form as at step $k$.
Unlike $\x^{(k)}$ in , which can be arbitrarily large, $\z^{(k)}$ in can be bounded. Actually by using the LLL reduction properties and the fact that $$\|\widehat{\R}^{(k-1)}\z^{(k)}\|_2 \leq \|\widehat{\R}^{(k-1)}\e_1\|_2=|\widehat{r}_{11}^{(k-1)}|$$ we can show the following result:
\[t:zk\] For $1\leq i\leq n-k+1$, the $i$-th entry of $\z^{(k)} \in \mathbb{Z}^{n-k+1}$ (see ) satisfies $$\begin{aligned}
\label{e:zk}
|z^{(k)}_i | \leq\big(\frac{4}{4\delta-1}\big)^{(n-k)/2}2^{n-k+1-i}\end{aligned}$$ where $\delta$ is the parameter in the LLL reduction (see ).
Due to space limitations, we omit the proof.
Now we discuss the benefits of the modification. First, since $\widehat{\R}^{(k-1)}$ is LLL reduced, there is a very good chance, especially when $\R$ is well-conditioned and $n$ is small (say, smaller than 30), that $\z^{(k)}=\pm \,\e_1$ (see ). This was observed in our simulations. As stated before, the basis expansion is not needed in this case and we can move to the next step. Second, the entries of $\z^{(k)}$ are bounded according to Theorem \[t:zk\], but the entries of $\x^{(k)}$ are not. Our simulations indicated that the former are smaller or much smaller than the latter. Thus, the serious problems with using $\x^{(k)}$ for basis expansion mentioned before can be significantly mitigated by using $\z^{(k)}$ instead.
To further reduce the computational cost, we look at the basis expansion process at step $k$ of Algorithm 2. After $\z^{(k)}$ is obtained, Algorithm 1 is used to find a sequence of 2 by 2 unimodular matrices of the form to eliminate its entries from the last one to the second one. We noticed in our simulations that $\z^{(k)}$ often has many zero entries, and we would like to exploit this to make the basis expansion process more efficient. Specifically, if $\z=[p,q]^T\in \mathbb{Z}^2$ with $q=0$, then $\mbox{gcd}(p,q)=p$, and $\U=\I_2$ in . Thus, in this case we do not need to do anything and can move on to eliminate the next element of $\z^{(k)}$.
Now we can describe the modified KZ reduction algorithm in Algorithm \[a:mKZ\].
compute the QR factorization of $\A$, see ; set $\Z=\I, k=1$; compute the LLL reduction of $\R_{k:n,k:n}$ (see ) and update $\R, \Z$ (see -); solve $\min_{\z\,\in \mathbb{Z}^{n-k+1}\setminus \{\0\}}\|\R_{k:n,k:n}\z\|_2^2$ by the Schnorr-Euchner search strategy to get the solution $\z$; $k=k+1$; $i=n-k$; perform lines 2-7 of Algorithm \[a:expansion\] (where $x_i$ and $x_{i+1}$ are replaced by $z_i$ and $z_{i+1}$); $i=i-1$; $k=k+1$; perform size reductions on $\R$ and update $\Z$.
Numerical tests {#s:sim}
===============
In this section, we compare the performance of the proposed KZ algorithm Algorithm \[a:mKZ\] with Algorithm \[a:KZ\]. All the numerical tests were done by <span style="font-variant:small-caps;">Matlab</span> 14b on a desktop computer with Intel(R) Xeon(R) CPU W3530 @ 2.80GHz$\times4$. The <span style="font-variant:small-caps;">Matlab</span> code for Algorithm \[a:KZ\] was provided by Dr. Wen Zhang, one of the authors of [@ZhaQW12]. The parameter $\delta$ in the LLL reduction was chosen to be 1.
We first give an example to show that Algorithm \[a:KZ\] may not even give an LLL-reduced matrix (for $\delta=1$), while Algorithm \[a:mKZ\] does.
*Example*. Let $$%\A=\bmx
%29.0515&-6.2868&-13.824&35.8878&56.9155\\
%0&3.1479&-0.3457&2.2674&4.8752\\
%0&0&0.2320&-0.3432&-0.4627\\
%0&0&0&0.0102&0.0335\\
%0&0&0&0&0.0035
%\emx
\A\!=\!\bmxc{rrrrr}
10.6347&-66.2715&9.3046&17.5349&24.9625\\
0&8.6759&-4.7536&-3.9379&-2.3318\\
0&0&0.3876&0.1296&-0.2879\\
0&0&0&0.0133&-0.0082\\
0&0&0&0&0.0015
\emxc.$$
Applying Algorithm \[a:KZ\] gives $$\R=\bmxc{rrrrr}
-0.2256&-0.0792&0.0125&0&0\\
0&0.2148&-0.0728&-0.0029&-0.0012\\
0&0&0.2145&0.0527&-0.0211\\
0&0&0&-0.1103&0.0306\\
0&0&0&0&0.6221
\emxc.$$ It is easy to check that $\R$ is not LLL reduced (for $\delta=1$) since ${r}_{33}^2>{r}_{34}^2+{r}_{44}^2$. Moreover, the matrix $\Z$ obtained by Algorithm \[a:KZ\] is not unimodular since its determinant is $-3244032$, which was precisely calculated by Maple. The reason for this is that $\A$ is ill conditioned (its condition number in the 2-norm is about $1.0\times 10^5$) and some of the entries of $\x^{(k)}$ (see ) are too large, causing severe inaccuracy in updating $\R$ and integer overflow in updating $\Z$ (see lines 4-5 in Algorithm \[a:expansion\]). In fact, $$\begin{aligned}
\x^{(1)}&=\bmx
-47,&-27,&-21,&-14,&-34
\emx^T;\\
\x^{(2)}&=\bmx
-48029,&-27593,&2145,&345
\emx^T; \\
\x^{(3)}&=\bmx
-2767925153,&432235,&40
\emx^T;\\
\x^{(4)}&=\bmx
691989751,&2
\emx^T.\end{aligned}$$ The condition numbers in the 2-norm of $\R(k\!:\!5,k\!:\!5)$ obtained at the end of steps $k=1,2,3,4$ of Algorithm \[a:KZ\] are respectively $2.9\times10^8, 1.5\times10^{15},6.2 \times10^{18}$ and $1.1\times 10$. One may ask: if $\A$ is updated by the unimodular matrices produced in the process (i.e., $\Z$ is not explicitly formed), is $\A\Z$ LLL reduced? By examining the R-factor of the QR factorization of $\A\Z$, we found that it is still not.
Applying Algorithm \[a:mKZ\] to $\A$ gives $$%{\footnotesize
{\R}=\bmxc{rrrrr}
-0.2256&0.0792&-0.0126&0.0028&-0.0621\\
0&-0.2148&0.0728&-0.0084&0.0930\\
0&0&0.2145&0.0292&-0.0029\\
0&0&0&-0.2320&0.0731\\
0&0&0&0&-0.2959
\emxc.
%}$$ Although we cannot verify if $\R$ is KZ reduced, we can verify that indeed it is LLL reduced. All of the solutions of the four SVPs are $\e_1$ (note that the dimensions are different). Thus, no basis expansion is needed. The condition numbers in the 2-norm of $\R(k\!:\!5,k\!:\!5)$ obtained at the end of step $k=1,2,3,4$ of Algorithm \[a:mKZ\] are respectively $2.1, 1.9, 1.6$ and $1.4$.
Now we consider two more general cases for comparing the efficiency of the two algorithms:
- Case 1. $\A=\text{randn}(n,n)$, where $\text{randn}(n,n)$ is a <span style="font-variant:small-caps;">Matlab</span> built-in function to generate a random $n\times n$ matrix, whose entries follow the normal distribution ${\cal N}(0,1)$.
- Case 2. $\A=\U\D\V^T$, $\U,\V$ are random orthogonal matrices obtained by the QR factorization of random matrices generated by $\text{randn}(n,n)$ and $\D$ is a $n\times n$ diagonal matrix with $d_{ii}=10^{3(n/2-i)/(n-1)}$.
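The two test families can be generated along the following lines (a NumPy sketch of ours, mirroring the Matlab description above; `case2` fixes the 2-norm condition number at $10^3$ for every $n$):

```python
import numpy as np

rng = np.random.default_rng(0)

def case1(n):
    """Case 1: A = randn(n, n), entries i.i.d. N(0, 1)."""
    return rng.standard_normal((n, n))

def case2(n):
    """Case 2: A = U D V^T with geometric singular values.

    d_ii = 10**(3 (n/2 - i) / (n - 1)), i = 1..n, which fixes the
    2-norm condition number at 10**3 for every n.
    """
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    i = np.arange(1, n + 1)
    d = 10.0 ** (3.0 * (n / 2.0 - i) / (n - 1.0))
    return U @ np.diag(d) @ V.T

print(np.linalg.cond(case2(10)))   # ~1e3 by construction
```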
In the numerical tests, for each case and each fixed $n$, we performed 200 runs to generate 200 different $\A$’s. Figures \[fig:CPUT1\] and \[fig:CPUT2\] display the average CPU time over the 200 runs versus $n=2:2:20$ for Cases 1 and 2, respectively. In both figures, “KZ” and “Modified KZ” refer to Algorithms \[a:KZ\] and \[a:mKZ\], respectively.
![Average CPU time versus $n$ for Case 1[]{data-label="fig:CPUT1"}](CPUT1.eps){width="3.2in"}
![Average CPU time versus $n$ for Case 2[]{data-label="fig:CPUT2"}](CPUT2.eps){width="3.2in"}
Figure \[fig:CPUT2\] gives the results for only $n=2:2:10$. This is because, when $n\geq12$, Algorithm \[a:KZ\] often did not terminate within ten hours. In Case 1, Algorithm \[a:KZ\] sometimes did not terminate within half an hour; we simply discarded such instances and performed one more run. The number of such instances was much smaller than for Case 2.
From Figures \[fig:CPUT1\] and \[fig:CPUT2\], we can see that Algorithm \[a:mKZ\] is faster than Algorithm \[a:KZ\] for Case 1 and much faster for Case 2. Also, when running Algorithm \[a:KZ\] we several times got the warning message “Warning: Inputs contain values larger than the largest consecutive flint. Result may be inaccurate”, for both Cases 1 and 2. This never happened with Algorithm \[a:mKZ\]. Thus Algorithm \[a:mKZ\] is more numerically reliable.
Summary and comment {#s:sum}
===================
In this paper, we modified the KZ reduction algorithm proposed by Zhang et al. in [@ZhaQW12]. The resulting algorithm can be much faster and more numerically reliable.
The modified basis expansion strategy proposed in this paper can be applied in designing algorithms for the Minkowski reduction (see, e.g., [@ZhaQW12]) and the block KZ reduction (see [@Sch87] and [@CheN11]).
[^1]: This work was supported by NSERC of Canada grant 217191-12.
---
abstract: 'The effect of a constant applied external force, induced for instance by an electric or gravitational field, on the dispersion of Brownian particles in periodic media with spatially varying diffusivity, and thus mobility, is studied. We show that external forces can greatly enhance dispersion in the direction of the applied force and also modify, to a lesser extent and in some cases non-monotonically, dispersion perpendicular to the applied force. Our results thus open up the intriguing possibility of modulating the dispersive properties of heterogeneous media by using externally applied force fields. These results are obtained via a Kubo formula which can be applied to any periodic advection diffusion system in any spatial dimension.'
author:
- 'T. Guérin'
- 'D. S. Dean'
title: 'Force-induced dispersion in heterogeneous media'
---
In diverse systems ranging from fluid mechanics, hydrology, soft matter to solid state physics, at mesoscopic length and time scales, the dynamics of tracer particles is described by stochastic differential equations (SDEs) and their associated Fokker-Planck equations [@VanKampen1992; @oksendal2003stochastic; @gardiner1983handbook]. In heterogeneous media, the local transport coefficients such as the diffusivity and the mobility can vary in space depending on the local material properties. In a locally isotropic material where a uniform force ${\bf F}$ acts on a tracer particle, the probability density function (PDF) $p({\bf x},t)$ for the tracer position at time $t$ obeys $$\begin{aligned}
\partial_t p({\bf x},t)= \nabla\cdot\left[\kappa({\bf x})\nabla p - \beta\ \kappa({\bf x})\ {\bf F}\ p\ \right]. \label{eqkF}\end{aligned}$$ The first term on the right hand side of Eq. (\[eqkF\]) above corresponds to diffusion with a spatially varying diffusion constant. The second term represents the drift due to a constant applied external force and the term $\beta\kappa({\bf x})
=\mu({\bf x})$ is the local mobility. The factor of the inverse temperature $\beta$ results from the local Einstein relation between mobility and diffusivity. Physical examples include charge carriers in heterogeneous media, where $\mu({\bf x})$ is proportional to the local electrical conductivity, in the presence of an external electric field, as well as colloidal diffusion in porous media, with local diffusivity $\kappa({\bf x})$ and an external field induced by gravitational or buoyancy forces. Here we study the effect that a constant external applied field has on the late time dispersion as characterized by the effective drift of a cloud of tracer particles $$V_{i} = \lim_{t\to\infty} {\langle X_i(t)-X_i(0)\rangle\over t}, \label{DefVi}$$ (where ${\bf X}(t)$ denotes the position of a tracer particle and $\langle \cdot\rangle$ denotes ensemble averaging) and the effective diffusivity $$D_{ii}=\lim_{t\to\infty} {\langle[ X_i(t)-X_i(0)]^2\rangle_c\over 2t},\label{DefDii}$$ ($c$ denotes the connected part, thus the variance of the displacement $X_i(t)-X_i(0)$) characterizing the dispersion of the cloud about its mean position. Effective transport coefficients are important for estimating the spread of pollutants and chemical reaction times [@condamin2007].
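As an aside, the limits in Eqs. (\[DefVi\],\[DefDii\]) are straightforward to estimate from an ensemble of simulated trajectories. The following sketch (ours, not part of the Letter) uses plain Brownian motion with drift, purely because the answers are then known exactly, to illustrate the estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Test case with known answers: 1D Brownian motion with drift v_true and
# diffusivity D_true, so that <dX> = v_true*dt and Var(dX) = 2*D_true*dt.
N, nsteps, dt = 4000, 1000, 1e-3
v_true, D_true = 1.5, 0.8
dX = v_true * dt + np.sqrt(2.0 * D_true * dt) * rng.standard_normal((N, nsteps))

t = nsteps * dt
disp = dX.sum(axis=1)           # X_i(t) - X_i(0) for each of the N particles
V_est = disp.mean() / t         # estimator for Eq. (2), the effective drift
D_est = disp.var() / (2.0 * t)  # estimator for Eq. (3), the effective diffusivity

print(V_est, D_est)
```

The same two estimators are what one applies to trajectories generated in an actual heterogeneous medium.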
When ${\bf F}={\bf 0}$, the problem of determining $D_{ii}$ and $V_i$ dates back to Maxwell [@MaxwellBook], where the equivalent problem of determining the dielectric constant of heterogeneous media was addressed. The Wiener bounds [@wiener1910] state that $(\overline{\kappa^{-1}})^{-1}\leq D\leq \overline\kappa$, where $\overline{\ \cdot\ }$ indicates spatial averaging. In higher dimensions there are few exact results [@dykhne1971conductivity] but numerous approximation schemes exist [@jeffrey1973; @drummond1987effective; @deWit1995correlation; @abramovich1995effective; @dean2008self]. However, the case where there is a finite external force appears not to have been studied, and in this Letter we will address the force’s effect on the dispersion of tracer particles.
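For a concrete illustration (an arbitrary one-dimensional example of our own choosing, not taken from the references), the two Wiener bounds are simply the harmonic and arithmetic means of $\kappa$ over a period; for $\kappa=1+a\cos\theta$ the harmonic mean is $\sqrt{1-a^2}$ exactly:

```python
import numpy as np

# Arbitrary periodic diffusivity sampled on a uniform grid over one period.
n, a = 4096, 0.8
theta = 2.0 * np.pi * np.arange(n) / n
kappa = 1.0 + a * np.cos(theta)

upper = kappa.mean()                  # arithmetic mean: upper Wiener bound
lower = 1.0 / (1.0 / kappa).mean()    # harmonic mean: lower Wiener bound

# The zero-force effective diffusivity must lie in [lower, upper].
print(lower, upper)
```

Uniform sampling of a smooth periodic function makes these averages spectrally accurate, so no quadrature library is needed.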
![(color online) (a) The 2D periodic diffusivity field $\kappa(x,y)=\kappa_0[1+0.8\cos(2\pi x/L)\cos(2\pi y/L)]$, in units of $\kappa_0$ on the fundamental rectangular unit cell. The arrow indicates the direction of the external force. (b) Stationary PDF in the diffusivity field shown in (a) with an external force of magnitude $\beta FL=10$. (c) Components $D_{xx}$ and $D_{yy}$ of the effective diffusion tensor predicted by Eqs. (\[FunctionF\],\[DispersionFunctionOfF\]) and the normalized effective drift $V_x/\beta F$ from Eq. (\[strat\]) (lines) along with simulations results for the SDE (\[SDE\_Ito\]) (symbols). (d) Same as (c) with different scales. The dashed line represents the behavior $D_{xx}\simeq c F^2$ with the coefficient $c$ predicted by Eq. (\[DxxForceSquared\]).[]{data-label="FIG1"}](ARTICLE_FIG1.pdf){width="8cm"}
To gain a feel for the phenomenology of this problem, we consider diffusion in a two-dimensional medium, where $\kappa(x,y)$ is shown in Fig. \[FIG1\](a), with an applied force ${\bf F}$ oriented in the $x$ direction. We show in Fig. \[FIG1\](c) the results of numerical simulations of the corresponding SDE for the quantities $D_{xx},\ D_{yy}$ and $V_x/\beta F$. At zero force, all the quantities shown are equal; this is a result of the Stokes-Einstein relation $D_{xx}=\beta \partial_F V_x$, which holds [*only*]{} [@commentStokes] when $F=0$. Upon increasing $F$ from zero, we see that both $D_{xx}$ and $V_x/\beta F$ decrease while $D_{yy}$ increases. As $F$ increases further, $V_x/\beta F$ continues to decrease monotonically; however, $D_{xx}$ and $D_{yy}$ attain minimal and maximal values respectively and eventually cross. This remarkable behavior shows that the fast and slow directions of dispersion can be interchanged by an applied force and that $D_{yy}(F)$ is a non-monotonic function of the force. In Fig. \[FIG1\](d), we see that $D_{xx}$ grows as $F^2$ at large forces and can thus be made arbitrarily large (exceeding the upper Wiener bound for the forceless case), giving rise to force-induced dispersion enhancement. The key difference between systems with and without an external force is that in the latter case the steady state probability distribution $P_s(x,y)$ on the periodic unit cell of the system is constant, whereas in the presence of the field it becomes non-trivial as shown in Fig. \[FIG1\](b).
To explain these results we will derive a Kubo-type formula for the transport coefficients for general Fokker-Planck equations with arbitrary periodic diffusion tensors and advection fields. This formula generalizes a number of existing results for convection by incompressible velocity fields with constant molecular diffusivity as in the case of Taylor dispersion [@taylor1953dispersion]. Examples include diffusion in Rayleigh-Bénard convection cells [@rosenbluth1987effective; @Shraiman1987; @McCarty1988], diffusion in frozen turbulent flows [@majda1999simplified] and transport by a fluid in porous media [@brenner1980dispersion; @souto1997dispersion; @alshare2010modeling; @carbonell1983dispersion]. Our formula also encapsulates results for diffusion in periodic potentials [@dean2007effective; @derrida1983velocity; @zwanzig1988diffusion; @deGennes1975brownian; @lifson1962self]. In one dimension, results on diffusion in periodic potentials plus constant forces have been derived [@reimann2001giant; @reimann2002diffusion; @reimann2008weak; @Machura2005; @lindner2001optimal], as well as the more general case where the noise amplitude is a periodic function of position [@lindner2002; @reguera2006entropic; @burada2008entropic].
The Kubo formula we derive here is valid in any dimension. The terms in the Kubo formula can be analytically evaluated when the diffusivity varies only in one direction, and we give analytical results for such stratified systems. We also solve the generic problem analytically in the limit of large forces, proving that $D_{ii}$, where $i$ denotes the direction of the force, generically grows as $F^2$. Finally, the Kubo formula can be evaluated by solving a set of associated partial differential equations numerically [@commentStokes]; the excellent agreement between this calculation and the simulations is shown in Figs. \[FIG1\](c,d).
*Kubo formula for the dispersion.* Consider the general Fokker-Planck equation $$\begin{aligned}
\partial_t p =\sum_{i,j=1}^d\partial_{x_i}\{ -u_i({\mathbf{x}})p+\partial_{x_j}[\kappa_{ij}({\mathbf{x}})p]\}\equiv \mathcal{L}_{{\mathbf{x}}}\ p,\label{FKPEq}\end{aligned}$$ where $\kappa_{ij}({\mathbf{x}})$ is a local (symmetric) diffusion tensor, ${\mathbf{u}}({\mathbf{x}})$ is the drift field, and $\mathcal{L}_{{\mathbf{x}}}$ is the transport operator. Our only assumption in the following is that the fields $u_i({\mathbf{x}})$ and $\kappa_{ij}({\mathbf{x}})$ are periodic in space. Let $\Omega$ denote the fundamental unit cell of the periodic structure. We call $p({\mathbf{x}},t\vert {\mathbf{y}})$ the propagator of the stochastic process in infinite space, defined as the solution of Eq. (\[FKPEq\]) in infinite space with initial condition $p({\mathbf{x}},0\vert {\mathbf{y}})=\delta({\mathbf{x}}-{\mathbf{y}})$. We distinguish this infinite space propagator $p({\mathbf{x}},t\vert {\mathbf{y}})$ from the propagator calculated with periodic boundary conditions on the boundaries of $\Omega$, denoted $P({\mathbf{x}},t\vert {\mathbf{y}})$, and representing the probability density to observe a particle at time $t$ at a position ${\mathbf{x}}$ modulo an integer number of translations along the lattice vectors of the periodic structure. Finally, we define $P_{\mathrm{s}}({\mathbf{x}})=\lim_{t\rightarrow\infty}P({\mathbf{x}},t\vert {\mathbf{y}})$, the stationary PDF of the particles with periodic boundary conditions.
In the Ito prescription, the SDE corresponding to the Fokker-Planck equation (\[FKPEq\]) in the direction $i$ [@gardiner1983handbook; @oksendal2003stochastic] is $$\begin{aligned}
dX_i=u_i({\mathbf{X}}(t)) \ dt + \sum_{j=1}^d(\kappa^{1/2}({\mathbf{X}}(t)))_{ij}dW_j \label{SDE_Ito},\end{aligned}$$ where $\kappa^{1/2}$ represents the square-root matrix of the positive symmetric matrix $\kappa$. The noise increments $dW_i$ are Gaussian, independent, of zero mean and are only correlated at equal times as $\langle dW_i dW_j\rangle=2\delta_{ij}dt$. Ensemble averaging Eq. (\[SDE\_Ito\]) yields the Stratonovich result [@stratonovich1958oscillator] $$\begin{aligned}
V_i = \int_{\Omega} d{\mathbf{x}} \ P_{\mathrm{s}}({\mathbf{x}})\ u_i({\mathbf{x}}). \label{strat}\end{aligned}$$ To calculate the effective diffusivity we first subtract $u_i\,dt$ from both sides of Eq. (\[SDE\_Ito\]), integrate over time, square both sides of the resulting equation and then average to find $$\begin{aligned}
\langle [X_i(t)-X_i(0)]^2\rangle&+ \int_0^t dt_1 \int_0^t dt_2 \langle u_i({\mathbf{X}}(t_1))u_i({\mathbf{X}}(t_2)) \rangle \nonumber\\
-2 \int_0^t dt' \langle \{X_i(t)&-X_i(t')+X_i(t')-X_i(0)\}u_i({\mathbf{X}}(t'))\rangle\nonumber\\
& =
2 t\int_{\Omega} d{\mathbf{x}} \ P_{\mathrm{s}}({\mathbf{x}}) \kappa_{ii}({\mathbf{x}}).\label{ExpansionSquareDx}\end{aligned}$$ The average of the right hand side of Eq. (\[ExpansionSquareDx\]) follows from the independence of the $dW_i$ at different time steps. Exploiting the periodicity of the field ${\mathbf{u}}({\mathbf{x}})$, we can evaluate the second term of Eq. (\[ExpansionSquareDx\]) for $t_1<t_2$ as $$\begin{aligned}
& \langle u_i({\mathbf{X}}(t_1))u_i({\mathbf{X}}(t_2)) \rangle =\nonumber \\
& \iint_{\Omega} d{\mathbf{x}}_1 d{\mathbf{x}}_2 u_i({\mathbf{x}}_2)u_i({\mathbf{x}}_1) P({\mathbf{x}}_2,t_2-t_1\vert {\mathbf{x}}_1)P_{\text{s}}({\mathbf{x}}_1) .\end{aligned}$$ The second line of Eq. (\[ExpansionSquareDx\]) contains the term [@commentkubo] $$\begin{aligned}
\langle [X_i&(\tau)-X_i(0)]u_i({\mathbf{X}}(0))\rangle = \nonumber\\
& \int_{\mathbb{R}^d} d{\mathbf{x}}\int_{\Omega} d{\mathbf{y}} \ p({\mathbf{x}},\tau\vert {\mathbf{y}}) P_{\mathrm{s}}({\mathbf{y}}) (x_i-y_i) u_i({\mathbf{y}}). \label{ExpressionCorruu}\end{aligned}$$ Differentiating with respect to $\tau$, using Eq. (\[FKPEq\]) and integrating by parts over ${\mathbf{x}}$, we obtain $$\begin{aligned}
&\partial_{\tau} \langle [X_i(\tau)-X_i(0)]u_i({\mathbf{X}}(0))\rangle =\int_{\Omega} d{\mathbf{y}} \ P_{\mathrm{s}}({\mathbf{y}}) \ u_i({\mathbf{y}}) \times \nonumber\\
& \int_{\mathbb{R}^d} d{\mathbf{x}}\Big[u_i({\mathbf{x}}) p({\mathbf{x}},\tau\vert {\mathbf{y}}) - \sum_{j=1}^d\partial_{x_j}\kappa_{ij}({\mathbf{x}})p({\mathbf{x}},\tau\vert {\mathbf{y}})\Big]. \label{9457150}\end{aligned}$$ Finally, exploiting the periodicity of the field ${\mathbf{u}}$, we can replace the integral over ${\mathbf{x}}$ over the infinite space by an integral over the unit cell $\Omega$ if one replaces the infinite space propagator $p$ by the propagator with periodic boundary conditions $P$, yielding for any $t>t'$ [@commentkubo2] $$\begin{aligned}
\partial_t \langle &[X_i(t)-X_i(t')]u_i({\mathbf{X}}(t'))\rangle = \nonumber\\
&\int_{\Omega} d{\mathbf{x}}\int_{\Omega} d{\mathbf{y}} \ u_i({\mathbf{y}}) u_i({\mathbf{x}}) P({\mathbf{x}},t-t'\vert {\mathbf{y}}) P_{\mathrm{s}}({\mathbf{y}}). \label{958I2}\end{aligned}$$ The last term to be computed in Eq. (\[ExpansionSquareDx\]) is $$\begin{aligned}
\langle [X_i&(t)-X_i(0)]u_i({\mathbf{X}}(t))\rangle = \nonumber\\
& \int_{\mathbb{R}^d} d{\mathbf{x}}\int_{\Omega} d{\mathbf{y}} \ p({\mathbf{x}},t\vert {\mathbf{y}})P_{\mathrm{s}}({\mathbf{y}}) (x_i-y_i) u_i({\mathbf{x}}).\end{aligned}$$ Due to the periodicity, we can exchange the integration domains of ${\mathbf{y}}$ and ${\mathbf{x}}$ in this equation. We now use the backward Fokker-Planck equation [@gardiner1983handbook] $\partial_t p({\mathbf{x}},t\vert {\mathbf{y}})=\mathcal{L}_{{\mathbf{y}}}^{\dagger}p$, (where $\mathcal{L}^{\dagger}$ is the adjoint of the transport operator $\mathcal{L}$) to find $$\begin{aligned}
&\partial_{t} \langle [X_i(t)-X_i(0)]u_i({\mathbf{X}}(t))\rangle = \nonumber\\
&\int_{\Omega} d{\mathbf{x}} \int_{\mathbb{R}^d} d{\mathbf{y}} \ [\mathcal{L}_{{\mathbf{y}}}^{\dagger}p({\mathbf{x}},t\vert {\mathbf{y}})] P_{\mathrm{s}}({\mathbf{y}}) (x_i-y_i) u_i({\mathbf{x}}).\end{aligned}$$ Using the definition of the adjoint operator, we write $$\begin{aligned}
& \partial_{t} \langle [X_i(t)-X_i(0)]u_i({\mathbf{X}}(t))\rangle = \nonumber\\
& \int_{\mathbb{R}^d} d{\mathbf{y}}\int_{\Omega} d{\mathbf{x}} \ u_i({\mathbf{x}}) p({\mathbf{x}},t\vert {\mathbf{y}}) \mathcal{L}_{{\mathbf{y}}} \{P_{\mathrm{s}}({\mathbf{y}}) (x_i-y_i)\} .\label{9582391}\end{aligned}$$ Again exploiting the periodicity of ${\mathbf{u}}$ and explicitly calculating $\mathcal{L}_{{\mathbf{y}}} \{P_{\mathrm{s}}({\mathbf{y}}) (x_i-y_i)\}$ gives $$\begin{aligned}
& \partial_{t} \langle [X_i(t)-X_i(0)]u_i({\mathbf{X}}(t))\rangle = \int_{\Omega}d{\mathbf{x}}\ u_i({\mathbf{x}})\times \nonumber\\
&\int_{\Omega} d{\mathbf{y}} P({\mathbf{x}},t\vert {\mathbf{y}}) \Bigg\{J_{\mathrm{s},i}({\mathbf{y}})-\sum_{j=1}^d\partial_{y_j} [ \kappa_{ij}({\mathbf{y}}) P_{\mathrm{s}}({\mathbf{y}})]\Bigg\} ,\label{049141}\end{aligned}$$ where ${\mathbf{J}}_{\mathrm{s}}({\mathbf{y}})$ is the local current in the stationary state at position ${\mathbf{y}}$, given by $$\begin{aligned}
J_{\mathrm{s},i}({\mathbf{y}})= u_i({\mathbf{y}}) P_{\mathrm{s}}({\mathbf{y}}) -\sum_{j=1}^d\partial_{y_j} [ \kappa_{ij}({\mathbf{y}}) P_{\mathrm{s}}({\mathbf{y}})].\end{aligned}$$ Finally, all the terms in Eq. (\[ExpansionSquareDx\]) can be evaluated by using Eqs. (\[ExpressionCorruu\],\[958I2\],\[049141\]). Taking the large time limit, we obtain the Kubo formula for the effective diffusion tensor $$\begin{aligned}
D_{ii}&=\int_{\Omega}d{\mathbf{y}} \ P_{\mathrm{s}}({\mathbf{y}})\kappa_{ii}({\mathbf{y}})\ +\nonumber\\
&\iint_{\Omega}d{\mathbf{x}}d{\mathbf{y}} \ u_i({\mathbf{x}})G({\mathbf{x}}\vert {\mathbf{y}})[2 J_{\mathrm{s},i}({\mathbf{y}})-u_i({\mathbf{y}})P_{\mathrm{s}}({\mathbf{y}})] ,\label{ResultDiffCoeff}\end{aligned}$$ where $G({\mathbf{x}}\vert {\mathbf{y}})=\int_0^{\infty}dt \{P({\mathbf{x}},t\vert {\mathbf{y}})-P_s({\mathbf{x}})\}$ is the pseudo-Green function [@barton1989elements] of $\mathcal{L}$ on $\Omega$. Equation (\[ResultDiffCoeff\]) gives the dispersion properties explicitly in terms of quantities defined at the level of an individual cell $\Omega$, with periodic boundary conditions. We may re-express $D_{ii}$ by introducing ${\mathbf{f}}({\mathbf{x}})$, the solution of $$\begin{aligned}
\mathcal{L}_{{\mathbf{x}}}f_i({\mathbf{x}}) =&- 2 J_{\mathrm{s},i}({\mathbf{x}})+u_i({\mathbf{x}})P_{\mathrm{s}}({\mathbf{x}})\nonumber\\
&+P_{\mathrm{s}}({\mathbf{x}})\int_{\Omega}d{\mathbf{y}}\ [2 J_{\mathrm{s},i}({\mathbf{y}})-u_i({\mathbf{y}})P_{\mathrm{s}}({\mathbf{y}})] \label{FunctionF},\end{aligned}$$ again with periodic boundary conditions on $\Omega$, and with the integral condition $\int_{\Omega}d{\mathbf{x}}\ {\mathbf{f}}({\mathbf{x}})={\mathbf{0}}$. The diffusion tensor is then given by $$\begin{aligned}
D_{ii}=&\int_{\Omega}d{\mathbf{x}} \left\{ P_{\mathrm{s}}({\mathbf{x}})\kappa_{ii}({\mathbf{x}})+u_i({\mathbf{x}})f_i({\mathbf{x}})\right\} .\label{DispersionFunctionOfF}\end{aligned}$$ Non-equilibrium effects are manifested in Eq. (\[ResultDiffCoeff\]) by the presence of the local currents of the stationary state, generalizing similar Kubo formulas derived for equilibrium problems. In the case of transport by incompressible fluid flows, $P_s({\mathbf{x}})$ is uniform, ${\mathbf{J}}_s$ is equal to the flow ${\mathbf{u}}$ and one recovers the equations describing dispersion in incompressible hydrodynamic flows (compare for example Eqs. (\[FunctionF\],\[DispersionFunctionOfF\]) to Eqs. (35,48) of Ref. [@carbonell1983dispersion]).
*Periodic diffusivity with an external uniform force.* We now focus on advection-diffusion systems described by Eq. (\[eqkF\]), which fall in the class of the general equation (\[FKPEq\]) with $$\begin{aligned}
\kappa_{ij}({\mathbf{x}})=\delta_{ij}\kappa({\mathbf{x}}),\ \ {\mathbf{u}}({\mathbf{x}})=\kappa({\mathbf{x}}) \beta {\mathbf{F}}+\nabla\kappa({\mathbf{x}}). \end{aligned}$$ The effective dispersion tensor $D_{ii}$ can be obtained by numerically solving the partial differential equations (\[FunctionF\],\[DispersionFunctionOfF\]), leading to the results in Fig. \[FIG1\], which agree very well with numerical simulations of the SDE (\[SDE\_Ito\]).
*Stratified media.* In systems where the local diffusivity varies only in one dimension, $\kappa(x,y)=\kappa(x)$ as illustrated in Fig. \[FIG2\](a), ${\mathbf{f}}$ depends only on $x$ and can be calculated analytically [@commentStokes]. For vanishing forces, the diffusivity tensor reads $$\begin{aligned}
D_{xx}= 1/\overline{\kappa^{-1}}, \ D_{yy}=\overline{\kappa},\ D_{xy}=0 \hspace{0.6cm} (|{\mathbf{F}}|\rightarrow 0).\label{DispersionStratesLowForce}\end{aligned}$$ Here the anisotropy of the dispersion is imposed by the anisotropy of the field $\kappa$; from Jensen’s inequality we see that $D_{xx}\le D_{yy}$, indicating that dispersion is faster in the direction parallel to the strata of the medium \[Fig. \[FIG2\](b)\]. For large forces, however, we find that $$\begin{aligned}
D_{ij} = (\overline{\kappa^{-1}})^{-1}\left\{\delta_{ij}+\frac{F_i F_j}{\vert {\mathbf{F}}\cdot{\mathbf{e}}_x\vert^2} \left[ \frac{\ \overline{\kappa^{-2}}}{(\overline{\kappa^{-1}})^{2}} -1\right]\right\},\label{DispersionStratesLargeForce}\end{aligned}$$ so the dispersion becomes larger in the direction parallel to the force than in the perpendicular direction [@commentStrata]. The dispersion is highly sensitive to the projection of the force normal to the strata \[Fig. \[FIG2\](c)\], and the diffusion coefficients in the planes of the strata diverge when ${\mathbf{F}}$ is in the plane of the strata (in fact they grow as $|{\mathbf{F}}|^2$).
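These limits are easy to evaluate for the stratified example of Fig. \[FIG2\](a). The sketch below (our own check, not the authors' code) computes the averages entering Eqs. (\[DispersionStratesLowForce\],\[DispersionStratesLargeForce\]) for a force along $x$ and exhibits the interchange of the fast and slow directions:

```python
import numpy as np

# Stratified field of Fig. 2(a): kappa(x) = kappa0 (1 + 0.95 cos(2 pi x / L)),
# sampled on a uniform grid over one period (kappa0 = L = 1 here).
n = 8192
x = np.arange(n) / n
kappa = 1.0 + 0.95 * np.cos(2.0 * np.pi * x)

m1 = (1.0 / kappa).mean()        # \overline{kappa^{-1}}
m2 = (1.0 / kappa**2).mean()     # \overline{kappa^{-2}}

# Zero-force limits, Eq. (DispersionStratesLowForce):
Dxx_0, Dyy_0 = 1.0 / m1, kappa.mean()
# Large-force limits, Eq. (DispersionStratesLargeForce), with F along e_x:
Dxx_inf = m2 / m1**3
Dyy_inf = 1.0 / m1

print(Dxx_0, Dyy_0, Dxx_inf, Dyy_inf)
```

At zero force the strata direction $y$ is the fast one ($D_{xx}<D_{yy}$); at large force along $x$ the ordering is reversed, while $D_{yy}$ drops to the harmonic mean.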
![(color online) (a) The 2D periodic diffusivity field for our example of stratified medium, $\kappa(x,y)=\kappa_0[1+0.95\cos(2\pi x/L)]$, shown in units of $\kappa_0$ on the fundamental rectangular unit cell. (b) and (c): Cloud of particles diffusing in the local diffusivity field shown in (a) in the presence of external force at a time $t=10L^2/\kappa_0$. In (b) no external force and in (c) the force has magnitude given by $\beta F L=100$, and acts in the direction indicated by the arrow. The ellipses represent the region in which $95\%$ of the points should fall and are determined from Eqs. (\[DispersionStratesLowForce\],\[DispersionStratesLargeForce\]). \[FIG2\]](ARTICLE_FIG2){width="8cm"}
*Force-induced dispersion enhancement in 2D.* Consider the general 2D problem in the limit of large forces. In this regime, it is natural to suppose that the equilibration time in the direction of the force (here $x$) is much shorter than in the other direction. We thus make the quasi-static approximation $P(x,y,t)\simeq \pi(y,t) P_s(x\vert y)$, where $P_s(x\vert y)\sim \kappa^{-1}(x,y)$ is the stationary probability of observing $x$ given the value of $y$. An effective Fokker-Planck equation for the PDF $\pi(y,t)$ can then be derived by integrating over $x$ and using Eqs. (\[FunctionF\],\[DispersionFunctionOfF\]), yielding [@commentStokes] $$\begin{aligned}
D_{xx}=\frac{[\beta F R(L)]^2}{W(L)}\int_0^{L} dy \left[\frac{W(y)}{W(L)}-\frac{R(y)}{R(L)}\right]^2 e^{-\overline{\ln\kappa}(y)}, \label{DxxForceSquared}\end{aligned}$$ where $L$ is the length of the period in the direction $y$, the notation $\overline{g}(y)$ representing uniform spatial averaging over $x$ for any function $g(x,y)$, and where $$\begin{aligned}
&R(y)=\int_0^{y}du\ e^{\overline{\ln\kappa}(u)}; W(y)=\int_0^{y}du\ \overline{\kappa^{-1}}(u) e^{\overline{\ln\kappa}(u)}.\end{aligned}$$ Equation (\[DxxForceSquared\]) shows that local heterogeneities generically give rise to diffusion coefficients scaling as the square of the force for large forces, implying that *the force-induced diffusivity can be much larger than the microscopic diffusion coefficients*. Quadrature of the integrals in Eq. (\[DxxForceSquared\]) gives a coefficient of $F^2$ in agreement with the simulations, as seen in Fig. \[FIG1\](d).
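As a sketch of that quadrature (using the diffusivity field of Fig. \[FIG1\](a); the discretization choices are ours, not the authors'), the coefficient of $F^2$ in Eq. (\[DxxForceSquared\]) can be evaluated with simple cumulative sums:

```python
import numpy as np

# Fig. 1(a) field: kappa(x, y) = 1 + 0.8 cos(2 pi x / L) cos(2 pi y / L), L = 1.
L, n = 1.0, 1024
x = (np.arange(n) + 0.5) * L / n
y = (np.arange(n) + 0.5) * L / n
X, Y = np.meshgrid(x, y, indexing="ij")
kappa = 1.0 + 0.8 * np.cos(2 * np.pi * X / L) * np.cos(2 * np.pi * Y / L)

dy = L / n
lnk_bar = np.log(kappa).mean(axis=0)     # \overline{ln kappa}(y): x-average
kinv_bar = (1.0 / kappa).mean(axis=0)    # \overline{kappa^{-1}}(y)

R = np.cumsum(np.exp(lnk_bar)) * dy                # R(y)
W = np.cumsum(kinv_bar * np.exp(lnk_bar)) * dy     # W(y)
RL, WL = R[-1], W[-1]

def Dxx_large_F(betaF):
    """Leading large-force dispersion, Eq. (DxxForceSquared)."""
    integrand = (W / WL - R / RL) ** 2 * np.exp(-lnk_bar)
    return (betaF * RL) ** 2 / WL * integrand.sum() * dy

c = Dxx_large_F(1.0)       # D_xx ~ c (beta F)^2 at large forces
print(c)
```

The result is strictly positive whenever $\overline{\kappa^{-1}}(y)$ actually varies with $y$, i.e. whenever $R$ and $W$ are not proportional.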
*Conclusion.* Taylor dispersion [@taylor1953dispersion] is a textbook example of a phenomenon where spatial variations of a time-independent incompressible velocity field, along with locally constant molecular diffusivity, lead to enhanced dispersion. Here, external uniform forces lead to increased dispersion in the direction of the force. The mechanism is similar to that behind Taylor dispersion in that particles with different trajectories experience very different advection by the applied force, due to its coupling to the local mobility/diffusivity. We have also seen that an external force can non-monotonically modify the dispersion in the direction perpendicular to the applied force. This surprising effect is due to the fact that an applied force yields a non-uniform stationary distribution over the fundamental periodic cell. It may be possible to construct experimental systems in which the effects predicted here could be observed. Periodic optical potentials, in which colloidal particles can be tracked, can be generated by lasers [@dalle2011dynamics; @Evstigneev2008], and it would be interesting to see if experimental realizations of media with spatially modulated diffusivities could be similarly produced in order to observe the effects predicted in this Letter. Finally, we stress that the results here can be applied to any periodic advection-diffusion system and thus have a wide range of applicability. For instance, one can use the formulas to study the dispersion in periodic potentials in *any* dimension in the presence of an external force [@reimann2001giant; @reimann2002diffusion] (even with varying local mobility), as well as in systems with no local detailed balance, such as active particle systems.
[**SUPPLEMENTAL MATERIAL**]{}
The Generalized Stokes-Einstein Relation
========================================
The Stokes-Einstein relation is a relationship between effective diffusivity and effective drift or mobility which applies in equilibrium systems, and is often used to deduce diffusivity from mobility (for a recent example see [@putzel2014nonmonotonic]). Here we show how, beyond the regime of linear response, the Stokes-Einstein relation breaks down due to the presence of currents associated with the stationary distribution.
The effective drift is given by the Stratonovich formula [@stratonovich1958oscillator] (Eq. (6) of the main text) $$V_i = \int_{\Omega} d{\mathbf{x}} \ P_{\mathrm{s}}({\mathbf{x}})\ u_i({\mathbf{x}}),$$ where $P_{\mathrm{s}}$ is the stationary distribution on the unit cell $\Omega$. Now consider a system where the local drift $u_i$ is perturbed by a small external force ${\bf F} $ so that the local drift $u_i$ changes to $$u'_i({\bf x})=u_i({\bf x}) + \beta\sum_{j=1}^d\kappa_{ij}({\bf x}) F_j.$$ The induced local drift due to the force ${\bf F}$ takes this form as the local mobility tensor is given, using the local Stokes-Einstein formula or detailed balance, by $\mu_{ij} = \beta\kappa_{ij}$, where $\beta=1/k_BT$ is the inverse of the thermal energy and $\kappa_{ij}$ is the local diffusivity tensor. From the above formulas we then see that $${\partial V_i\over \partial F_i} = \beta \int_\Omega d{\bf x} \ P_{\mathrm{s}}({\mathbf{x}})\kappa_{ii}({\bf x})
+ \int_\Omega d{\bf x}\ {\partial P_{\mathrm{s}}({\mathbf{x}})\over \partial F_i}{u}_i({\bf x}).\label{intdif}$$ Differentiating the stationary Fokker-Planck equation $\mathcal{L}_{{\mathbf{x}}}P_{\mathrm{s}}=0$ with respect to $F_i$ then yields $$\mathcal{L}_{{\mathbf{x}}}{\partial P_{\mathrm{s}}({\mathbf{x}})\over \partial F_i} - \beta\sum_{j=1}^d{\partial \over \partial x_j}\left[\kappa_{ji}({\bf x})P_{\mathrm{s}}({\mathbf{x}})\right]=0.\label{stokes1}$$ The boundary conditions for $\partial P_{\mathrm{s}}({\mathbf{x}})/\partial F_i$ are clearly that it is periodic on the boundaries of $\Omega$, but also we must have, by conservation of probability, that $$\int_\Omega d{\bf x}\ {\partial P_{\mathrm{s}}({\mathbf{x}})\over \partial F_i} = 0.\label{intc}$$ By definition (see *e.g.* Ref. [@barton1989elements]), the pseudo-Green’s function $G({\bf x}|{\bf y})$ for $\mathcal{L}_{{\mathbf{x}}}$ on $\Omega$ obeys $$\mathcal{L}_{{\mathbf{x}}}G({\bf x}|{\bf y})= -\delta({\bf x}-{\bf y}) + P_{\mathrm{s}}({\mathbf{x}}).$$ We can use this pseudo-Green’s function $G({\bf x}|{\bf y})$ to construct the solution of Eq. (\[stokes1\]) as $${\partial P_{\mathrm{s}}({\mathbf{x}})\over \partial F_i}= -\int_\Omega d{\bf y}\ G({\bf x}|{\bf y})\beta\sum_{j=1}^d{\partial \over \partial y_j}\left[\kappa_{ji}({\bf y})P_{\mathrm{s}}({\mathbf{y}})\right],$$ which clearly satisfies the integral condition Eq. (\[intc\]). Substituting this solution into Eq. (\[intdif\]) then yields $$\begin{aligned}
{\partial V_i\over \partial F_i} = &\beta \Big\{ \int_\Omega d{\bf x} \ P_{\mathrm{s}}({\mathbf{x}})\kappa_{ii}({\bf x})\nonumber\\
&- \iint_\Omega d{\bf x}d{\bf y}\ {u}_i({\bf x}) G({\bf x}|{\bf y})\sum_{j=1}^d{\partial \over \partial y_j}\left[\kappa_{ji}({\bf y})P_{\mathrm{s}}({\mathbf{y}})\right]
\Big\}.\end{aligned}$$ Now, using Eq. (17) of the main text and the definition Eq. (16) of the current $J_{\mathrm{s},i}$, we can write $${\partial V_i\over \partial F_i} = \beta D_{ii} -\beta \iint_\Omega d{\bf x}d{\bf y} \ {u}_i({\bf x}) G({\bf x}|{\bf y})J_{\mathrm{s},i}({\bf y}).\label{mse}$$ The Stokes-Einstein relation ${\partial V_i/ \partial F_i} = \beta D_{ii}$ between the effective drift and diffusivity thus holds, in general, only when the current of the stationary state ${\bf J}_{\mathrm{s}}$ vanishes. We also note that the first term of Eq. (\[mse\]), being the diffusion constant, is clearly positive. However, the sign of the second term is not obvious. In Ref. [@eichhorn2002brownian], it was found that a particle subject to a constant applied force in a two-dimensional periodic, ratchet-like potential can exhibit absolute negative mobility; it would be interesting to see if the formalism developed here could be used to better understand this phenomenon.
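The defining property of the pseudo-Green's function is easy to verify in a finite-state caricature (our illustration, not from the text): $\mathcal{L}$ becomes a rate matrix $Q$ whose columns sum to zero, the delta function becomes the identity, and $G=(\Pi-Q)^{-1}-\Pi$ with $\Pi={\bf p}_{\mathrm{s}}{\bf 1}^{T}$:

```python
import numpy as np

# Biased three-state ring: a minimal discrete analogue of a driven periodic
# system (the rate asymmetry kf != kb plays the role of the applied force).
kf, kb, n = 2.0, 0.5, 3
Q = np.zeros((n, n))
for i in range(n):
    Q[(i + 1) % n, i] += kf       # forward jump i -> i+1
    Q[(i - 1) % n, i] += kb       # backward jump i -> i-1
    Q[i, i] -= kf + kb            # columns sum to zero: probability conservation

# Stationary distribution: the null vector of Q, normalized.
w, v = np.linalg.eig(Q)
ps = np.real(v[:, np.argmin(np.abs(w))])
ps /= ps.sum()

Pi = np.outer(ps, np.ones(n))        # projector onto the steady state
G = np.linalg.inv(Pi - Q) - Pi       # pseudo-Green (deviation) matrix

# Discrete analogue of  L G(x|y) = -delta(x - y) + P_s(x):
residual = np.abs(Q @ G - (Pi - np.eye(n))).max()
print(residual)
```

The columns of $G$ also sum to zero, the discrete counterpart of $\int_\Omega d{\bf x}\, G({\bf x}|{\bf y})=0$, which is why the perturbative solution above automatically satisfies the integral condition.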
Results in one dimension
========================
In more than one dimension, the partial differential equation Eq. (18) needed to evaluate the Kubo formula for the diffusion coefficient cannot be solved analytically. However, in one dimension the corresponding differential equation can be solved analytically, and we give the general result for any advection-diffusion system in one dimension before specializing to the case of diffusion in a medium of varying diffusivity subject to an external field. Refs. [@lifson1962self; @reimann2002diffusion; @reimann2001giant; @reguera2006entropic; @burada2008entropic; @lindner2002; @lindner2001optimal] are landmark papers in the study of diffusion in non-equilibrium systems: the problem of diffusion in a periodic potential plus a constant force was first studied in Refs. [@reimann2002diffusion; @reimann2001giant], and the generalization to arbitrary advection-diffusion was obtained in Refs. [@lindner2002; @reguera2006entropic; @burada2008entropic] (although in Ref. [@lindner2002] a Stratonovich prescription for the Langevin equation was used, and in Refs. [@reguera2006entropic; @burada2008entropic] a specific problem related to Fick-Jacobs diffusion was studied, the results given are in fact the most general possible in one dimension). The approach of Refs. [@lifson1962self; @reimann2002diffusion; @reimann2001giant; @reguera2006entropic; @burada2008entropic; @lindner2002; @lindner2001optimal] was based on an expression for the diffusion constant in one dimension deduced from moments of first passage times; we show here how the general result can be rederived via the Kubo formula Eq. (17).
In what follows, we compute the dispersion properties for the model described by Eq. (4) of the main text in one dimension, and we show how to use these formulas to derive the effective diffusion tensor in stratified media. We use notation based on the aforementioned references to aid the reader who wishes to compare the results. In one dimension, the stationary probability distribution is given by $$P_{\mathrm{s}}(x)= J_{\mathrm{s}} I_+(x),$$ where $J_{\mathrm{s}}$ is the (constant) current in one dimension and $$\begin{aligned}
&I_+(x) = {\exp\left(\Gamma(x)\right)\over \kappa(x)}\int_x^\infty dx'\ \exp\left(-\Gamma(x')\right),\label{I+} \\
&\Gamma(x) = \int_0^x dx' {u(x')\over \kappa(x')}.\end{aligned}$$ Due to the periodicity of $u$ and $\kappa$ the function $\Gamma$ obeys the relation $$\Gamma(x+L) = \Gamma(x) + \Gamma(L).$$ When $\Gamma(L)=0$ the system clearly has a steady state equilibrium distribution with no current. In writing Eq. (\[I+\]) we have assumed, without loss of generality, that $\Gamma(L)>0$ so that the integral on the right hand side converges. The steady state current is then obtained from the condition of normalization of $P_{\mathrm{s}}$ and is thus given by $$J_{\mathrm{s}}= {1\over \int_0^L dx\ I_+(x)},\label{jo1d}$$ and the effective drift is then given by $V =J_{\mathrm{s}}L$. Equation (18) of the main text can be solved in terms of the function $I_+$ and the function $I_-$ defined as $$I_-(x)= {\exp\left(-\Gamma(x)\right)}\int_{-\infty}^x dx'\ {\exp\left(\Gamma(x')\right)\over \kappa(x')}.\label{I-}$$ After some algebra we obtain the general compact expression for the effective large-scale diffusivity $$D = {L^2\int_0^L dx\ \kappa(x) I_\pm(x)^2 I_\mp (x) \over \left[\int_0^L dx\ I_\pm(x)\right]^3}, \label{d1dfinal}$$ where $\pm$ indicates that one may (consistently) take the sign $+$ or $-$ in the above. The formula Eq. (\[d1dfinal\]) agrees with those given in Refs. [@lifson1962self; @reimann2002diffusion; @reimann2001giant; @reguera2006entropic; @burada2008entropic; @lindner2002; @lindner2001optimal].
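These quadratures are easy to perform numerically. The sketch below is our own discretization (not from the references): the improper integrals in $I_\pm$ are folded onto a single period using $\Gamma(x+L)=\Gamma(x)+\Gamma(L)$, and the denominator is taken as the cube of the cell integral of $I_+$, the normalization that reproduces the free-diffusion result $D=\kappa_0$ for constant coefficients:

```python
import numpy as np

def D_eff_1d(kappa, u, L):
    """Sketch evaluation of the 1D effective diffusivity on a uniform midpoint
    grid over one period; assumes Gamma(L) > 0 so the integrals converge."""
    n = kappa.size
    dx = L / n
    gamma = np.cumsum(u / kappa) * dx             # Gamma(x) = int_0^x u/kappa
    gL = gamma[-1]
    C = np.cumsum(np.exp(-gamma)) * dx            # int_0^x e^{-Gamma}
    E = np.cumsum(np.exp(gamma) / kappa) * dx     # int_0^x e^{Gamma}/kappa
    # Periodicity folds int_x^infty and int_{-infty}^x onto one period:
    Ip = np.exp(gamma) / kappa * (C[-1] - (1.0 - np.exp(-gL)) * C) / (1.0 - np.exp(-gL))
    Im = np.exp(-gamma) * (E + np.exp(-gL) * (E[-1] - E)) / (1.0 - np.exp(-gL))
    num = L**2 * np.sum(kappa * Ip**2 * Im) * dx
    den = (np.sum(Ip) * dx) ** 3   # cube of the cell integral of I_+
    return num / den

L, n = 1.0, 20000
x = (np.arange(n) + 0.5) * L / n

# Sanity check: constant coefficients must give back the bare diffusivity.
D_const = D_eff_1d(np.full(n, 0.7), np.full(n, 3.0), L)

# Varying diffusivity with a constant force: u = beta*F*kappa + kappa'.
betaF = 5.0
kap = 1.0 + 0.5 * np.cos(2 * np.pi * x / L)
kap_p = -np.pi * np.sin(2 * np.pi * x / L)    # d kappa / dx
D_F = D_eff_1d(kap, betaF * kap + kap_p, L)

print(D_const, D_F)
```

For the varying-$\kappa$ example the result lies, as it should, between the zero-force harmonic mean and the saturation value.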
In the case of the diffusion in a periodic diffusivity field with constant applied force we find $$\begin{aligned}
& I_+(x) = \exp(\beta F x)\int_x^\infty dx' {\exp(-\beta F x')\over \kappa(x')},\\
&I_-(x)= {1\over \beta F \kappa(x)}.\end{aligned}$$ Now, we write the inverse of $\kappa(x)$ as a Fourier series, [*i.e.*]{} $$\kappa(x) ={1\over \overline{ \kappa^{-1}}\sum_k a_k \exp({2\pi k ix\over L}) }$$ where $a_0=1$ and $a_{-k} =\overline{a}_k$. This then gives $$I_+(x) = \overline{ \kappa^{-1}}\sum_k {a_k \exp({2\pi k ix\over L})\over \beta F-{2\pi k i\over L}},$$ which yields the following expression for the effective diffusivity $$D(F)={1\over \overline{ \kappa^{-1}}}\left[1 +2\beta^2 F^2\sum_{k>0} {|a_k|^2\over \beta^2F^2+{4\pi^2 k^2 \over L^2}}\right].
\label{onedfield}$$ When $F=0$ we recover the classic result (Eq. (21)) $D(0) = \overline{ \kappa^{-1}}^{-1}$, that is to say, $D(0)$ depends only on the mean value of the inverse diffusivity. For finite $F$, however, the diffusivity depends on all the Fourier coefficients of the inverse diffusivity; this means that, in principle, measurements of the effective diffusion constant with applied external forces could be used to reconstruct the diffusivity field in one dimension. For large $F$, $D(F)$ saturates at the value $$D(\infty)={1\over \overline{ \kappa^{-1}}}\left[1 +2\sum_{k>0} {|a_k|^2}\right] = {\overline{\kappa^{-2}}\over \overline{ \kappa^{-1}}^3}.$$ The above formula recovers Eq. (22) for $D_{xx}$ when the force is directed in the $x$ direction (here the diffusion in the $y$ direction has no effect on that in the $x$ direction). Note that this saturation is specific to one dimension, or to diffusion in stratified media in the direction parallel to the force when the diffusivity does not vary in the direction perpendicular to the applied force.
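The Fourier-series expression for $D(F)$ above is straightforward to evaluate numerically. In the sketch below (an arbitrary two-mode profile for $\kappa^{-1}$ of our own choosing, so that the $a_k$ are known exactly), both the $F=0$ limit and the saturation value $D(\infty)$ are checked:

```python
import numpy as np

# Test profile (ours): kappa^{-1} with two Fourier modes, so a_1 = 0.25, a_2 = 0.1.
L, n = 1.0, 4096
x = np.arange(n) / n * L
kinv = 1.0 + 0.5 * np.cos(2 * np.pi * x / L) + 0.2 * np.cos(4 * np.pi * x / L)

kinv_mean = kinv.mean()                   # \overline{kappa^{-1}} (= 1 here)
a = np.fft.fft(kinv / kinv_mean) / n      # a_0 = 1, a_{-k} = conj(a_k)

def D_of_F(betaF, kmax=200):
    k = np.arange(1, kmax + 1)
    s = np.sum(np.abs(a[1:kmax + 1])**2 / (betaF**2 + (2 * np.pi * k / L)**2))
    return (1.0 + 2.0 * betaF**2 * s) / kinv_mean

D0 = D_of_F(0.0)                          # -> 1 / \overline{kappa^{-1}}
Dinf = (kinv**2).mean() / kinv_mean**3    # saturation value D(infinity)
print(D0, D_of_F(1e4), Dinf)
```

By Parseval's theorem, the FFT-based sum at large $\beta F$ reproduces $\overline{\kappa^{-2}}/\overline{\kappa^{-1}}^3$, and $D(F)$ increases monotonically between the two limits.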
Details on numerical calculations and simulations.
==================================================
**Numerical solution of Eqs. (18,19) of the main text.** The numerical solution of Eq. (18) of the main text was carried out with the finite element software FlexPDE (www.pdesolutions.com). In the examples considered the unit cell $\Omega$ was a square, with periodic boundary conditions. First, the equation for the steady state distribution $P_{\mathrm{s}}({\mathbf{x}})$ was solved, either directly or by relaxing an initially uniform probability distribution to its steady state fixed point by numerically integrating the time-dependent Fokker-Planck equation (in cases where there were convergence problems with the direct solution). This solution was then used to solve Eq. (18) for the two components of ${\bf f}$. Finally, the diffusion coefficients $D_{xx}$ and $D_{yy}$ were obtained by numerical evaluation of the two integrals in Eq. (19) within the same software.
**Numerical simulations.** Numerical simulation of the stochastic differential equation for particles in a medium of varying diffusivity with applied external force were based upon integrating the simple discrete version of the Ito stochastic differential equation $$\begin{aligned}
X_i(t+\Delta t) = &X_i(t) + \left[ \partial_ {x_i} \kappa({\bf X}(t)) + \beta F_i \kappa({\bf X}(t))\right] \Delta t\nonumber\\
&+ \sigma_i \sqrt{2 \kappa({\bf X}(t))\Delta t}.\end{aligned}$$ Here $\sigma_i$ are independent Gaussian random variables of zero mean and unit variance. Performing several runs enables the measurement of $X_i(t)-X_i(0)$ and therefore the evaluation of the effective drift and diffusivities defined in Eqs. (2,3) of the main text. For the simulations shown in Fig. 1 and Fig. 2 of the main text, the time step was chosen to be $\Delta t=10^{-5}L^2/\kappa_0$, where $L$ is the size of the square unit cell and $\kappa_0$ is the diffusivity averaged over the unit cell. The effective diffusivity was obtained by evaluating the variance of $[X_i(t)-X_i(0)]/\sqrt{2t}$ at a time $t$ large enough to be in the diffusive regime (we took $t=10L^2/\kappa_0$). Averages were taken over more than $150,000$ runs, and checks were made to ensure that the simulation results do not depend on the time step.
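A minimal sketch of this integration scheme in two dimensions reads as follows; the diffusivity field, force, time step and run length are illustrative placeholders, not the values used for the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, F = 1.0, 1.0, np.array([2.0, 0.0])      # constant force along x (illustrative)

def kappa(p):
    # illustrative periodic diffusivity field on the unit cell
    return 1.0 + 0.4 * np.cos(2 * np.pi * p[..., 0] / L) * np.cos(2 * np.pi * p[..., 1] / L)

def grad_kappa(p, eps=1e-6):
    g = np.empty_like(p)
    for i in range(2):
        dp = np.zeros_like(p)
        dp[..., i] = eps
        g[..., i] = (kappa(p + dp) - kappa(p - dp)) / (2 * eps)  # central difference
    return g

def simulate(n_part=2000, n_steps=4000, dt=1e-4):
    X0 = rng.random((n_part, 2))                 # uniform initial positions in the unit cell
    X = X0.copy()
    for _ in range(n_steps):
        k = kappa(X)[:, None]
        drift = grad_kappa(X) + beta * F * k     # spurious (Ito) drift + applied force
        X = X + drift * dt + np.sqrt(2.0 * k * dt) * rng.standard_normal(X.shape)
    return X - X0, n_steps * dt

dX, t = simulate()
V_eff = dX[:, 0].mean() / t                      # effective drift along the force
D_yy = dX[:, 1].var() / (2.0 * t)                # transverse effective diffusivity
print(V_eff, D_yy)
```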
Effective diffusivity at large forces in 2D
===========================================
In this section, we derive Eq. (23) of the main text, giving the effective diffusivity of particles submitted to a large force in a varying periodic two-dimensional diffusivity field. Here, we assume that the force is oriented in the direction $x$, and we call $h=\beta F_x$ the external field, and for simplicity, we assume that the fundamental unit cell of the structure is a rectangle of sides $L_x,L_y$. The Fokker-Planck equation is Eq. (1) of the main text: $$\begin{aligned}
\partial_t p(x,y,t)= \partial_x [\kappa(x,y) \partial_x p- h\ \kappa(x,y)p] + \partial_y \kappa(x,y) \partial_y p. \label{FKPEq90481}\end{aligned}$$ At high fields, $h\rightarrow\infty$, the stationary distribution $P_{\mathrm{s}}$ satisfies $$\begin{aligned}
0= \partial_x \left[ h\ \kappa(x,y)\ P_{\mathrm{s}}(x,y)\right], \end{aligned}$$ so that the leading-order term in $h$ vanishes. Therefore, the stationary distribution $P_{\mathrm{s}}$ takes the following general form: $$\begin{aligned}
P_{\mathrm{s}}(x,y)\simeq C(y)\kappa^{-1}(x,y), \label{9581441}\end{aligned}$$ where $C(y)$ is a still unknown function of $y$. At high forces, it is natural to assume that the equilibration time in the direction $x$ is much shorter than the one in the direction $y$. Therefore, we approximate the propagator of the process by $$\begin{aligned}
p(x,y,t)\simeq \pi(y,t) P_{\mathrm{s}}(x\vert y), \label{QuasiStaticApprox}\end{aligned}$$ where $P_{\mathrm{s}}(x\vert y)$ is the probability to observe a particle with an $x$-coordinate of value $x$, given that the coordinate in the other direction is $y$, and $\pi(y,t)$ is the marginal distribution of particles in the direction $y$ at time $t$. From (\[9581441\]) and the normalization condition, we find that $$\begin{aligned}
P_{\mathrm{s}}(x\vert y)=\frac{1}{\kappa(x,y)\ L_x \ \overline{\kappa^{-1}}(y)}.\label{ExprePstatXGivenY}\end{aligned}$$ where we call $\overline{g}(y)=L_x^{-1}\int_0^{L_x} dx \ g(x,y)$ for any function $g$, with $L_x$ the length of the period in the direction $x$. Inserting the approximation (\[QuasiStaticApprox\]) into (\[FKPEq90481\]) and integrating over $x$ leads to an effective Fokker-Planck equation for $\pi(y,t)$: $$\begin{aligned}
\partial_t \pi(y,t)\simeq \int_0^{L_x} dx \ \partial_y \{\kappa(x,y) \partial_y [\pi(y,t)P_{\mathrm{s}}(x\vert y)]\}.\end{aligned}$$ Performing explicitly the integral over $x$ by using (\[ExprePstatXGivenY\]), we find $$\begin{aligned}
\partial_t\pi(y,t)=\partial_y^2 [\kappa_e(y)\pi(y,t)]- \partial_y\{[\partial_y\overline{\ln\kappa}(y)] \kappa_e(y)\pi(y,t)\}, \label{95215}\end{aligned}$$ where we have defined $\kappa_e(y)=1/ \overline{\kappa^{-1}}(y)$. For large times, the stationary distribution of the effective Fokker-Planck equation (\[95215\]) is $$\begin{aligned}
\pi_{\mathrm{s}}(y)=\frac{e^{\overline{\ln\kappa}(y)}}{\kappa_e(y)\int_0^{L_y} du \ e^{\overline{\ln\kappa}(u)}/\kappa_e(u)}.
\label{PstatEffeciveDynalmics}\end{aligned}$$ Now, selecting the term of order $h^2$ in the equation (19) of the main text, we get: $$\begin{aligned}
D_{xx}\simeq &h^2 \int_0^{L_y} dy\int_0^{L_y} dy_0 \ \kappa_e(y)\nonumber\\
&\times \kappa_e(y_0)\int_0^{\infty}dt\ [\pi(y,t\vert y_0)-\pi_{\mathrm{s}}(y)] \pi_{\mathrm{s}}(y_0),\label{757815}\end{aligned}$$ where $\pi(y,t\vert y_0)$ is the propagator for the effective dynamics in the direction $y$. It is useful to introduce the function $f_e$ defined as the solution of $$\begin{aligned}
\partial_y^2 [\kappa_e(y)f_e(y)]- \partial_y\{[\partial_y\overline{\ln\kappa}(y)] \kappa_e(y)f_e(y)\}=\nonumber\\
-\kappa_e(y)\pi_{\mathrm{s}}(y)+\pi_{\mathrm{s}}(y) \int_0^{L_y} du\ \kappa_e(u)\pi_{\mathrm{s}}(u), \label{Equation_fe}\end{aligned}$$ with the orthogonality condition $\int_0^{L_y}dy f_e(y)=0$. From Eqs. (\[95215\],\[757815\]), we deduce that the effective diffusion coefficient can be written in terms of $f_e$ as $$\begin{aligned}
D_{xx}\simeq h^2 \int_0^{L_y} dy \ \kappa_e(y) f_e(y).\end{aligned}$$ Now, we introduce the functions $R$ and $W$ introduced in Eq. (24) of the main text: $$\begin{aligned}
&R(y)=\int_0^{y}du\ e^{\overline{\ln\kappa}(u)}; W(y)=\int_0^{y}du\ \kappa_e^{-1}(u) e^{\overline{\ln\kappa}(u)}.\end{aligned}$$ Rewriting the right-hand side of Eq. (\[Equation\_fe\]) by using (\[PstatEffeciveDynalmics\]) and expressing the result in terms of these two functions $R$ and $W$, we find: $$\begin{aligned}
\partial_y^2 [\kappa_e(y)f_e(y)]- \partial_y\{[\partial_y\overline{\ln\kappa}(y)] \kappa_e(y)f_e(y)\}=\nonumber\\
- \frac{\partial_y R(y)}{W(L_y)}+\frac{\partial_y W(y) R(L_y)}{W(L_y)^2}.\end{aligned}$$ Then, we can integrate once with respect to $y$. The resulting equation is a first-order differential equation for a function of a single variable, and can be solved analytically. Taking into account the orthogonality condition $\int_0^{L_y} dy\, f_e(y)=0$, we arrive, after some algebra, at expression (23) of the main text.
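The stationary distribution $\pi_{\mathrm{s}}$ quoted above can be checked numerically: it must carry zero probability flux under the effective dynamics in $y$. A sketch with an illustrative (non-separable) diffusivity field, not one of the paper's examples:

```python
import numpy as np

# NumPy >= 2.0 renamed np.trapz to np.trapezoid
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

Lx = Ly = 1.0
x = np.linspace(0.0, Lx, 400, endpoint=False)
y = np.linspace(0.0, Ly, 401)

# illustrative periodic kappa(x, y) > 0 on the unit cell
K = 1.2 + 0.3 * np.sin(2 * np.pi * y)[:, None] + 0.4 * np.cos(2 * np.pi * x)[None, :]

kappa_e = 1.0 / np.mean(1.0 / K, axis=1)        # kappa_e(y) = 1 / overline{kappa^{-1}}(y)
lnk_bar = np.mean(np.log(K), axis=1)            # overline{ln kappa}(y)

w = np.exp(lnk_bar) / kappa_e
pi_s = w / trapz(w, y)                          # stationary distribution, normalized on [0, L_y]

# stationarity <=> the flux J(y) = d_y[kappa_e pi_s] - (d_y lnk_bar) kappa_e pi_s vanishes
J = np.gradient(kappa_e * pi_s, y) - np.gradient(lnk_bar, y) * kappa_e * pi_s
print(trapz(pi_s, y), np.abs(J).max())
```

Up to finite-difference error, the flux is zero everywhere, confirming that the exponential of $\overline{\ln\kappa}$ is the correct stationary weight.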
See Supplementary Material where we show that the steady state distribution in presence of a force has a non-zero current steady state ${\bf J}_s$, which we will see is responsible for the deviation from the Stokes-Einstein relation. The Supplementary Material includes Refs. [@putzel2014nonmonotonic; @eichhorn2002brownian].
---
abstract: 'A local hidden variables model is exhibited which gives predictions in agreement with the quantum ones for the recent experiment by Go et al., quant-ph/0702267 (2007)'
author:
- Emilio Santos
date: 'March 15, 2007'
title: 'A local hidden variables model for the measured EPR-type flavour entanglement in $Y\left( 4S\right) \rightarrow B^{0}\overline{B^{0}}$ decays '
---
The aim of this note is to show that the results of the recent experiment measuring EPR-type flavour entanglement in $Y\left( 4S\right) \rightarrow
B^{0}\overline{B^{0}}$ decays[@Go] are compatible with local realism. It is known that a Bell inequality test cannot be performed[@Bramon], but this does not prove that the experiment is compatible with local realism. I shall prove the compatibility by exhibiting a local hidden variables (LHV) model which reproduces the quantum prediction (and agrees with the obtained results within experimental errors). The time-dependent rates for decay into two flavour-specific states are[@Go] $$R_{i}\left( \triangle t\right) =\frac{1}{4\tau }\exp (-\triangle t/\tau
)\left( 1+\left( -1\right) ^{i}\cos \left( \triangle m\triangle t\right)
\right) ,\;\triangle t=\left| t_{2}-t_{1}\right| , \label{1}$$ where $\triangle m$ is the mass difference between the two $B^{0}-\overline{%
\text{ }B^{0}}$ mass eigenstates, $i=1$ corresponds to the decays $%
B^{0}B^{0} $ or $\overline{B^{0}}\overline{B^{0}}$ and $i=2$ to the decays $%
B^{0}\overline{B^{0}}$ or $\overline{B^{0}}B^{0}$. Actually $R_{i}$ are the probability densities of decay at time $t_{2}$ of the second particle (say the one going to the right) conditional on the decay of the first (say going to the left) at time $t_{1}$, both $t_{1}$ and $t_{2}$ being proper times of the corresponding particles. For our purposes it is more convenient to consider the joint probability densities, $r_{kl}\left( t_{1},t_{2}\right) $, for decay of the first particle at time $t_{1}$ and of the second at time $t_{2}$, where $k=1\;(l=1)$ means that the first (second) particle decays as $B^{0}$ and $k=2\;(l=2)$ means that the first (second) particle decays as $\overline{%
B^{0}}.$
According to Bell's definition of an LHV model[@Bell], appropriate for our case, we should attach hidden variables $\lambda _{1}$ and $\lambda _{2}$ to the first and second particles, respectively, and define probability densities $\rho ,P_{k},Q_{l}$ such that $$r_{kl}\left( t_{1},t_{2}\right) =\int \rho \left( \lambda _{1},\lambda
_{2}\right) P_{k}\left( \lambda _{1},t_{1}\right) Q_{l}\left( \lambda
_{2},t_{2}\right) d\lambda _{1}d\lambda _{2}. \label{3}$$ The function $\rho ,$ giving the initial distribution of the hidden variables in an ensemble of $Y\left( 4S\right) $ decays, should be positive and normalized, that is $$\rho \left( \lambda _{1},\lambda _{2}\right) \geq 0,\int \rho \left( \lambda
_{1},\lambda _{2}\right) d\lambda _{1}d\lambda _{2}=1. \label{3a}$$ The functions $P_{k}\left( \lambda _{1},t_{1}\right) $ represent the probability density that a particle with label $\lambda _{1}$ decays at time $t_{1}$ as a $B^{0}\left( \overline{B^{0}}\right) $ if $k=1(2)$ and similar for $Q_{l}.$ Thus these functions should be positive and, as all $B^{0}$ or $%
\overline{B^{0}}$ particles decay sooner or later, they should be normalized for any $\left\{ \lambda _{1},\lambda _{2}\right\} $, that is $$P_{k},Q_{l}\geq 0,\int_{0}^{\infty }dt_{1}\sum_{k=1}^{2}P_{k}\left( \lambda
_{1},t_{1}\right) =1,\int_{0}^{\infty }dt_{2}\sum_{l=1}^{2}Q_{l}\left(
\lambda _{2},t_{2}\right) =1. \label{3b}$$ Any choice of functions $\{\rho ,P_{k},Q_{l}\}$ fulfilling eqs.$\left( \ref
{3}\right) $ to $\left( \ref{3b}\right) $ provides an LHV model predicting the joint probability densities of decay $r_{kl}\left( t_{1},t_{2}\right) .$
I propose the following. For the initial distribution of hidden variables $$\rho \left( \lambda _{1},\lambda _{2}\right) =\frac{1}{4\tau N\left( \lambda
_{2}\right) }\delta \left( \lambda _{1}-\lambda _{2}\right) ,\;\lambda
_{1},\lambda _{2}\in [0,2\pi ], \label{4}$$ where $\delta \left( {}\right) $ is Dirac's delta, and the functions $%
N\left( \lambda _{2}\right) $ will be defined below, after eqs.$\left( \ref
{4c}\right) ,$ where the normalization of $\rho \left( \lambda _{1},\lambda
_{2}\right) $ will be proved. For the probabilities of decay $$P_{k}\left( \lambda _{1},t_{1}\right) =\frac{1}{\tau }\exp \left( -\frac{%
t_{1}}{\tau }\right) \Theta \left( t_{1}\right) \sum_{n=0}^{\infty }\Theta
\left( \frac{\pi }{2}-\left| \lambda _{1}+(2n-k)\pi -\triangle mt_{1}\right|
\right) , \label{4a}$$ $$Q_{l}\left( \lambda _{2},t_{2}\right) =N\left( \lambda _{2}\right) \exp
\left( -\frac{t_{2}}{\tau }\right) \Theta \left( t_{2}\right) \left[ \cos
\left( \lambda _{2}-(l+1)\pi -\triangle mt_{2}\right) \right] _{+},
\label{4d}$$ where $\Theta \left( t\right) =1\;(0)$ if $t>0\;(t<0)$ and $\left[ x\right]
_{+}$ means putting $0$ if $x<0$. Thus all four functions are decaying exponentials modulated by periodic functions which oscillate with period $%
2\pi /\triangle m.$ Physically this means that each particle “lives” as a $%
B^{0}$ during a time interval of duration $\pi /\triangle m$, then becomes a $\overline{\text{ }B^{0}}$ during another time interval $\pi /\triangle m$, and so on, until it decays. The particles are always anticorrelated in the sense that, at equal proper times, one of them is $B^{0}$ and the other one is $\overline{\text{ }B^{0}}.$
From eqs.$\left( \ref{4a}\right) $ and $\left( \ref{4d}\right) $ it is easy to see that the total probability densities (i.e., independent of flavour) for the decay of particles 1 and 2 are, respectively, $$\begin{aligned}
\sum_{k=1}^{2}P_{k}\left( \lambda _{1},t_{1}\right) &=&\frac{1}{\tau }\exp
\left( -\frac{t_{1}}{\tau }\right) , \nonumber \\
\sum_{l=1}^{2}Q_{l}\left( \lambda _{2},t_{2}\right) &=&N\left( \lambda
_{2}\right) \exp \left( -\frac{t_{2}}{\tau }\right) \left| \cos \left(
\lambda _{2}-\triangle mt_{2}\right) \right| , \label{4c}\end{aligned}$$ where $t_{1},t_{2}\geq 0.$ We see that the decay of the first particle is given by a standard exponential, but the decay law of the second particle is more involved. The functions $N\left( \lambda _{2}\right) $ are chosen so that the normalization eq.$\left( \ref{3b}\right) $ holds true. It is not necessary to calculate explicitly the functions $N\left( \lambda _{2}\right)
$, which are rather involved, but I derive an important property, namely $$\begin{aligned}
\int_{0}^{2\pi }\frac{1}{N\left( \lambda \right) }d\lambda &=&\int_{0}^{2\pi
}d\lambda \int_{0}^{\infty }\exp \left( -\frac{t}{\tau }\right) \left| \cos
\left( \lambda -\triangle mt\right) \right| dt \nonumber \\
&=&\int_{0}^{\infty }\exp \left( -\frac{t}{\tau }\right) dt\int_{0}^{2\pi
}\left| \cos \left( \lambda -\triangle mt\right) \right| d\lambda =4\tau .
\label{4f}\end{aligned}$$ This relation proves that the distribution $\rho \left( \lambda _{1},\lambda
_{2}\right) ,$ eq.$\left( \ref{4}\right) ,$ is indeed normalized.
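This identity is easy to confirm by direct quadrature; the values of $\tau$ and $\triangle m$ below are arbitrary illustrative choices, since the result is independent of the oscillation frequency.

```python
import numpy as np

# NumPy >= 2.0 renamed np.trapz to np.trapezoid
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

tau, dm = 1.3, 2.7                                 # illustrative tau and Delta m
t = np.linspace(0.0, 40.0 * tau, 10001)            # truncate the t -> infinity integral
lam = np.linspace(0.0, 2.0 * np.pi, 401)

# int_0^{2 pi} dlam int_0^inf dt exp(-t/tau) |cos(lam - dm t)| = 4 tau
inner = trapz(np.exp(-t / tau)[None, :] * np.abs(np.cos(lam[:, None] - dm * t[None, :])),
              t, axis=1)
total = trapz(inner, lam)
print(total, 4.0 * tau)
```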
In order to get $r_{kl}\left( t_{1},t_{2}\right) $ we should insert eqs.$%
\left( \ref{4a}\right) $ and $\left( \ref{4d}\right) $ in eq.$\left( \ref{3}%
\right) $ and perform integrals which are straightforward. Introducing the new variable $$x=\lambda _{1}+2n\pi -k\pi -\triangle mt_{1}, \label{5a}$$ and performing the integral in $\lambda _{2},$ using Dirac's delta, we get $$\begin{aligned}
r_{kl}\left( t_{1},t_{2}\right) &=&\frac{1}{4\tau ^{2}}\exp \left( -\frac{%
t_{1}+t_{2}}{\tau }\right) I_{kl}, \nonumber \\
I_{kl} &=&\int_{-\pi /2}^{\pi /2}dx\left[ \cos \left( x+\left( k-l-1\right)
\pi +s\right) \right] _{+},\,s\equiv \triangle m(t_{1}-t_{2}), \label{6}\end{aligned}$$ where we have taken into account that only one term of the sum in $n$ may contribute, depending on the values of $t_{1}$ and $t_{2}$, and we have removed the irrelevant term $2n\pi $ in the argument of the cosine function. It is easy to see that the functions $I_{kl}$ are periodic in the variable $s$ with period $2\pi .$ Thus it is enough to consider the interval $s\in \left[ 0,2\pi \right] .$ In the particular cases $k=l=1$ or $%
k=l=2 $ the integral $\left( \ref{6}\right) $ becomes, for $s\in \left[
0,\pi \right] $ $$I_{11}\left( t_{1},t_{2}\right) =I_{22}\left( t_{1},t_{2}\right) =\int_{\pi
/2-s}^{3\pi /2-s}\left[ \cos x\right] _{+}dx=\int_{\pi /2-s}^{\pi /2}\cos
xdx=1-\cos s, \label{7}$$ and for $s\in \left[ \pi ,2\pi \right] $ $$I_{11}\left( t_{1},t_{2}\right) =I_{22}\left( t_{1},t_{2}\right) =\int_{\pi
/2-s}^{3\pi /2-s}\left[ \cos x\right] _{+}dx=\int_{-\pi /2}^{3\pi /2-s}\cos
xdx=1-\cos s. \label{7a}$$ Similarly we get, for any $s=\triangle m\,(t_{1}-t_{2}),$$$I_{12}\left( t_{1},t_{2}\right) =I_{21}\left( t_{1},t_{2}\right) =1+\cos s,
\label{7b}$$ Finally we obtain $$r_{kl}\left( t_{1},t_{2}\right) =\frac{1}{4\tau ^{2}}\exp \left( -\frac{%
t_{1}+t_{2}}{\tau }\right) [1-(-1)^{l-k}\cos \left( \triangle m\,\triangle
t\right) ],\triangle t=\left| t_{1}-t_{2}\right| . \label{8}$$ Hence we may get eq.$\left( \ref{1}\right) $ via the equality which defines the conditional probability reported in the paper under discussion[@Go] in terms of the joint probability, namely $$R_{j}=\tau \exp (2t_{i}/\tau )r_{kl},\;j=\left| k-l\right| +1,$$ where $i=1(2)$ if particle $1(2)$ is the one decaying first. This proves that our LHV model's prediction agrees with the quantum one for this experiment.
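The closed forms for $I_{kl}$ derived above can be confirmed by direct quadrature of the defining integral:

```python
import numpy as np

# NumPy >= 2.0 renamed np.trapz to np.trapezoid
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

# I_kl(s) = int_{-pi/2}^{pi/2} dx [cos(x + (k-l-1) pi + s)]_+, checked against
# I_11 = I_22 = 1 - cos(s) and I_12 = I_21 = 1 + cos(s).
x = np.linspace(-np.pi / 2, np.pi / 2, 20001)

def I(k, l, s):
    integrand = np.maximum(np.cos(x + (k - l - 1) * np.pi + s), 0.0)  # the [.]_+ prescription
    return trapz(integrand, x)

s_vals = np.linspace(0.0, 2.0 * np.pi, 25)
err_same = max(abs(I(k, k, s) - (1.0 - np.cos(s))) for s in s_vals for k in (1, 2))
err_diff = max(abs(I(1, 2, s) - (1.0 + np.cos(s))) for s in s_vals)
print(err_same, err_diff)
```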
The model may be interpreted physically by saying that each particle produced in the decay of the $Y(4S)$ oscillates between the two flavour states in such a way that the flavours of the two particles in a pair are opposite at equal proper times. The model looks somewhat contrived due to the lack of symmetry, in the sense that the functions $P_{k}$ are quite different from the functions $Q_{l}$. A more symmetrical model may be obtained by assuming that the assignment of the functions $P_{k}$ and $Q_{l}$ to the particles in a pair is made at random. In any case our purpose was only to show the compatibility of the experiment with local realism, and not to make a physically plausible model.
I acknowledge useful comments by Albert Bramón and Alberto Ruiz.
[9]{} A. Go et al., arXiv:quant-ph/0702267 (2007).
R. A. Bertlmann, A. Bramon, G. Garbarino and B. C. Hiesmayr, *Phys. Lett. A* **332**, 355 (2004); A. Bramón, R. Escribano and G. Garbarino, *J. Mod. Opt.* **52,** 1681 (2005).
J. S. Bell, *Physics* **1**, 195 (1964).
Sedimentation [@Blanc] is a rich and complex phenomenon in suspension science and a frontier problem in nonequilibrium statistical mechanics. The average sedimentation speed $v_{\rm sed}$ of solute particles drifting down in a solvent is determined by balancing the driving force (gravity) against the dissipative force (viscous drag). Giant non-thermal fluctuations in the velocity and concentration fields in a steadily settling suspension, observed even for non-Brownian systems, have been a puzzle for some years. Caflisch and Luke (CL) [@Caflisch] showed, for steady sedimentation in a container of smallest linear dimension $L\/$, that the assumption of purely [*random*]{} local concentration fluctuations led to velocity fluctuations with a variance $\langle v^2 \rangle \sim L\/$. Most experiments, however, find [*no*]{} dependence of $\langle v^2 \rangle \/$ on $L$ [@Nicolai; @Xue; @Segre], although Ladd’s simulations [@Ladd] and the data of Tory [*et al.*]{} [@Tory] appear to be consistent with CL.
In this Letter we propose a resolution of this puzzle by means of a set of coarse-grained, fluctuating nonlinear hydrodynamic equations for the long-wavelength dynamics of concentration and velocity fluctuations in a suspension settling steadily in the $-z$ direction, at vanishingly small Reynolds number. Our theory is similar in spirit to the Koch-Shaqfeh (KS) [@Koch] “Debye-like” screening approach but differs in several important details and predictions.
The central conclusion of our study is that there are [*two*]{} qualitatively distinct nonequilibrium phases for a sedimenting suspension. In the “unscreened” phase $\langle v^2 \rangle$ diverges as $L$, as in CL and, in addition, concentration fluctuations with wavevector ${\bf k} = ({\bf
k}_{\perp}, k_z)$ relax at a rate $\propto \, k^{1/2}\/$. The “screened” phase is characterized by a [*correlation length*]{} $\xi$ similar to that predicted by KS such that $\langle v^2 \rangle \sim L$ for $L \ll \xi$ and $\langle v^2 \rangle \sim \xi$ for $L \gg \xi$. Deep in the screened phase we predict $\xi \sim \phi^{-1/3}$ where $\phi$ is the particle volume fraction. This is in agreement with the experiments of Segrè [*et al.*]{} [@Segre], but not with KS [@Koch]. The relaxation rate in the screened phase is [*independent*]{} of $k$ for $k_z = 0$ and ${\bf k_{\perp}
\rightarrow 0}$. Detailed, experimentally testable expressions for the structure factor and velocity correlations in the screened phase are presented after we outline our calculations. The two phases are separated in our “phase-diagram” (Fig. 1) by a striking [*continuous nonequilibrium phase transition*]{} where $\xi$ diverges at least as rapidly as $\left( K - K_c
\right)^{-1/3}$ as a control parameter $K$ is decreased towards a critical value $K_c$.
The hydrodynamic equations we used to arrive at these results are $$\label{main}
\frac{\partial c}{\partial t} + {\bf v} \cdot \bbox{\nabla} c =
[D_{0\perp} \nabla_{\perp}^2 + D_{0z} \nabla_z^2]c +
\bbox{\nabla} \cdot {\bf f}({\bf r},t)$$ and $$\label{main2}
\eta \nabla^2 v_i({\bf r},t) = m_R g
P_{iz} c({\bf r},t),$$ where $c({\bf r}, t\/)$ and ${\bf v}({\bf r}, t )\/$ are the fluctuations about the mean concentration $c_0$ and the mean sedimentation velocity $- v_{\rm sed}
\hat{z}$ respectively. We justify these equations briefly below; for a more detailed discussion we refer the reader to Ref. [@levine]. Eq. \[main\] is the anisotropic randomly forced advection-diffusion equation with bare uniaxial diffusivities $( D_{0 z}, D_{0 \perp} )$ and a random stirring force ${\bf f}({\bf r},t)$ [@mult]. The Stokes equation, Eq. \[main2\], which expresses the balance between the driving by gravity and the dissipation by the viscosity $\eta$, describes how the concentration fluctuations produce velocity fluctuations. Here $m_R g\/$ is the buoyancy-reduced weight of a particle, while the pressure field has been eliminated by imposing incompressibility via the transverse projection operator $P_{ij} = \delta_{ij} - \nabla_i \nabla_j
(\nabla^2)^{-1}\/$.
Hydrodynamic equations such as Eqs. \[main\] and \[main2\] arise from a coarse-graining of the microscopic equations of motion. The latter, for the main case of interest here, [*viz.*]{}, non-Brownian suspensions at zero Reynolds number, are the deterministic equations of Stokesian dynamics for $N$ hydrodynamically coupled particles, and are known to be chaotic [@chaos]. The noise, or random stirring current ${\bf f}({\bf r},t)$ and the diffusivities in Eq. \[main\] represent a phenomenological description of the deterministic chaos at length scales below the coarse-graining length $\ell$ (which must be large compared to the particle radius $a$). We use these hydrodynamic equations to predict the velocity and concentration fluctuations at length scales large compared to $\ell$ driven by the random stirring at short distances.
We assume, as is reasonable, that ${\bf f}({\bf r},t)$ is Gaussian white noise with uniaxial symmetry: $$\label{corr}
\langle f_i({\bf r},t) f_j({\bf r'},t') \rangle =
2 c_0 N_{0}^{ij} \delta({\bf r}-{\bf r'}) \delta(t-t')$$ with an anisotropic noise amplitude $ N_{0}^{ij} =
N_{0\perp}\delta^{\perp}_{ij} + N_{0z} \delta^z_{ij}$, where $\delta^z_{ij}$ and $\delta^{\perp}_{ij}$ are the projectors along and normal to the $z$ axis, respectively. Because of the nonequilibrium origin of the noise and diffusion constants, we may not[@driven] assume that $N_{0 \perp}/N_{0 z} = D_{0
\perp}/D_{0 z}$ as would be true for the Langevin equation of a dilute suspension at thermal equilibrium. Note that no correlations have been fed in via the noise: any that emerge in the long-wavelength properties are a result of the interplay of advection and diffusion.
Let us now consider the nature of the spatio-temporal correlations implied by Eqs. \[main\] and \[main2\]. We will focus on the structure factor for concentration fluctuations $$\label{sq}
S(q) \equiv c_0^{-1} \int d^dr \langle c({\bf 0}) c({\bf
r})\rangle e^{-i {\bf q}.{\bf r}}$$ from which the velocity structure factor can be derived through Eq. \[main2\]. If we ignore the advective nonlinearity ${\bf v} \cdot
\bbox{\nabla} c$, then $S({\bf q})$ can be computed by straightforward Fourier transformation of Eq. \[main\], resulting in $$\label{sq:2}
S({\bf q}) = S_0({\bf q}) \equiv \frac{N_{0 \perp} q_\perp^2 +
N_{0 z} q_z^2}{D_{0
\perp} q_\perp^2 + D_{0 z} q_z^2}.$$ Using Eq. \[sq:2\] in Eq. \[main2\] we can compute $\langle v^2 \rangle$ as a function of the system size $L$ with the result: $$\label{vel_var}
\langle v^2 \rangle \sim \int_{q > 1/L} \frac{S({\bf q})}{q^4}
\sim L.$$ In other words, neglecting large-scale advection by the velocity fluctuations leads to the CL [@Caflisch] result.
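The linear growth with $L$ can be confirmed by direct quadrature of the estimate above; the constant structure factor and the ultraviolet cutoff below are illustrative choices.

```python
import numpy as np

# NumPy >= 2.0 renamed np.trapz to np.trapezoid
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

# For a constant structure factor S_0, int_{q > 1/L} d^3q S_0/q^4 ~ 4 pi S_0 L:
# the integral is dominated by its infrared cutoff and grows linearly with L.
S0, q_max = 1.0, 1.0e3

def v2(Lbox, n=40001):
    q = np.geomspace(1.0 / Lbox, q_max, n)
    return trapz(4.0 * np.pi * q**2 * S0 / q**4, q)   # angular integral done analytically

for Lbox in (10.0, 20.0, 40.0):
    print(Lbox, v2(Lbox), 4.0 * np.pi * Lbox)
```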
To include the effect of the advective nonlinearity we have performed a self-consistent mode coupling calculation [@gal] on Eqs. \[main\]-\[corr\]. Our results can be expressed in terms of a [ *renormalized*]{} relaxation rate $$\label{rateren}
R({\bf q}) = D_{\perp}({\bf q}) q_{\perp}^2 + D_z({\bf q})
q_z^2 + \Gamma({\bf q})$$ and a [*renormalized*]{} structure factor of the form $$\label{sq:new}
S({\bf q}) = \frac{N_\perp({\bf q}) q_\perp^2 + N_z({\bf q})
q_z^2}{R({\bf q})}.$$ The quantities $D_{z,\perp}({\bf q})$ and $N_{z,\perp}({\bf q})$ represent renormalized diffusivities and noise amplitudes [@freq]. But, most importantly, the advective nonlinearity to lowest-order perturbation theory leads to an additional term in the renormalization of the relaxation rate which is of the form $\Gamma({\bf q}) = \gamma({\bf q}) q_{\perp}^2/q^2$. Starting from the stochastic hydrodynamic equations, Eqs.\[main\]-\[corr\], it turns out that the amplitude of this singular contribution becomes a constant, $\lim_{q \rightarrow 0} \gamma ({\bf q}) \propto I(\beta_N, \beta_D)$, which depends on the anisotropy ratios of the noise and diffusivity coefficients $$\begin{aligned}
\label{beta}
\beta_N = \frac{N_\perp}{N_z}, \quad {\rm and} \quad
\beta_D = \frac{D_\perp}{D_z}.\end{aligned}$$ In particular $I(\beta_N, \beta_D)$ is proportional to $\beta_N - \beta_D$, and consequently may change sign upon varying the noise and diffusivity ratios. For $I(\beta_N, \beta_D) < 0$ this would lead to exponentially growing concentration fluctuations in the limit of long wavelength. Here we do not pursue this intriguing possibility further but instead restrict our attention to $I(\beta_N, \beta_D) \geq 0$, for which the model can either be treated within dynamic renormalization group theory or using self-consistency methods.
We start our discussion at the borderline of stability, $\beta_N = \beta_D$. For these parameter values it can be shown that the fluctuating hydrodynamic equations describe a dynamics which obeys detailed balance [@detailed_balance]: the advective nonlinearity does not affect the equal-time correlations, and $S({\bf q})$ in particular is just the constant $N_{\perp}/D_{\perp}$. There are singularities in $N_{\perp,z}$ and $D_{\perp,
z}$ which we discuss later.
For $\beta_N > \beta_D$, detailed balance is violated and a singular diffusion term $\Gamma ({\bf q})$ is generated within perturbation theory. In order to analyze the dynamics in this regime we use one-loop self-consistent theory (mode coupling theory) and arrive at the expression $$\begin{aligned}
\label{gamma:eq}
\Gamma({\bf q}) = && c_0 \left(\frac{m_R g}{\eta}\right)^2
\int_k \frac{q_i P_{iz}({\bf
k})k_j P_{jz}({\bf q})}{k^2 q^2} \nonumber \\ \times
&&\frac{\left[S ({\bf q - k}) - S ({\bf k})\right]}{R({\bf
k}) + R({\bf q - k})}\end{aligned}$$ with $R({\bf q})$ given by (\[rateren\]), and similar self-consistent integral equations for $D_\perp({\bf q})$, $D_z(\bf q)$, $N_\perp(\bf q)$, and $N_z(\bf q)$. We find that there are two types of iteratively stable solutions to these coupled self-consistent equations: those with $\gamma(q \rightarrow 0)
> 0$, which we obtain below the solid line in the phase diagram spanned by the two anisotropy ratios (“screened” phase in Fig. 1), and those with $\gamma(q
\rightarrow 0) = 0$, which arises for values of the anisotropy parameters that lie above the solid line and below the dashed line of the same figure, i.e., in the “unscreened” phase. Note that within the self-consistent theory the line in the phase diagram where $\gamma(q=0)$ changes sign (solid line) has shifted with respect to the result of the one-loop perturbation theory discussed above (dashed line).
#### Screened Phase: {#screened-phase .unnumbered}
In the screened phase, $\Gamma({\bf q})$ is of the form $\gamma q_\perp^2/q^2$ in the small $q$ limit, with $\gamma$ a finite constant. This implies that the structure factor at small wavenumber becomes $$\label{structure:gamma}
S({\bf q}) \simeq \frac{N_\perp q_\perp^2 + N_z q_z^2}{D_\perp q_\perp^2
+ D_z q_z^2 + \gamma q_\perp^2 / q^2}$$ with $N_{\perp,z}$ and $D_{\perp,z}$ constants. From Eq. \[structure:gamma\] we can define a correlation length $\xi \equiv \left(
D_\perp/\gamma\right)^{1/2}$ such that for $q_\perp \gg 1/\xi$ the structure factor is not significantly affected by advection. On the other hand, for $
q_\perp \ll 1/\xi$ the in-plane structure factor reads $S({{\bf q}}_\perp, q_z
= 0) \simeq \left(N_\perp/\gamma \right) q_\perp^2$, while $S({{\bf q}}_\perp =
0, q_z) \simeq \left(N_z/D_z \right)$. Physically, this means that at long wavelength advection strongly suppresses in-plane concentration fluctuations.
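These limits follow directly from the screened structure factor and are simple to confirm numerically; the constants below are illustrative placeholders, not fitted values.

```python
import numpy as np

# Screened structure factor S(q) = (N_perp q_perp^2 + N_z q_z^2) /
# (D_perp q_perp^2 + D_z q_z^2 + gamma q_perp^2 / q^2), with illustrative constants.
N_perp, N_z, D_perp, D_z, gamma = 1.0, 2.0, 1.0, 1.5, 100.0
xi = np.sqrt(D_perp / gamma)                    # correlation length

def S(q_perp, q_z):
    q2 = q_perp**2 + q_z**2
    return (N_perp * q_perp**2 + N_z * q_z**2) / (
        D_perp * q_perp**2 + D_z * q_z**2 + gamma * q_perp**2 / q2)

print(S(1e-4, 0.0), (N_perp / gamma) * 1e-8)    # in-plane: S ~ (N_perp/gamma) q_perp^2
print(S(0.0, 1e-4), N_z / D_z)                  # along z: S -> N_z/D_z
print(S(1e3, 0.0), N_perp / D_perp)             # q_perp >> 1/xi: advection negligible
```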
Using Eq. \[structure:gamma\] in conjunction with Eq. \[main2\], one finds that for length scales $L$ less than $\xi$, $\langle v^2 \rangle \propto L$, consistent with CL, while for $L$ large compared to $\xi$, $\langle v^2 \rangle
\propto \xi$. Velocity fluctuations on length scales small compared to $\xi$ are thus highly correlated while they become uncorrelated at larger length scales.
Deep inside the screened phase, i.e., for large $\gamma$, the renormalization of the diffusion and noise parameters is negligible and we can explicitly compute $\gamma$, and thus $\xi$, by inserting Eq. \[sq:new\] in Eq. \[gamma:eq\] using the bare values for the $N$’s and $D$’s. We find $$\label{eq:xi}
\xi = 8 (\frac{m_R g}{ \eta D})^{-2/3} c_0^{-1/3} \left( 1 -
\frac{2}{\beta_N} \right)^{-1/3},$$ where for simplicity we have set $D_{0 \perp} = D_{0 z} = D$. According to Eq. \[eq:xi\], the correlation length increases as we decrease the $\beta_N$ parameter (which could be done by increasing the [*thermal*]{} noise amplitude) and diverges at $\beta_N = 2$. Strictly speaking, as $\beta_N
\rightarrow 2$, the diffusivity corrections are no longer negligible, and the actual divergence of $\xi$ is probably stronger than (\[eq:xi\]), and occurs at a larger value of $\beta_N$. An explicit analytical (but lengthy) result for the correlation length $\xi$ can also be obtained throughout the screened phase as a function of both anisotropy parameters[@levine] and the phase boundary can also be computed. The phase boundary resulting from this result is shown in Fig. 1 as the solid line separating the screened from the unscreened phases. The dashed line in the figure corresponds to the set of parameter values where the hydrodynamic equations correspond to a Langevin dynamics in thermal equilibrium.
#### Unscreened Phase: {#unscreened-phase .unnumbered}
As already noted above, the hydrodynamic equations obey detailed balance [@detailed_balance] along the line $\beta_N
= \beta_D$ in the phase diagram. As a consequence, the ratio of noise to diffusivity can be identified as a direction-independent “noise temperature”. Furthermore, the structure factor $S({\bf q})$ becomes a constant $N_\perp/D_\perp$ and we recover the CL result. In conjunction with an exponent identity resulting from Galilean invariance, this is enough to determine the dynamic exponent exactly, $z=d/2-1$. This implies that the diffusivities and noise amplitudes scale as $q^{-\epsilon/2} = q^{-3/2}$ at long wavelengths. Even though there are now singular corrections to $D_{z,\perp}({\bf q})$ and $N_{z,\perp}({\bf q})$, the anomalous $\Gamma({\bf q})$ term is zero. For parameter values in the regime between the dashed line (the detailed-balance line) and the solid line, which marks the location of the nonequilibrium phase transition, renormalization group methods may be used to determine the renormalization of the noise and diffusivity amplitudes. In view of the results from the above self-consistency calculation ($\gamma =0$ in the unscreened phase) and the exact results at the detailed-balance line, it is quite likely that the resulting renormalization group flow will tend towards a fixed point which obeys detailed balance. We leave the details of such an investigation for a future publication [@levine].
[Figure 1: dynamical phase diagram (solid line: phase boundary between the screened and unscreened phases; dashed line: detailed-balance line).]

\[phdiag\]
The analysis of our hydrodynamic equations thus confirms that screening can suppress the CL divergence of $\langle v^2 \rangle$ with $L$, as argued by KS, while it allows for a second, unscreened phase. This result may help reconcile the conflicting measurements of $\langle v^2 \rangle$ obtained by different workers [@Nicolai; @Xue; @Segre; @Ladd; @Tory]. The self-consistent structure factor, Eq. \[sq:2\], that we obtained differs significantly from the one proposed by KS. Experimental tests will thus be of considerable importance. Measurements of $S({\bf q})$, for example by Particle Imaging Velocimetry (PIV) [@piv], would constitute the most direct test of the theory, since our prediction that $S({\bf q}_\perp, q_z = 0) \propto q_\perp^2$ does not hold in the KS description. Detailed measurements of $S({\bf q})$ for sedimenting solutions are not yet available. However, Segrè [*et al.*]{} [@Segre] do report that the size dependence of the amplitude $\langle v^2 \rangle$ of the velocity fluctuations depends on a characteristic length scale $\xi_S$ such that $\langle v^2 \rangle \propto \xi_S$ for length scales $L \gg \xi_S$, while for $L \ll \xi_S$, $\langle v^2 \rangle$ grows with $L$. They report that $\xi_S
\sim a \phi^{-1/3}$ with $\phi$ the particle volume fraction.
Our correlation length $\xi$, in Eq. \[eq:xi\], has the same physical interpretation as $\xi_S$. Deep in the screened phase, i.e., for $I(\beta_N,
\beta_D ) \gg 0$, $\xi$ can be written as: $$\label{answer}
\xi(\phi) \sim (m_R g / \eta D)^{-2/3}
a \phi^{-1/3} I(\beta_N, \beta_D )^{-1/3}.$$ On scaling grounds, we expect that $D \propto \delta v_{\rm rms} \xi$ with $\delta
v_{\rm rms}$ the root mean square of the velocity field fluctuations. Experimentally, $\delta v_{\rm rms} \xi$ is found to be independent of the volume fraction $\phi$. In that case, Eq. \[answer\] reproduces the experimentally observed volume-fraction dependence, in contrast to KS [@Koch]. It should be noted that this volume-fraction dependence of the correlation length implies that the number of colloids within a correlation volume is fixed, independent of the volume fraction.
The observation of a transition from the screened to the unscreened phase would obviously be the most conclusive evidence supporting our theory, in particular if the transition were accompanied by a divergence of the velocity fluctuation correlation length. Even in the absence of such direct evidence, the observation of screened behavior combined with our theory requires that the anisotropies in the noise and diffusivity lie in the lower region of our dynamical phase diagram, Fig. 1. A complete test of our theory thus requires measurement of the $N$ and $D$ parameters. These could be obtained from the measurement of the steady-state static structure factor $S({\bf q})$, [ *e.g.*]{} by particle imaging or light scattering experiments both along the $z$ direction and in the $x-y$ plane, coupled with tracer diffusion measurements.
Finally, it would be interesting to vary the effective noise and diffusion constants in a controlled manner in an experiment. While there is, as yet, no method to calculate these constants directly from a microscopic theory it is reasonable to expect that by decreasing the Peclet number (i.e., increasing the role of [*isotropic*]{} thermal diffusion) one could drive the sedimenting system into the unscreened phase. Thus by repeating the experiments of Segrè [*et al.*]{} [@Segre] with colloids that are more nearly density matched to the solvent one could test our prediction of a transition to an unscreened phase.
We would like to thank M. Rutgers, P. Chaikin, and P. Segrè for communicating unpublished results and for useful discussions. We would also like to thank J. Brady, D. Durian, E. Herbolzheimer, S. Milner, R. Pandit, J. Rudnick and U.C. Täuber for useful discussions. S.R. thanks F. Pincus and C. Safinya and the Materials Research Laboratory, UCSB (NSF DMR93-01199 and 91-23045), as well as the ITP Biomembranes Workshop (NSF PHY94-07194) for partial support in the early stages of this work. A.L. acknowledges support by an AT&T Graduate Fellowship. E.F. acknowledges support by a Heisenberg fellowship (FR 850/3-1) from the Deutsche Forschungsgemeinschaft.
R. Blanc and E. Guyon, [*La Recherche*]{}, [**22**]{}, 866 (1991).
R.E. Caflisch and J.H.C. Luke, Phys. Fluids [**28**]{}, 759 (1985).
H. Nicolai and E. Guazzelli, Phys. Fluids [**7**]{}, 3 (1995).
J.Z. Xue, [*et al.*]{}, Phys. Rev. Lett. [**69**]{}, 1715 (1992).
P.N. Segrè, E. Herbolzheimer, and P.M. Chaikin, Phys. Rev. Lett. [**79**]{}, 2574 (1997).
A.J.C. Ladd, Phys. Rev. Lett. [**76**]{}, 1392 (1996).
E.M. Tory, M.T. Kamel, and C.F. Chan Man Fong, Powder Technology [**73**]{}, 219 (1992).
D.L. Koch and E.S.G. Shaqfeh, J. Fluid Mech. [**224**]{}, 275 (1991).
A. Levine, S. Ramaswamy, E. Frey, and R. Bruinsma, in preparation.
In Eqs. (\[main\]–\[main2\]) other possible nonlinearities, e.g. those arising from the concentration dependence of mobilities and viscosities, as well as multiplicative noise terms of the form $\bbox{\nabla}
\cdot (c {\bf h})\/$, where ${\bf h}\/$ is a spatio-temporally white vector noise, can readily be shown, by power-counting, to be subdominant at small wavenumber relative to the advection term $\bbox{\nabla} \cdot {\bf v} c\/$, as can advection by [*thermal*]{} fluctuations in the fluid velocity field (S. Ramaswamy, unpublished).
I.M. Jánosi [*et al.*]{}, Phys. Rev. E [**56**]{}, 2858 (1997); J.F. Brady and G. Bossis, Ann. Rev. Fluid Mech. [**20**]{}, 111 (1988).
See e.g. B. Schmittmann and R.K.P. Zia, in [*Phase transitions and critical phenomena*]{}, Vol. 17, ed. by C. Domb and J. Lebowitz (Academic Press, London, 1994).
Including frequency dependence will not alter the critical exponents and the structure of the scaling variable; it may, however, affect the functional form of the full relaxation rates.
The “Galilean” invariance of Eqs. \[main\] and \[main2\] under ${\bf v}({\bf r},t) \rightarrow {\bf v}({\bf r},t) + {\bf U}, {\bf r}
\rightarrow {\bf r} - {\bf U} t$ guarantees that the nonlinear coupling will not renormalize. This means we need to worry only about the corrections to the noise strength and relaxation rate; A. Levine, S. Ramaswamy, E. Frey, and R. Bruinsma (unpublished); see also D. Forster, D.R. Nelson, and M.J. Stephen, Phys. Rev. A [**16**]{}, 732 (1977).
For nonequilibrium random processes the stationary probability distribution function is not known a priori, except for certain cases where so-called “potential conditions” (see e.g. R. Graham, in [ *Springer Tracts in Modern Physics*]{} Vol. 66, Springer Verlag, Berlin, 1973) are fulfilled. Then the random process has the detailed balance property and the stationary distribution function can be calculated explicitly. In the present case detailed balance holds for $\beta_N=\beta_D$, with the equilibrium distribution function given by $P_{\rm st} [c] \propto \exp
\left\{ - \int d^d x (D_\perp/N_\perp) c^2 ({\bf x}) \right\}$.
R.J. Adrian, Annu. Rev. Fluid Mech. [**23**]{}, 261 (1991).
---
abstract: 'We consider equilibrium relaxation properties of the end-to-end distance and of principal components in a one-dimensional polymer chain model with nonlinear interaction between the beads. While for the single-well potentials these properties are similar to the ones of a Rouse chain, for the double-well interaction potentials, modeling internal friction, they differ vastly from the ones of the harmonic chain at intermediate times and intermediate temperatures. This minimal description within a one-dimensional model mimics the relaxation properties found in much more complex polymer systems. Thus, the relaxation time of the end-to-end distance may grow by orders of magnitude at intermediate temperatures. The principal components (whose directions are shown to coincide with the normal modes of the harmonic chain, whatever interaction potential is assumed) not only display larger relaxation times but also subdiffusive scaling.'
author:
- 'S. Fugmann and I. M. Sokolov'
title: Internal friction and mode relaxation in a simple chain model
---
Introduction
============
The dynamics of polymers and peptides has attracted considerable attention in the past decade, mostly due to the relations of such dynamics to biological function. The first such interest was due to the dynamics of nonequilibrium states (connected with the folding problem and with the biological functioning of proteins), and only later did interest in *equilibrium* fluctuations in proteins arise. This interest had two causes: one has to do with the thermal stability of proteins as connected to their structures [@Granek05]; the other is related to luminescent measurements of fluctuations of the distance between two groups in equilibrium [@Xie04] and the discovery of anomalous kinetics and of extremely large characteristic times in such fluctuations. The results of these investigations, put together, lead to an enigma: On one hand, the thermal stability and many other properties of proteins can be well described within simple bead-spring models of such systems [@Togashi07_PNAS; @Cressmann08_PRE], which for small deviations from equilibrium can be reduced to a standard picture of normal modes in a complex harmonic network. The evaluation of the characteristic times in such systems, however, leads to results that are off by orders of magnitude when compared to the observed times [@Tang06_PRE]. The strong discrepancy in correlation times implies the existence of a strong additional [*internal*]{} friction mechanism slowing down the dynamics compared to the linear Rouse one.
Recently, anomalous kinetics (power-law decay of correlations) was observed even in single modes and even in relatively short and flexible peptides, i.e. linear chains lacking secondary structure [@Neusius08], and the behavior found there is very close to the one observed in protein simulations [@Senet08]. Such kinetics is therefore probably an intrinsic property of a large class of polymers, and is not connected to the specific properties of the secondary structure of proteins. Furthermore, it has been shown that trap models cannot account for the anomalous dynamics [@Neusius08]. It is therefore necessary to analyze what minimal assumptions about the intramolecular potentials have to be made to build a model mimicking the behavior observed in realistic molecular dynamics simulations.
Typical mechanisms of internal friction involve the existence of barriers, of entropic or enthalpic nature, in the free energy landscape of the system. The necessity to overcome such barriers slows down the overall dynamics by an Arrhenius factor, which might be quite large [@Fixman78]. However, the existence of a complex, nonlinear energy landscape may lead to a strong mixing of the modes appearing in the linearized description, and make the whole analysis based on such a picture problematic. As we proceed to show, this is not the case: although the dynamics of the modes is strongly influenced, the directions of the principal components (PCs) in configuration space follow those of the normal modes, which resolves the enigma.
In what follows we first consider the Brownian dynamics of a three-dimensional polymer chain, as applied, e.g., for simulating a polyethylene molecule within the valence-angle model [@Ryckaert75; @Rigby87; @Binder97], and show that it follows slow kinetics at intermediate times, provided that the temperature is low enough. On the other hand, at high temperatures the typical Rouse dynamics sets in.
To simplify the model even more, we consider a one-dimensional chain of beads, similar to the Rouse model of polymer dynamics, with the only difference that the harmonic interaction between the beads is replaced by a nonlinear one. We discuss the end-to-end distance of the chain and the relaxation properties of the PCs [@Kitao99] of the system.
The one-dimensional model discussed here is a close continuous analogue of the so-called “necklace model” for reptation [@Guidoni03; @Drz06], so that the results might have a broader applicability outside the scope of the present investigation.
The results of extensive numerical simulations show that while none of the single-well potentials tested drastically affects the single-mode relaxation properties, the double-well potentials lead to intermediate-time behavior quite similar to the one observed in realistic systems and to a strong increase in relaxation times compared to the harmonic case. Such a model may therefore be considered a possible candidate for the explanation of the corresponding findings. Multiwell intramolecular potentials appear at different scales and are ubiquitous in polymers [@Ryckaert75; @Binder97]; they appear quite naturally, e.g., within the valence-angle model. The anomalous behavior is present within a finite temperature range (which in the realistic case may be relevant for biological functioning), and disappears both for low and for high temperatures, where the dynamics can therefore be described by an effective harmonic model, albeit with parameter values strongly different from the “microscopic” ones. This property can explain why generalized Gaussian models work quite well in predicting thermal stability properties while failing to mimic the temporal fluctuation behavior at moderate temperatures.
The work is organized as follows: In the next section we present the relaxation properties found in a complex three-dimensional polymer model and introduce in Sec. \[s\_model\] the one-dimensional chain model. In Sec. \[s\_relate\] we consider the relaxation properties of the end-to-end distance of the chain and discuss in Sec. \[s\_pcs\] the behavior of its PCs. We focus on their directions and their temporal scaling, which is found to coincide with the one found in the realistic three-dimensional model. Finally we summarize our results.
Internal friction in a three-dimensional chain model {#s_comp}
====================================================
In the first part of the work we consider the relaxation properties of a complex polymer model which was proposed to mimic a polyethylene molecule. The model equations can be found in [@Ryckaert75; @Rigby87; @Binder97]. They include valence bond-, valence angle- and torsional angle-interactions. For our purpose we neglect the Lennard-Jones interactions and focus on the influence of the angle interactions, which follow a multistable potential energy landscape. All parameter values are the same as in [@Rigby87] except for the bond length $\tilde{l}_0=1$, and the constants $k_b=3$, $k_{\Theta}=3$, and $k_{\phi}=0.1$, which only set the timescale of the simulation. We perform Brownian dynamics simulations and study the relaxation properties of the PCs and their autocorrelation functions.
For a discrete set of $M$ measurements of an observable $x(t)$ (coordinate, end-to-end distance, etc.), the unbiased autocorrelation function (ACF) $\phi(t)$ ($t = 0,...,M-1$) is given by $$\phi(t)=\frac{1}{(M-t)\sigma^2}\sum_{n=0}^{M-t-1}\left(x(n+t)-\mu\right)\left(x(n)-\mu\right)\,,
\label{eq:ACF}$$ with the sample mean $\mu$ and sample variance $\sigma^2$. Since (for $\mu=0$) the mean squared displacement $\langle x^2(t)\rangle=\langle \left(x(t+t')-x(t')\right)^2\rangle
=\langle x^2(t+t')-2x(t+t')x(t')+x^2(t')\rangle
=2\sigma^2\left(1-\phi(t)\right)$, the values of $1-\phi(t)$ and $\langle x^2(t)\rangle$ (e.g. used in [@Neusius08]) contain the same information and differ only in normalization. We denote the unbiased ACF of the $k$-th PC by $\phi_{k}$.
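As a concrete illustration, the estimator of Eq. \[eq:ACF\] and its relation to the mean squared displacement can be sketched in a few lines of NumPy (our own sketch; the AR(1) test series below is an arbitrary stationary signal, not data from the chain model):

```python
import numpy as np

def unbiased_acf(x, t_max):
    """Unbiased ACF of Eq. (eq:ACF):
    phi(t) = sum_n (x[n+t]-mu)(x[n]-mu) / ((M-t) sigma^2)."""
    x = np.asarray(x, dtype=float)
    M = len(x)
    d = x - x.mean()
    var = x.var()
    return np.array([(d[t:] * d[:M - t]).sum() / ((M - t) * var)
                     for t in range(t_max)])

# Check <x^2(t)> = 2 sigma^2 (1 - phi(t)) on an arbitrary stationary
# test signal (an AR(1) series, not output of the chain model).
rng = np.random.default_rng(0)
x = np.empty(20000)
x[0] = 0.0
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + rng.normal()

phi = unbiased_acf(x, t_max=50)
msd = np.array([((x[t:] - x[:-t]) ** 2).mean() for t in range(1, 50)])
```

Up to boundary terms of order $t/M$, `msd` and `2 * x.var() * (1 - phi[1:])` coincide, illustrating that the two quantities carry the same information and differ only in normalization.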
In Fig. \[fig:CMPCs\] we show the scaling of $1-\phi_k(t)$ for some of the PCs. In panel (a) we have chosen a high value of the temperature, yielding a trans-gauche barrier height of $\Delta E_{tg}/k_BT=0.66$, a cis barrier height of $\Delta E_{c}/k_BT=2.4$, and $\Delta E_{t\rightarrow g}/k_BT=0.16$. All PCs show a scaling $t^{\alpha}$ with $\alpha=1$, which is the same as for the relaxation of single modes in the Rouse chain [@Doi96]. The scaling for a lower value of the temperature, with $\Delta E_{tg}/k_BT=7.6$, a cis barrier height of $\Delta E_{c}/k_BT=27.6$, and $\Delta E_{t\rightarrow g}/k_BT=1.8$, is presented in panel (b). The relaxation behavior differs strongly from the previous case. For small times $1-\phi_k(t)$ still follows the $t^{1}$ scaling (dotted line), but at longer times it crosses over to anomalous, subdiffusive behavior. The curve’s slope is slightly higher for the first mode. Compared to the high temperature case, the typical relaxation times are shifted to longer times. Thus the existence of barriers in the free energy landscape at intermediate temperatures slows down the dynamics. However, the theoretical analysis of a three-dimensional model is still too involved, so that we simplify it even more in the next section.
The minimal model {#s_model}
=================
In order to mimic the behavior found in the complex three-dimensional model we consider a one-dimensional chain of $N$ beads with coordinates $q_1$, ..., $q_N$. The interaction potential is given by $W(l_i)$, with $l_{i} = q_{i}-q_{i-1}$ being the relative displacements of the neighboring beads, $i=1,\cdots,N$. The chain is free, hence $W(l_0)=W(l_{N+1})=0$. For a harmonic potential $W(l_i)$ this model corresponds to a Rouse chain in 1d. The dynamics is overdamped and the system of coupled Langevin equations reads $$\dot{q}_i=-\frac{1}{\gamma}\frac{d}{dq_i}\left\{W(l_{i+1})+W(l_i)\right\}+\sqrt{2\frac{k_BT}{\gamma}}\xi_i\,,
\label{eq:langevin}$$ with $\xi_i$ being Gaussian white noise of strength $k_BT/\gamma$. The friction coefficient $\gamma$ is set to unity in what follows. The simulations consist of solving the set of coupled equations with a Heun integration scheme. In order to get reliable statistics, the numerical simulations have to cover many orders of magnitude in time; we therefore confine ourselves to relatively short chains consisting of $N=25$ beads.
We concentrate on three simple prototypes of coupling potentials $W(l_i)$, either single-well or double-well. For the single-well coupling potentials we consider a soft and asymmetric Toda interaction (T-potential) [@Toda89] and a hard and symmetric quartic interaction (Q-potential), i.e., $W(l_i)=l_i+\exp(-l_i)-1$ and $W(l_i)=\frac{1}{2}l_i^2+\frac{1}{4}l_i^4$, respectively. The double-well potential (DW-potential) has the form $W(l_i)=\frac{a}{4} l_i^4-\frac{b}{2} l_i^2$. The latter potential exhibits two minima separated by a maximum of height $\Delta E=b^2/(4a)$. We define $e=\Delta E/(k_BT)$. Note that for small deviations from their equilibria all potentials can be approximated by a harmonic spring with coupling constant $\kappa=1$ for the T- and Q-potentials and $\kappa=2b$ for the DW-potential.
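A minimal sketch of such a simulation, assuming the DW-potential and a stochastic Heun (predictor-corrector) step for Eq. \[eq:langevin\] with $\gamma=1$. Beads are 0-indexed here, and the time step, temperature, and run length are our own illustrative choices, not the values used for the figures:

```python
import numpy as np

def wprime_dw(l, a=1.0, b=2.0):
    """W'(l) for the double-well bond potential W(l) = (a/4) l^4 - (b/2) l^2."""
    return a * l**3 - b * l

def drift(q, wprime):
    """Deterministic force on each bead of a free chain (gamma = 1):
    bead i feels W'(l_i) - W'(l_{i-1}); no bond beyond the chain ends."""
    wp = wprime(np.diff(q))   # W'(l) on each of the N-1 bonds
    g = np.zeros_like(q)
    g[:-1] += wp              # bond to the right of bead i
    g[1:] -= wp               # bond to the left of bead i
    return g

def heun_step(q, dt, kT, wprime, rng):
    """One stochastic Heun (predictor-corrector) step of Eq. (eq:langevin)."""
    noise = np.sqrt(2.0 * kT * dt) * rng.normal(size=q.size)
    g0 = drift(q, wprime)
    q_pred = q + g0 * dt + noise               # Euler predictor
    g1 = drift(q_pred, wprime)
    return q + 0.5 * (g0 + g1) * dt + noise    # trapezoidal corrector

# Illustrative run: N = 25 beads, all bonds starting in the right-hand well;
# with a = 1, b = 2 the barrier is DeltaE = 1, so kT = 0.2 gives e = 5.
rng = np.random.default_rng(1)
N, dt, kT = 25, 0.01, 0.2
q = np.cumsum(np.full(N, np.sqrt(2.0)))        # minima at l = +/- sqrt(b/a)
ete = []
for _ in range(5000):
    q = heun_step(q, dt, kT, wprime_dw, rng)
    ete.append(q[-1] - q[0])                   # end-to-end distance R_ete
```

From the recorded `ete` series one would then compute $\phi_{ete}$ with the unbiased estimator of Eq. \[eq:ACF\].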
Relaxation of the end-to-end distance {#s_relate}
=====================================
First, let us consider the ACF of the end-to-end distance $R_{ete}=q_N-q_1$, see Fig. \[fig:dwETE\]. We denote the unbiased ACF of the end-to-end distance by $\phi_{ete}$. Panel (a) shows the scaling of $1-\phi_{ete}$ vs. time for the different single-well potentials. Compared to the corresponding harmonic chain, the autocorrelation time is reduced for the Q-potential while it grows for the T-potential, and this effect is stronger for higher temperatures. An explanation is as follows: the relaxation time of $R_{ete}$ is to a good approximation the largest relaxation time of the harmonic chain. The latter behaves as $\tau_1\sim 1/\kappa$ [@Doi96]. For the hard Q-potential the effective value of $\kappa$ is larger than for a harmonic one and grows with temperature, so that $\tau_1$ is expected to be smaller, while for the soft T-potential the opposite is true. In panel (b) $1-\phi_{ete}$ is depicted for the DW-potential. For different values of $e$ the curves differ strongly. At a low temperature ($e=20$, dotted line) the curve coincides with the one of the corresponding harmonic chain (solid gray line) for a finite simulation time (the beads hardly jump between the two minima of the coupling potential): this is exactly what should be observed when such a conformation change mechanism is frozen out [@Tournier03_PRL]. For intermediate barrier heights ($e=10$, dashed-dotted line) the behavior of the ACF is strongly different. It shows distinctly different behaviors for short and long times, bridged by a plateau. The characteristic relaxation time in the long-time domain (where the kinetics is dominated by rare fluctuations following the Arrhenius law) is about four orders of magnitude larger than for the harmonic chain. We also considered a case in which the barrier heights are randomly distributed with the density $\rho(e)\sim \exp(\lambda (e-e_{min}))$, $e\geq e_{min}=5$, $\lambda=0.1$, with a cut-off at $e_{max}=11$ to avoid freezing (solid black line).
In this case the plateau is smoothed out, but the increase of the characteristic relaxation time persists. For high temperatures ($e=1$, dashed line) the correlation time is much smaller compared to the previous case (but still larger than in the harmonic case due to the softer potential) and its scaling follows the one of the harmonic chain. We conclude that the increase of the correlation time (related to the timescale of rare barrier crossings) becomes large at intermediate temperatures. Qualitatively the observed behavior is in full agreement with the one found for the realistic three-dimensional model.
Principal components (PCs) {#s_pcs}
==========================
Directions of the PCs
---------------------
Let us now turn to the behavior of the principal components. We proceed to show that in a homogeneous linear chain with symmetric interaction potentials $W(l_i)$ between the beads, with the overall potential energy given by $\tilde{W}=\sum_i W(l_{i})$, the PCs follow the directions of the normal modes of the harmonic chain, independently of the exact form of the potential $W(l_i)$. This is due to the fact that the directions of the PCs are essentially thermodynamical quantities.
The proof of this fact involves the following steps. Consider a grafted chain of $N+1$ beads, whose first bead is attached to the origin of coordinates by a weak spring, with a small elastic constant $\varepsilon$ (an “asymptotically free” chain). Let $\mathbf{q}$ be the vector of the coordinates, and $\mathbf{l}$ be the vector with components $l_0=q_0$ and $l_i$ for $i \geq 1$. For a given interaction potential the distribution of $\mathbf{l}$ is given by $$p(l_0,...,l_N)=\frac{1}{Z} \exp \left[-\frac{1}{k_BT}\left(
\frac{\varepsilon}{2}l_0^2 +\sum_i W(l_{i}) \right)\right].
\label{Canonical}$$ This overall canonical distribution factorizes into the product of the distributions of the single $l_i$-components, which means that the corresponding random variables are independent. Due to the symmetry of all interaction potentials, the mean values of $l_i$ are zero. Hence, the corresponding variables are uncorrelated too: $\left\langle l_0^2 \right\rangle = l_0^2 = k_BT/\varepsilon$, $\left\langle l_i^2 \right\rangle = l^2$ for $i=1,...,N$ and $\left\langle l_i l_j \right\rangle = 0$ for all $i \neq j$. The variables $q_i$ are obtained from the $l_i$ via the linear transformation $q_i = \sum_{j=0}^i l_j$, so that their correlator $C_{ij} =
\left\langle q_i q_j \right\rangle = l_0^2
+ \min\{i,j\} l^2 $. The matrix $\hat{C}$ can be diagonalized. Its eigenvectors are the PCs of the grafted chain, whether harmonic or not. The normalized PCs cannot depend on $l^2$ or $l_0^2$ independently, but only on their ratio $l^2/l_0^2 = l^2 \varepsilon/(k_BT)$. Note that for the free chain, $\varepsilon \to 0$, $l_0 \to \infty$, the PCs of the harmonic and anharmonic chains coincide. The same is true also for asymmetric interaction potentials, where the lengths $l_i$ have to be corrected for thermal expansion.
Turning to a harmonic chain, we can change $l_i$ for $q_i$ in Eq.(1), which now reads $$p(q_0,...,q_N)=\frac{1}{Z'} \exp \left[-\frac{\kappa}{2k_BT}\left( \mathbf{q} \hat{M} \mathbf{q}
\right)\right],
\label{Canonical2}$$ where $\hat{M}$ is the tridiagonal force matrix, whose elements are: $m_{00} =1+\varepsilon$, $m_{NN}=1$, the remaining diagonal elements $m_{ii}=2$, and the elements on the sub- and superdiagonals equal to $-1$. Since the corresponding distribution is a multivariate Gaussian, the elements of the correlation matrix $\hat{C}$ are proportional to those of the inverse of the matrix $\hat{M}$. The eigenvectors of the matrix $\hat{M}$ are the normal modes of the harmonic chain. Since the matrix $\hat{M}$ and its inverse $\hat{C}$ share their eigenvectors, these are also the PCs of the harmonic chain. The idea of considering a grafted chain is motivated by the wish to have $\hat{M}$ invertible. In the last step one shows that for $\varepsilon \to 0$ the limits of all corresponding eigenvectors exist (a fact having to do with the nondegenerate nature of the harmonic chain’s spectrum).
Thus, for any interaction potential between neighboring beads, the PCs follow the directions of the normal modes of the harmonic chain. However, as we proceed to show, the dynamical properties of each single mode can be drastically different from those of a harmonic chain.
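The argument can also be checked numerically: for the grafted harmonic chain, the inverse of the tridiagonal matrix $\hat{M}$ has exactly the $l_0^2 + \min\{i,j\}\,l^2$ structure (with $l_0^2 = 1/\varepsilon$ and $l^2 = 1$ in units of $k_BT/\kappa$), and $\hat{M}$ and $\hat{C}$ share their eigenvectors. A small NumPy check (our own sketch):

```python
import numpy as np

N, eps = 10, 1e-3              # grafted chain of N+1 beads, weak spring eps
# Tridiagonal force matrix M-hat of Eq. (Canonical2)
Mhat = 2.0 * np.eye(N + 1) - np.eye(N + 1, k=1) - np.eye(N + 1, k=-1)
Mhat[0, 0] = 1.0 + eps
Mhat[N, N] = 1.0

C = np.linalg.inv(Mhat)        # correlation matrix, in units of k_B T / kappa

# C has the <q_i q_j> = l_0^2 + min(i,j) l^2 structure, with l_0^2 = 1/eps
i, j = np.indices(C.shape)
print(np.allclose(C, 1.0 / eps + np.minimum(i, j)))

# M-hat and its inverse C share eigenvectors: C v_k = v_k / lambda_k,
# so the normal modes of the harmonic chain are also its PCs
evals, evecs = np.linalg.eigh(Mhat)
print(np.allclose(C @ evecs, evecs / evals))
```

Both checks succeed for any chain length; the grafting spring `eps` only has to be small enough for the free-chain limit to be approached.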
Relaxation properties of the PCs
--------------------------------
From the normal mode analysis of a harmonic chain it is known that the autocorrelation time of the $k$-th normal mode scales as $\tau_k\sim N^2/k^2$ [@Doi96; @deGennes79] and does not depend on the noise strength $k_BT/\gamma$. Although the autocorrelation time becomes temperature dependent for the single-well nonlinear potentials, the scaling of $1-\phi_k(t)$ remains the same as in the harmonic situation. This is shown in Fig. \[fig:DWPCs\], panel (a). The ordering of the curves relative to the corresponding harmonic limiting case is the same as in Fig. \[fig:dwETE\]. Furthermore, we find that the scaling with $k$ and $N$ persists (not shown). Thus, as in the harmonic chain, higher modes have smaller correlation times.
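For reference, the harmonic scaling quoted above follows from the eigenvalues of the Rouse matrix of a free chain, $\lambda_k = 4\sin^2(k\pi/2N)$, which give $\tau_k = \gamma/(\kappa\lambda_k) \approx \gamma N^2/(\kappa\pi^2 k^2)$ for $k \ll N$ (standard Rouse-model results [@Doi96]; the short check below is our own):

```python
import numpy as np

N = 25                                        # chain length used in the text
k = np.arange(1, N)                           # mode index
lam = 4.0 * np.sin(k * np.pi / (2 * N)) ** 2  # Rouse-matrix eigenvalues
tau = 1.0 / lam                               # gamma = kappa = 1
tau_cont = N**2 / (np.pi**2 * k**2)           # continuum limit ~ N^2/k^2

for kk in (1, 2, 3):
    print(f"k = {kk}: tau = {tau[kk-1]:7.3f}  vs  N^2/(pi^2 k^2) = {tau_cont[kk-1]:7.3f}")
```

Already for $N=25$ the exact and continuum expressions agree to within about one percent for the lowest modes, and $\tau_1/\tau_2 \approx 4$ as the $1/k^2$ law demands.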
In contrast, for the DW-potential with an intermediate barrier height of $e=10$, Fig. \[fig:DWPCs\] panel (b), the relaxation times of the lowest three modes are almost equal: the corresponding curves merge for $t>10^2$. The overall shape of the curves is similar to the one for $R_{ete}$, albeit with a different scaling. For a harmonic chain the scaling is given by $t^{\alpha}$ with $\alpha=1$ (corresponding to the dotted line in the plot). For small times $1-\phi_k(t)$ still follows this scaling, but at longer times it crosses over to anomalous, subdiffusive behavior with $\alpha\approx 0.82$ over two orders of magnitude in time. The case of distributed barrier heights is shown in panel (c). Here we again observe a subdiffusive scaling of the ACF. Interestingly, the curves shown for different values of $k$ do not merge at longer times, and their slope is mode dependent, with smaller $\alpha$ for higher values of $k$ ($k=1$: $\alpha\approx 0.79$, $k=2$: $\alpha\approx 0.72$, $k=3$: $\alpha\approx 0.68$). For modes $k\geq 4$ the slope changes only slightly (not shown). The relaxation behavior of the realistic chain presented in Fig. \[fig:CMPCs\] is the same, albeit with somewhat different exponents. However, the exponents are expected to depend crucially on the choice of parameters and on the ratio of the different timescales in the system. The proposed minimal model is able to mimic qualitatively the relaxation behavior of a much more complex structure and thereby helps to understand the underlying mechanism of internal friction.
Summary {#s_summary}
=======
To summarize, it was found that under nonlinear interaction potentials the scaling of the end-to-end ACF is essentially the one of the harmonic chain, though with shifted correlation times. While these changes are not too large for the single-well potentials, for the double-well ones (describing internal friction) they can lead to an increase in the relaxation times by orders of magnitude. Turning to the principal components, we have shown that for homogeneous chains they follow the normal modes of the harmonic chain, but can exhibit vastly different kinetics, both with respect to the characteristic times and to the overall scaling. For the double-well potentials the latter can correspond to subdiffusion. It was shown that the same behavior can be found in realistic polymer chain models. Therefore a one-dimensional chain with double-well potentials might be a possible minimal model describing similar observations of anomalous kinetics in realistic molecular dynamics simulations. Moreover, the model is a close continuous analogue of the so-called “necklace” model for reptation, so that the results might have a broader applicability outside the scope of the present investigation.
The authors gratefully acknowledge financial support by the DFG within the SFB 555 research collaboration program.
Figure captions {#figure-captions .unnumbered}
===============
Figure \[fig:CMPCs\]. {#figurefigcmpcs. .unnumbered}
---------------------
$1-\phi_k(t)$ of some of the PCs. Panel (a): High temperature value. Panel (b): Intermediate temperature value for which strong internal friction is observed. The parameter values are given in [@Rigby87] and in the text.
Figure \[fig:dwETE\]. {#figurefigdwete. .unnumbered}
---------------------
ACF of the end-to-end distance $R_{ete}$. Shown is $1-\phi_{ete}(t)$. Panel (a): Single-well interaction potentials. The noise strength is $k_BT/\gamma=5$. Panel (b): DW-potential. The barrier heights are given in the legend, $b=2$.
Figure \[fig:DWPCs\]. {#figurefigdwpcs. .unnumbered}
---------------------
(Color online) $1-\phi_k(t)$ of the lowest PCs. Panel (a): Single-well potentials. The noise strength is the same as in Fig. \[fig:dwETE\]. Panel (b): DW-potential with the parameter values $a=1$, $b=2$, and $e=10$. Panel (c): DW-potential with barrier heights distributed as described in the text.
Figures {#figures .unnumbered}
=======
---
abstract: ' Data quality and data cleaning are context-dependent activities. Starting from this observation, in previous work a context model for the assessment of the quality of a database instance was proposed. In that framework, the context takes the form of a possibly virtual database or data integration system into which a database instance under quality assessment is mapped, for additional analysis and processing, enabling quality assessment. In this work we extend contexts with dimensions, making a multidimensional assessment of data quality possible. Multidimensional contexts are represented as ontologies written in Datalog$\pm$. We use this language for representing [*dimensional constraints*]{} and [*dimensional rules*]{}, and also for doing [*query answering*]{} based on dimensional navigation, which becomes an important auxiliary activity in the assessment of data. We show ideas and mechanisms by means of examples.'
title: Extending Contexts with Ontologies for Multidimensional Data Quality Assessment
---
Introduction
============
The quality of data cannot be assessed without contextual knowledge about the production or the use of the data. Actually, the notion of data quality is based on the degree to which data fit or fulfill a form of usage [@BT06; @JIA08]. As a consequence, the quality of data depends on their context of use. It becomes clear that context-based data quality assessment requires a formal model of context, at least for the use of data.
In this work we follow and extend the approach proposed in [@BR10]. According to it, the assessment of a database ${\mathcal{ D}}$ is performed by mapping it into a context ${\mathcal{ C}}$ that is represented as another database, or as a database schema with partial information, or, more generally, as a virtual data integration system with possibly some materialized data and access to external sources of data. The quality of data in ${\mathcal{ D}}$ is determined through additional processing of data within the context. This process leads to one (or possibly several) quality version(s) of ${\mathcal{ D}}$, and the quality of ${\mathcal{ D}}$ is measured in terms of how much it departs from those quality version(s).
In [@BR10], dimensions are not considered as contextual elements for data quality analysis. However, in practice dimensions are naturally associated to contexts. For example, in [@TC13] they become the basis for building contexts, and in [@TRL10] they are used for data access at query answering time.
In order to capture general dimensional aspects of data for inclusion in contexts, we take advantage of the Hurtado-Mendelzon (HM) multidimensional data model [@HG05], whose inception was mainly motivated by data warehouse and OLAP applications. We extend and formalize it in ontological terms. Actually, in [@MLK12] an extension of the HM model was proposed, with applications to data quality assessment in mind. That work was limited to a representation of this extension in description logic (actually, an extension of DL-Lite [@DCAL07]), but data quality assessment was not developed.
In this work we propose an ontological representation in Datalog$\pm$ [@ACL09-2] of the extended HM model, and also mechanisms for data quality assessment based on query answering from the ontology via dimensional navigation. Our extension of the HM model includes [*categorical relations*]{} associated to categories at different levels of the dimensional hierarchies, possibly in more than one dimension. The extension also considers [*dimensional constraints*]{} and [*dimensional rules*]{}, both of which can be treated as [*dimensional integrity constraints*]{} on categorical relations that involve values from dimension categories.
However, dimensional constraints are intended to be used as [*denial constraints*]{} that forbid certain combinations of values, whereas the dimensional rules are intended to be used for data completion, to generate data through their enforcement. Dimensional constraints can be [*intra-dimensional*]{}, i.e. putting restrictions on data values of categorical relations associated to categories in a single dimension, or [*inter-dimensional*]{}, i.e. putting restrictions on data values of categorical relations associated to categories in different dimensions.
The next example illustrates the intuition behind categorical relations, dimensional constraints and rules, and how the latter can be used for data quality assessment. In it we assume, according to the HM model, that a dimension consists of a number of categories related to each other by a partial order. Later on, we use the example to show how contextual data can be captured as a Datalog$\pm$ ontology.
\[exp:intr\] Consider a relational table [*Measurements*]{} with body temperatures of patients in an institution (Table \[tab:measurements\]). A doctor in this institution needs the answer to the query: [*“The body temperatures of [*Tom Waits*]{} for [*September 5*]{} taken around noon with a thermometer of brand [*B1*]{}"*]{} (the brand he expects). It is possible that a nurse, unaware of this requirement, used a thermometer of brand [*B2*]{}, storing the measurements in [*Measurements*]{}. In this case, not all the measurements in the table are up to the expected quality. However, table [*Measurements*]{} alone does not discriminate between expected or intended values (those taken with brand [*B1*]{}) and the others.
Now, for assessing the quality of the data in [*Measurements*]{} according to the doctor’s quality requirement, extra contextual information about the thermometers used may be useful. For instance, there is a table [*PatientWard*]{}, linked to the [*Ward*]{} category, that stores the patients of each ward of the institution (Fig. \[fig:dim\]). In addition, the institution has a [*guideline*]{} prescribing that: [*“Temperature measurements for patients in the standard care unit have to be taken with thermometers of brand B1"*]{}.
This guideline, which will become a dimensional rule in the ontology, can be used for data quality assessment when combined with an intermediate virtual relation, [*PatientUnit*]{}, linked to the [*Unit*]{} category, that is generated from [*PatientWard*]{} by upward navigation through dimension [[*Hospital*]{}]{} (on the left-hand side of Fig. \[fig:dim\]), from category [[*Ward*]{}]{} to category [[*Unit*]{}]{}.
Now it is possible to conclude that on certain days, Tom Waits was in the standard care unit, where his temperature was taken, and with the right thermometer according to the guideline (patients in wards ${\it W_1}$ or ${\it W_2}$ had their temperatures taken with a thermometer of brand [*B1*]{}). These clean data appear in relation ${{\it Measurements}}^q$ (Table \[tab:qualitymeasurements\]), which can be seen as a quality answer to the doctor’s request.
| **Time** | **Patient** | **Value** |
|---|---|---|
| Sep/5-12:10 | Tom Waits | 38.2 |
| Sep/6-11:50 | Tom Waits | 37.1 |
| Sep/7-12:15 | Tom Waits | 37.7 |
| Sep/9-12:00 | Tom Waits | 37.0 |
| Sep/6-11:05 | Lou Reed | 37.5 |
| Sep/5-12:05 | Lou Reed | 38.0 |
| **Time** | **Patient** | **Value** |
|---|---|---|
| Sep/5-12:10 | Tom Waits | 38.2 |
| Sep/6-11:50 | Tom Waits | 37.1 |
Elaborating on this example, it could be the case that there is a [*constraint*]{} imposed on dimensions and relations linked to their categories. For instance, one capturing that the intensive care unit has been closed since August 2005: [*“No patient was in the intensive care unit at any time after August/2005"*]{}. Again, through upward navigation to the next category, we can conclude that the third tuple in table [*PatientWard*]{} should be discarded. This [*inter-dimensional constraint*]{} involves dimensions [*Hospital*]{} and [*Time*]{} (right-hand side of Fig. \[fig:dim\]), to which the ward and day values in [*PatientWard*]{} are linked. [$\Box$]{}
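The upward navigation used in this example can be sketched as a join between a categorical relation and a parent-child relation. The following is only an illustrative sketch: the tuple values and the ward-to-unit memberships are assumptions consistent with the narrative, not the paper's actual instance.

```python
# Sketch of the upward navigation behind the virtual relation PatientUnit:
# join PatientWard(Ward, Day, Patient) with the parent-child relation
# UnitWard (here represented child -> parent). All tuple values and the
# ward-unit memberships are illustrative assumptions.

unit_ward = {"W1": "Standard", "W2": "Standard", "W4": "Terminal"}

# PatientWard(Ward, Day, Patient)
patient_ward = [
    ("W1", "Sep/5", "Tom Waits"),
    ("W2", "Sep/6", "Tom Waits"),
    ("W4", "Sep/5", "Lou Reed"),
]

# PatientUnit(Unit, Day, Patient) <- PatientWard(w, d; p), UnitWard(u, w)
patient_unit = {(unit_ward[w], d, p) for (w, d, p) in patient_ward}

assert ("Standard", "Sep/5", "Tom Waits") in patient_unit
```

The derived tuples place Tom Waits in the standard care unit on the relevant days, which is exactly the contextual information the guideline needs.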
The example shows a processing of data that involves changing the level of data linked to a dimension. This form of [*dimensional navigation*]{} may be required for query answering both in the [*downward*]{} and [*upward*]{} directions (Example \[exp:intr\] shows the latter). Our ontological multidimensional contexts support both.
\[exp:downward\] Two additional categorical relations, [*WorkingSchedules*]{} and [*Shifts*]{} (Table \[tab:ws\] and Table \[tab:shifts\]), store shifts of nurses in wards and schedules of nurses in units. A query to [*Shifts*]{} asks for dates when [*Mark*]{} was working in ward [*W2*]{}, which has no answer with the extensional data in Table \[tab:shifts\]. Now, an institutional guideline states that if a nurse works in a unit on a specific day, he/she has shifts in every ward of that unit on the same day. Consequently, the last tuple in Table \[tab:ws\] implies that [*Mark*]{} has shifts in both [*W1*]{} and [*W2*]{} on [*Sep/9*]{}. This date would be an answer obtained via downward navigation from the [*Standard*]{} unit to its wards (including [*W2*]{}). [$\Box$]{}
| **Unit** | **Day** | **Nurse** | **Type** |
|---|---|---|---|
| Intensive | Sep/5 | Cathy | cert. |
| Standard | Sep/5 | Helen | cert. |
| Standard | Sep/6 | Helen | cert. |
| Terminal | Sep/5 | Susan | non-c. |
| Standard | Sep/9 | Mark | non-c. |
| **Ward** | **Day** | **Nurse** | **Shift** |
|---|---|---|---|
| W4 | Sep/5 | Cathy | night |
| W1 | Sep/6 | Helen | morning |
| W4 | Sep/5 | Susan | evening |
Example \[exp:downward\] shows that downward navigation is necessary for query answering, in this case for propagating data in [*WorkingSchedules*]{} at the [*Unit*]{} level down to [*Shifts*]{} at the lower [*Ward*]{} level. In this process a unit may drill down to more than one ward (e.g. the [*Standard*]{} unit is connected to wards [*W1*]{} and [*W2*]{}), generating more than one tuple in [*Shifts*]{}.
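The drill-down just described can be sketched as follows, with fresh labelled nulls standing in for the existential variable of the dimensional rule. Relation contents and ward-unit memberships are illustrative assumptions.

```python
# Sketch of downward navigation: a WorkingSchedules tuple at the Unit
# level generates one Shifts tuple per ward of that unit, with a fresh
# labelled null for the missing Shift attribute (the existential
# variable z of the corresponding dimensional rule).

import itertools

unit_ward = {"W1": "Standard", "W2": "Standard", "W4": "Terminal"}

# WorkingSchedules(Unit, Day, Nurse, Type)
working_schedules = [("Standard", "Sep/9", "Mark", "non-c.")]

fresh = (f"z{i}" for i in itertools.count(1))  # supply of fresh nulls

# Shifts(Ward, Day, Nurse, Shift) <- WorkingSchedules(u, d; n, t), UnitWard(u, w)
shifts = [
    (w, d, n, next(fresh))
    for (u, d, n, _t) in working_schedules
    for (w, parent) in unit_ward.items()
    if parent == u
]

# Mark gets shifts in both wards of the Standard unit on Sep/9.
days_in_w2 = [d for (w, d, n, _s) in shifts if w == "W2" and n == "Mark"]
assert days_in_w2 == ["Sep/9"]
```

Each generated tuple carries its own null (`z1`, `z2`, ...), reflecting that the shift value remains unknown at the lower level.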
Contexts should be represented as formal theories into which other objects, such as database instances, are mapped for contextual analysis, assessment, interpretation, additional processing, etc. [@BR10]. Consequently, we show how to represent multidimensional contexts as logic-based ontologies (cf. Section \[sec:dlrep\]). These ontologies represent and extend the HM multidimensional model (cf. Section \[sec:preliminaries\]). Our ontological language of choice is Datalog$\pm$ [@ACL12]. It allows us to give a clear semantics to our ontologies, to support some forms of logical reasoning, and to apply some query answering algorithms. Furthermore, Datalog$\pm$ allows us to generate explicit data by completion where they are missing, which is particularly useful for data generation through dimensional navigation.
Our ultimate goal is to use multidimensional ontological contexts for data quality assessment [@BR10], which is achieved by introducing and defining in the context relational predicates standing for the [*quality versions of relations in the original instance*]{}. Their definitions use additional conditions on data, to make them contain quality data. In this work, going beyond [@BR10], the context also contains an ontology in Datalog$\pm$ that represents all the multidimensional elements shown in the examples above.
Our ontologies fall in the [*weakly-sticky*]{} (WS) class [@ACL12] of the Datalog$\pm$ family of languages [@ACL09-2] (cf. Section \[sec:dlrep\]), with [*separable*]{} equality generating dependencies (when used as dimensional constraints), which guarantees that conjunctive query answering can be done in polynomial time in data. We have developed and implemented a deterministic algorithm for boolean conjunctive query answering, which is based on a non-deterministic algorithm for WS Datalog$\pm$ [@ACL12]. The algorithm can be used with ontologies containing dimensional rules that support both upward and downward navigation (cf. Section \[sec:qa\]). Section \[sec:cdqa\] shows how to use the ontology to populate the quality versions of the original relations.
This paper is an extended abstract. We show concepts, ideas, ontologies, and mechanisms only by means of an extended example. The general approach and its analysis in detail will be presented in an extended version of this work.
Preliminaries {#sec:preliminaries}
=============
We start from the HM multidimensional (MD) data model [@HG05]. In it, dimensions represent hierarchical data, and facts describe data as points in an MD space. A [[*dimension*]{}]{} is composed of a schema and an instance. A [[*dimension schema*]{}]{} includes a directed acyclic graph (DAG) of [*categories*]{}, which defines the [*levels*]{} of the category hierarchy. A dimension hierarchy corresponds to a partial-order relation between the categories, a so-called [[*parent-child relation*]{}]{}. A [[*dimension instance*]{}]{} consists of a set of members for each category. The instance hierarchy corresponds to a partial-order relation between members of categories that parallels the [*parent-child*]{} relation between categories. [*Hospital*]{} and [*Time*]{}, at the right- and left-hand sides of Fig. \[fig:dim\], resp., are dimensions.
We extend the HM model with, among other elements, [*categorical relations*]{}, which can be seen as a generalization of fact tables, but at different dimension levels and not necessarily containing numerical data. Categorical relations represent the entities associated to the factual data. A [[*categorical relation*]{}]{} has a schema and an instance. A [[*categorical relation schema*]{}]{} is composed of a relation name and a list of attributes. Each attribute is either [*categorical*]{} or [*non-categorical*]{}. A categorical attribute takes as values the members of a category in a dimension. A non-categorical attribute takes values from an arbitrary domain.
\[exp:crelation\] In Fig. \[fig:dim\], the categorical relation ${\it PatientWard(Ward, Day, Patient)}$ has its categorical attributes, [*Ward*]{} and [*Day*]{}, connected to the [*Hospital*]{} and [*Time*]{} dimensions. [*Patient*]{} is a non-categorical attribute with patient names as values (there could be a foreign key to another categorical relation that stores data of patients).[$\Box$]{}
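The schema/instance distinction above can be sketched with plain data structures. The `Ward`-to-`Unit` edges follow the examples in this paper; the unit-to-institution member assignments and the helper name `roll_up` are illustrative assumptions.

```python
# Minimal sketch of a dimension in the (extended) HM model: a schema
# (DAG of categories) plus an instance (partial order between members).
# The Unit -> Institution member assignments below are assumptions made
# up for illustration; Ward -> Unit follows the examples in the text.

# Dimension schema: child category -> parent category.
category_hierarchy = {"Ward": "Unit", "Unit": "Institution"}

# Dimension instance: child member -> parent member, parallel to the schema.
unit_ward = {"W1": "Standard", "W2": "Standard", "W4": "Terminal"}
institution_unit = {"Standard": "H1", "Intensive": "H1", "Terminal": "H2"}  # assumed

def roll_up(member, parent_child):
    """Parent member of `member` under a parent-child relation (or None)."""
    return parent_child.get(member)

assert roll_up("W2", unit_ward) == "Standard"
```

Dimensional navigation, in either direction, then amounts to following (or inverting) these parent-child maps.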
Datalog$\pm$ [@ACL09-2] is a family of languages that extends plain Datalog with additional elements: (a) existential quantifiers in heads of [*tuple-generating dependencies*]{} (TGDs); (b) [*equality-generating dependencies*]{} (EGDs), that use equality in heads; and (c) [*negative constraints*]{}, that use $\bot$ in heads. With these extensions, Datalog$\pm$ captures ontological knowledge that cannot be expressed in classical Datalog.
Although the [*chase*]{} with these rules does not necessarily terminate, syntactic restrictions imposed on the set of rules aim to ensure decidability of conjunctive query answering, and in some cases, also tractability in data complexity. Datalog$\pm$ has sub-languages, such as [*linear*]{}, [*guarded*]{}, [*weakly-guarded*]{}, [*sticky*]{}, and [*weakly-sticky*]{}, that depend on the kind of predicates and the syntactic interaction of the TGD rules that appear in the Datalog$\pm$ program.
In this paper, our MD ontologies turn out to be written in [*weakly-sticky*]{} (WS) Datalog$\pm$. This sublanguage extends [*sticky*]{} Datalog$\pm$ [@ACL10-2]. WS Datalog$\pm$ allows joins in the body of TGDs, but with a milder restriction on the repeated variables. Boolean conjunctive query answering is tractable for WS Datalog$\pm$ [@ACL10-2].
The Extended MD Model in Datalog$\pm$ {#sec:dlrep}
=====================================
We will represent our extended MD model as a Datalog$\pm$ ontology ${\mathcal{ M}}$ that contains a schema ${\mathcal{ S}}_{\mathcal{ M}}$, an instance ${\mathcal{ D}}_{\mathcal{ M}}$, and a set of dimensional rules and constraints $\Sigma_{\mathcal{ M}}$. ${\mathcal{ S}}_{\mathcal{ M}} = {\mathcal{ K}} \cup {\mathcal{ O}} \cup {\mathcal{ R}}$ is a finite set of predicates (relation names), where ${\mathcal{ K}}$ is a set of [*category predicates*]{} (unary predicates), ${\mathcal{ O}}$ is a set of [*parent-child predicates*]{}, i.e. partial-order relations between elements of adjacent categories, and ${\mathcal{ R}}$ is a set of [*categorical predicates*]{}. In Example \[exp:intr\], ${{\it K}}$ contains, e.g. ${{\it Ward}}(\cdot), {{\it Unit}}(\cdot)$; ${\mathcal{ O}}$ contains, e.g. a predicate for connections from [[*Ward*]{}]{} to [[*Unit*]{}]{}; and ${\mathcal{ R}}$ contains, e.g. [[*PatientWard*]{}]{}. An [*instance*]{}, ${\mathcal{ D}}_{\mathcal{ M}}$, is a relational instance that gives (possibly infinite) extensions to the predicates in ${\mathcal{ S}}_{\mathcal{ M}}$, and satisfies a given set of TGDs, EGDs, and negative constraints $\Sigma_{\mathcal{ M}}$ (cf. below). The constants for ${\mathcal{ D}}_{\mathcal{ M}}$ come from an infinite underlying domain.
The dimensional rules and constraints in $\Sigma_{\mathcal{ M}}$ constitute the intentional part of ${\mathcal{ M}}$. Rules (\[frm:gf1\])-(\[frm:gf4\]) below show the general form of elements of $\Sigma_{\mathcal{ M}}$. In what follows, each $R_i(\bar{e}_i;\bar{a}_i)$ is a categorical atom, with $\bar{e}_i$ a sequence of categorical attributes (values) and $\bar{a}_i$ a sequence of non-categorical attributes; $D_i(e_i,e'_i)$ is a parent-child atom with $e_i,e'_i$ parent/child elements, resp.; and $K_i(e_i)$ is a category atom, with $e_i$ a category element. That is, $K_i \in {\mathcal{ K}}, D_i \in {\mathcal{ O}}$, $R_i \in {\mathcal{ R}}$. As an instance in (\[frm:exref\]) and (\[frm:exegd\]), ${\it Unit}(u)$ is a category atom and ${\it UnitWard}(u,w)$ is a parent-child atom.
- To capture the [*referential constraint*]{} between a categorical attribute of a categorical relation and a category, we use a negative constraint, with $e \in \bar{e}_i$:[^1] $$\begin{aligned}
\bot \ &\leftarrow~ R_{i}(\boldsymbol{\bar{e}_i};\bar{a}_i),\lnot K(e).\label{frm:gf1}\end{aligned}$$
- A [*dimensional constraint*]{} is either an EGD of the form (\[frm:gf2\]) (where $x,x'$ also appear in the body) or a negative constraint of the form (\[frm:gf3\]): $$\begin{aligned}
x=x' ~\leftarrow~& R_i(\boldsymbol{\bar{e}_i};\bar{a}_i),...,R_j(\boldsymbol{\bar{e}_j};\bar{a}_j), \label{frm:gf2}\\
&D_n(e_n,e'_n),...,D_m(e_m,e'_m).\nonumber \vspace{-3mm}\\
\bot ~\leftarrow~& R_i(\boldsymbol{\bar{e}_i};\bar{a}_i),...,R_j(\boldsymbol{\bar{e}_j};\bar{a}_j), \label{frm:gf3}\\
&D_n(e_n,e'_n),...,D_m(e_m,e'_m). \nonumber \vspace{-4mm}\end{aligned}$$
- A [*dimensional rule*]{} is a Datalog$\pm$ TGD of the form: $$\begin{aligned}
\label{frm:gf4}
\exists \bar{a}_z\;R_k(\bar{e}_k;\bar{a}_k)\ ~\leftarrow& \hspace{-4mm}R_i(\boldsymbol{\bar{e}_i};\bar{a}_i),...,R_j(\boldsymbol{\bar{e}_j};\bar{a}_j),\\
& D_n(e_n,e'_n),...,D_m(e_m,e'_m).\nonumber\end{aligned}$$\
Here, $\bar{a}_z \subseteq \bar{a}_k$, $\bar{e}_k \subseteq \bar{e}_i \cup ... \cup \bar{e}_j \cup \{e_n,...,e_m\} \cup \{e'_n,...,e'_m\}$ and $\bar{a}_k \! \smallsetminus \! \bar{a}_z \subseteq \bar{a}_i \cup ... \cup \bar{a}_j$. Furthermore, shared variables in bodies of TGDs correspond only to categorical attributes of categorical relations.
With rule (\[frm:gf4\]) (an example is (\[frm:upward\]) below), the possibility of doing dimensional navigation is captured by joins between categorical predicates, e.g. $R_i(\boldsymbol{\bar{e}}_i;\bar{a}_i),...,R_j(\boldsymbol{\bar{e}}_j;\bar{a}_j)$ in the body, and parent-child predicates, e.g. $D_n(e_n,e'_n),...,D_m(e_m,e'_m)$. Rule (\[frm:gf4\]) allows navigation in both upward and downward directions. The [*direction of navigation*]{} is determined by the level of categorical attributes that participate in the join in the body. Assuming the join is between $R_i(\bar{e}_i;\bar{a}_i)$ and $D_n(e_n,e'_n)$, upward navigation is enabled when $e'_n \in \bar{e}_i$ (i.e. $e'_n$ appears in $R_i(\bar{e}_i;\bar{a}_i)$) and $e_n \in \bar{e}_k$ (i.e $e_n$ appears in the head). On the other hand, if $e_n$ occurs in $R_i$ and $e'_n$ occurs in $R_k$, then downward navigation is enabled, from $e_n$ to $e'_n$.
The existential variables in (\[frm:gf4\]) make up for missing non-categorical attributes due to different schemas (i.e. the existential variables may appear in positions of non-categorical attributes but not in categorical attributes). As a result, when drilling down, for each tuple of a categorical relation linked to a parent member, the rule generates tuples for all the child members of the parent member (or children specifically indicated in the body).
\[exp:ont\] The categorical attribute [[*Unit*]{}]{} in the categorical relation [*PatientUnit*]{} takes values from the [*Unit*]{} category. We use a constraint of the form (\[frm:gf1\]); the ontology contains similar constraints capturing the connections between the other categorical relations and their corresponding categories. $$\begin{aligned}
\label{frm:exref}
\bot &~\leftarrow~ {\it PatientUnit(\boldsymbol{u},\boldsymbol{d};p)},\lnot {\it Unit}(u).\end{aligned}$$\
For the constraint in Example \[exp:intr\] requiring [*“No patient was in intensive care unit during the time after August 2005"*]{}, we use a dimensional constraint of the form (\[frm:gf3\]): $$\begin{aligned}
\bot ~\leftarrow~& {\it PatientWard(\boldsymbol{w},\boldsymbol{d};p)},{\it UnitWard}({\tt Intensive}, w),\nonumber\\
&{\it MonthDay}({\tt August/2005},d).\nonumber\end{aligned}$$\
Similarly, the following rule, of form (\[frm:gf2\]), states that [*“All the thermometers used in a unit are of the same type"*]{}: $$\begin{aligned}
t=t' ~\leftarrow~& {\it Thermometer(\boldsymbol{w},\boldsymbol{t};n)},{\it Thermometer(\boldsymbol{w'},\boldsymbol{t'};n')},\nonumber\\
&{\it UnitWard(u,w)},{\it UnitWard(u,w')},\label{frm:exegd}\end{aligned}$$\
with ${\it Thermometer(Ward, Thermometertype; Nurse)}$ a categorical relation with thermometers used by nurses in wards.
Finally, the following dimensional rules of the form (\[frm:gf4\]) capture how data in [[*PatientWard*]{}]{} and [[*WorkingSchedules*]{}]{} generate data for [[*PatientUnit*]{}]{} and [[*Shifts*]{}]{}, resp.:[^2] $$\begin{aligned}
{\it PatientUnit(\boldsymbol{u},\boldsymbol{d};p)} ~\leftarrow~& {\it PatientWard(\boldsymbol{w},\boldsymbol{d};p)},\label{frm:upward}\\
&{\it UnitWard(u,w)}. \nonumber\end{aligned}$$ $$\begin{aligned}
\exists z\;{\it Shifts(\boldsymbol{w},\boldsymbol{d};n,z)} ~\leftarrow~& {\it WorkingSchedules(\boldsymbol{u},\boldsymbol{d};n,t)},\nonumber\\
&{\it UnitWard(u,w)}. \label{frm:downward1}\end{aligned}$$
In (\[frm:upward\]), dimensional navigation is enabled by the join between [*PatientWard*]{} and [*UnitWard*]{}. The rule generates data for [*PatientUnit*]{} (at the higher level of [*Unit*]{}) from [*PatientWard*]{} (at the lower level of [*Ward*]{}) via upward navigation. Notice that (\[frm:upward\]) is in the general form (\[frm:gf4\]), but since in this case the schemas of the two involved categorical relations match, no existential quantifiers are necessary.
Rule (\[frm:downward1\]) captures downward navigation while it generates data for [*Shifts*]{} (at the level of [*Ward*]{}) from [*WorkingSchedules*]{} (at the level of [*Unit*]{}). In this case, the schemas of the two categorical relations do not match. So, the existential variable $z$ represents missing data for the [*shift*]{} attribute.[$\Box$]{}
It is possible to verify that [*the Datalog$\pm$ MD ontologies with rules of the forms (\[frm:gf1\])-(\[frm:gf4\]) are weakly-sticky*]{}. This follows from the fact that shared variables in the body of dimensional rules, as defined in (\[frm:gf4\]), may occur only in positions of categorical attributes, where only limited values may appear, which depends on the assumption that the MD ontology has a fixed dimensional structure, in particular, with a fixed number of category members. No new category member is generated when applying the dimensional rules of the form (\[frm:gf4\]).
The [*separability property*]{} [@ACL10-1; @ACL12] in relation to the interaction of dimensional EGDs of the form (\[frm:gf2\]) and TGDs of the form (\[frm:gf4\]) must be checked independently. However, [*when the EGDs have only categorical variables in the heads, the separability condition holds*]{}, which is the case with rule (\[frm:exegd\]).
To illustrate query answering via downward navigation, reconsider the query about the dates on which [*Mark*]{} works in [*W2*]{}: $\mathcal{Q}'(d)\leftarrow {\it Shifts}({\tt W2}, d, {\tt Mark}, s)$. Considering (\[frm:downward1\]) and the last tuple in [*WorkingSchedules*]{}, the chase will generate a new tuple in [*Shifts*]{} for [*Mark*]{} on [*Sep/9*]{} in [*W2*]{}, with a fresh null value for his shift, reflecting incomplete knowledge about this attribute at the lower level. So, the answer to the query via (\[frm:downward1\]) is [*Sep/9*]{}. [$\Box$]{}
The general TGD (\[frm:gf4\]) only captures downward navigation when there is incomplete data about the values of non-categorical attributes, because existential variables are only non-categorical. However, in some cases we may have incomplete data about the categorical attributes, i.e. about parents and children involved in downward navigation.
\[tab:discharge\]
| **Inst.** | **Day** | **Patient** |
|---|---|---|
| H1 | Sep/9 | Tom Waits |
| H1 | Sep/6 | Lou Reed |
| H2 | Oct/5 | Elvis Costello |
There is an additional categorical relation, [*DischargePatients*]{} (Table \[tab:discharge\]), with data about patients leaving an institution. Since each of them was in exactly one of the units, [*DischargePatients*]{} should generate data for [*PatientUnit*]{} through downward navigation from the [*Institution*]{} level to the [*Unit*]{} level. Since it is not known which unit at the lower level should be specified, the following rule could be used: $$\begin{aligned}
\exists u\;{\it InstitutionUnit(i,u)},&{\it PatientUnit(\boldsymbol{u},\boldsymbol{d};p)} ~\leftarrow~\label{frm:downward2}\\
&{\it DischargePatients(\boldsymbol{i},\boldsymbol{d};p)}, \nonumber\end{aligned}$$\
which is not of the form (\[frm:gf4\]), because it has an existentially quantified categorical variable, $u$, for units. It allows downward navigation while capturing incomplete data about units, and represents disjunctive knowledge at the level of units. [$\Box$]{}
The general form of (\[frm:downward2\]), for this type of downward navigation is as follows: $$\begin{aligned}
\label{frm:gf5}
\exists \bar{z}\;R_k(\bar{e}_k;\bar{a}_k),D_n(e_n,e'_n)&,...,D_m(e_m,e'_m) \ \ \leftarrow\\
& R_i(\boldsymbol{\bar{e}_i};\bar{a}_i),...,R_j(\boldsymbol{\bar{e}_j};\bar{a}_j),\nonumber\end{aligned}$$\
where $\bar{z} \subseteq \bar{e}_k \cup \bar{a}_k \cup \{e_n,...,e_m\} \cup \{e'_n,...,e'_m\}$, $\bar{e}_k \cup \{e_n,...,e_m\} \cup \{e'_n,...,e'_m\} \! \smallsetminus \! \bar{z} \subseteq \bar{e}_i \cup ... \cup \bar{e}_j$, and $\bar{a}_k \! \smallsetminus \! \bar{z} \subseteq \bar{a}_i \cup ... \cup \bar{a}_j$, and the categorical attributes of $R_i,\ldots,R_j$ refer to categories that are at a higher or the same level as the categorical attributes of $R_k$. (In (\[frm:downward2\]), categories ${{\it Institution}}$ and ${{\it Day}}$ for ${{\it DischargePatients}}$ are at a higher and at the same level, resp., as ${{\it Unit}}$ and ${{\it Day}}$ for ${{\it PatientUnit}}$.)
[*If the MD ontology also includes rules of the form (\[frm:gf5\]), it is still weakly-sticky*]{}. This is because, despite the fact that these rules may generate new members (nulls), they can only generate a limited number of such members (since the rules only navigate in the downward direction), i.e. there is no cyclic behavior. With these new rules, EGDs with only categorical attributes in their heads do not guarantee separability anymore, so checking this condition becomes application dependent.
Query Answering on MD Ontologies {#sec:qa}
================================
Weak stickiness guarantees that [*boolean conjunctive query answering from our MD contextual ontologies is tractable*]{} in data complexity [@ACL12]. Then, answering open conjunctive queries from the MD ontology is also tractable [@FG03].
We have developed and implemented a deterministic algorithm, [DeterministicWSQAns]{}, for answering boolean conjunctive queries from Datalog$\pm$ MD contextual ontologies. The algorithm is based on a non-deterministic algorithm, [WeaklyStickyQAns]{}, for WS Datalog$\pm$ that runs in polynomial time in the size of the extensional database [@ACL12].
Given a set of WS TGDs, a boolean conjunctive query, and an extensional database, [WeaklyStickyQAns]{} builds an “accepting resolution proof schema”, a tree-like structure that shows how the query atoms can be entailed from the extensional instance. The algorithm rejects if there is no such resolution proof schema; otherwise it builds one and accepts.
Our deterministic algorithm, [DeterministicWSQAns]{}, applies a top-down backtracking search for accepting resolution proof schemas. Starting from the query, the algorithm resolves the atoms of the query, from left to right. In each step, an atom is resolved either by finding a substitution that maps the atom to a ground atom in the extensional database (which makes a leaf node) or by applying a TGD rule that entails the atom (building a subtree). The decision at each step is stored on a stack to be restored later if the algorithm fails to entail the atoms of the query in the next steps. The algorithm accepts if it resolves all the atoms in the query (the content of the stack specifies the decisions that lead to the accepting resolution proof schema), and rejects if it cannot resolve an atom, no matter what decisions have been made before.
In this deterministic approach, possible substitutions of constants for query variables are derived by the ground atoms in the extensional database (as opposed to the non-deterministic version of the algorithm that guesses applicable substitutions). This enables us to extend [DeterministicWSQAns]{} for finding answers to open conjunctive queries, by building resolution proof schemas for all possible substitutions.
[WeaklyStickyQAns]{} runs in polynomial time in the size of the extensional database [@ACL10-2]. It can be proved that [DeterministicWSQAns]{} also runs in polynomial time. Neither of these algorithms is a first-order (FO) query rewriting algorithm; such algorithms do exist for the more restrictive syntactic classes of Datalog$\pm$, e.g. [*linear*]{} and [*sticky*]{} [@ACL09-2; @ACL10-2].
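The top-down, backtracking idea behind [DeterministicWSQAns]{} can be sketched as follows, under heavy simplifications: this toy resolver omits the termination machinery of the actual WS Datalog$\pm$ algorithm (so it may loop on recursive rule sets) and treats renamed head variables as stand-ins for fresh nulls. The variable convention (`?`-prefix) and the data, which follow Example \[exp:downward\], are assumptions made for illustration.

```python
# Toy top-down resolution: resolve query atoms left to right, either
# against an extensional fact (a leaf of the proof schema) or against
# the head of a TGD, whose body atoms are then resolved in turn.
# Atoms are (predicate, args); terms starting with "?" are variables.

def is_var(t):
    return t.startswith("?")

def walk(term, subst):
    """Follow variable bindings to their current value."""
    while is_var(term) and term in subst:
        term = subst[term]
    return term

def unify(atom1, atom2, subst):
    """Extend subst so that atom1 and atom2 match, or return None."""
    (p1, args1), (p2, args2) = atom1, atom2
    if p1 != p2 or len(args1) != len(args2):
        return None
    s = dict(subst)
    for t1, t2 in zip(args1, args2):
        t1, t2 = walk(t1, s), walk(t2, s)
        if t1 == t2:
            continue
        if is_var(t1):
            s[t1] = t2
        elif is_var(t2):
            s[t2] = t1
        else:
            return None
    return s

def rename(atom, tag):
    """Standardize rule variables apart for each rule application."""
    pred, args = atom
    return (pred, tuple(a + tag if is_var(a) else a for a in args))

def resolve(goals, facts, rules, subst, depth=0):
    """Backtracking search for a substitution proving all goals."""
    if not goals:
        return subst
    atom, rest = goals[0], goals[1:]
    for fact in facts:                      # leaf: extensional fact
        s = unify(atom, fact, subst)
        if s is not None:
            r = resolve(rest, facts, rules, s, depth)
            if r is not None:
                return r
    for head, body in rules:                # subtree: apply a TGD
        tag = "#" + str(depth)
        s = unify(atom, rename(head, tag), subst)
        if s is not None:
            r = resolve([rename(b, tag) for b in body] + rest,
                        facts, rules, s, depth + 1)
            if r is not None:
                return r
    return None

# Data and the dimensional rule generating Shifts from WorkingSchedules;
# the unbound renamed head variable ?Z#0 plays the role of a fresh null.
facts = [("WorkingSchedules", ("Standard", "Sep/9", "Mark", "non-c.")),
         ("UnitWard", ("Standard", "W1")),
         ("UnitWard", ("Standard", "W2"))]
rules = [(("Shifts", ("?W", "?D", "?N", "?Z")),
          [("WorkingSchedules", ("?U", "?D", "?N", "?T")),
           ("UnitWard", ("?U", "?W"))])]

# BCQ: is there a day on which Mark has a shift in ward W2?
ans = resolve([("Shifts", ("W2", "?D", "Mark", "?S"))], facts, rules, {})
assert ans is not None and walk("?D", ans) == "Sep/9"
```

The substitution returned plays the role of the stack contents in the description above: it records the decisions that led to the accepting proof schema.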
The MD ontologies to which the complexity results and algorithms above apply support both upward and downward navigation. However, for simpler MD ontologies that support only upward navigation (which can be syntactically detected from the form of the dimensional rules), we developed a methodology for conjunctive query answering based on FO query rewriting. The rewritten query can be posed directly to the extensional database. Ontologies of this kind are common and natural in real world applications (Example \[exp:intr\] shows such a case). Interestingly, these “upward-navigating" MD ontologies [*do not*]{} necessarily fall into any of the “good" cases of Datalog$\pm$ mentioned above.
The algorithms mentioned in this section are proofs of concept rather than algorithms meant to be used with massive data. The development and implementation of scalable polynomial-time algorithms for answering open conjunctive queries is ongoing work.
MD Ontologies and Data Quality {#sec:cdqa}
==============================
In this section, we show how a Datalog$\pm$ MD ontology can be a part of, and used in, a context for data quality assessment or cleaning. Fig. \[fig:frmw\] shows such a context and the way it is used. The central idea in [@BR10] is that the original instance ${\mathcal{ D}}$ (on the left-hand side) is to be assessed or cleaned through the context in the middle. This is done by mapping ${\mathcal{ D}}$ into the contextual schema/instance ${\mathcal{ C}}$. The context may have additional data, predicates ($C_i$), data quality predicates ($P_i$) specifying single quality requirements, and access to external data sources ($E_i$) for data assessment or cleaning. The clean version of ${\mathcal{ D}}$ is on the right-hand side, with schema ${\mathcal{ S}}^q$, which is a copy of ${\mathcal{ D}}$’s schema [@BR10].
The new element in the context is the MD ontology ${\mathcal{ M}}$, which interacts with ${\mathcal{ C}}$ and represents the dimensional elements of the context. The categorical relations in ${\mathcal{ M}}$ provide dimensional data for the relations in ${\mathcal{ C}}$ and for the quality predicates in ${\mathcal{ P}}$. ${\mathcal{ C}}$ also gets extensional data from the initial database, ${\mathcal{ D}}$, and from external sources. Here we concentrate on data cleaning, which in this setting amounts to obtaining clean answers to queries, in particular about the clean extensions ($S^q_i$) of the original database relations ($S_i$) (a particular case of [*clean query answering*]{} [@BR10]).
The quality versions $S^q_i$ are specified in terms of the relations in ${\mathcal{ C}}$ and quality predicates, $P_i$. The data for the latter may be already in the context or come from ${\mathcal{ D}}$, the ontology ${\mathcal{ M}}$, or external sources. The problems become: (a) computing quality versions $S_i^q$ of the original predicates, and (b) computing quality answers to queries ${\mathcal{ Q}}$ expressed in terms of those original predicates. The second problem is solved by rewriting the query as ${\mathcal{ Q}}^q$, which is expressed (and answered) in terms of predicates $S^q_i$. Answering it is the part of the query answering process that may invoke dimensional navigation and data generation as illustrated in previous sections. Problem (a) is a particular case of (b).
![An MD context for data quality assessment[]{data-label="fig:frmw"}](framework-new.eps){width="6cm"}
(ex. \[exp:ont\] cont.) A query ${\mathcal{ Q}}$ about Tom Waits’ temperatures is initially expressed in terms of the original predicate [[*Measurements*]{}]{}, but is rewritten into a query expressed and answered via its quality extension ${{\it Measurements}}^q$ (see [@BR10] for more details).[^3] More specifically, the query is about [*“The body temperatures of Tom Waits on September 5 taken around noon by a certified nurse with a thermometer of brand B1"*]{}: $$\begin{aligned}
{\mathcal{ Q}}(t,p,v) \ \leftarrow& \ {\it Measurements(t,p,v)},p={\tt Tom\;\;Waits},\\
& ~~{\tt Sep/5\mbox{-}11\mbox{:}45} \le t \le {\tt Sep/5\mbox{-}12\mbox{:}15}.\end{aligned}$$
[*Measurements*]{}, as initially given, does not contain information about nurses or thermometers. Hence the [*expected conditions*]{} are not expressed in the query. According to the general contextual approach in [@BR10], predicate [*Measurement*]{} has to be logically connected to the context, conceiving it as a footprint of a “broader" contextual relation that is given or built in the context, in this case one with information about thermometer brands ($b$) and nurses’ certification status ($y$): $$\begin{aligned}
{\it Measurement}^{\prime}(t,p,v,y,b)&\leftarrow&{\it Measurement}^c(t,p,v),\\
&&\hspace*{-3.8cm}{\it TakenByNurse}(t,p,n,y), {\it TakenWithTherm}(t,p,b),\end{aligned}$$
where ${\it Measurement}^c$ is a contextual copy of [*Measurement*]{}, i.e. the latter is mapped into the context.[^4] If we want quality measurements data, we impose the quality conditions: $$\begin{aligned}
{\it Measurement}^q(t,p,v)&\leftarrow&{\it Measurement}^{\prime}(t,p,v,y,b), \\
&&y={\tt Certified}, \ b={\tt B1},\end{aligned}$$
with the auxiliary predicates defined by: $$\begin{aligned}
\hspace*{-1mm}{\it TakenByNurse}(t,p,n,y)\hspace{-2mm}&\leftarrow&\hspace{-3mm}{\it WorkingSchedules}(u,d;n,y), \\
&&\hspace{-1.3cm}{\it DayTime}(d,t), {\it PatientUnit}(u,d;p).\\
{\it TakenWithTherm}(t,p,b)&\leftarrow&{\it PatientUnit}(u,d;p),\\
&&\hspace*{-1.8cm}{\it DayTime}(d,t),b={\tt B1},u={\tt Standard}.\end{aligned}$$
Here, [*DayTime*]{} is the parent/child relation in the [*Time*]{} dimension, and the last definition right above captures as a rule the guideline from Example \[exp:intr\], at the level of relation [*PatientUnit*]{}.
Summarizing, [*TakenByNurse*]{} and [*TakenWithTherm*]{} are contextual predicates (shown in Fig. \[fig:frmw\] as $P_i$). [*PatientWard*]{} and [*WorkingSchedules*]{} are categorical relations.
To obtain quality answers to the original query, we pose to the ontology the new query: $$\begin{aligned}
{\mathcal{ Q}}^q(t,p,v) \ \leftarrow& \ {\it Measurements(t,p,v)^q}, \ p={\tt Tom\;\;Waits},\\
& {\tt Sep/5\mbox{-}11\mbox{:}45} \le t \le {\tt Sep/5\mbox{-}12\mbox{:}15}.\end{aligned}$$
Answering it, which requires evaluating [*TakenWithTherm*]{}, triggers upward dimensional navigation from [*Ward*]{} to [*Unit*]{} when requesting data for the categorical relation [*PatientUnit*]{}. More specifically, dimensional rule (\[frm:upward\]) is used for data generation, and each tuple in [*PatientWard*]{} generates one tuple in [*PatientUnit*]{}, with its unit obtained by rolling up. [$\Box$]{}
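To make the rollup concrete, here is a minimal executable sketch of the upward navigation just described. The sample tuples and the encoding of the relations are hypothetical, invented only for illustration; the point is that each [*PatientWard*]{} tuple generates exactly one [*PatientUnit*]{} tuple via the Ward-to-Unit rollup, mirroring the effect of dimensional rule (\[frm:upward\]).

```python
# Hypothetical sketch: upward navigation from Ward to Unit.
# Relation encodings and sample data are invented for illustration only.

ward_unit = {"W1": "Standard", "W2": "Intensive"}   # rollup: child ward -> parent unit

# Categorical relation PatientWard(ward, day; patient)
patient_ward = [
    ("W1", "Sep/5", "Tom Waits"),
    ("W2", "Sep/6", "Tom Waits"),
]

def roll_up(patient_ward, ward_unit):
    """Each PatientWard tuple generates one PatientUnit tuple,
    with the unit obtained by rolling up the ward."""
    return [(ward_unit[w], d, p) for (w, d, p) in patient_ward]

# Categorical relation PatientUnit(unit, day; patient), generated on demand
patient_unit = roll_up(patient_ward, ward_unit)
assert ("Standard", "Sep/5", "Tom Waits") in patient_unit
```

In an actual Datalog$\pm$ evaluation this generation happens inside the chase, not as a materialized precomputation, but the data flow is the same.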
Conclusions {#sec:conc}
===========
We have described in general terms how to specify in Datalog$\pm$ a multidimensional ontology that extends a multidimensional data model. We have identified some properties of these ontologies in terms of membership to known classes of Datalog$\pm$, the complexity of conjunctive query answering, and the existence of algorithms for the latter task. Finally, we showed how to apply the ontologies to multidimensional and contextual data quality, in particular, for obtaining quality answers to queries through dimensional navigation. MD contexts are also of interest outside applications to data quality. They can be seen as logical extensions of the MD data model.
[ Research funded by NSERC Discovery, and the NSERC Strategic Network on Business Intelligence (BIN). L. Bertossi is a Faculty Fellow of IBM CAS. We thank Andrea Cali and Andreas Pieris for useful information and conversations on Datalog$\pm$.]{}
[10]{}
C. Batini, and M. Scannapieco. . , 2006.
L. Bertossi, F. Rizzolo, and J. Lei. . , 2010, pp. 52-67.
L. Bertossi. . Morgan & Claypool, 2011.
C. Bolchini, E. Quintarelli, and L. Tanca. . , 2013, 38:45-67.
A. Cali, G. Gottlob, and T. Lukasiewicz. . , 2009, pp. 14-30.
A. Cali, G. Gottlob, and A. Pieris. . , 2010, pp. 1-17.
A. Cali, G. Gottlob, and A. Pieris. . , 2011, pp. 161-174.
A. Cali, G. Gottlob, and A. Pieris. . , 2012, 193:87-128.
D. Calvanese, G. Giacomo, D. Lembo, M. Lenzerini, and R. Rosati. . , 2007, 39:385-429.
R. Fagin, P. G. Kolaitis, R. J. Miller, and L. Popa. . , 2005, 336:89-124.
G. Gottlob, G. Orsi, and A. Pieris. . , 2011, pp. 2-13.
C. Hurtado, C. Gutierrez, and A. Mendelzon. . , 2005, 30:854-886.
L. Jiang, A. Borgida, and J. Mylopoulos. . , 2008, pp. 55-68.
A. Maleki, L. Bertossi, and F. Rizzolo. . , 2012, pp. 196-209.
D. Martinenghi, and R. Torlone. . , 2010, pp. 377-390.
[^1]: Alternatively, we could have referential constraints between categorical relations and categories that are captured by Datalog$\pm$ TGDs, making it possible to generate elements in categories or categorical relations.
[^2]: A rule with a conjunction in the head can be transformed into a set of rules with single atoms in heads.
[^3]: This idea of cleaning data on-the-fly is reminiscent of [*consistent query answering*]{} [@bertossi11].
[^4]: It does not have to be a replica; it could also be mapped into a contextual relation having additional attributes and data [@BR10].
---
abstract: 'We show that there exist non-compact composition operators in the connected component of the compact ones on the classical Hardy space $\mathcal{H}^2$. This answers a question posed by Shapiro and Sundberg in 1990. We also establish an improved version of a theorem of MacCluer, giving a lower bound for the essential norm of a difference of composition operators in terms of the angular derivatives of their symbols. As a main tool we use Aleksandrov–Clark measures.'
address:
- 'Departamento de Matemáticas, Universidad de Zaragoza, Plaza San Francisco s/n, 50009 Zaragoza, Spain.'
- 'Departamento de Matemáticas, Universidad de Cádiz, Apartado 40, 11510 Puerto Real (Cádiz), Spain.'
- 'Department of Mathematics and Statistics, University of Helsinki, PO Box 68, FI-00014 Helsinki, Finland.'
- 'Department of Mathematics and Statistics, University of Jyväskylä, PO Box 35, FI-40014 Jyväskylä, Finland.'
author:
- 'Eva A. Gallardo-Gutiérrez'
- 'María J. González'
- 'Pekka J. Nieminen'
- Eero Saksman
date: '15 June 2007.'
title: On the connected component of compact composition operators on the Hardy space
---
[^1]
Introduction {#sec:Intro}
============
Let ${\mathbb{D}}$ denote the open unit disc of the complex plane and $\mathcal{H}^2$ the classical Hardy space, that is, the space of analytic functions $f$ on ${\mathbb{D}}$ for which the norm $${\lVert f\rVert}_2 = \left (\sup_{0\leq r<1} \int_{0}^{2\pi}
{\lvert f(re^{i\theta})\rvert}^2 \, \frac{d\theta}{2\pi}\right )^{1/2}$$ is finite. By a variant of Fatou’s theorem, any Hardy function $f$ has non-tangential limits on the boundary of the unit disc except on a set of Lebesgue measure zero (see [@Du], for instance). Moreover, ${\lVert f\rVert}_2$ equals the $L^2$-norm of the boundary function. Throughout this work, $f(e^{i\theta})$ will denote the non-tangential limit of $f$ at $e^{i\theta}$.
If $\varphi$ is an analytic map which takes ${\mathbb{D}}$ into itself, a result proved by Littlewood in 1925 ensures that the composition operator induced by $\varphi$, $$C_{\varphi} f = f\circ \varphi,$$ is always a bounded linear operator on $\mathcal{H}^2$. The properties of such operators on $\mathcal{H}^2$ and many other function spaces have been studied extensively during the past few decades. We refer the reader to the monographs [@CMc; @ShBook] for an overview of the field as of the early 1990s.
Starting from Earl Berkson’s pioneering work [@Be], many authors have focused attention on the topological structure of the set ${{\rm Comp }}(\mathcal{H}^2)$ of all composition operators on $\mathcal{H}^2$. Here ${{\rm Comp }}(\mathcal{H}^2)$ is usually equipped with the metric induced by the operator norm. A remarkable contribution in this area is due to Joel H. Shapiro and Carl Sundberg [@ShSuIso], who provided several results and examples to describe the isolated members of ${{\rm Comp }}(\mathcal{H}^2)$. Towards the end of their paper, they also raised the general problem of determining the connected components of ${{\rm Comp }}(\mathcal{H}^2)$, and suggested the following conjecture:
- *$C_\varphi$ and $C_\psi$ lie in the same component of ${{\rm Comp }}(\mathcal{H}^2)$ if and only if $C_\varphi-C_\psi$ is compact.*
The most important special case of this conjecture, mentioned explicitly in [@ShSuIso], states that the compact composition operators themselves form a component in ${{\rm Comp }}(\mathcal{H}^2)$. In fact, Shapiro and Sundberg observed that the collection of the compact composition operators on $\mathcal{H}^2$ is arcwise connected, so the remaining question can be stated as follows:
- *Let ${{\rm Comp }}_K(\mathcal{H}^2)$ be the component of ${{\rm Comp }}(\mathcal{H}^2)$ that contains all the compact composition operators. Does any non-compact composition operator belong to ${{\rm Comp }}_K(\mathcal{H}^2)$?*
The general form (A) of the Shapiro–Sundberg conjecture has recently been answered negatively by Moorhouse and Toews [@MoTo] and Bourdon [@Bo]. They have provided fairly simple and concrete examples of symbols $\varphi$ and $\psi$ such that the operators $C_\varphi$ and $C_\psi$ lie in the same component of ${{\rm Comp }}(\mathcal{H}^2)$ but have a non-compact difference. However, in those examples both operators are non-compact, leaving question (B) unanswered.
In this work, we will show that the special case of the Shapiro–Sundberg conjecture fails, too. That is, we will give an affirmative answer to question (B).
For $0 \leq t \leq 1$ there are analytic maps $\varphi_t\colon
{\mathbb{D}}\to {\mathbb{D}}$ such that $t \mapsto C_{\varphi_t}$ is a continuous map from $[0,1]$ into ${{\rm Comp }}(\mathcal{H}^2)$, where $C_{\varphi_0}$ is compact and $C_{\varphi_1}$ is non-compact on $\mathcal{H}^2$.
Let us point out an important result of Barbara MacCluer [@Mc] which states that if two composition operators belong to the same component in ${{\rm Comp }}(\mathcal{H}^2)$, then their symbols must have the same angular derivative (possibly infinity) at each point of the unit circle ${\mathbb{T}}= \partial{\mathbb{D}}$. Hence any symbol that induces an operator belonging to ${{\rm Comp }}_K(\mathcal{H}^2)$ cannot have a finite angular derivative at any point of ${\mathbb{T}}$. This indicates that the construction of the map $\varphi_1$ above is probably not an elementary task. In particular, since non-existence of finite angular derivatives characterizes compact composition operators induced by finitely valent symbols, the valence of $\varphi_1$ has to be infinite.
As a main tool in the proof of Main Theorem we will employ Aleksandrov–Clark measures. These measures, associated to any analytic self-map of the unit disc, have lately found several applications in the study of composition operators (see Section \[sec:AC\]). The essence of our argument comprises a construction of a family of certain continuously singular measures on ${\mathbb{T}}$, one for each point of ${\mathopen[{0,1}\mathclose]}$, which are then used to define the maps $\varphi_t$ in terms of their Aleksandrov–Clark measures.
The rest of the paper is organized as follows. In Section \[sec:AC\], we collect some preliminaries on Aleksandrov–Clark measures and composition operators. In Section \[sec:Mac\], we revisit the theorem of MacCluer cited above and strengthen it slightly. This result will provide a clue for part of the proof of our Main Theorem, which is then carried out in Section \[sec:Main\] (see, in particular, Remark \[re:Heur\]). Finally, in Section \[sec:Further\] we make some additional observations related to Main Theorem and also present some open questions that arise from our work.
We finally remark that the questions raised by Shapiro and Sundberg have been studied in many classical function spaces besides the original $\mathcal{H}^2$. See, for example, [@McOhZh; @AGL; @HO; @HaMac; @Mo; @KrMo]. In most cases the situation seems to be considerably easier than in the setting of $\mathcal{H}^2$. In particular, for the standard Bergman space $\mathcal{A}^2$, MacCluer’s theory shows that the compact composition operators do form a component of ${{\rm Comp }}(\mathcal{A}^2)$ (see Remark \[re:Bergman\]). Also, in the setting of $\mathcal{H}^\infty$, the space of bounded analytic functions, a complete description of the component structure of ${{\rm Comp }}(\mathcal{H}^\infty)$ was found in [@McOhZh].
Aleksandrov–Clark measures {#sec:AC}
==========================
In this section we collect some preliminaries and background on Aleksandrov–Clark measures and their relation to composition operators. For more information on these measures and their applications in other areas of analysis, we refer the reader to the lecture notes [@Sa], the book [@CMR] and the surveys [@MaSt; @PoSa].
Definition {#sec:ACDef}
----------
Let $\varphi$ be an analytic self-map of ${\mathbb{D}}$. For any $\alpha \in {\mathbb{T}}$, the real part of the function $(\alpha+\varphi)/(\alpha-\varphi)$ is positive and harmonic in ${\mathbb{D}}$, so it may be expressed as the Poisson integral of a positive Borel measure $\tau_{\varphi,\alpha}$ supported on ${\mathbb{T}}$. That is, $${\operatorname{Re}}\frac{\alpha+\varphi(z)}{\alpha-\varphi(z)}
= \frac{1-{\lvert\varphi(z)\rvert}^2}{{\lvert\alpha-\varphi(z)\rvert}^2}
= \int_{{\mathbb{T}}} P_z \,d\tau_{\varphi,\alpha},$$ where $P_z(\zeta) = (1-{\lvert z\rvert}^2)/{\lvert\zeta-z\rvert}^2$ is the Poisson kernel for $z \in {\mathbb{D}}$. The measures $\{ \tau_{\varphi,\alpha}: \alpha\in{\mathbb{T}}\}$ are called the *Aleksandrov–Clark measures* associated to $\varphi$.
For any Borel measure $\tau$ on ${\mathbb{T}}$, we write $\tau = \tau^a \,dm + \tau^s$ for the Lebesgue decomposition of $\tau$, so that $\tau^a$ is the density of the absolutely continuous part, $m$ is the normalized Lebesgue measure on ${\mathbb{T}}$ and $\tau^s$ is singular. It follows from the basic properties of Poisson integrals that $$\tau_{\varphi,\alpha}^a(\zeta) =
\frac{1-{\lvert\varphi(\zeta)\rvert}^2}{{\lvert\alpha-\varphi(\zeta)\rvert}^2}.$$ Furthermore, $\tau_{\varphi,\alpha}^s$ is carried by the set where $\varphi(\zeta) = \alpha$.
Angular derivatives {#sec:Angular}
-------------------
Recall that if the quotient $(\varphi(z)-\eta)/(z-\zeta)$ has a finite non-tangential limit at $\zeta \in {\mathbb{T}}$ for some $\eta \in {\mathbb{T}}$, then this limit is called the *angular derivative* of $\varphi$ at $\zeta$ and denoted by $\varphi'(\zeta)$. It satisfies $\varphi'(\zeta) = {\lvert\varphi'(\zeta)\rvert} \overline{\zeta}\eta$ where $\eta = \varphi(\zeta)$. A nice feature of the Aleksandrov–Clark measures is that their discrete parts (i.e. mass points, or atoms) have a perfect correspondence with the finite angular derivatives of $\varphi$:
- *The map $\varphi$ has a finite angular derivative at $\zeta \in {\mathbb{T}}$ if and only if there is $\alpha \in {\mathbb{T}}$ such that $\tau_{\varphi,\alpha}(\{\zeta\}) > 0$. In that case $\varphi(\zeta) = \alpha$ and ${\lvert\varphi'(\zeta)\rvert} =
\tau_{\varphi,\alpha}(\{\zeta\})^{-1}$.*
For the proof of this result convenient references are [@CMR; @Sa], where it is established in conjunction with the classical Julia–Carathéodory theorem.
Relation to composition operators {#sec:CompOp}
---------------------------------
To bring Aleksandrov–Clark measures into the theory of composition operators, we follow Sarason’s [@Sar] idea of describing composition operators as integral operators acting on the unit circle. Let us denote by $\mathcal{M}$ the space of all complex Borel measures on ${\mathbb{T}}$ endowed with the total variation norm. Then, if $\mu \in \mathcal{M}$ is given, the Poisson integral $u(z) = \int_{\mathbb{T}}P_z \,d\mu$ defines a harmonic function on ${\mathbb{D}}$. Consequently the function $v = u \circ \varphi$ is also harmonic, and it follows easily that $v$ is the Poisson integral of a unique measure $\nu \in \mathcal{M}$. Thus it makes sense to define $C_\varphi\mu = \nu$. One can show that $C_\varphi\colon \mathcal{M} \to \mathcal{M}$ is bounded and, furthermore, that $C_\varphi$ restricts to a bounded operator $L^p \to L^p$, where $L^p = L^p({\mathbb{T}},m)$ for $1 \leq p \leq \infty$. Moreover, the restriction of $C_\varphi$ to the analytic Hardy spaces $\mathcal{H}^p$ (viewed as subspaces of $L^p$) coincides with the standard definition of $C_\varphi$.
By the definition of the Aleksandrov–Clark measures we see that $\tau_{\varphi,\alpha } = C_\varphi \delta_\alpha$, where $\delta_\alpha$ is the $\delta$-Dirac measure at $\alpha$. In addition, the correspondence $C_\varphi\mu = \nu$ can be written as $$\label{eq:IntOper}
\int_{\mathbb{T}}f\,d\nu =
\int_{\mathbb{T}}\biggl( \int_{\mathbb{T}}f\,d\tau_{\varphi,\alpha} \biggr)
\,d\mu(\alpha)$$ for a suitable class of functions $f$. Indeed, if $f$ is a Poisson kernel $P_z$, this follows directly from the definitions. The case of continuous $f$ is then obtained by approximating with linear combinations of Poisson kernels. Finally one may invoke a further approximation argument (e.g. a monotone class theorem; cf. [@CMR Sec. 9.4]) to establish (\[eq:IntOper\]) for all bounded Borel functions $f$ on ${\mathbb{T}}$.
In [@Sar] Sarason characterized those composition operators $C_\varphi$ that are compact on $\mathcal{M}$ and $L^1$ by a condition which says that $\tau_{\varphi,\alpha}^s = 0$ for all $\alpha \in {\mathbb{T}}$; that is, the Aleksandrov–Clark measures of $\varphi$ are required to be absolutely continuous. Later Shapiro and Sundberg [@ShSuL1] observed that Sarason’s criterion is equivalent to Shapiro’s [@ShEss] characterization of compact composition operators on $\mathcal{H}^p$, $1 \leq p < \infty$, involving the Nevanlinna counting function. Moreover, Cima and Matheson [@CiMa] have shown that the essential norm (i.e. distance, in the operator norm, from the compact operators) of any $C_\varphi$ acting on $\mathcal{H}^2$ equals $\sup_\alpha{\lVert\tau_{\varphi,\alpha}^s\rVert}^{1/2}$. In particular, a necessary condition for the compactness of $C_\varphi$ on all the spaces mentioned is that the symbol $\varphi$ has no finite angular derivative at any point of ${\mathbb{T}}$. This condition, however, is not sufficient unless $\varphi$ is of finite valence (see e.g. [@ShBook]).
Aleksandrov–Clark measures have also been used to study differences and more general linear combinations of composition operators in [@KrMo; @NiSa; @JESh]. In particular, a characterization for compact differences of composition operators on $\mathcal{M}$ and $L^1$ was found in [@NiSa].
Extension of MacCluer’s Theorem {#sec:Mac}
===============================
In 1989 Barbara MacCluer obtained the following result concerning differences of composition operators on $\mathcal{H}^2$.
\[thm:Mac\] Assume that $\varphi,\psi\colon {\mathbb{D}}\to {\mathbb{D}}$ are analytic maps and $\varphi$ has a finite angular derivative at $\zeta \in {\mathbb{T}}$. Then, unless $\psi(\zeta) = \varphi(\zeta)$ and $\psi'(\zeta) = \varphi'(\zeta)$, one has $${\lVert C_\varphi-C_\psi\rVert}_e^2 \geq \frac{1}{{\lvert\varphi'(\zeta)\rvert}},$$ where ${\lVert\cdot\rVert}_e$ denotes the essential norm of an operator on $\mathcal{H}^2$.
The relationship between angular derivatives and the atoms of the Aleksandrov–Clark measures (see Sec. \[sec:Angular\]) allows us to restate Theorem \[thm:Mac\] as follows:
- *Assume that $\tau_{\varphi,\alpha}(\{\zeta\}) > 0$ for some $\alpha \in {\mathbb{T}}$. Then, unless $\tau_{\psi,\alpha}(\{\zeta\}) = \tau_{\varphi,\alpha}(\{\zeta\})$, one has ${\lVertC_\varphi-C_\psi\rVert}_e^2 \geq \tau_{\varphi,\alpha}(\{\zeta\})$.*
Theorem \[thm:Mac\] implies that, for each $\zeta \in {\mathbb{T}}$ and $d \neq 0$, the set of all $C_\varphi$ with $\varphi'(\zeta) = d$ is both open and closed in ${{\rm Comp }}(\mathcal{H}^2)$ (even in the topology induced by the essential norm). Hence a necessary condition for two composition operators to lie in the same component (or essential component) of ${{\rm Comp }}(\mathcal{H}^2)$ is that the angular derivatives of their symbols coincide. In particular, it follows that if $C_\varphi$ belongs to ${{\rm Comp }}_K(\mathcal{H}^2)$, the component containing all compact composition operators, then $\varphi$ has no finite angular derivative at any point of ${\mathbb{T}}$ — or, equivalently, the Aleksandrov–Clark measure $\tau_{\varphi,\alpha}$ has no atoms for any $\alpha \in {\mathbb{T}}$.
\[re:Bergman\] MacCluer’s work was actually carried out in a general context of weighted Dirichlet (or Bergman) spaces $\mathcal{D}_\beta$, $\beta \geq 1$, which includes as special cases the Hardy space $\mathcal{H}^2$ ($\beta = 1$) as well as the standard Bergman space $\mathcal{A}^2$ ($\beta = 2$). For $\beta > 1$ it is known that the non-existence of finite angular derivatives is both necessary and sufficient for the compactness of a composition operator on $\mathcal{D}_\beta$ (see [@McSh] or [@CMc]). So, in these spaces, MacCluer’s theorem implies (e.g. by the argument at the beginning of the preceding paragraph) that the compacts indeed form a connected component of ${{\rm Comp }}(\mathcal{D}_\beta)$.
In another direction, Kriete and Moorhouse [@KrMo] have recently obtained various interesting refinements of MacCluer’s results. In particular, they establish a version of Theorem \[thm:Mac\] for higher-order boundary data of the symbols.
In this section we will provide a slight improvement of Theorem \[thm:Mac\]. Our lower bound will involve the whole discrete part of the Aleksandrov–Clark measure at $\alpha$. This result yields some heuristics for our construction in the proof of our Main Theorem in Section \[sec:Main\] (see Remark \[re:Heur\]).
\[thm:MacExt\] Let $\varphi,\psi\colon {\mathbb{D}}\to {\mathbb{D}}$ be analytic maps and $\alpha \in {\mathbb{T}}$. Write $$Z = \bigl\{ \zeta \in {\mathbb{T}}:
0 < \tau_{\varphi,\alpha}(\{\zeta\}) \neq
\tau_{\psi,\alpha}(\{\zeta\}) \bigr\}.$$ Then $${\lVertC_\varphi-C_\psi\rVert}_e^2 \geq
\tau_{\varphi,\alpha}(Z).$$
In the proof of Theorem \[thm:MacExt\] we will use as test functions the normalized reproducing kernels $$f_w(z) = \frac{\sqrt{1-{\lvertw\rvert}^2}}{1-\overline{w}z}.$$ They have the property that ${\lVertf_w\rVert}_2 = 1$ for all $w \in {\mathbb{D}}$ and $f_w \to 0$ weakly as ${\lvertw\rvert} \to 1$, whence ${\lVertC_\varphi-C_\psi\rVert}_e \geq \limsup_{{\lvertw\rvert}\to 1}
{\lVert(C_\varphi-C_\psi)f_w\rVert}_2$. We will borrow MacCluer’s idea of letting $w$ approach $\alpha$ along a curve which makes an almost right angle with the radius to $\alpha$. However, instead of considering the adjoints of $C_\varphi$ and $C_\psi$ as in [@Mc] and [@KrMo], we will deal with the composition operators themselves. The following lemma contains the estimates crucial for our argument.
\[le:Kernel\] Let $\varphi\colon {\mathbb{D}}\to {\mathbb{D}}$ be analytic and fix $a > 0$. For $\delta, \kappa, \lambda, r > 0$, write $$I(\delta,\kappa,\lambda, r)
= \frac{1}{2\pi}
\int_{\kappa ra-\lambda r}^{\kappa ra+\lambda r}
{\bigl\lvert C_\varphi f_{(1-r)e^{i\kappa r}}
((1-\delta r)e^{it}) \bigr\rvert}^2 \,dt.$$
1. If $\tau_{\varphi,1}(\{1\}) = a$, then $$\lim_{r\to 0} I(\delta,\kappa,\lambda, r)
= \frac{a \cdot c(\delta,\lambda)}{1+\delta/a},$$ where $0 < c(\delta,\lambda) < 1$ and $\lim_{\lambda\to\infty} c(\delta,\lambda) = 1$ for all $\delta > 0$.
2. If $\tau_{\varphi,1}(\{1\}) \neq a$, then $$\lim_{r\to 0} I(\delta,\kappa,\lambda, r)
= {\varepsilon}(\delta,\kappa,\lambda),$$ where $\lim_{\kappa\to\infty} {\varepsilon}(\delta,\kappa,\lambda) = 0$ for all $\delta,\lambda > 0$.
Let us fix $\delta,\kappa,\lambda > 0$, and write $w_r = (1-r)e^{i\kappa r}$ and $z_r(t) = (1-\delta r)e^{it}$. Then $$\label{eq:Iintegral}
I(\delta,\kappa,\lambda,r)
= \frac{2r-r^2}{2\pi}
\int_{\kappa ra-\lambda r}^{\kappa ra+\lambda r}
\frac{dt}{{\lvert1-\overline{w_r} \varphi(z_r(t))\rvert}^2}.$$
We first consider the case when $\tau_{\varphi,1}(\{1\}) = b$ for some $b > 0$. That is, $\varphi(1) = 1$ and $\varphi$ has a finite angular derivative equal to $1/b$ at $1$. Note that the points $z_r(t)$ involved in (\[eq:Iintegral\]) for $0 < r < 1$ all lie in a non-tangential approach region for the point $1$ (whose opening angle depends on $\delta$, $\kappa$, $a$, and $\lambda$). Therefore, for these $z_r(t)$ we have $$1 - \varphi(z_r(t)) = b^{-1} (1-z_r(t)) + r {\varepsilon}_r(t),$$ uniformly in $t$. Here and elsewhere in this proof we use ${\varepsilon}_r$ (with or without additional parameters) to denote a quantity which tends to zero as $r \to 0$. With this notation, we may also write $1 - \overline{w_r} = r + i\kappa r + r{\varepsilon}_r$ and $1 - z_r(t) = \delta r - it + r{\varepsilon}_r(t)$. Consequently, $$\begin{split}
1 - \overline{w_r}\varphi(z_r(t))
&= (1 - \overline{w_r}) + \{1 - \varphi(z_r(t))\} + r{\varepsilon}_r(t) \\
&= r(1+\delta/b) + i(\kappa r-t/b) + r{\varepsilon}_r(t).
\end{split}$$ We substitute this expression into the integrand in (\[eq:Iintegral\]) and perform the change of variables $u = t/ra-\kappa$ to get $$\begin{split}
I(\delta,\kappa,\lambda,r)
&= \frac{(2r-r^2)ra}{2\pi} \int_{-\lambda/a}^{+\lambda/a}
\frac{du}{ {\lvertr(1+\delta/b) +
i (\kappa r - \kappa ra/b - rau/b) + r{\varepsilon}_r(u)\rvert}^2 } \\
&= \frac{(2-r)a}{2\pi} \int_{-\lambda/a}^{+\lambda/a}
\frac{du}{ {\lvert(1+\delta/b) +
i ((1-a/b)\kappa - au/b) + {\varepsilon}_r(u)\rvert}^2 }.
\end{split}$$ Hence $$\label{eq:Ilimit}
\lim_{r\to 0} I(\delta,\kappa,\lambda,r)
= \frac{a}{\pi} \int_{-\lambda/a}^{+\lambda/a}
\frac{du}{ (1+\delta/b)^2 + ((1-a/b)\kappa-au/b)^2 }.$$ If $b = a$, this limit equals $$\frac{a}{\pi} \int_{-\lambda/a}^{+\lambda/a}
\frac{du}{ (1+\delta/a)^2 + u^2 },$$ which is of the desired form $ac(\delta,\lambda)/(1+\delta/a)$. On the other hand, if $b \neq a$, then the integrand in (\[eq:Ilimit\]) tends to zero as $\kappa \to \infty$, uniformly in $u$. So, in this case (\[eq:Ilimit\]) goes to zero as $\kappa \to \infty$.
Finally assume that $\tau_{\varphi,1}(\{1\}) = 0$, so $\varphi$ has no finite angular derivative at $1$ or $\varphi(1) \neq 1$. By the Julia–Carathéodory theorem, we now have $(1-\varphi(z))/(1-z) \to \infty$ as $z \to 1$ non-tangentially. By considerations similar to those in the first part of the proof, this implies that $\{1-\overline{w_r}\varphi(z_r(t))\}/r \to \infty$ as $r \to 0$, uniformly in $t$, and hence $I(\delta,\kappa,\lambda,r) \to 0$ as $r \to 0$. We leave the details to the reader.
Without loss of generality, we may take $\alpha = 1$. We first treat the case of a single mass point and then indicate the general argument. Let us assume that $\tau_{\varphi,1}(\{1\}) = a \neq \tau_{\psi,1}(\{1\})$ for some $a > 0$. Then, for $\delta,\kappa,\lambda > 0$ and small enough $r > 0$, we have $$\begin{split}
{\bigl\lVert (C_\varphi-C_\psi) f_{(1-r)e^{i\kappa r}} \bigr\rVert}_2
&\geq \biggl( \frac{1}{2\pi}
\int_{\kappa ra-\lambda r}^{\kappa ra+\lambda r}
{\bigl\lvert (C_\varphi-C_\psi) f_{(1-r)e^{i\kappa r}}
((1-\delta r) e^{it}) \bigr\rvert}^2
\,dt \biggr)^{1/2} \\
&\geq I_\varphi(\delta,\kappa,\lambda, r)^{1/2} -
I_\psi(\delta,\kappa,\lambda, r)^{1/2},
\end{split}$$ where $I_\varphi$ and $I_\psi$ refer to the integrals of Lemma \[le:Kernel\] corresponding to $\varphi$ and $\psi$, respectively. Passing to the limit as $r \to 0$, we then get the following type of lower bound for the essential norm of $C_\varphi-C_\psi$: $${\lVertC_\varphi-C_\psi\rVert}_e
\geq \biggl(
\frac{a\cdot c(\delta,\lambda)}{1+\delta/a}\biggr)^{1/2}
- {\varepsilon}(\delta,\kappa,\lambda)^{1/2}.$$ Letting $\kappa \to \infty$, $\lambda \to \infty$ and $\delta \to 0$ now yields ${\lVertC_\varphi-C_\psi\rVert}_e \geq a^{1/2}$ as desired.
To prove the theorem in full (assuming still $\alpha = 1$), we observe that the above reasoning is local in the sense that the interval ${\mathopen[{\kappa ra-\lambda r,\;\kappa ra+\lambda r}\mathclose]}$ shrinks to $0$ as $r \to 0$. Let $Z_0 = \{\zeta_1,\ldots,\zeta_n\}$ be any finite subset of the (possibly infinite) set $Z$, where $\zeta_k \neq \zeta_l$ for $k \neq l$. Write $t_k = \arg\zeta_k$ and $a_k = \tau_{\varphi,1}(\{\zeta_k\})$. We proceed as above, just integrating over the union of the intervals ${\mathopen[{t_k+\kappa ra_k-\lambda r,\;t_k+\kappa ra_k+\lambda r}\mathclose]}$, $k = 1,\ldots,n$. Since these are disjoint for small $r$, we get, after passing to the appropriate limits as above, $${\lVertC_\varphi-C_\psi\rVert}_e
\geq \biggl(\sum_{k=1}^n
\tau_{\varphi,1}(\{\zeta_k\}) \biggr)^{1/2}
= \tau_{\varphi,1}(Z_0)^{1/2}.$$ Finally, if $Z$ is infinite, we take the supremum over all finite subsets $Z_0 \subset Z$ to complete the proof of the theorem.
Proof of Main Theorem: non-compact composition operators in the component of compacts {#sec:Main}
=====================================================================================
In this section we will establish our Main Theorem, giving a positive answer to the question (B) stated in Section \[sec:Intro\]. We will actually find a continuous path that connects compact composition operators to a non-compact one. Moreover, the same construction turns out to work for a variety of spaces in addition to $\mathcal{H}^2$.
For $0 \leq t \leq 1$ there are analytic maps $\varphi_t\colon {\mathbb{D}}\to {\mathbb{D}}$ such that $C_{\varphi_0}$ is compact and $C_{\varphi_1}$ is non-compact on $X$, and $t \mapsto C_{\varphi_t}$ is continuous from $[0,1]$ into ${{\rm Comp }}(X)$, where $X$ is any of the spaces $\mathcal{M}$, $L^p$ or $\mathcal{H}^p$ with $1 \leq p < \infty$.
We begin with some preliminary observations and lemmas. First of all, it is enough to deal with the case $X = \mathcal{M}$. Indeed, as we pointed out in Section \[sec:CompOp\], the compactness of composition operators is equivalent in any two of the spaces mentioned. Furthermore, we may apply interpolation between $L^1$ (a subspace of $\mathcal{M}$) and $L^\infty$ to conclude that for any $1 \leq p < \infty$ and $s,t \in [0,1]$, $$\begin{split}
&\lVert C_{\varphi_s}-C_{\varphi_t}\colon L^p\to L^p\rVert \\
&\qquad
\leq \lVert C_{\varphi_s}-C_{\varphi_t}\colon L^1\to L^1\rVert^{1/p} \,
\lVert C_{\varphi_s}-C_{\varphi_t}\colon
L^\infty\to L^\infty\rVert^{1-1/p}
\\
&\qquad
\leq 2^{1-1/p} \, \lVert C_{\varphi_s}-C_{\varphi_t}\colon
\mathcal{M}\to\mathcal{M}\rVert^{1/p}.
\end{split}$$ (See e.g. [@BeSh Sec. 4.1] for the classical Riesz–Thorin interpolation theorem.)
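The interpolation step can be sanity-checked in a finite-dimensional toy model. For a matrix $T$, the $p = 2$ case of the Riesz–Thorin bound reads $\lVert T\rVert_{2\to 2} \leq \lVert T\rVert_{1\to 1}^{1/2}\lVert T\rVert_{\infty\to\infty}^{1/2}$ (the classical Schur test). A minimal NumPy sketch, with a random matrix merely standing in for the operator difference:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((50, 50))

norm_1 = np.abs(T).sum(axis=0).max()    # operator norm on l^1 (max column sum)
norm_inf = np.abs(T).sum(axis=1).max()  # operator norm on l^infty (max row sum)
norm_2 = np.linalg.norm(T, 2)           # spectral norm (l^2 -> l^2)

# Riesz-Thorin with p = 2: ||T||_2 <= ||T||_1^(1/2) * ||T||_inf^(1/2)
assert norm_2 <= np.sqrt(norm_1 * norm_inf) + 1e-9
```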
Throughout the proof we will utilize Sarason’s way of viewing composition operators as acting on the unit circle (cf. Sec. \[sec:CompOp\]). If $\varphi$ is an analytic self-map of ${\mathbb{D}}$ and $E \subset {\mathbb{T}}$ is a Borel set, we let $\chi_E C_\varphi$ denote the restriction of $C_\varphi$ to $E$. More precisely, if $\mu \in \mathcal{M}$ and $C_\varphi\mu = \nu$, then $\chi_E C_\varphi\mu$ refers to the Borel measure $B \mapsto \nu(E \cap B)$ on ${\mathbb{T}}$. For functions $f \in L^1$, this simply means that $\chi_E C_\varphi f(\zeta) =
\chi_E(\zeta) f(\varphi(\zeta))$ for $m$-a.e. $\zeta \in {\mathbb{T}}$. In this context, one easily obtains $$\label{eq:MNorm}
\lVert\chi_E C_\varphi\colon \mathcal{M} \to \mathcal{M}\rVert =
\sup \{\tau_{\varphi,\alpha}(E) : \alpha \in {\mathbb{T}}\}.$$
We will also need a tool to estimate the size of the difference of two composition operators in terms of the boundary values of their symbols. We use $\rho$ to denote the hyperbolic distance in ${\mathbb{D}}$; it is the conformally invariant metric induced by the arc length element $2\lvert dz\rvert/(1-\lvert z\rvert^2)$ (see e.g. [@Ga Sec. I.1]). When working with hyperbolic distances, it is often convenient to shift to the right half-plane ${\mathbb{H}}= \{z': {\operatorname{Re}}z' > 0\}$, where the hyperbolic metric $\rho_{\mathbb{H}}$ is induced by the arc length element $\lvert dz'\rvert/{\operatorname{Re}}z'$. For any $\alpha \in {\mathbb{T}}$, this is accomplished through the Möbius transformation $z' = (\alpha+z)/(\alpha-z)$, which takes ${\mathbb{D}}$ onto ${\mathbb{H}}$ isometrically relative to $\rho$ and $\rho_{\mathbb{H}}$. We have already encountered this transformation in the definition of Aleksandrov–Clark measures.
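The isometry claim is easy to confirm numerically using the standard closed forms of the two distances, $\rho(z,w)=\log\frac{1+d}{1-d}$ with $d=\lvert(z-w)/(1-\bar z w)\rvert$, and $\rho_{\mathbb{H}}(z',w')=\operatorname{arccosh}\bigl(1+\lvert z'-w'\rvert^2/(2\operatorname{Re}z'\,\operatorname{Re}w')\bigr)$. A quick sketch:

```python
import numpy as np

def rho_disk(z, w):
    """Hyperbolic distance in the unit disk for the metric 2|dz|/(1-|z|^2)."""
    d = abs((z - w) / (1 - np.conj(z) * w))     # pseudo-hyperbolic distance
    return np.log((1 + d) / (1 - d))

def rho_halfplane(z, w):
    """Hyperbolic distance in {Re z > 0} for the metric |dz|/Re z."""
    return np.arccosh(1 + abs(z - w) ** 2 / (2 * z.real * w.real))

alpha = np.exp(0.7j)                            # any point of the unit circle
mobius = lambda z: (alpha + z) / (alpha - z)    # takes D onto the right half-plane

rng = np.random.default_rng(1)
for _ in range(100):
    z, w = rng.uniform(-0.6, 0.6, 2) + 1j * rng.uniform(-0.6, 0.6, 2)
    # conformal invariance: the Mobius map is an isometry of the two metrics
    assert abs(rho_disk(z, w) - rho_halfplane(mobius(z), mobius(w))) < 1e-9
```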
\[le:MDiff\] Let $\varphi,\psi\colon {\mathbb{D}}\to {\mathbb{D}}$ be analytic, and let $E \subset {\mathbb{T}}$ be a Borel set such that $\tau_{\varphi,\alpha}(\partial E) =
\tau_{\psi,\alpha}(\partial E) = 0$ for all $\alpha \in {\mathbb{T}}$. Also let $0 < {\varepsilon}< 1$. Suppose that for $m$-a.e. $\zeta \in E$ the following holds: if one of $\varphi(\zeta)$ and $\psi(\zeta)$ is unimodular, then $\varphi(\zeta) = \psi(\zeta)$, and otherwise $\rho(\varphi(\zeta),\psi(\zeta)) \leq {\varepsilon}$. Then $$\lVert\chi_E (C_\varphi-C_\psi)\colon
\mathcal{M} \to \mathcal{M}\rVert
\leq C {\varepsilon}/ (1-\lvert\varphi(0)\rvert),$$ where $C > 0$ is a universal constant.
We first note that the Poisson kernel functions $P_z$ satisfy the following estimate: for all $z,w \in {\mathbb{D}}$ with $\rho(z,w) \leq 1$ and $\alpha \in {\mathbb{T}}$, $$\label{eq:PEst}
\lvert P_z(\alpha)-P_w(\alpha)\rvert \leq C\rho(z,w)\, P_z(\alpha),$$ where $C > 0$ is a universal constant. In fact, one may use the transformation $z' = (\alpha+z)/(\alpha-z)$ to pass to the right half-plane, where \[eq:PEst\] becomes $$\lvert{\operatorname{Re}}(z'-w')\rvert \leq C \rho_{\mathbb{H}}(z',w') \,{\operatorname{Re}}z',$$ which is easy to verify by geometric reasoning.
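The estimate can also be checked numerically. One admissible constant is $C = e-1$ on the range $\rho(z,w) \leq 1$: since $\log{\operatorname{Re}}z'$ is $1$-Lipschitz for $\rho_{\mathbb{H}}$, one gets $\lvert{\operatorname{Re}}(z'-w')\rvert \leq (e^{\rho}-1){\operatorname{Re}}z' \leq (e-1)\rho\,{\operatorname{Re}}z'$, and $P_z(\alpha) = {\operatorname{Re}}z'$ under the transformation above. A sketch over random samples (this value of $C$ is our computation, not claimed sharp):

```python
import numpy as np

def P(z, alpha):
    """Poisson kernel P_z(alpha) = (1 - |z|^2) / |alpha - z|^2."""
    return (1 - abs(z) ** 2) / abs(alpha - z) ** 2

def rho(z, w):
    """Hyperbolic distance in the unit disk, metric 2|dz|/(1-|z|^2)."""
    d = abs((z - w) / (1 - np.conj(z) * w))
    return np.log((1 + d) / (1 - d))

rng = np.random.default_rng(2)
checked = 0
while checked < 200:
    z, w = rng.uniform(-0.9, 0.9, 2) + 1j * rng.uniform(-0.9, 0.9, 2)
    if abs(z) >= 1 or abs(w) >= 1 or not (0 < rho(z, w) <= 1):
        continue  # keep only pairs inside the disk with rho(z, w) <= 1
    alpha = np.exp(1j * rng.uniform(0, 2 * np.pi))
    # |P_z - P_w| <= C * rho(z, w) * P_z with C = e - 1
    assert abs(P(z, alpha) - P(w, alpha)) <= (np.e - 1) * rho(z, w) * P(z, alpha) + 1e-12
    checked += 1
```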
Now fix $\alpha \in {\mathbb{T}}$ and $0 < r < 1$. Since $\rho(r\varphi(\zeta),r\psi(\zeta)) \leq {\varepsilon}$ for $m$-a.e. $\zeta \in E$, we get by \[eq:PEst\] that $$\int_E \biggl\lvert
\frac{1-\lvert r\varphi\rvert^2}{\lvert\alpha-r\varphi\rvert^2} -
\frac{1-\lvert r\psi\rvert^2}{\lvert\alpha-r\psi\rvert^2} \biggr\rvert \,dm
\leq C{\varepsilon}\int_E
\frac{1-\lvert r\varphi\rvert^2}{\lvert\alpha-r\varphi\rvert^2} \,dm
\leq C{\varepsilon}\frac{1-\lvert r\varphi(0)\rvert^2}{\lvert\alpha-r\varphi(0)\rvert^2}.$$ The last inequality was obtained by extending the integral over the whole circle ${\mathbb{T}}$ and using the harmonicity of the integrand. The definition of the Aleksandrov–Clark measures implies that the absolutely continuous measure $(1-\lvert r\varphi\rvert^2)/\lvert\alpha-r\varphi\rvert^2\,dm$ converges to $\tau_{\varphi,\alpha}$ weak\* as $r \to 1$. Similarly $(1-\lvert r\psi\rvert^2)/\lvert\alpha-r\psi\rvert^2\,dm$ converges to $\tau_{\psi,\alpha}$. Therefore, the preceding chain of inequalities yields, as $r \to 1$, $$\lvert\tau_{\varphi,\alpha}-\tau_{\psi,\alpha}\rvert(E)
\leq C{\varepsilon}\frac{1-\lvert\varphi(0)\rvert^2}{\lvert\alpha-\varphi(0)\rvert^2}.$$ (Here we needed the assumption that $\tau_{\varphi,\alpha}$ and $\tau_{\psi,\alpha}$ both assign measure zero to the boundary of $E$.) Hence $$\begin{split}
\lVert\chi_E (C_\varphi-C_\psi)\colon
\mathcal{M} \to \mathcal{M}\rVert
&= \sup \{\lvert\tau_{\varphi,\alpha}-\tau_{\psi,\alpha}\rvert(E) :
\alpha \in {\mathbb{T}}\} \\
&\leq \frac{2C {\varepsilon}}{1-\lvert\varphi(0)\rvert},
\end{split}$$ and the proof is complete.
We are now in a position to define the maps $\varphi_t$. Recall from Section \[sec:CompOp\] that a composition operator $C_\varphi$ is non-compact on any of the spaces mentioned in Main Theorem if and only if at least one of the Aleksandrov–Clark measures $\tau_{\varphi,\alpha}$ fails to be absolutely continuous. On the other hand, if $C_\varphi$ is required to belong to the component of compact composition operators, MacCluer’s theorem implies that none of the $\tau_{\varphi,\alpha}$ may have atoms. That is why we have to consider Aleksandrov–Clark measures with a continuous singular part.
Let $\lambda$ be any nontrivial, positive and finite continuously singular Borel measure on the unit circle ${\mathbb{T}}$. For $0 \leq t \leq 1$, let $$\label{eq:tau1def}
\tau_{t,1} = m + \chi_{I(0,t)} \lambda,$$ where $I(0,t) \subset {\mathbb{T}}$ is the closed arc connecting the point $1$ to $e^{2\pi it}$ in the positive direction and, as before, $m$ denotes the normalized Lebesgue measure. We consider the Herglotz integral of $\tau_{t,1}$, $$H\tau_{t,1}(z) =
\int_{\mathbb{T}}\frac{\zeta+z}{\zeta-z} \,d\tau_{t,1}(\zeta),$$ and define the map $\varphi_t$ by $$\label{eq:phi_t}
\frac{1+\varphi_t}{1-\varphi_t} = H\tau_{t,1},
\qquad\text{that is,}\qquad
\varphi_t = \frac{H\tau_{t,1}-1}{H\tau_{t,1}+1}.$$ Since the real part of $H\tau_{t,1}$ is the Poisson integral of $\tau_{t,1}$, we see that $\tau_{t,1}$ becomes the Aleksandrov–Clark measure of $\varphi_t$ at $1$. Moreover, since this Poisson integral is $\geq 1$ everywhere on ${\mathbb{D}}$, it follows that $\varphi_t$ either takes ${\mathbb{D}}$ into the open disc $\{w : \lvert w-\tfrac{1}{2}\rvert < \tfrac{1}{2}\}$ or is constant $0$ (for small $t$). In general, we let $\tau_{t,\alpha}$ denote the Aleksandrov–Clark measure of $\varphi_t$ at $\alpha \in {\mathbb{T}}$.
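The construction can be illustrated numerically. A genuinely continuously singular $\lambda$ cannot be sampled directly, so the sketch below replaces $\chi_{I(0,t)}\lambda$ by a few hypothetical atoms; this already exhibits the two facts just used: $Hm \equiv 1$, so $\varphi \equiv 0$ when the singular part is absent, while ${\operatorname{Re}}\,H\tau > 1$ forces $\varphi$ into the disc $\{w : \lvert w-\tfrac{1}{2}\rvert < \tfrac{1}{2}\}$.

```python
import numpy as np

def herglotz(z, atoms):
    """H tau(z) for tau = m + sum a_k * delta_{zeta_k}; note H m == 1."""
    return 1 + sum(a * (zeta + z) / (zeta - z) for zeta, a in atoms)

def phi(z, atoms):
    """The symbol defined by (1 + phi)/(1 - phi) = H tau."""
    H = herglotz(z, atoms)
    return (H - 1) / (H + 1)

# hypothetical atomic stand-in for chi_{I(0,t)} lambda (a genuinely
# continuously singular measure has no atoms -- illustration only)
atoms = [(np.exp(2j * np.pi * k / 7), 0.05) for k in range(1, 4)]

rng = np.random.default_rng(3)
for _ in range(100):
    z = rng.uniform(-0.7, 0.7) + 1j * rng.uniform(-0.7, 0.7)
    # Re H > 1 forces phi into the disc {|w - 1/2| < 1/2}
    assert abs(phi(z, atoms) - 0.5) < 0.5

assert phi(0.3 + 0.2j, []) == 0   # tau = m alone gives phi identically 0
```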
The compactness statements of Main Theorem are now immediate. Since $\tau_{1,1} = m + \lambda$ is not absolutely continuous, the operator $C_{\varphi_1}$ is non-compact. On the other hand, $\varphi_0 \equiv 0$, so $C_{\varphi_0}$ is clearly compact.
The hard part of the proof consists of showing that the map $t \mapsto C_{\varphi_t}$ is indeed continuous. This will be based on the following two lemmas.
\[le:tauMeasure\] Let ${\varepsilon}> 0$. There exists $\delta > 0$ such that if $I \subset {\mathbb{T}}$ is an arc with $m(I) \leq \delta$, then the Aleksandrov–Clark measures of the maps $\varphi_t$ satisfy $\tau_{t,\alpha}(I) \leq {\varepsilon}$ for all $t \in [0,1]$ and $\alpha \in {\mathbb{T}}$. In particular, none of $\tau_{t,\alpha}$ have atoms.
First of all we note that all the measures $\tau_{t,\alpha}$ are indeed continuous, i.e. have no atoms. For $\alpha = 1$ this is clear from \[eq:tau1def\]. For $\alpha \neq 1$ we need to note that since the image of $\varphi_t$ does not touch $\alpha$, the harmonic function $$\label{eq:ACt}
{\operatorname{Re}}\frac{\alpha+\varphi_t(z)}{\alpha-\varphi_t(z)}
= \int_{\mathbb{T}}P_z \,d\tau_{t,\alpha},$$ is bounded and hence $\tau_{t,\alpha}$ is absolutely continuous.
Using \[eq:tau1def\] and \[eq:phi\_t\] one can easily show that the left-hand side of \[eq:ACt\] is continuous as a function of the pair $(t,\alpha)$ in $[0,1]\times{\mathbb{T}}$. Since linear combinations of Poisson kernels are dense among the continuous functions on ${\mathbb{T}}$, it follows that the map $(t,\alpha) \mapsto \tau_{t,\alpha}$ is continuous in the weak\* sense.
Now assume that the claim of the lemma fails. Then there are arcs $I_n \subset {\mathbb{T}}$ and points $t_n \in [0,1]$ and $\alpha_n \in {\mathbb{T}}$ such that $\tau_{t_n,\alpha_n}(I_n) > {\varepsilon}$ for all $n \geq 1$ while $m(I_n) \to 0$. By passing to a subsequence we may assume that the intervals $I_n$ (i.e. their endpoints) converge to a point $\zeta_0 \in {\mathbb{T}}$ and also that $t_n \to t_0$ and $\alpha_n \to \alpha_0$. Now for each $\eta > 0$ we have $\tau_{t_n,\alpha_n}(I(e^{-i\eta}\zeta_0,e^{i\eta}\zeta_0))
> {\varepsilon}$ whenever $n$ is large enough. Since the map $(t,\alpha) \mapsto \tau_{t,\alpha}$ is weak\* continuous, it follows that $\tau_{t_0,\alpha_0}(I(e^{-i\eta}\zeta_0,e^{i\eta}\zeta_0)) \geq
{\varepsilon}$ for all $\eta > 0$, and hence $\tau_{t_0,\alpha_0}(\{\zeta_0\}) \geq {\varepsilon}$. This is a contradiction since we observed that $\tau_{t_0,\alpha_0}$ cannot have atoms.
\[le:rho\] Fix $t_0 \in [0,1]$ and let $I_0 \subset {\mathbb{T}}$ be an arc whose midpoint is $e^{2\pi it_0}$. If ${\varepsilon}> 0$ is given, there exists $\delta > 0$ such that $$\rho(\varphi_{t_0}(\zeta),\varphi_t(\zeta)) \leq {\varepsilon}\quad\text{for $\zeta \in {\mathbb{T}}\setminus I_0$}$$ whenever $\lvert t_0-t\rvert \leq \delta$.
Assume that $\lvert t_0-t\rvert$ is so small that the distance of the point $e^{2\pi it}$ to the set ${\mathbb{T}}\setminus I_0$ is greater than a positive constant $c$. Then $H\tau_{t,1} = H\tau_{t_0,1}
\pm H(\chi_{J_t}\lambda)$, where $J_t \subset {\mathbb{T}}$ is the arc connecting $e^{2\pi it_0}$ to $e^{2\pi it}$. Moreover, for $\zeta \in {\mathbb{T}}\setminus I_0$ we have $$\lvert H(\chi_{J_t}\lambda)(\zeta)\rvert
= \biggl\lvert\int_{J_t}\frac{\xi+\zeta}{\xi-\zeta}\,d\lambda(\xi)\biggr\rvert
\leq \frac{2}{c} \lambda(J_t).$$ Since this upper bound tends to zero as $t \to t_0$ and ${\operatorname{Re}}H\tau_{t_0,1} \geq 1$, we see that the distance between $H\tau_{t,1}(\zeta)$ and $H\tau_{t_0,1}(\zeta)$ in the hyperbolic metric of the right half-plane tends to zero as $t \to t_0$, uniformly for $\zeta \in {\mathbb{T}}\setminus I_0$. In view of \[eq:phi\_t\] and the conformal invariance of the hyperbolic metric, the same conclusion holds true for the distance of $\varphi_t(\zeta)$ and $\varphi_{t_0}(\zeta)$ in the metric $\rho$.
We are now ready to prove the continuity of the map $t \mapsto C_{\varphi_t}$ with respect to the operator norm on $\mathcal{M}$. Let $0 < {\varepsilon}< 1$. By Lemma \[le:tauMeasure\] we can find $\delta > 0$ such that $\tau_{t,\alpha}(I) \leq
{\varepsilon}$ for all $t \in [0,1]$ and $\alpha \in {\mathbb{T}}$ whenever $I \subset {\mathbb{T}}$ is an arc with $m(I) \leq \delta$. For all such $I$, equation \[eq:MNorm\] yields the estimate $$\label{eq:Est1}
\lVert\chi_I C_{\varphi_t}\rVert \leq {\varepsilon}.$$ (Here and throughout the rest of the proof $\lVert\cdot\rVert$ refers to the operator norm on $\mathcal{M}$.)
Now fix $t_0 \in [0,1]$ and pick an arc $I_0 \subset {\mathbb{T}}$ with $m(I_0) \leq \delta$ whose midpoint is $e^{2\pi it_0}$. By Lemma \[le:rho\] there exists $\eta > 0$ such that if $\lvert t_0-t\rvert \leq \eta$, then $\rho(\varphi_{t_0}(\zeta),\varphi_t(\zeta)) \leq {\varepsilon}$ for all $\zeta \in {\mathbb{T}}\setminus I_0$. Hence Lemma \[le:MDiff\] shows that $$\label{eq:Est2}
\lVert\chi_{{\mathbb{T}}\setminus I_0} (C_{\varphi_{t_0}}-C_{\varphi_t})\rVert
\leq C{\varepsilon}/(1-\lvert\varphi_{t_0}(0)\rvert)$$ whenever $\lvert t_0-t\rvert \leq \eta$. To finish the argument we just write $$C_{\varphi_{t_0}}-C_{\varphi_t}
= \chi_{I_0}C_{\varphi_{t_0}} - \chi_{I_0}C_{\varphi_t} +
\chi_{{\mathbb{T}}\setminus I_0}(C_{\varphi_{t_0}}-C_{\varphi_t})$$ and, when $\lvert t_0-t\rvert \leq \eta$, invoke estimates \[eq:Est1\] and \[eq:Est2\] to conclude that $$\lVert C_{\varphi_{t_0}}-C_{\varphi_t}\rVert
\leq {\varepsilon}+ {\varepsilon}+ C {\varepsilon}/(1-\lvert\varphi_{t_0}(0)\rvert).$$ Since ${\varepsilon}> 0$ was arbitrary, this clearly shows that the norm of $C_{\varphi_{t_0}}-C_{\varphi_t}$ on $\mathcal{M}$ tends to zero as $t \to t_0$.
The proof of Main Theorem is now complete.
\[re:Heur\] We try to describe the heuristics behind the above construction. First of all, one can easily show that if a continuous path $(C_{\varphi_t})$ yielding the desired example exists, then one may assume that the image of each map $\varphi_t$ is contained in the disc $\{w : \lvert w-\tfrac{1}{2}\rvert \leq \tfrac{1}{2}\}$. Then $\tau_{1,1}$ is necessarily of the form $g\,dm + \lambda$ where $g \geq 1$ and $\lambda$ is non-trivial and continuously singular. One may also assume that $\varphi_0\equiv 0$. The central issue then is to find the intermediate maps $\varphi_t$ for $0 < t < 1$. A seemingly natural choice might be $\varphi_t=(1-t)\varphi_0+t\varphi_1$, but this obviously fails to work, since $C_{\varphi_t}$ is then compact for every $t < 1$ while the compact operators form a closed set. On the other hand, in certain applications to spectral theory one proceeds by considering the maps corresponding to the Aleksandrov–Clark measures $\tau_{t,1}=(1-t)\tau_{0,1}+t\tau_{1,1}$. However, Theorem \[thm:MacExt\] suggests that this approach might not work either. Namely, in the case of a discrete singular part, Theorem \[thm:MacExt\] shows that if one makes a simultaneous change, no matter how small, to all the mass points of the singular part, then this induces a big difference in the corresponding composition operator. These considerations were behind our actual choice \[eq:tau1def\], where the singularity $\lambda$ is continuously “wiped off” in such a way that the change in $\tau_{t,1}$ is strictly local at every instant $t$.
Further remarks and open problems {#sec:Further}
=================================
After the work of Section \[sec:Main\] it is natural to search for a larger class of composition operators that could be continuously joined to the compacts. For instance, one might be tempted to expect a positive answer to the following question:
- Assume that $\varphi$ and $\alpha_0 \in {\mathbb{T}}$ are such that the measure $\tau_{\varphi,\alpha_0}$ has no atoms and, for all $\alpha \neq \alpha_0$, the measure $\tau_{\varphi,\alpha}$ is absolutely continuous. Does it follow that $C_\varphi$ belongs to ${{\rm Comp }}_K(\mathcal{H}^2)$?
The answer to this question is, however, negative.
\[ex:Isol\] There is a symbol $\psi$ such that $C_\psi$ is isolated in ${{\rm Comp }}(\mathcal{H}^2)$ and the following properties hold: $\tau_{\psi,1}$ has a continuous non-trivial singular part while all the other measures $\tau_{\psi,\alpha}$ are absolutely continuous. In fact, one may choose $\psi = \varphi \circ \sigma$, where $\sigma$ is an inner function and $\varphi$ is a conformal map from ${\mathbb{D}}$ onto a region $\Omega\subset{\mathbb{D}}$ with $\overline{\Omega} \cap {\mathbb{T}}= \{1\}$.
The above example is based on a construction of Shapiro and Sundberg [@ShSuIso]. We first recall some terminology. Shapiro and Sundberg call a continuous and $2\pi$-periodic function $\kappa\colon \mathbb{R} \to [0,1)$ a *contact function* if it is increasing and positive on $(0,\pi]$, decreasing and positive on $[-\pi,0)$ and vanishes at the origin. Such a function determines an approach region $$\Omega(\kappa)
= \{ re^{i\theta}: 0 \leq r < 1-\kappa(\theta) \},$$ whose boundary is a Jordan curve in $\overline{{\mathbb{D}}}$ that meets the unit circle only at the point $1$. In this setting Shapiro and Sundberg prove the following (see Theorem 4.1 and Remark 5.1 of [@ShSuIso]).
\[thm:SSIsol\] Suppose $\kappa$ is a $C^2$ contact function and $\varphi$ is a conformal map from ${\mathbb{D}}$ onto $\Omega(\kappa)$. If $$\label{eq:Extremal}
\int_0^\pi \log\kappa(\theta)\,d\theta = -\infty,$$ then $C_\varphi$ is (essentially) isolated in ${{\rm Comp }}(\mathcal{H}^2)$.
We observe that this theorem can be extended as follows.
\[prop:IsolExt\] Let $\varphi$ be a function given by Theorem \[thm:SSIsol\], and let $\sigma$ be an inner function with $\sigma(0) = 0$. Put $\psi = \varphi \circ \sigma$. Then $C_\psi$ is (essentially) isolated in ${{\rm Comp }}(\mathcal{H}^2)$.
Let us note that an analytic self-map of ${\mathbb{D}}$ is an inner function if and only if any (or all) of its Aleksandrov–Clark measures is singular. Therefore, to produce the symbol needed for Example \[ex:Isol\], we choose any inner function $\sigma$ vanishing at the origin whose Aleksandrov–Clark measure $\tau_{\sigma,1}$ is continuously singular. We then apply Proposition \[prop:IsolExt\] with the additional requirement that $\varphi(1) = 1$. It is relatively easy to check that $\psi = \varphi \circ \sigma$ has the required properties; in particular, $\tau_{\psi,1}$ cannot have atoms.
We start by recalling some ideas from the proof of Theorem \[thm:SSIsol\]. Write $\Omega = \Omega(\kappa)$ for the image of $\varphi$. A crucial part of Shapiro and Sundberg’s argument is the construction of a sequence of test functions $f_n \in \mathcal{H}^2$ which converges to zero weakly in $\mathcal{H}^2$. Their functions satisfy the following properties: $\lvert f_n\rvert^2 \geq c/m(J_n)$ on $\Gamma_n$, where $\Gamma_n \subset \partial\Omega$ are arcs converging to $1$ and $J_n = \varphi^{-1}(\Gamma_n)$; and $\lvert f_n\rvert \leq 1$ on ${\mathbb{D}}\setminus T_n$, where $T_n \subset {\mathbb{D}}$ is a set containing $\Gamma_n$ whose diameter is roughly twice the length of $\Gamma_n$. Now suppose that $\eta\colon {\mathbb{D}}\to {\mathbb{D}}$ is any analytic map different from $\varphi$. Shapiro and Sundberg consider the sets $E_n = \{ \zeta \in J_n: \lvert\varphi(\zeta)-\eta(\zeta)\rvert \geq c_n\}$ where $c_n$ is approximately twice the diameter of $T_n$. They observe that for $\zeta \in E_n$ one has $\varphi(\zeta) \in \Gamma_n$ and $\eta(\zeta) \in {\mathbb{D}}\setminus T_n$. Therefore $\lvert f_n\circ\varphi - f_n\circ\eta\rvert^2 \geq c/m(J_n)$ on $E_n$. Since $f_n \to 0$ weakly, this yields the estimate $$\lVert C_\varphi-C_\eta\rVert_e^2
\geq c \limsup_{n\to\infty} \frac{m(E_n)}{m(J_n)}.$$ Finally Shapiro and Sundberg show that $\limsup m(E_n)/m(J_n) = 1$, based simply on the fact that $\int_{\mathbb{T}}\log{\lvert\varphi-\eta\rvert} \,dm > -\infty$.
Our argument is just a minor adaptation of the one explained above. Suppose that $\eta\colon {\mathbb{D}}\to {\mathbb{D}}$ is an analytic map different from $\psi$, and put $J_n' = \psi^{-1}(\Gamma_n)$ and $E_n' = \{ \zeta \in J_n' : \lvert\psi(\zeta)-\eta(\zeta)\rvert \geq c_n\}$. Then $J_n' = \sigma^{-1}(J_n)$, and since $\sigma$ is an inner function fixing the origin, we have $m(J_n') = m(J_n)$. Thus, using the test functions $f_n$ as before, we arrive at the estimate $$\lVert C_\psi-C_\eta\rVert_e^2
\geq c \limsup_{n\to\infty} \frac{m(E_n')}{m(J_n')}.$$ The proof is now completed by using the same argument as Shapiro and Sundberg to show that the limit superior here equals $1$.
Given the above example, it seems appropriate to close this section with the following general open problem.
Determine all the non-compact composition operators in ${{\rm Comp }}_K(\mathcal{H}^2)$.
This problem might be quite hard. As a first step one could try to describe interesting subsets of ${{\rm Comp }}_K(\mathcal{H}^2)$ that are larger than those provided by obvious modifications of our construction presented in Section \[sec:Main\]. For instance, it would be instructive to know whether the extremality condition \[eq:Extremal\] that was essential for the example provided by Proposition \[prop:IsolExt\] can be relaxed.
[99]{}
R. Aron, P. Galindo and M. Lindström, *Connected components in the space of composition operators in $H^\infty$ functions of many variables*, Integral Equations Operator Theory **45** (2003), 1–14.
C. Bennett and R. Sharpley, *Interpolation of Operators*, Academic Press, Boston, 1988.
E. Berkson, *Composition operators isolated in the uniform operator norm*, Proc. Amer. Math. Soc. [**81**]{} (1981), 230–232.
P. S. Bourdon, *Components of linear-fractional composition operators*, J. Math. Anal. Appl. [**279**]{} (2003), 228–245.
J. A. Cima and A. L. Matheson, *Essential norms of composition operators and Aleksandrov measures*, Pacific J. Math. [**179**]{} (1997), 59–64.
J. A. Cima, A. L. Matheson and W. T. Ross, *The Cauchy Transform*, Amer. Math. Soc., Providence, 2006.
C. C. Cowen and B. D. MacCluer, *Composition Operators on Spaces of Analytic Functions*, CRC Press, Boca Raton, 1995.
P. L. Duren, *Theory of $H^p$ spaces*, Academic Press, New York, 1970; reprinted by Dover, New York, 2000.
J. B. Garnett, *Bounded Analytic Functions*, Academic Press, New York, 1981; revised ed. by Springer, New York, 2007.
C. Hammond and B. D. MacCluer, *Isolation and component structure in spaces of composition operators*, Integral Equations Operator Theory **53** (2005), 269–285.
T. Hosokawa and S. Ohno, *Topological structures of the sets of composition operators on the Bloch spaces*, J. Math. Anal. Appl. **314** (2006), 736–748.
T. Kriete and J. Moorhouse, *Linear relations in the Calkin algebra for composition operators*, Trans. Amer. Math. Soc. (2007), 2915–2944.
B. D. MacCluer, *Components in the space of composition operators*, Integral Equations Operator Theory [**12**]{} (1989), 725–738.
B. MacCluer, S. Ohno and R. Zhao, *Topological structure of the space of composition operators on $H^{\infty}$*, Integral Equations Operator Theory [**40**]{} (2001), 481–494.
B. D. MacCluer and J. H. Shapiro, *Angular derivatives and compact composition operators on the Hardy and Bergman spaces*, Canad. J. Math. **38** (1986) 878–906.
A. Matheson and M. Stessin, *Applications of spectral measures*, Recent Advances in Operator-Related Function Theory, Contemp. Math. **393** (2006), 15–27.
J. Moorhouse, *Compact differences of composition operators*, J. Funct. Anal. [**219**]{} (2005), 70–92.
J. Moorhouse and C. Toews, *Differences of composition operators*, Trends in Banach spaces and operator theory (Memphis, TN, 2001), Contemp. Math. [**321**]{} (2003), 207–213.
P. J. Nieminen and E. Saksman, *On compactness of the difference of composition operators*, J. Math. Anal. Appl. (2004), 501–522.
A. Poltoratski and D. Sarason, *Aleksandrov–Clark measures*, Recent Advances in Operator-Related Function Theory, Contemp. Math. **393** (2006), 1–14.
E. Saksman, *An elementary introduction to Clark measures*, Topics in Complex Analysis and Operator Theory, Univ. Málaga, 2007, pp. 85–136.
D. Sarason, *Composition operators as integral operators*, Analysis and Partial Differential Equations, Lecture Notes in Pure and Appl. Math., vol. 122, Dekker, New York, 1990, pp. 545–565.
J. E. Shapiro, *Aleksandrov measures used in essential norm inequalities for composition operators*, J. Operator Theory **40** (1998), 133–146.
J. H. Shapiro, *The essential norm of a composition operator*, Ann. of Math. [**125**]{} (1987), 375–404.
J. H. Shapiro, *Composition Operators and Classical Function Theory*, Springer, New York, 1993.
J. H. Shapiro and C. Sundberg, *Compact composition operators on $L^1$*, Proc. Amer. Math. Soc. **108** (1990), 443–449.
J. H. Shapiro and C. Sundberg, *Isolation amongst the composition operators*, Pacific J. Math. [**145**]{} (1990), 117–151.
[^1]: First author was partially supported by Plan Nacional I+D grant no. MTM2006-06431 and Gobierno de Aragón research group *Análisis Matemático y Aplicaciones*, ref. DGA E-64. Second author was partially supported by Plan Nacional I+D grant no. MTM2005-00544 and 2005SGR00774. Third author was supported by the Finnish Graduate School in Mathematical Analysis and Its Applications, and the Academy of Finland, project no. 118422. Fourth author was supported by the Academy of Finland, projects no. 113826 and 118765.
---
author:
- |
J. Klusoň\
Department of Theoretical Physics and Astrophysics\
Faculty of Science, Masaryk University\
Kotlářská 2, 611 37, Brno\
Czech Republic\
E-mail:
title: 'Note About Hamiltonian Formalism of Modified $F(R)$ Hořava-Lifshitz Gravities and Their Healthy Extension'
---
Introduction and Summary {#first}
========================
Last year Petr Hořava proposed a new and intriguing approach to the formulation of a UV-finite quantum theory of gravity [@Horava:2009uw; @Horava:2008ih; @Horava:2008jf]. The basic idea of this theory is to modify the UV behavior of general relativity so that the theory is perturbatively renormalizable. However, this modification is only possible if we abandon Lorentz symmetry in the high energy regime: in this context, the Lorentz symmetry is regarded as an approximate symmetry observed only at low energy.
In [@Kluson:2009rk; @Kluson:2009xx] we introduced a version of Hořava-Lifshitz gravity that is related to $F(R)$ theories [^1]. This approach was further developed in the very interesting paper [@Chaichian:2010yi]. It was argued there that such a form of gravity could provide a unification of early-time inflation with late-time acceleration. Moreover, a preliminary analysis of cosmological solutions with some promising properties was given there as well.
The goal of this short note is to find the Hamiltonian formulation of modified $F(R)$ Hořava-Lifshitz theory. In fact, the Hamiltonian analysis of the given theory was already performed in [@Chaichian:2010yi], but we feel that it deserves to be investigated further. Following [@Deruelle:2009pu] we formulate the Hamiltonian formalism for modified $F(R)$ Hořava-Lifshitz gravity, and we show that the algebra of constraints is closed for a theory that obeys the projectability condition, which demands that the lapse function depend on time only, $N=N(t)$.
As a counterexample to the standard form of well-defined Hamiltonian dynamics of modified $F(R)$ theories of gravity that obey the projectability condition, we discuss the Hamiltonian analysis of the healthy extended Hořava-Lifshitz gravity that was proposed in [@Blas:2009qj; @Blas:2009ck]. Explicitly, since the momentum conjugate to the lapse is a primary constraint of the theory, we find that the preservation of this constraint during the time evolution of the system induces a secondary constraint that has a non-zero Poisson bracket with the primary constraint $p_N
\approx 0$. In other words, together they form a collection of second-class constraints. It is instructive to compare this result with the conclusions presented in [@Henneaux:2009zb]. It was shown there that Hořava-Lifshitz gravity without the projectability condition has the very peculiar property that the Hamiltonian constraints are second-class constraints and that the gravitational Hamiltonian vanishes strongly. However, in the case of the healthy extended Hořava-Lifshitz gravities we find a new and surprising resolution. Explicitly, since $p_N$ and the corresponding secondary constraint are second-class constraints, they can be explicitly solved. Then we can express $N$ as a function of the canonical variables, at least in principle. Further, the reduced phase space of the healthy extended Hořava-Lifshitz theory is spanned by $g_{ij},p^{ij}$, and there is no gauge freedom related to the time reparameterization of the theory since there is no first-class Hamiltonian constraint. Interestingly, this result naturally solves the problem of the closure of the algebra of the Hamiltonian constraints in Hořava-Lifshitz gravity. Moreover, one can hope that healthy extended Hořava-Lifshitz gravities can provide a solution of the problem of time in gravity [^2]. We hope to return to this interesting problem in the future.
The structure of this note is as follows. In the next section (\[second\]) we perform the Hamiltonian analysis of modified $F(R)$ Hořava-Lifshitz theory of gravity. In section (\[third\]) we perform the Hamiltonian analysis of healthy extended Hořava-Lifshitz gravities and discuss their properties.
Hamiltonian Formulation of Modified $F(R)$ Hořava-Lifshitz gravity {#second}
==================================================================
Let us consider a $(D+1)$-dimensional manifold $\mathcal{M}$ with coordinates $x^\mu \ , \mu=0,\dots,D$, where $x^\mu=(t,\bx) \ ,
\bx=(x^1,\dots,x^D)$. We presume that this space-time is endowed with the metric $\hat{g}_{\mu\nu}(x^\rho)$ with signature $(-,+,\dots,+)$. Suppose that $\mathcal{M}$ can be foliated by a family of space-like surfaces $\Sigma_t$ defined by $t=x^0$. Let $g_{ij}, i,j=1,\dots,D$, denote the metric on $\Sigma_t$ with inverse $g^{ij}$, so that $g_{ij}g^{jk}=
\delta_i^k$. We further introduce the operator $\nabla_i$, the covariant derivative defined with the metric $g_{ij}$. We introduce the future-pointing unit normal vector $n^\mu$ to the surface $\Sigma_t$. In ADM variables we have $n^0=\sqrt{-\hat{g}^{00}}$, $n^i=-\hat{g}^{0i}/\sqrt{-\hat{g}^{00}}$. We also define the lapse function $N=1/\sqrt{-\hat{g}^{00}}$ and the shift function $N^i=-\hat{g}^{0i}/\hat{g}^{00}$. In terms of these variables we write the components of the metric $\hat{g}_{\mu\nu}$ as $$\begin{aligned}
\hat{g}_{00}=-N^2+N_i g^{ij}N_j \ ,
\quad \hat{g}_{0i}=N_i \ , \quad
\hat{g}_{ij}=g_{ij} \ ,
\nonumber \\
\hat{g}^{00}=-\frac{1}{N^2} \ , \quad
\hat{g}^{0i}=\frac{N^i}{N^2} \ , \quad
\hat{g}^{ij}=g^{ij}-\frac{N^i N^j}{N^2}
\ .
\nonumber \\\end{aligned}$$ Then it is easy to see that $$\sqrt{-\det \hat{g}}=N\sqrt{\det g} \ .$$ We further define the extrinsic curvature $$K_{ij}=\frac{1}{2N}
(\partial_t g_{ij}-\nabla_i N_j-
\nabla_j N_i) \ .$$ It is well known that the components of the Riemann tensor can be written in terms of ADM variables [^3]. For example, in the case of the scalar curvature we have $$\label{R}
R=K^{ij}K_{ij}-K^2+R^{(D)}+\frac{2}{\sqrt{-\hat{g}}}
\partial_\mu(\sqrt{-\hat{g}}n^\mu K)-
\frac{2}{\sqrt{g}N}\partial_i
(\sqrt{g}g^{ij}\partial_j N) \ ,$$ where $K=K_{ij}g^{ji}$ and where $R^{(D)}$ is the scalar curvature calculated using the metric $g_{ij}$. The new formulation of Hořava-Lifshitz $F(R)$ gravity given in [@Chaichian:2010yi] is based on a modification of the relation (\[R\]). In fact, the action introduced there takes the form $$\label{actionNOJI}
S_{F(\tilde{R})}= \int dt d^D\bx
\sqrt{g}N F (\tilde{R}) \ ,$$ where $$\tilde{R}= K_{ij}{\mathcal{G}}^{ijkl}K_{kl}+
\frac{2\mu}{\sqrt{-\hat{g}}}
\partial_\mu (\sqrt{-\hat{g}}n^\mu K)
-\frac{2\mu}{\sqrt{g}N}
\partial_i (\sqrt{g}g^{ij}\partial_j N)
-E^{ij}{\mathcal{G}}_{ijkl}E^{kl} \ ,$$ where $\mu$ is a constant and where the generalized metric ${\mathcal{G}}^{ijkl}$ is defined as $${\mathcal{G}}^{ijkl}=\frac{1}{2}(g^{ik}g^{jl}+
g^{il}g^{jk})-\lambda g^{ij}g^{kl} \ ,$$ where $\lambda$ is a real constant. The $E^{ij}$ are defined using the variation of a $D$-dimensional action $W(g_{kl})$, $$\sqrt{g}E^{ij}=\frac{\delta W}{\delta
g_{ij}} \ .$$ These objects were introduced in the original work [@Horava:2009uw]. However, we can also consider a theory in which $E_{ij}{\mathcal{G}}^{ijkl}E_{kl}$ is replaced by more general terms that depend on $g_{ij}$ and its covariant derivatives. Further, the action (\[actionNOJI\]) is invariant under foliation-preserving diffeomorphisms $$t'-t=f(t) \ , \quad
x'^i-x^i=\xi^i(t,\bx) \ .$$
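The ADM relations above, and the trace identity $g_{ij}{\mathcal{G}}^{ijkl}=(1-\lambda D)g^{kl}$ that is used later in the Hamiltonian analysis, are easy to verify numerically. A NumPy sketch with random data in $D=3$ (a sanity check only, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(4)
D, lam, N = 3, 0.7, 1.3

A = rng.standard_normal((D, D))
g = A @ A.T + D * np.eye(D)            # random positive-definite spatial metric
g_inv = np.linalg.inv(g)
Ni = rng.standard_normal(D)            # shift vector N^i
N_i = g @ Ni                           # lowered index: N_i = g_ij N^j

# ADM block form of the (D+1)-dimensional metric and its stated inverse
ghat = np.block([[np.array([[-N**2 + Ni @ g @ Ni]]), N_i[None, :]],
                 [N_i[:, None], g]])
ghat_inv = np.block([[np.array([[-1 / N**2]]), Ni[None, :] / N**2],
                     [Ni[:, None] / N**2, g_inv - np.outer(Ni, Ni) / N**2]])

assert np.allclose(ghat @ ghat_inv, np.eye(D + 1))
assert np.isclose(np.sqrt(-np.linalg.det(ghat)), N * np.sqrt(np.linalg.det(g)))

# generalized De Witt metric and the trace identity g_ij G^{ijkl} = (1 - lam*D) g^{kl}
G = (0.5 * (np.einsum('ik,jl->ijkl', g_inv, g_inv)
            + np.einsum('il,jk->ijkl', g_inv, g_inv))
     - lam * np.einsum('ij,kl->ijkl', g_inv, g_inv))
assert np.allclose(np.einsum('ij,ijkl->kl', g, G), (1 - lam * D) * g_inv)
```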
Our goal is to perform the detailed Hamiltonian analysis of the theory defined by the action (\[actionNOJI\]). In order to do this we introduce two non-dynamical fields $A,B$ and rewrite the action (\[actionNOJI\]) into the form $$S_{F(\tilde{R})}= \int dt d^D\bx
\sqrt{g}N (B(\tilde{R}-A)+F(A)) \ .$$ It is easy to see that, upon solving the equations of motion for $A$ and $B$, this action reduces to (\[actionNOJI\]). On the other hand, integrating by parts we obtain the action in the form $$\begin{aligned}
\label{SFtR}
S_{F(\tilde{R})}
=\int dt d^D\bx \left( \sqrt{g}N B(
K_{ij}{\mathcal{G}}^{ijkl}K_{kl}
-E^{ij}{\mathcal{G}}_{ijkl}E^{kl}-A)+\nonumber
\right.
\\
\left. +\sqrt{g}N F(A) -2\mu
\sqrt{g}(\partial_t B-N^i\partial_i B)
K + 2\mu
\partial_i B \sqrt{g}g^{ij}
\partial_j N \right) \ , \nonumber \\\end{aligned}$$ where we ignored the boundary terms. From this form of the action we clearly see that $B$ is now a dynamical field. In fact, from the action (\[SFtR\]) we find the conjugate momenta $$\begin{aligned}
\label{defmom}
p_N&=&\frac{\delta
S_{F(\tilde{R})}}{\delta
\partt N}\approx 0 \ , \quad
p_i=\frac{\delta
S_{F(\tilde{R})}}{\delta
\partt N^i} \approx 0 \ , \quad
p_A=\frac{\delta
S_{F(\tilde{R})}}{\delta
\partt A}\approx 0 \ , \nonumber \\
p^{ij}&=&\frac{\delta
S_{F(\tilde{R})}}{\delta
\partt
g_{ij}}=\sqrt{g}\left(B{\mathcal{G}}^{ijkl}K_{kl}-
\frac{2\mu g^{ij}}{N}(\partt B-N^i
\partial_i B)\right) \ ,
\nonumber \\
\pi&=&\frac{\delta
S_{F(\tilde{R})}}{\delta \partt B}=
-2\mu\sqrt{g}K \ . \nonumber \\\end{aligned}$$ The first line in (\[defmom\]) implies that $p_N,p_i$ are primary constraints of the theory. On the other hand, the relations on the second and third lines in (\[defmom\]) can be inverted, so that $$\begin{aligned}
& &(\partt B-N^i\partial_i B) =
-\frac{N}{2\mu D\sqrt{g}}
\left(\frac{1}{2\mu}B(1-\lambda
D)\pi+p^{ij}g_{ji}\right) \ , \nonumber \\
& &K_{ij}=\frac{1}{B\sqrt{g}}
{\mathcal{G}}_{ijkl}\left( p^{kl}-\frac{1}{D}
g^{kl}\left(\frac{1}{2\mu}B(1-\lambda
D)\pi+ p^{mn}g_{nm}\right)\right) \ ,
\nonumber \\\end{aligned}$$ where we used the fact that $$g_{ij}{\mathcal{G}}^{ijkl}=(1-\lambda D)g^{kl} \
.$$ Using these results it is a straightforward exercise to find the corresponding Hamiltonian $$H=\int d^D\bx (N \mH_T+N^i\mH_i+v^A
p_A+v^N p_N+v^ip_i ) \ ,$$ where $$\begin{aligned}
\mH_T&=& \frac{1}{B\sqrt{g}}p^{ij}
{\mathcal{G}}_{ijkl}p^{kl}
-\frac{1}{D\mu\sqrt{g}} (1-\lambda D)^2 \pi
p^{ij}g_{ji}-
\frac{1}{B D\sqrt{g}}(1-\lambda D)
(p^{ij}g_{ji})^2+
\nonumber \\
&+&\frac{1}{\sqrt{g}}\frac{(1-\lambda
D)^2 B}{4 D\mu^2}
((1-\lambda D)^2-2)\pi^2 +\nonumber \\
&+&\sqrt{g}B(E^{ij}{\mathcal{G}}_{ijkl}E^{kl}
+A)-\sqrt{g} F(A)+ 2\mu \partial_i[
\partial_j B \sqrt{g}
g^{ij}] \ , \nonumber \\
\mH_i&=& -2 g_{ik}\nabla_j p^{kj} +
\pi\partial_i B \ , \nonumber \\\end{aligned}$$ and where we included the primary constraints $p_N\approx 0 \ ,
p_i\approx 0 \ , p_A\approx 0$. Note that, in contrast to the Hamiltonian analysis presented in [@Chaichian:2010yi], we find that $B$ is a dynamical field. Further, the consistency of the primary constraints with the time evolution of the system implies the following secondary constraints $$\begin{aligned}
\partial_t p_N(\bx)&=&
\pb{p_N(\bx),H}=-\mH_T(\bx)\approx 0 \
,
\nonumber \\
\partial_t p_i(\bx)&=&\pb{
p_i(\bx),H}=-\mH_i(\bx)\approx 0 \ ,
\nonumber \\
\partial_t p_A(\bx)&=&
\pb{p_A(\bx),H}=-\sqrt{g}N(B-F'(A))(\bx)\equiv
-\sqrt{g}NG_A(\bx)\approx 0 \ . \nonumber
\\\end{aligned}$$ Since $\pb{p_A(\bx),G_A(\by)}=
F''(A)\delta(\bx-\by)$ we see that $(p_A,G_A)$ are second-class constraints and hence can be explicitly solved. Solving the first one we set $p_A$ strongly to zero, while solving the second one we find $F'(A)=B$. If we presume that $F'$ is invertible we can express $A$ as a function of $B$, so that $A=\Psi(B)$ for some function $\Psi$. Finally, since $\pb{p^{ij},p_A}=\pb{g_{ij},p_A}=0$, we see that the Dirac brackets between the canonical variables coincide with the Poisson brackets.
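The elimination of the second-class pair $(p_A,G_A)$ can be illustrated with a toy choice of $F$; the invertible example $F(A)=A^2$ below is our own, not one singled out by the text:

```python
import sympy as sp

# Toy illustration of eliminating the second-class pair (p_A, G_A):
# the constraint G_A = B - F'(A) = 0 is solved for A = Psi(B), assuming
# the invertible example F(A) = A**2 (our choice, not from the paper).
A, B = sp.symbols('A B', positive=True)
F = A**2

G_A = B - sp.diff(F, A)            # secondary constraint G_A = B - F'(A)
Psi = sp.solve(G_A, A)[0]          # A = Psi(B) = B/2
assert Psi == B / 2

# The pair is second class because {p_A, G_A} is proportional to F''(A),
# which must be nonzero for the constraint to be solvable for A:
assert sp.diff(F, A, 2) != 0
```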
Let us consider the smeared form of the spatial diffeomorphism generator $${\mathbf{T}}_S=\int d^D\bx \xi^i \mH_i \ .$$ It is easy to see that this generates the spatial diffeomorphism since $$\begin{aligned}
\pb{{\mathbf{T}}_S,B(\bx)}&=&-\xi^i(\bx)\partial_i B(\bx) \ , \nonumber \\
\pb{{\mathbf{T}}_S,\pi(\bx)}&=&-\xi^i(\bx) \partial_i \pi(\bx)-
\partial_i \xi^i (\bx)\pi(\bx) \ , \nonumber \\
\pb{{\mathbf{T}}_S,g_{ij}(\bx)}&=&
-\xi^k(\bx)\partial_k
g_{ij}(\bx)-g_{jk}(\bx)
\partial_i \xi^k(\bx)
-g_{ik}(\bx)\partial_j\xi^k(\bx) \ ,
\nonumber \\
\pb{{\mathbf{T}}_S,p^{ij}(\bx)}&=& -\partial_k
p^{ij}(\bx) \xi^k(\bx)-p^{ij}(\bx)\partial_k
\xi^k(\bx)+p^{jk}(\bx)
\partial_k \xi^i(\bx)+p^{ik}(\bx)
\partial_k \xi^j(\bx) \ . \nonumber \\\end{aligned}$$ Using the Poisson bracket between ${\mathbf{T}}_S$ and $B$ we find $$\begin{aligned}
\pb{{\mathbf{T}}_S,A(B(\bx))}&=&\frac{\delta A(\bx)}{\delta B(\bx)}
\pb{{\mathbf{T}}_S,B(\bx)}=\nonumber \\
&=&
-\frac{\delta A(\bx)}{\delta B(\bx)}\xi^k(\bx)
\partial_k B(\bx)=
-\xi^k(\bx) \partial_k A(\bx) \ . \nonumber \\
\end{aligned}$$ Then we find the following Poisson bracket $$\pb{{\mathbf{T}}_S,\mH_T(\bx)}=
-\xi^k(\bx)\partial_k \mH_T(\bx)-
\mH_T(\bx)\partial_k \xi^k(\bx)$$ that implies $$\label{STpb}
\pb{{\mathbf{T}}_S(\xi),{\mathbf{T}}_T(f)}= \int d^D\bx
(\partial_k f \xi^k)\mH_T=
{\mathbf{T}}_T(\partial_k f \xi^k ) \ .$$ Note that the right-hand side of the expression above vanishes for constant $f$.
Finally we calculate the Poisson bracket of ${\mathbf{T}}_T(f),{\mathbf{T}}_T(g)$. Clearly the calculation of the Poisson bracket $\pb{\mH_T(\bx),\mH_T(\by)}$ will be as intricate as the calculation of the Poisson bracket between these constraints in standard Hořava-Lifshitz gravity. The structure of these brackets was analyzed in [@Li:2009bg; @Henneaux:2009zb], with the conclusion that the $\mH_T$ are second-class constraints, which leaves the physical meaning of this theory unclear. On the other hand, it is possible to find a consistent physical theory (at least at the classical level) when we impose the projectability condition, which demands $N=N(t)$. Then the local primary constraint $p_N(\bx)\approx 0$ is replaced with the global one $p_N\approx 0$, and its preservation during the time evolution of the system implies the global constraint [^4] $${\mathbf{T}}=\int d^D\bx \mH_T(\bx)\approx 0 \ .$$ Then we find that the Hamiltonian is a linear combination of the first-class constraints $$H=v^N p_N+v^i p_i + N {\mathbf{T}}+{\mathbf{T}}_S(N^i) \
.$$ Finally using the fact that $$\pb{{\mathbf{T}}_S(\xi),{\mathbf{T}}_S(\eta)}=
{\mathbf{T}}_S(\xi^i\partial_i\eta^k -\eta^i
\partial_i \xi^k)$$ and also equation (\[STpb\]) with the condition $f=1$ imposed, we find that the constraints $ {\mathbf{T}}\approx 0 \
, {\mathbf{T}}_S(\xi)\approx 0$ are consistent with the time evolution of the system since $$\begin{aligned}
\partial_t {\mathbf{T}}&=&\pb{{\mathbf{T}},H}\approx 0 \ , \nonumber \\
\partial_t {\mathbf{T}}_S(\xi)&=&\pb{{\mathbf{T}}_S(\xi),H}
\approx 0 \ . \nonumber \\\end{aligned}$$ Let us summarize our results. We derived the Hamiltonian formulation of modified $F(R)$ Hořava-Lifshitz gravity and argued that it is a consistent theory when the projectability condition is imposed. Observe that the requirement of consistency of the constraints with the time evolution implies secondary constraints only, which differs from the analysis presented in [@Chaichian:2010yi]. There it was argued that the consistency of the constraints with the time evolution of the system could lead to the generation of tertiary constraints, or constraints of even higher order, until the closure of the constraint algebra is established.
Hamiltonian Dynamics of Healthy Extended Hořava-Lifshitz Gravity {#third}
=================================================================
The healthy extension of Hořava-Lifshitz theory was proposed in [@Blas:2009qj] in order to improve some pathological properties of Hořava-Lifshitz gravity without the projectability condition. Explicitly, the healthy extended Hořava-Lifshitz gravity is the version of Hořava-Lifshitz theory without projectability and without the detailed balance condition imposed, which contains an additional vector $a_i$ constructed from the lapse function $N(t,\bx)$ as $$a_i=\frac{\partial_i N}{N} \ .$$ Note that under foliation-preserving diffeomorphisms, where $N'(t',\bx')=
N(t,\bx)(1-\dot{f}(t))$ we find that $a_i$ transforms as $$a'_i(t',\bx')=a_i(t,\bx)-a_j(t,\bx)
\partial_i \xi^j(t,\bx) \ .$$ Let us now consider the healthy extension of modified $F(R)$ Hořava-Lifshitz theory of gravity defined by the action $$\begin{aligned}
S=\int dt d^D\bx \sqrt{g}N (B(\tilde{R}
-V(g_{ij},a_i)-A)+F(A)) \ , \end{aligned}$$ where $V(g,a)$ is an additional potential term that depends on $a_i$ and on $g_{ij}$. Performing the same analysis as in the previous section we find the Hamiltonian in the form $$\begin{aligned}
H&=&\int d^D\bx
\left(N(\mH_T+B\sqrt{g}V)+N^i\mH_i
+\right. \nonumber \\
&+&v^i p_i+v^N p_N+v^A p_A) \ , \nonumber \\
\nonumber \\\end{aligned}$$ where $\mH_T$ and $\mH_i$ are the same as in the case of modified $F(R)$ Hořava-Lifshitz gravity. The crucial point of the Hamiltonian analysis of the healthy extended Hořava-Lifshitz gravity is that the condition of preservation of the primary constraint $p_N\approx 0$ implies the following secondary one $$\begin{aligned}
\partial_t p_N(\bx)&=&
\pb{p_N(\bx),H}=
-(\mH_T(\bx)+B\sqrt{g}V(\bx))+\nonumber \\
&+&
\frac{1}{N}\partial_i\left(NB\sqrt{g}\frac{\delta
V}{\delta a_i}\right)(\bx)\equiv
-\tilde{\mH}_T(\bx)\approx 0
\nonumber \\\end{aligned}$$ using $$\begin{aligned}
\pb{p_N(\bx),\int d^D\by N B
\sqrt{g}V(g,a)}= -B\sqrt{g}V(\bx)
+\frac{1}{N}
\partial_i \left(NB\sqrt{g}\frac{\delta V}{\delta a_i}
\right)(\bx)
\nonumber \\\end{aligned}$$ The general analysis of constrained systems implies that the total Hamiltonian is the sum of the original Hamiltonian and all constraints, so that the Hamiltonian takes the form $$\label{Hamhealt}
H=\int d^D\bx(
N(\mH_T+\sqrt{g}BV)+N^i\mH_i
+v_T\tilde{\mH}_T +v^N p_N+v^ip_i) \ ,$$ where $v_T$ is the Lagrange multiplier related to the new constraint $\tilde{\mH}_T$. Observe that, in contrast to the case of canonical gravity or standard Hořava-Lifshitz theory, $N$ does not appear as a Lagrange multiplier in the Hamiltonian (\[Hamhealt\]). This is the first indication of the slightly unusual behavior of this theory. In order to investigate the properties of the given theory further we introduce the smeared form of the Hamiltonian constraint ${\mathbf{T}}_T(f)=\int d^D\bx
f(\bx)\mH_T(\bx)$. Then we find $$\begin{aligned}
\pb{p_N,{\mathbf{T}}_T(f)}&=&
\frac{1}{N}f\partial_i \left(B\sqrt{g}
\frac{\delta V}{\delta a_i}\right)
+\nonumber \\
&+&\partial_i\left(\frac{f}{N}\right)\frac{\partial_j N}{N}
B\sqrt{g}\frac{\delta^2 V}{\delta
a_i \delta a_j}+
\partial_j\left(\partial_i
\left(\frac{f}{N}\right)B\sqrt{g}\frac{\delta^2
V}{\delta a_i \delta a_j}\right) \ .
\nonumber \\\end{aligned}$$ Since the Hamiltonian can be written as $$\label{Hheathy}
H=\int d^D\bx (
N(\mH_T+\sqrt{g}BV)+v^Np_N+v^ip_i)+
{\mathbf{T}}_T(v_T)+{\mathbf{T}}_S(N^i)$$ we find that the time derivative of $p_N$ is equal to $$\begin{aligned}
\partt p_N&=&
\pb{p_N,H}\approx
\frac{1}{N}v_T\partial_i \left(B\sqrt{g}
\frac{\delta V}{\delta a_i}\right)
+\nonumber \\
&+&\partial_i\left(\frac{v_T}{N}\right)\frac{\partial_j N}{N}
B\sqrt{g}\frac{\delta^2 V}{\delta
a_i \delta a_j}+
\partial_j\left(\partial_i
\left(\frac{v_T}{N}\right)B\sqrt{g}\frac{\delta^2
V}{\delta a_i \delta a_j}\right) \ .
\nonumber \\
\nonumber\\\end{aligned}$$ In principle this equation can be solved for $v_T$, so that it is determined by the dynamical variables. In other words, $p_N$ and $\tilde{\mH}_T$ form a pair of second-class constraints and consequently there is no gauge freedom related to the constraint $\tilde{\mH}_T$. However, this fact has very interesting consequences for the structure of the theory [^5]. Explicitly, since $p_N(\bx),\tilde{\mH}_T(\bx)$ are second-class constraints they can be explicitly solved. The solution of the first one is $p_N(\bx)=0$ strongly. On the other hand, we suggest that the constraint $\tilde{\mH}_T(\bx)=0$ can be solved for $a_i=\frac{\partial_i N}{N}$, and hence $N$ can be expressed as a function of the dynamical variables $g_{ij},p^{ij}$ $$\label{Ncan}
N=\Phi(g_{ij},p^{ij}) \ .$$ Further, since the Poisson brackets between $g_{ij},p^{ij}$ and $p_N$ vanish, we find that the Dirac brackets between the canonical variables $g_{ij},p^{ij}$ that span the reduced phase space of the theory coincide with the Poisson brackets. Finally, using (\[Ncan\]) in (\[Hheathy\]) we find that the Hamiltonian on the reduced phase space takes the form $$H=\int d^D\bx (
\Phi(\mH_T+\sqrt{g}BV(\Phi))+v^ip_i)
+{\mathbf{T}}_S(N^i) \ .$$ We see that this Hamiltonian contains the generator of spatial diffeomorphisms, which is a first-class constraint. The presence of this constraint is a consequence of the fact that the theory is invariant under spatial diffeomorphisms. Observe that there is no gauge freedom related to time reparameterization. This result suggests that, even if the structure of the healthy extended Hořava-Lifshitz gravity is completely different from general relativity, it has the potential to solve the long-standing problem of time in general relativity [^6]. It would be very interesting to study this theory further for some examples of the potential $V$ that allow one to find $N$ as a function of the canonical variables and hence to find the Hamiltonian on the reduced phase space. We hope to return to this problem in the future.
[**Acknowledgements:**]{} I would like to thank Diego Blas, Oriol Pujolas and Sergey Sibiryakov for comments concerning the first version of my paper and for suggesting the correct interpretation of the results derived here. This work was supported by the Czech Ministry of Education under Contract No. MSM 0021622409.
[20]{}
P. Horava, *“Quantum Gravity at a Lifshitz Point,”* Phys. Rev. D [**79**]{} (2009) 084008 \[arXiv:0901.3775 \[hep-th\]\].
P. Horava, *“Membranes at Quantum Criticality,”* JHEP [**0903**]{} (2009) 020 \[arXiv:0812.4287 \[hep-th\]\]. P. Horava, *“Quantum Criticality and Yang-Mills Gauge Theory,”* arXiv:0811.2217 \[hep-th\].
J. Kluson, *“Horava-Lifshitz f(R) Gravity,”* arXiv:0907.3566 \[hep-th\].
D. Blas, O. Pujolas and S. Sibiryakov, *“A healthy extension of Horava gravity,”* arXiv:0909.3525 \[hep-th\].
D. Blas, O. Pujolas and S. Sibiryakov, *“Comment on ‘Strong coupling in extended Horava-Lifshitz gravity’,”* arXiv:0912.0550 \[hep-th\].
M. Li and Y. Pang, *“A Trouble with Hořava-Lifshitz Gravity,”* JHEP [**0908**]{} (2009) 015 \[arXiv:0905.2751 \[hep-th\]\].
M. Chaichian, S. Nojiri, S. D. Odintsov, M. Oksanen and A. Tureanu, *“Modified F(R) Horava-Lifshitz gravity: a way to accelerating FRW cosmology,”* arXiv:1001.4102 \[hep-th\].
M. Henneaux, A. Kleinschmidt and G. L. Gomez, *“A dynamical inconsistency of Horava gravity,”* arXiv:0912.0399 \[hep-th\].
A. Papazoglou and T. P. Sotiriou, *“Strong coupling in extended Horava-Lifshitz gravity,”* arXiv:0911.1299 \[hep-th\]. J. Kluson, *“New Models of f(R) Theories of Gravity,”* arXiv:0910.5852 \[hep-th\].
E. Gourgoulhon, *“3+1 Formalism and Bases of Numerical Relativity,”* arXiv:gr-qc/0703035.
S. Capozziello, M. De Laurentis and V. Faraoni, *“A bird’s eye view of f(R)-gravity,”* arXiv:0909.4672 \[gr-qc\]. T. P. Sotiriou and V. Faraoni, *“f(R) Theories Of Gravity,”* arXiv:0805.1726 \[gr-qc\]. S. Nojiri and S. D. Odintsov, *“Dark energy, inflation and dark matter from modified F(R) gravity,”* arXiv:0807.0685 \[hep-th\]. V. Faraoni, *“f(R) gravity: successes and challenges,”* arXiv:0810.2602 \[gr-qc\].
S. Nojiri and S. D. Odintsov, *“Introduction to modified gravity and gravitational alternative for dark energy,”* eConf [**C0602061**]{} (2006) 06 \[Int. J. Geom. Meth. Mod. Phys. [**4**]{} (2007) 115\] \[arXiv:hep-th/0601213\]. N. Deruelle, Y. Sendouda and A. Youssef, *“Various Hamiltonian formulations of f(R) gravity and their canonical relationships,”* Phys. Rev. D [**80**]{} (2009) 084032 \[arXiv:0906.4983 \[gr-qc\]\].
C. J. Isham, *“Canonical quantum gravity and the problem of time,”* arXiv:gr-qc/9210011.
[^1]: For review and extensive list of references, see [@Capozziello:2009nq; @Sotiriou:2008rp; @Nojiri:2008nt; @Faraoni:2008mf; @Nojiri:2006ri].
[^2]: For detailed discussion of this problem, see [@Isham:1992ms].
[^3]: For review and extensive list of references, see [@Gourgoulhon:2007ue].
[^4]: Clearly this constraint takes the same form as ${\mathbf{T}}_T(f)$ for constant $f=1$.
[^5]: I would like to thank Diego Blas, Oriol Pujolas and Sergey Sibiryakov for suggesting this interpretation to me.
[^6]: By “the problem of time” in General Relativity (GR) one means that GR is a completely parametrised system. That is, there is no natural notion of time due to the diffeomorphism invariance of the theory and therefore the canonical Hamiltonian which generates time reparametrisations vanishes. In fact, instead of a Hamiltonian there are an infinite number of spatial diffeomorphism and Hamiltonian constraints respectively, of which the canonical Hamiltonian is a linear combination, which generate infinitesimal spacetime diffeomorphisms.
---
abstract: 'Electron scattering from the three-nucleon bound state with two- and three-body disintegration is described. The description uses the purely nucleonic charge-dependent CD-Bonn potential and its coupled-channel extension CD-Bonn + $\Delta$. Exact solutions of three-particle equations are employed for the initial and final states of the reactions. The current has one-baryon and two-baryon contributions and couples nucleonic with $\Delta$-isobar channels. $\Delta$-isobar effects on the observables are isolated. The $\Delta$-isobar excitation yields an effective three-nucleon force and effective two- and three-nucleon currents beside other $\Delta$-isobar effects; they are mutually consistent.'
author:
- 'A. Deltuva'
- 'L. P. Yuan'
- 'J. Adam Jr.'
- 'P. U. Sauer'
title: |
Three-body electrodisintegration of the three-nucleon bound state\
with ${\Delta}$-isobar excitation: Processes below pion-production threshold
---
[^1]
Introduction {#sec:intro}
============
Electron scattering from the three-nucleon bound state is described allowing for the excitation of a nucleon to a $\Delta$ isobar. The available energy stays below pion-production threshold; thus, the excitation of the $\Delta$ isobar remains virtual. The $\Delta$ isobar is therefore considered a stable particle; it yields an effective three-nucleon force and effective exchange currents beside other $\Delta$-isobar effects.
The paper updates our previous calculations [@yuan:02b] of three-nucleon electron scattering. Compared to [Ref.]{} [@yuan:02b], the description is extended to higher energies, and three-nucleon breakup is also included; however, energetically the description is only valid below pion-production threshold. Exclusive and inclusive reactions are described. The employed dynamics is the same as in [Ref.]{} [@deltuva:04a] for photo reactions. The underlying purely nucleonic reference potential is CD Bonn [@machleidt:01a]. Its coupled-channel extension, called CD Bonn + $\Delta$, is employed in this paper; it is fitted in [Ref.]{} [@deltuva:03c] to the experimental two-nucleon data up to 350 MeV nucleon lab energy; it is as realistic as CD Bonn. The exact solution of the three-particle scattering equations is used for the initial- and final-state hadronic interactions. They are solved by Chebyshev expansion of the two-baryon transition matrix as interpolation technique [@deltuva:03a]; that technique is found highly efficient and systematic. The employed electromagnetic (e.m.) current is structurally the same as in [Ref.]{} [@deltuva:04a] for photo reactions. It is a coupled-channel current tuned to the used two-baryon potentials as much as possible. It contains one- and two-baryon parts. Compared with [Ref.]{} [@deltuva:04a] it is augmented with e.m. form factors.
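The Chebyshev-expansion interpolation mentioned above can be sketched in a few lines; the smooth function interpolated below is a stand-in chosen purely for illustration, not the actual two-baryon transition matrix:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Minimal sketch of Chebyshev interpolation of a smooth function;
# f below is an illustrative stand-in, not the real transition matrix.
def f(x):
    return np.exp(-x) * np.cos(3 * x)

# Sample f at Chebyshev points mapped onto [0, 2] and fit a degree-20 expansion.
nodes = C.chebpts1(40)
x = 1.0 + nodes                    # map [-1, 1] -> [0, 2]
coef = C.chebfit(nodes, f(x), 20)

# Evaluate the interpolant off the sampling nodes and check its accuracy.
xs = np.linspace(0.0, 2.0, 101)
approx = C.chebval(xs - 1.0, coef)
assert np.max(np.abs(approx - f(xs))) < 1e-8
```

The rapid decay of the Chebyshev coefficients for smooth functions is what makes this kind of interpolation "highly efficient and systematic."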
An alternative description of e.m. processes in the three-nucleon system is given in [Refs.]{} [@golak:95a; @golak:95b; @golak:02a]; [Refs.]{} [@golak:95a; @golak:95b; @golak:02a] employ a different two-nucleon potential and a different e.m. current; nevertheless, the theoretical predictions of [Refs.]{} [@golak:95a; @golak:95b; @golak:02a] and of this paper turn out to be qualitatively quite similar where comparable.
Section \[sec:calc\] recalls our calculational procedure and especially stresses its improvements. Section \[sec:res\] presents characteristic results for observables; $\Delta$-isobar effects on those observables are isolated. Section \[sec:concl\] gives a summary and our conclusions.
\[sec:calc\] Calculational procedure
====================================
The kinematics of the considered processes in electron scattering is shown in [Fig.]{} \[fig:reaction\]. The calculational procedure, including the notation, is taken over from [Refs.]{} [@deltuva:04a; @yuan:02b]. We briefly remind the reader of that procedure in order to point out changes and to describe the extension to three-body electrodisintegration and to inclusive processes, not discussed in [Ref.]{} [@yuan:02b].
Description of exclusive reactions with three-body disintegration \[sec:d8s\]
-----------------------------------------------------------------------------
The $S$-matrix and the spin-averaged and spin-dependent cross sections for two-body electrodisintegration of the trinucleon bound state are given in [Ref.]{} [@yuan:02b]. In this subsection we add the corresponding quantities for three-body electrodisintegration. The right part of [Fig.]{} \[fig:reaction\] recalls the employed notation for the individual particle momenta of the trinucleon bound state, the three nucleons of breakup and the electron; i.e., $k_B$, $k_j$ and $k_{e}$. They are on-mass-shell four-momenta. The corresponding particle energies are the zero components of those momenta, i.e., $k_B^0 c$, $k_j^0 c$ and $k_{e}^0 c$; they are relativistic ones with the respective rest masses $m_B$, $m_N$ and $m_e$, in contrast to the nonrelativistic baryonic energies of the nonrelativistic model calculation of baryonic states without rest masses, i.e., $E_B({{\mathbf{k}}}_B) = E_B + {{\mathbf{k}}}_B^2/6m_N$, $E_B$ being the three-nucleon binding energy, and $E_N({{\mathbf{k}}}_j) = {{\mathbf{k}}}_j^2/2m_N$.
[Fig. \[fig:reaction\], diagram not reproduced: electron scattering from the trinucleon bound state. Initial state: electron $k_{e_i}$ and trinucleon $k_B$; four-momentum transfer $Q$; outgoing electron $k_{e_f}$. Left diagram: two-body disintegration with final momenta $k_d$ and $k_N$; right diagram: three-body disintegration with final momenta $k_1$, $k_2$, $k_3$.]
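The distinction drawn above between the relativistic energies $k_j^0 c$, which include rest masses, and the nonrelativistic model energies $E_N({{\mathbf{k}}}_j)$ can be made concrete numerically; the momentum value in this sketch is an arbitrary example:

```python
import numpy as np

# Illustrative comparison of the relativistic nucleon energy (with rest mass)
# and the nonrelativistic model energy E_N(k) = (kc)^2 / (2 m_N c^2).
# The momentum value is an arbitrary example.
mN_c2 = 938.919          # average nucleon rest mass in MeV (value quoted in the text)
kc = 300.0               # nucleon momentum times c, in MeV (example value)

E_rel = np.sqrt(mN_c2**2 + kc**2)    # relativistic energy, including rest mass
E_nr = kc**2 / (2.0 * mN_c2)         # nonrelativistic kinetic energy, no rest mass

# After removing the rest mass, the two agree to leading order in (kc/mN_c2)^2;
# at kc = 300 MeV the residual difference is at the few-percent level.
assert abs((E_rel - mN_c2) - E_nr) / E_nr < 0.03
```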
We give two alternative forms for the $S$-matrix elements:
\[eq:Smat\] $$\begin{aligned}
\label{eq:Smata}
\langle f {{\mathbf{P}}}_f | S | i {{\mathbf{P}}}_i \rangle = &
-i(2 \pi \hbar)^{4} \delta (k_{e_f} + k_{1} + k_{2} + k_{3} - k_{e_i} - k_B)
\nonumber \\ & \times
\langle s_f | M | s_i \rangle (2 \pi \hbar)^{-9}
\nonumber \\ & \times
\big[ 2 k_{e_i}^0 c \, 2 k_B^0 c \, 2 k_{e_f}^0 c\,
2k_1^0 c \, 2k_2^0 c \, 2k_3^0 c \big]^{-1/2},
\\ \label{eq:Smatc}
\langle f {{\mathbf{P}}}_f | S | i {{\mathbf{P}}}_i \rangle = &
-\frac{i}{\hbar c} \, \delta \big(k_{e_f}^0 c + E_N({{\mathbf{k}}}_1)+E_N({{\mathbf{k}}}_2)
\nonumber \\ &
+ E_N({{\mathbf{k}}}_{3}) - k_{e_i}^0 c - E_B({{\mathbf{k}}}_B) \big)
\nonumber \\ & \times
\delta ({{\mathbf{k}}}_{e_f} + {{\mathbf{k}}}_{1} + {{\mathbf{k}}}_{2} +
{{\mathbf{k}}}_{3} - {{\mathbf{k}}}_{e_i} - {{\mathbf{k}}}_B )
\nonumber \\ & \times
\frac{1}{(2 \pi)^2} \big[ 2 k_{e_i}^0 c \, 2 k_{e_f}^0 c \big]^{-\frac12} \,
\nonumber \\ & \times
\bar{u} ({{\mathbf{k}}}_{e_f} s_{e_f}) \gamma_{\mu} u({{\mathbf{k}}}_{e_i} s_{e_i}) \,
\frac{4 \pi e_p^2 }{(k_{e_f} - k_{e_i})^2}
\nonumber \\ & \times
\frac{1}{e_p c} \langle \psi^{(-)}_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0f} |
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle.
\end{aligned}$$
Equation (\[eq:Smata\]) introduces a covariant form, whereas [Eq.]{} (\[eq:Smatc\]) is its noncovariant quantum mechanical realization. ${{\mathbf{P}}}$ is the total momentum including the one of the electron, $({{\mathbf{p}}} {{\mathbf{q}}} {{\mathbf{K}}})$ the Jacobi momenta of the three baryons according to [Ref.]{} [@nemoto:98a]; ${{{\mathbf{K}}}_{+}}= {{\mathbf{K}}}_i + {{\mathbf{K}}}_f$; $i$ and $f$ indicate the initial and final states of the reaction. $ u ({{\mathbf{k}}} s)$ is the Dirac spinor of the electron with positive energy in the normalization $ \bar{u}({{\mathbf{k}}} s') u({{\mathbf{k}}} s) = m_e c^2 \delta_{s's}$; $ \langle s_f | M | s_i \rangle $ is the singularity-free matrix element for three-nucleon electrodisintegration, from which the differential cross section is obtained. Its dependence on the spin projections $s_{e_i} $ and ${\mathcal M}_{B}$ of electron and trinucleon bound state in the initial channel, collectively described by $s_i$, and on the spin projections $s_{e_f}$ and $m_{s_f}$ of electron and nucleons in the final channel, collectively described by $s_f$, is explicitly indicated. $\langle s_f | M | s_i \rangle $ is Lorentz-invariant in a relativistic description and can therefore be calculated in any frame. However, when calculated according to [Eq.]{} (\[eq:Smatc\]) in the framework of nonrelativistic quantum mechanics, $\langle s_f | M | s_i \rangle $ loses that property of being a Lorentz scalar.
The lab cross section takes the following compact form
\[eq:d5S\] $$\begin{gathered}
\begin{split} \label{eq:d5s}
d^8 \sigma_{i \to f} = &
\big | \langle s_f | M | s_i \rangle \big |^2
\mathrm{fps} \: {dE_e(\mathbf{k}_{e_f})}\, d^2 {\hat{\mathbf{k}}}_{e_f} \,
d S \, d^2 {\hat{\mathbf{k}}}_{1} \, d^2 {\hat{\mathbf{k}}}_{2}
\end{split}\end{gathered}$$ with the abbreviation $\mathrm{fps}$ for a phase-space factor; in the lab frame $\mathrm{fps}$ is $$\begin{aligned}
\label{eq:fpsr}
\mathrm{fps} = & \frac{k_{e_f}^0} {(2 \pi \hbar)^{8}
64 c^7 k_{e_i}^0 m_B }\, {{\mathbf{k}}}_1^2 {{\mathbf{k}}}_2^2
\nonumber \\ & \times
\big \{ {{\mathbf{k}}}_1^2 \big[ |{{\mathbf{k}}}_2| (k_2^0 + k_3^0)
- k_2^0 {\hat{\mathbf{k}}}_2 \cdot ({{\mathbf{Q}}} \! - \! {{\mathbf{k}}}_1) \big]^2
\nonumber \\ &
+ {{\mathbf{k}}}_2^2 \big[ |{{\mathbf{k}}}_1| (k_1^0+k_3^0) -
k_1^0 {\hat{\mathbf{k}}}_1 \cdot ({{\mathbf{Q}}} \! - \! {{\mathbf{k}}}_2) \big]^2 \big \}^{-\frac12}, \\
\label{eq:fpsn}
\mathrm{fps} = & \frac{k_{e_f}^0}{(2 \pi \hbar)^{8} 64 c^8 k_{e_i}^0 m_N m_B }
\, {{\mathbf{k}}}_1^2 {{\mathbf{k}}}_2^2
\nonumber \\ & \times
\big \{ {{\mathbf{k}}}_1^2 \big[ 2|{{\mathbf{k}}}_2| -
{\hat{\mathbf{k}}}_2 \cdot ({{\mathbf{Q}}} \! - \! {{\mathbf{k}}}_1) \big]^2
\nonumber \\ &
+ {{\mathbf{k}}}_2^2 \big[ 2|{{\mathbf{k}}}_1| -
{\hat{\mathbf{k}}}_1 \cdot ({{\mathbf{Q}}} \! - \! {{\mathbf{k}}}_2) \big]^2 \big \}^{-\frac12}.
\end{aligned}$$
Equation (\[eq:fpsn\]) is the nonrelativistic version of [Eq.]{} (\[eq:fpsr\]); $dS$ is the element of arclength $S$ as used in [Ref.]{} [@deltuva:04a]. The cross section is still spin-dependent. The spin-averaged eightfold differential cross section is $$\begin{gathered}
\label{eq:d5s-av}
\frac{\overline{d^8 \sigma}}{ {dE_e(\mathbf{k}_{e_f})}d^2 {\hat{\mathbf{k}}}_{e_f}\, dS
\, d^2 {\hat{\mathbf{k}}}_{1} \, d^2 {\hat{\mathbf{k}}}_{2}} = \nonumber \\
\frac 14 \sum_{s_f s_i} \frac {d^8 \sigma_{i\to f}}
{{dE_e(\mathbf{k}_{e_f})}d^2 {\hat{\mathbf{k}}}_{e_f} d S \, d^2 {\hat{\mathbf{k}}}_{1} \, d^2 {\hat{\mathbf{k}}}_{2} } ;\end{gathered}$$ in figures it is denoted by $d^8 \sigma / dE_e d\Omega_e dS d\Omega_1 d\Omega_2$, the traditional notation. The experimental setup determines the isospin character of the two detected nucleons 1 and 2; their isospin character is not followed up in our notation.
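The nonrelativistic phase-space factor of [Eq.]{} (\[eq:fpsn\]) is straightforward to transcribe; the sketch below works in units $\hbar=c=1$, the function name is our own, and all momenta are placeholder values rather than a real kinematical configuration:

```python
import numpy as np

# Direct transcription of the nonrelativistic phase-space factor fps,
# in natural units hbar = c = 1; all inputs below are illustrative placeholders.
def fps_nonrel(k_ei0, k_ef0, k1, k2, Q, mN, mB):
    """k1, k2, Q are 3-vectors; k_ei0, k_ef0 are electron energies."""
    n1, n2 = np.linalg.norm(k1), np.linalg.norm(k2)
    # the two bracketed terms inside the braces of the fps formula
    b1 = n1**2 * (2.0 * n2 - np.dot(k2 / n2, Q - k1))**2
    b2 = n2**2 * (2.0 * n1 - np.dot(k1 / n1, Q - k2))**2
    pref = k_ef0 / ((2.0 * np.pi)**8 * 64.0 * k_ei0 * mN * mB)
    return pref * n1**2 * n2**2 / np.sqrt(b1 + b2)

# Example call with arbitrary numbers (MeV, hbar = c = 1):
val = fps_nonrel(500.0, 400.0,
                 np.array([100.0, 0.0, 50.0]),
                 np.array([0.0, 120.0, 30.0]),
                 np.array([80.0, 60.0, 40.0]),
                 938.919, 3 * 938.919)
assert val > 0
```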
We calculate the matrix element $\langle s_f | M | s_i \rangle$ in the lab frame using the following computational strategy. The strategy is in the spirit of [Ref.]{} [@deltuva:04a]; it is nonunique, since the model calculations, due to the limitations of the underlying dynamics, miss the experimental trinucleon binding energy; the necessary correction for that mismatch has arbitrary features:
1\. The experimental four-momentum transfer
$$\begin{aligned}
\label{eq:Qe}
Q = {} & k_{e_i} - k_{e_f} , \\
\label{eq:Qh}
Q = {} & k_1 + k_2 + k_3 - k_B
\end{aligned}$$
determines the total energy and the total momentum of the hadronic part of the system in the final channel. This step is done using relativistic kinematics and the true experimental trinucleon binding energy. The experimental momentum $Q$ in the lab frame with ${{\mathbf{K}}}_i = {{\mathbf{k}}}_B=0$ determines the total momentum ${{\mathbf{K}}}_f$ and the energy $E_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f {{\mathbf{K}}}_f)$ of the final three-nucleon system in the lab frame, i.e., ${{\mathbf{K}}}_f = {{\mathbf{Q}}} $ and $E_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f {{\mathbf{K}}}_f) = E_B + Q_0 c$. The resulting energy $E_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f {{\mathbf{K}}}_f)$ of the final state is the true experimental one. Thus, the experimental two- and three-body breakup thresholds are exactly reproduced.
2\. The matrix element $\langle s_f | M | s_i \rangle $ is calculated in the lab system as *on-energy-shell element* under nonrelativistic model assumptions for hadron dynamics. Taking the computed trinucleon model binding energy $E_B$ and the average nucleon mass $m_N $, i.e., $m_N c^2 =
938.919$ MeV, the energy transfer $Q_0$ to be used for the current matrix element results, i.e., $Q_0 c = E_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f {{\mathbf{K}}}_f) - E_B$; the three-momentum transfer ${{\mathbf{Q}}}$ to be used for the current matrix element is ${{\mathbf{Q}}} = \sqrt{Q_0^2+Q^2}\hat{{{\mathbf{K}}}}_f$ with the true experimental value of $Q^2$; note that we define the square of the space-like four-momentum transfer positive as $Q^2 = {{\mathbf{Q}}}^2 - Q_0^2$. Since the model trinucleon binding energy is not the experimental one, the components of the resulting four-momentum transfer $Q$ do not match precisely their experimental values when calculating the part $\langle \psi^{(-)}_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0f} |
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle$ of the matrix element $\langle s_f | M |s_i \rangle$ according to [Eq.]{} (\[eq:Mampl\]) below; this strategy is chosen in order to preserve the experimental $Q^2$ as in photo reactions [@deltuva:04a]. The internal three-nucleon energy part of the final state is ${{{\mathbf{p}}}_f^2}/{m_N} + {3{{\mathbf{q}}}_f^2}/{4 m_N} =
E_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f {{\mathbf{K}}}_f) - {{{\mathbf{K}}}_f^2}/{6 m_N}$.
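Steps 1 and 2 above amount to simple bookkeeping of the four-momentum transfer; a minimal sketch in units $\hbar=c=1$, where the function name and all numbers are our own illustrative choices:

```python
import numpy as np

# Sketch of the four-momentum-transfer bookkeeping of steps 1 and 2,
# in units hbar = c = 1; the function name and inputs are illustrative.
def current_kinematics(Q0_exp, Qvec_exp, E_B_exp, E_B_model):
    """Return (Q0, |Q|) entering the current matrix element.

    Q0_exp, Qvec_exp: experimental energy / three-momentum transfer (lab frame);
    E_B_exp, E_B_model: experimental and model trinucleon binding energies.
    """
    Q2_exp = np.dot(Qvec_exp, Qvec_exp) - Q0_exp**2  # space-like: Q^2 > 0
    E0_final = E_B_exp + Q0_exp                      # step 1: true final-state energy
    Q0_model = E0_final - E_B_model                  # step 2: energy transfer for the current
    Qmag_model = np.sqrt(Q0_model**2 + Q2_exp)       # |Q| preserving the experimental Q^2
    return Q0_model, Qmag_model

# When the model binding energy equals the experimental one, the experimental
# transfer is recovered exactly:
Q0, Qmag = current_kinematics(100.0, np.array([0.0, 0.0, 300.0]), -8.48, -8.48)
assert np.isclose(Q0, 100.0) and np.isclose(Qmag, 300.0)
```

When the two binding energies differ, $Q_0$ shifts by the difference while the experimental $Q^2$ is kept fixed, exactly as the prescription above requires.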
3\. The lab cross section is calculated nonrelativistically; it is constructed from the following form of the matrix element $$\begin{gathered}
\label{eq:Mampl}
\begin{align}
\langle s_f | M | s_i \rangle = & \frac{\hbar}{c} (2 \pi \hbar)^{3}
\bar{u} ({{\mathbf{k}}}_{e_f} s_{e_f}) \gamma_{\mu} u ({{\mathbf{k}}}_{e_i} s_{e_i})
\frac{4 \pi e_p^2 }{(k_{e_f} - k_{e_i})^2 }
\nonumber \\ & \times \frac{1}{e_p c}
\langle \psi^{(-)}_0 ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0f} |
j^{\mu} ({{\mathbf{Q}}}, {{\mathbf{Q}}}) | B \rangle
\nonumber \\ & \times [2m_N c^2 ]^{3/2} [ 2m_B c^2 ]^{1/2}
\end{align}\end{gathered}$$ and from the nonrelativistic phase-space factor $\mathrm{fps}$ in the form . As discussed in [Ref.]{} [@deltuva:04a] one could choose the hadronic kinematics nonrelativistically for the dynamic matrix element $\langle s_f | M | s_i \rangle $ on one side and relativistically for the kinematical factors on the other side. That split calculational strategy can be carried out with ease for the observables of exclusive processes. However, when total cross sections or inelastic structure functions in inclusive processes are calculated, we resort to a particular technical scheme as already described in [Ref.]{} [@deltuva:04a] for the total photo cross section: The energy-conserving $\delta$-function in the phase-space element is rewritten as imaginary part of the full resolvent and that full resolvent has to be made consistent with the employed nonrelativistic dynamics of the model calculations. Thus, the split calculational strategy, developed in [Ref.]{} [@yuan:02b], cannot be carried through for total cross sections and inelastic structure functions. We therefore do not use it in our *standard calculational procedure*; we use it only for exclusive cross sections when testing the validity of the employed nonrelativistic kinematics.
Description of inclusive reactions \[sec:RF\]
---------------------------------------------
We assume that the electron beam is polarized with the electron helicity $h_e$ and that the trinucleon target is polarized according to the polarization vector ${{\mathbf{n}}}_B = (\sin \theta_B \cos \varphi_B,
\sin \theta_B \sin \varphi_B, \cos \theta_B)$; the angles are taken with respect to the direction $\hat{{{\mathbf{Q}}}}$. We use the same definition of coordinate axes as in [Ref.]{} [@yuan:02b].
The inclusive spin-dependent differential cross section has the form $$\begin{gathered}
\begin{align}
\frac{d^3 \sigma(h_e, {{\mathbf{n}}}_B)}{{dE_e(\mathbf{k}_{e_f})}d^2 {\hat{\mathbf{k}}}_{e_f}}
= {} & \sigma_{\mathrm{Mott}}
\{v_L(Q \theta_e) R_L(Q) + v_T(Q \theta_e) R_T(Q)
\nonumber \\ &
+ h_e [ v_{T'}(Q \theta_e) R_{T'}(Q) n_{Bz}
\nonumber \\ & +
v_{TL'}(Q \theta_e) R_{TL'}(Q) n_{Bx}] \}
\end{align}\end{gathered}$$ with the Mott cross section $\sigma_{\mathrm{Mott}}$ and the kinematical functions $v_{L}(Q \theta_e)$, $v_{T}(Q \theta_e)$, $v_{T'}(Q \theta_e)$ and $v_{TL'}(Q \theta_e)$ given in [Ref.]{} [@yuan:02b], and with the inclusive response functions $R_{L}(Q)$, $R_{T}(Q)$, $R_{T'}(Q)$ and $R_{TL'}(Q)$ given in Appendix \[app:RF\]. The longitudinal and transverse response functions $R_{L}(Q)$ and $R_{T}(Q)$ refer to a spin-averaged target, whereas $R_{T'}(Q)$ and $R_{TL'}(Q)$ are characteristic of the spin structure of the target. Experiments usually measure the asymmetry $A({{\mathbf{n}}}_B)$, i.e.,
$$\begin{aligned}
A ({{\mathbf{n}}}_B)= {} &
\left[ \frac{d^3 \sigma(1, {{\mathbf{n}}}_B)}{{dE_e(\mathbf{k}_{e_f})}d^2 {\hat{\mathbf{k}}}_{e_f}}
- \frac{d^3 \sigma(-1, {{\mathbf{n}}}_B)}{{dE_e(\mathbf{k}_{e_f})}d^2 {\hat{\mathbf{k}}}_{e_f}} \right]
\Bigg/ \nonumber \\ &
\left[ \frac{d^3 \sigma(1, {{\mathbf{n}}}_B)}{{dE_e(\mathbf{k}_{e_f})}d^2 {\hat{\mathbf{k}}}_{e_f}}
+ \frac{d^3 \sigma(-1, {{\mathbf{n}}}_B)}{{dE_e(\mathbf{k}_{e_f})}d^2 {\hat{\mathbf{k}}}_{e_f}} \right],
\\
A ({{\mathbf{n}}}_B) = {} & \frac{ v_{T'}(Q \theta_e) R_{T'}(Q) n_{Bz} +
v_{TL'}(Q \theta_e) R_{TL'}(Q) n_{Bx} }
{v_L(Q \theta_e) R_L(Q) + v_T(Q \theta_e) R_T(Q) }.\end{aligned}$$
When orienting the target spin parallel to the momentum transfer ${{\mathbf{Q}}}$, i.e., ${{\mathbf{n}}}_{BT'}=(0,0,1)$, the transverse asymmetry $A_{T'} = A({{\mathbf{n}}}_{BT'})$ is selected; when orienting the target spin perpendicular to the momentum transfer ${{\mathbf{Q}}}$, but in the electron scattering plane, i.e., ${{\mathbf{n}}}_{BTL'}=(1,0,0)$, the transverse-longitudinal asymmetry $A_{TL'}= A({{\mathbf{n}}}_{BTL'})$ is selected.
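As a numerical illustration of how the target orientation selects the two asymmetries, the Python sketch below implements the ratio above; all $v$ and $R$ values in the usage are placeholders, not computed kinematical functions or response functions:

```python
import math

def asymmetry(v_L, R_L, v_T, R_T, v_Tp, R_Tp, v_TLp, R_TLp, theta_B, phi_B):
    """Inclusive asymmetry A(n_B); the target polarization angles
    (theta_B, phi_B) are taken with respect to the direction of Q."""
    n_Bx = math.sin(theta_B) * math.cos(phi_B)
    n_Bz = math.cos(theta_B)
    numerator = v_Tp * R_Tp * n_Bz + v_TLp * R_TLp * n_Bx
    denominator = v_L * R_L + v_T * R_T
    return numerator / denominator

# Placeholder values only; real inputs come from the kinematical functions
# and response functions defined in the text.
vals = dict(v_L=1.0, R_L=1.0, v_T=1.0, R_T=1.0,
            v_Tp=1.0, R_Tp=1.0, v_TLp=1.0, R_TLp=1.0)
A_Tprime = asymmetry(**vals, theta_B=0.0, phi_B=0.0)           # n_B parallel to Q
A_TLprime = asymmetry(**vals, theta_B=math.pi / 2, phi_B=0.0)  # n_B in plane, perpendicular to Q
```

With $\theta_B = 0$ only the $R_{T'}$ term survives, and with $(\theta_B, \varphi_B) = (90^\circ, 0^\circ)$ only the $R_{TL'}$ term, mirroring the selection of $A_{T'}$ and $A_{TL'}$ described above.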
Results \[sec:res\]
===================
We present results for spin-averaged and spin-dependent observables in electro disintegration of the three-nucleon bound state. The presented exclusive results refer to three-body disintegration. Results of exclusive two-body disintegration are not shown; they are given in [Ref.]{} [@yuan:02b]. Control calculations indicate that the results of [Ref.]{} [@yuan:02b] remain essentially unchanged in their physics content, even though the hadronic interaction and the e.m. current are improved compared with [Ref.]{} [@yuan:02b].
The results of this paper are based on calculations derived from the purely nucleonic CD-Bonn potential [@machleidt:01a] and its coupled-channel extension [@deltuva:03c], which allows for single $\Delta$-isobar excitation in isospin-triplet partial waves. The $\Delta$ isobar is considered to be a stable particle of spin and isospin $\frac32$ with rest energy $m_{\Delta}c^2 = 1232$ MeV. In contrast to the coupled-channel potential constructed previously by the subtraction technique [@hajduk:83a] and used in the calculations of [Ref.]{} [@yuan:02b], the new one of [Ref.]{} [@deltuva:03c], used in this paper, is properly fitted to data and accounts for two-nucleon scattering data with the same quality as the original CD-Bonn potential. We first describe the *standard calculational procedure* which this paper follows.
The baryonic potential is taken into account in purely nucleonic and in nucleon-$\Delta$ partial waves up to the total two-baryon angular momentum $I=3$. The calculations omit the Coulomb potential between charged baryons. Nevertheless, the theoretical description is charge dependent. For reactions on ${}^3\mathrm{He}$ the proton-proton $(pp)$ and neutron-proton $(np)$ parts of the potentials are used, for reactions on ${}^3\mathrm{H}$ the neutron-neutron $(nn)$ and $np$ parts. Assuming charge independence, the three-nucleon bound state and nucleon-deuteron scattering states are pure states with total isospin $\mathcal{T} = \frac12$; the three-nucleon scattering states have total isospin $\mathcal{T} = \frac12$ and $\mathcal{T} = \frac32$, but those parts are not coupled by hadron dynamics. In contrast, allowing for charge dependence, all three-baryon states have $\mathcal{T} = \frac12$ and $\mathcal{T} = \frac32$ components which are dynamically coupled. For hadronic reactions that coupling is found to be quantitatively important in the ${}^1S_0$ partial wave [@deltuva:03b]; in other partial waves the approximative treatment of charge dependence as described in [Ref.]{} [@deltuva:03b] is found to be sufficient; it does not couple total isospin $\mathcal{T} = \frac12$ and $\frac32$ channels dynamically. The same holds for the hadronic dynamics in e.m. reactions considered in this paper: The effect of charge dependence is dominated by the ${}^1S_0$ partial wave; it is seen in some particular kinematic situations, but we refrain from discussing them in detail in this paper. However, the calculations of e.m. reactions require total isospin $\mathcal{T} = \frac32$ components of scattering states in *all* considered isospin-triplet two-baryon partial waves, since the e.m. current couples the $\mathcal{T} = \frac12$ and $\mathcal{T} = \frac32$ components strongly.
The three-particle equations for the trinucleon bound state $|B \rangle$ and for the scattering states are solved as in [Ref.]{} [@deltuva:03a]; in fact, the scattering states are calculated only implicitly as described in Appendix \[app:RF\]. The resulting binding energies of ${}^3\mathrm{He}$ are -7.941 and -8.225 MeV for CD Bonn and CD Bonn + $\Delta$, respectively. If the Coulomb interaction were taken into account, as is proper for ${}^3\mathrm{He}$, the binding energies would shift to -7.261 and -7.544 MeV, whereas the experimental value is -7.718 MeV. Nevertheless, we use the purely hadronic energy values and bound-state wave functions for consistency when calculating the current matrix elements, since we are unable to include the Coulomb interaction in the scattering states.
Whereas the baryonic potential is considered up to $I=3$, the e.m. current is allowed to act between partial waves up to $I=6$, the higher partial waves being created by the geometry of antisymmetrization. The e.m. current is taken over from [Ref.]{} [@deltuva:04a] augmented by e.m. form factors. Whereas the employed current operators depend on the three-momentum transfer ${{\mathbf{Q}}}$ only, the added e.m. form factors depend on the four-momentum transfer $Q^2 = {{\mathbf{Q}}}^2 - Q_0^2$ as discussed in Appendix A of [Ref.]{} [@deltuva:04a], $Q_0$ being taken as the energy transfer to the nuclear system. The current is expanded in multipoles as described in [Refs.]{} [@oelsner:phd; @deltuva:phd]; current conservation is imposed explicitly by replacing the longitudinal current part by its charge part. The technique for calculating multipole matrix elements is developed in [Ref.]{} [@oelsner:phd]; a special stability problem [@yuan:02a] arising in the calculation requires some modifications of that technique as described in [Ref.]{} [@deltuva:phd]. The electric and magnetic multipoles are calculated from the one- and two-baryon parts of the spatial current; the Siegert form of electric multipoles is *not* used. The Coulomb multipoles are calculated from diagonal single-nucleon and single-$\Delta$ parts of the charge density; the nucleon-$\Delta$ transition contribution as well as two-baryon contributions are of relativistic order and are therefore omitted in the charge-density operator when calculating Coulomb multipoles.
The number of considered current multipoles is limited by the maximal total three-baryon angular momentum $\mathcal{J}_{\mathrm{max}} = \frac{25}{2}$, taken into account for the hadronic scattering states. The results for the considered e.m. reactions appear fully converged with respect to higher two-baryon angular momenta $I$, with respect to $\Delta$-isobar coupling and with respect to higher three-baryon angular momenta $\mathcal{J}$ on the scale of accuracy which present-day experimental data require, the exception being only exclusive observables in the vicinity of the quasielastic peak which show poorer convergence with respect to $\mathcal{J}$.
E.m. form factors of the three-nucleon bound state and detailed choice of current
---------------------------------------------------------------------------------
The trinucleon form factors refer to elastic electron scattering. The form factors are calculated in order to check how realistic the underlying current operators are for the momentum transfers required later on in inelastic electron scattering; they are calculated in the Breit frame, i.e., as functions of $Q = \sqrt{Q^2} = |{{\mathbf{Q}}}|$; the text will make clear in each instance whether $Q$ denotes this magnitude or the four-vector $Q$. As is customary, we give $Q$ in this subsection in units of ${\mathrm{fm}^{-1}}$, with $1\;{\mathrm{fm}^{-1}}\approx 200\;\mathrm{MeV}/c$, in contrast to the remainder of the paper. The operator forms are defined in Appendix A of [Ref.]{} [@deltuva:04a] with the hadronic parameters of the CD Bonn and CD Bonn + $\Delta$ potentials and with the following additional choices for the baryonic and mesonic e.m. form factors.
We employ the recent parametrization of the nucleonic e.m. form factors as given in [Ref.]{} [@hammer:04a]; it is tuned to new form factor data for the proton and the neutron and is therefore rather different at momentum transfers larger than $3\,{\mathrm{fm}^{-1}}$ compared with older parametrizations such as those of [Ref.]{} [@gari:86a], used by us previously in [Refs.]{} [@yuan:02b; @yuan:02a]. We take the Sachs form factors $g_E(Q^2)$ and $g_M(Q^2)$ of [Ref.]{} [@hammer:04a] as the form factors $e(Q^2)$ and $\mu(Q^2)$ in Appendix A of [Ref.]{} [@deltuva:04a]; in the context of the two-baryon potentials CD Bonn and CD Bonn + $\Delta$ of this paper the two-baryon exchange currents of [Eqs.]{} (A5) – (A7) in [Ref.]{} [@deltuva:04a] are used with the isovector form factors $e^V(Q^2) = g_E^V(Q^2)$ instead of the Dirac form factor $f_1^V(Q^2)$, used previously [@yuan:02b; @yuan:02a] in the context of the potentials Paris and Paris + $\Delta$. However, the two-baryon exchange currents, corresponding to nondiagonal meson exchanges according to [Eqs.]{} (A5) and (A6) of [Ref.]{} [@deltuva:04a], are used with form factors $f_{\rho \pi \gamma}(Q^2) = g_{\rho \pi \gamma} f_1^S(Q^2)$ and $f_{\omega \pi \gamma}(Q^2) = g_{\omega \pi \gamma} f_1^V(Q^2)$. In contrast to [Ref.]{} [@deltuva:04a], we choose for the nucleon-$\Delta$ transition form factor $g_{\Delta N}^{M1}(Q^2)$ the coupling strength as $g_{\Delta N}^{M1}(0) = 4.59\,\mu_N$, $\mu_N$ being the nuclear magneton. The coupling strength is in accordance with the relation $g_{\Delta N}^{M1}(0) = \frac32 G_M^\ast(0)$ to the transition magnetic moment $G_M^\ast(0)$ and with its experimental value $G_M^\ast(0) = 3.06\,\mu_N$ of [Ref.]{} [@kamalov:01a], the experimental value being a bit larger than the quark model value $2.63\,\mu_N$. The momentum-transfer dependence of $g_{\Delta N}^{M1}(Q^2)$ is taken as $g_{\Delta N}^{M1}(Q^2)/g_{\Delta N}^{M1}(0)=
e^{-\gamma Q^2}/(1+Q^2/\Lambda_{\Delta N}^2)^{2}$ with $\gamma=0.21\,(\mathrm{GeV/c})^{-2}$ and $\Lambda_{\Delta N}^2 = 0.71\,(\mathrm{GeV}/c)^2$ according to [Fig.]{} 2 of [Ref.]{} [@kamalov:01a]; the momentum-dependent fall of the form factor $g_{\Delta N}^{M1}(Q^2)$ is slightly faster than for a dipole.
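For orientation, the parametrization of $g_{\Delta N}^{M1}(Q^2)$ just quoted is easy to tabulate. The Python sketch below uses the quoted values $\gamma=0.21\,(\mathrm{GeV/c})^{-2}$ and $\Lambda_{\Delta N}^2 = 0.71\,(\mathrm{GeV}/c)^2$, with $Q^2$ in $(\mathrm{GeV}/c)^2$, and compares the form factor against a pure dipole with the same cutoff:

```python
import math

GAMMA = 0.21     # (GeV/c)^-2, from the parametrization quoted in the text
LAMBDA2 = 0.71   # (GeV/c)^2

def dipole(Q2):
    """Pure dipole falloff with the same cutoff Lambda^2."""
    return 1.0 / (1.0 + Q2 / LAMBDA2) ** 2

def g_delta_n_ratio(Q2):
    """g^M1_{Delta N}(Q^2) / g^M1_{Delta N}(0) = exp(-gamma Q^2) * dipole."""
    return math.exp(-GAMMA * Q2) * dipole(Q2)
```

At $Q^2 = 1\,(\mathrm{GeV}/c)^2$ the extra exponential suppresses the ratio to about 81% of the pure dipole value, confirming the slightly faster falloff noted above.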
Figure \[fig:fc\] shows the trinucleon charge form factors. The relativistic operator corrections as given in [Eqs.]{} (A11) of [Ref.]{} [@deltuva:04a] and the additional corrections of $\rho$ and $\omega$ exchange of [Ref.]{} [@carlson:98a] are necessary to account for the data at least roughly at larger momentum transfers. Those relativistic corrections are, however, omitted in our *standard calculational procedure* for electrodisintegration. Most disintegration processes considered in this paper require the current up to momentum transfers $|{{\mathbf{Q}}} | \approx 2.5\,{\mathrm{fm}^{-1}}$; in that kinematic regime the employed relativistic corrections are still small. However, even in that limited kinematic regime the predictions based on that *standard calculational procedure* show growing deviations from the data with increasing momentum transfer. The agreement found between the data and the theoretical predictions with relativistic corrections for the trinucleon charge form factors is comparable with the results of [Refs.]{} [@henning:95a; @marcucci:98a], based on other baryonic potentials. In contrast to the relativistic corrections, the nonrelativistic $\Delta$-isobar effect on the charge form factors is minute and therefore not separately shown in [Fig.]{} \[fig:fc\].
![\[fig:fc\] Charge form factors $F_C$ of ${{}^3\mathrm{He}}$ and ${{}^3\mathrm{H}}$ as function of momentum transfer $Q$. Results of the coupled-channel potential with $\Delta$-isobar excitation without (solid curves) and with selected relativistic charge operator corrections (dashed curves) are compared. The results of the purely nucleonic CD-Bonn potential are almost indistinguishable from the respective results of CD Bonn + $\Delta$. The experimental data are from [Ref.]{} [@sick:01a].](fc.eps)
The trinucleon magnetic moments are given in Table \[tab:mm\] and the magnetic form factors are shown in [Fig.]{} \[fig:fm\]. The agreement between data and theoretical predictions is quite satisfactory for the magnetic moments and for the form factors up to $Q=5\,{\mathrm{fm}^{-1}}$; beyond $Q=5\,{\mathrm{fm}^{-1}}$ the predicted form factors are too small in magnitude and have a shape not consistent with the data around the secondary maximum. In contrast to the charge density case, exchange corrections of the spatial current are of nonrelativistic order and already contribute at momentum transfers relevant for the considered disintegration processes; they are therefore fully included in our *standard calculational procedure*. In the context of the potentials CD Bonn and CD Bonn + $\Delta$, the use of the isovector form factors $g_E^V(Q^2)$ for diagonal meson-exchange currents is absolutely necessary; the use of $f_1^V(Q^2)$ instead moves the first minima out, i.e., beyond $7\,{\mathrm{fm}^{-1}}$.
At the larger momentum transfers $Q > 3\,{\mathrm{fm}^{-1}}$ we note a sensitivity of the theoretical predictions for the charge and magnetic form factors to the parametrization of the underlying charge and current operators, especially to the e.m. form factors of the baryons. Furthermore, as already discussed in [Ref.]{} [@deltuva:04a], the match between hadronic and e.m. dynamics has deficiencies: the two-baryon potentials are nonlocal, whereas the employed e.m. currents are local; also the use of a nonrelativistic description of the hadron dynamics at those momentum transfers is questionable. However, the observed theoretical uncertainties and the discrepancies with data, occurring at larger momentum transfers, are not relevant for most disintegration processes considered in this paper.
\[tab:mm\] Magnetic moments $\mu$ of ${{}^3\mathrm{He}}$ and ${{}^3\mathrm{H}}$ in units of the nuclear magneton $\mu_N$.

|                    | $\mu({{}^3\mathrm{He}})$ | $\mu({{}^3\mathrm{H}})$ |
|--------------------|--------------------------|-------------------------|
| CD Bonn            | $-2.073$                 | $2.906$                 |
| CD Bonn + $\Delta$ | $-2.139$                 | $2.970$                 |
| Experiment         | $-2.127$                 | $2.979$                 |
![\[fig:fm\] Magnetic form factors $F_M$ of ${{}^3\mathrm{He}}$ and ${{}^3\mathrm{H}}$ as function of momentum transfer $Q$. Results of the coupled-channel potential with $\Delta$-isobar excitation (solid curves) are compared with reference results of the purely nucleonic CD-Bonn potential (dashed curves). The experimental data are from [Ref.]{} [@sick:01a]. Both data and theoretical predictions are divided by the experimental magnetic moments as given in Table \[tab:mm\].](fm.eps)
Exclusive three-nucleon breakup \[sec:3Nexcl\]
----------------------------------------------
To the best of our knowledge, there are no fully exclusive experimental data for three-nucleon breakup in the considered energy regime. As in hadronic and photo reactions we observe more significant $\Delta$-isobar effects at higher energies. Figure \[fig:e3HR\] presents sample results for the spin-averaged eightfold differential cross section of the three-body electro disintegration of ${{}^3\mathrm{He}}$; they show a moderate $\Delta$-isobar effect.
![\[fig:e3HR\] Eightfold differential cross section of three-body electrodisintegration of ${{}^3\mathrm{He}}$, i.e., ${{}^3\mathrm{He}}(e,e'pp)n$, at 390 MeV electron lab energy as a function of the arclength $S$ along the kinematical curve. The electron scattering angle, the momentum and energy transfer are $\theta_e = 39.7^{\circ}$, $\;|{{\mathbf{Q}}}| = 250.2\;\mathrm{MeV}/c$ and $\;Q_0 = 113\;\mathrm{MeV}/c$, respectively. The observable refers to the configuration $(30^{\circ},180^{\circ},45^{\circ},180^{\circ})$; the angles are given with respect to the direction of the incoming electron; the notation is standard, e.g., explained in [Ref.]{} [@deltuva:phd]. Results of the coupled-channel potential with $\Delta$-isobar excitation (solid curves) are compared with reference results of the purely nucleonic CD-Bonn potential (dashed curves).](e3HR1.eps)
Reference [@groep:00a] presents results for the eightfold differential cross section ${d^8 \sigma}/{ d E_e d\Omega_e \, dE_1 \, d\Omega_1 \, d\Omega_2}$ averaged over a rather large experimental detection volume. However, the excitation energy in the experiment of [Ref.]{} [@groep:00a] is well above pion-production threshold. Thus, the theoretical predictions of any model neglecting pionic channels, such as our potential CD Bonn + $\Delta$, should be taken with severe caution. Nevertheless, we present our results for that higher energy in [Fig.]{} \[fig:LQ\]; we use the representation of [Ref.]{} [@groep:00a], but for simplicity we do not perform the averaging. We note large $\Delta$-isobar effects in particular kinematical regimes where the purely nucleonic calculations presented in [Fig.]{} 7 of [Ref.]{} [@groep:00a] clearly underestimate the data; the inclusion of the $\Delta$ isobar may therefore be able to reduce that discrepancy. However, we emphasize that the employed potentials CD Bonn and CD Bonn + $\Delta$ are unrealistic above pion-production threshold; in contrast to CD Bonn, the coupled-channel potential CD Bonn + $\Delta$ yields inelasticities, but they show clearly unphysical, resonating and therefore unwanted structures in the ${}^1D_2$ two-nucleon partial wave as already demonstrated in [Ref.]{} [@deltuva:03c], casting serious doubt on the size of the calculated $\Delta$-isobar effect in [Fig.]{} \[fig:LQ\]. Thus, a modified version of CD Bonn + $\Delta$ with more realistic phase shifts above pion-production threshold is developed for exploratory reasons and also used for the predictions in [Fig.]{} \[fig:LQ\]; it yields an expected reduction of the $\Delta$-isobar effect, though the effect remains rather strong. The modified version of CD Bonn + $\Delta$ works with a reduced coupling strength $g_{\sigma \Delta \Delta}$ of the $\sigma$ meson to the $\Delta$ isobar.
The quality of the fit to the two-nucleon scattering data up to 350 MeV nucleon lab energy remains practically unchanged, but, unfortunately, the beneficial $\Delta$-isobar effect on trinucleon binding gets almost completely lost; the resulting binding energy of ${{}^3\mathrm{He}}$ including the Coulomb interaction is -7.329 MeV. Other modification schemes of CD Bonn + $\Delta$ have not been tried yet. In any case, the data of [Ref.]{} [@groep:00a] deserve a theoretical description with a two-baryon potential, extended realistically above pion-production threshold.
![\[fig:LQ\] Eightfold differential cross section of ${{}^3\mathrm{He}}(e,e'pp)n$ reaction at 563.7 MeV electron lab energy as a function of the magnitude of missing momentum ${{\mathbf{k}}}_m = {{\mathbf{Q}}} - {{\mathbf{k}}}_1 - {{\mathbf{k}}}_2 $. The electron scattering angle, the momentum and energy transfer are $\theta_e = -27.72^{\circ}$, $\;|{{\mathbf{Q}}}| = 305\;\mathrm{MeV}/c$ and $\;Q_0 = 220\;\mathrm{MeV}/c$, respectively. The observable refers to the configuration $(53.8^{\circ},0.0^{\circ},92.9^{\circ},180.0^{\circ})$. Results of the standard coupled-channel potential with $\Delta$-isobar excitation (solid curve) and of its modified version (dash-dotted curve) are compared with reference results of the purely nucleonic CD-Bonn potential (dashed curve).](LQVpp.eps)
Inclusive response functions
----------------------------
Figures \[fig:RLt\] - \[fig:RF500\] present sample results for inclusive longitudinal and transverse response functions $R_L$ and $R_T$ of unpolarized ${{}^3\mathrm{He}}$ and ${{}^3\mathrm{H}}$.
Figures \[fig:RLt\] and \[fig:RTt\] contain results for threshold data of sizable momentum transfer, i.e., $473 \leq |{{\mathbf{Q}}}| \leq 927\;\mathrm{MeV}/c$; they are given as functions of the excitation energy $E_x= \sqrt{m_B^2 c^4 + 2m_B c^3 Q_0 - Q^2 c^2} - m_B c^2$. The longitudinal response $R_L$ shows only a relatively small $\Delta$-isobar effect, not documented in the plot, but there is a clear need for relativistic corrections as seen already in the trinucleon charge form factors; the same operator corrections are used there and here. The transverse response $R_T$ is rather well described by the inclusion of the $\Delta$ isobar, as are the trinucleon magnetic form factors in [Fig.]{} \[fig:fm\]; the purely nucleonic calculations of this paper as well as those of [Ref.]{} [@hicks:03a], based on a different two-nucleon potential, fail to account for the experimental data at higher momentum transfers.
Figures \[fig:RF300\] and \[fig:RF500\] contain results for the responses at higher energy transfers including the region of the quasielastic peak. The $\Delta$-isobar effects are rather insignificant. The overall agreement with the experimental data is satisfactory, though a consistent displacement of the quasielastic peak for the responses at higher three-momentum transfer in [Fig.]{} \[fig:RF500\] is obvious. The displacement occurs in all responses, but it is more discernible for the transverse responses, whose peaks are more pronounced. We think that this displacement is due to the use of nonrelativistic kinematics for the baryons involved:
\(1) The estimates for the position of the quasielastic peak, i.e., $Q^2/2m_N$ with relativistic kinematics and ${{\mathbf{Q}}}^2/2m_N$ with nonrelativistic kinematics, differ just by that displacement. (2) In the plane-wave impulse approximation of the responses by one of the present authors in [Ref.]{} [@meier-hajduk:89a], the use of relativistic kinematics in the final phase-space element is important for the achieved agreement with experimental data. However, a calculational improvement with respect to baryon kinematics is not straightforward when the full dynamics is included.
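A back-of-the-envelope check of point (1), assuming a nucleon rest energy $m_N c^2 = 939$ MeV and an illustrative three-momentum transfer of $500\;\mathrm{MeV}/c$ (not a value tied to a particular figure):

```python
import math

M_N = 939.0  # assumed nucleon rest energy in MeV

def qe_peak_nonrel(q3):
    """Nonrelativistic peak estimate: energy transfer Q0 = |Q|^2 / (2 m_N)."""
    return q3 ** 2 / (2.0 * M_N)

def qe_peak_rel(q3):
    """Relativistic peak estimate: solve Q0 = (|Q|^2 - Q0^2) / (2 m_N),
    i.e. Q0 = sqrt(m_N^2 + |Q|^2) - m_N for a free nucleon at rest."""
    return math.sqrt(M_N ** 2 + q3 ** 2) - M_N

q3 = 500.0  # three-momentum transfer in MeV/c
shift = qe_peak_nonrel(q3) - qe_peak_rel(q3)  # displacement of the peak
```

With these numbers the nonrelativistic estimate is about 133 MeV and the relativistic one about 125 MeV, i.e., a displacement of roughly 8 MeV, illustrating the order of magnitude of the peak shift discussed above.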
![\[fig:RLt\] ${{}^3\mathrm{He}}$ inclusive longitudinal response $R_L$ near threshold for the momentum transfer $|{{\mathbf{Q}}}| = 487\;\mathrm{MeV}/c$ as function of the excitation energy $E_x$. Results of the coupled-channel potential with $\Delta$-isobar excitation without (solid curves) and with selected relativistic charge operator corrections (dashed curves) are compared. The experimental data are from [Ref.]{} [@retzlaff:94a].](RLt.eps)
![\[fig:RTt\] ${{}^3\mathrm{He}}$ inclusive transverse response $R_T$ near threshold around the momentum transfers $|{{\mathbf{Q}}}| = 473$, $862$ and $927\;\mathrm{MeV}/c$ from top to bottom as function of the excitation energy $E_x$. Results of the coupled-channel potential with $\Delta$-isobar excitation (solid curves) are compared with reference results of the purely nucleonic CD-Bonn potential (dashed curves). The experimental data are from [Ref.]{} [@hicks:03a], $|{{\mathbf{Q}}}|$ being the value at threshold there.](RTt.eps)


Figures \[fig:A0\] and \[fig:A135\] present results for asymmetries $A({{\mathbf{n}}}_B)$, measured in the experiments of [Refs.]{} [@xu:00a; @xiong:01a] around the four-momentum transfers $Q^2 = 0.1$ and $0.2\,(\mathrm{GeV}/c)^2$. The $\Delta$-isobar effects are rather insignificant. The overall agreement between the experimental data and the theoretical predictions is rather good for the lower four-momentum transfer, whereas at the higher momentum transfer some discrepancies remain.
![\[fig:A0\] The inclusive asymmetry $A$ around $(\theta_B, \varphi_B) = (0^\circ, 0^\circ)$ in ${{}^3 \vec{\mathrm{H}}\mathrm{e} (\vec{e},e')}$ process at four-momentum transfer $Q^2 = 0.1$ and $0.2\,(\mathrm{GeV}/c)^2$ as function of the energy transfer $Q_0$. The incident electron energy is 778 MeV. Results of the coupled-channel potential with $\Delta$-isobar excitation (solid curves) are compared with reference results of the purely nucleonic CD-Bonn potential (dashed curves). The experimental data are from [Ref.]{} [@xu:00a]. ](A0.eps)
![\[fig:A135\] The inclusive asymmetry $A$ around $(\theta_B, \varphi_B) = (135^\circ, 0^\circ)$ in the ${{}^3 \vec{\mathrm{H}}\mathrm{e} (\vec{e},e')}$ process around the four-momentum transfer $Q^2 = 0.1$ and $0.2\,(\mathrm{GeV}/c)^2$ as function of the excitation energy $E_x$. The incident electron energies are 778 and 1727 MeV, and the electron scattering angles are $23.7^{\circ}$ and $15.0^{\circ}$, respectively. Results of the coupled-channel potential with $\Delta$-isobar excitation (solid curves) are compared with reference results of the purely nucleonic CD-Bonn potential (dashed curves). The experimental data are from [Ref.]{} [@xiong:01a].](A135.eps)
Summary and conclusions \[sec:concl\]
=====================================
The present paper completes our discussion [@yuan:02a; @yuan:02b; @deltuva:04a] of e.m. three-nucleon processes below pion-production threshold. Its particular focus is three-body disintegration and inclusive reactions in inelastic electron scattering. The distinctive feature of the description is the use of a realistic coupled-channel potential with single $\Delta$-isobar excitation for the initial and final hadronic states and the use of a corresponding coupled-channel e.m. current with two-baryon contributions. The $\Delta$-isobar effects on observables therefore result from the effective three-nucleon force (and a $\Delta$-modification of the effective two-nucleon force) and from the corresponding effective two- and three-nucleon exchange currents, all of these effective hadronic and e.m. interactions being mediated by the $\Delta$ isobar and based on the exchange of all considered mesons.
We find large and beneficial $\Delta$-isobar effects for the transverse response in the threshold region at rather high momentum transfer $|{{\mathbf{Q}}}| > 800\;\mathrm{MeV}/c$; all purely nucleonic calculations fail to account for the corresponding experimental data. We also predict rather significant $\Delta$-isobar effects for the exclusive differential cross section in particular kinematical regimes. For the considered response functions and inclusive asymmetries up to $|{{\mathbf{Q}}}| = 500\;\mathrm{MeV}/c$ the $\Delta$-isobar effects found are small.
We see a need for an improvement of the presented theoretical apparatus in three respects:
\(1) As already discussed in [Ref.]{} [@deltuva:04a], the employed baryonic potentials and the respective e.m. currents are not fully consistent, the potentials being nonlocal and the currents being local. According to our exploratory investigation [@deltuva:phd], this lack of current conservation is, in practice, not serious for the observables of electrodisintegration considered in this paper. Nevertheless, the development and use of an improved, consistent e.m. current remain conceptually quite desirable.
\(2) Future experiments will focus on processes above pion-production threshold, even if only purely nucleonic channels are selected for the explicit observation. The data of [Ref.]{} [@groep:00a] corresponding to our predictions of [Fig.]{} \[fig:LQ\] are only one example. Thus, for those processes the present description of the dynamics without an explicit pion channel is clearly insufficient; an improvement is quite desirable. That improvement is also quantitatively important, as [Fig.]{} \[fig:LQ\] proves.
\(3) The four-vector e.m. current is a relativistic concept. Thus, the description of the hadronic initial and final states should be based on covariant dynamic equations. Such an extension of the present theoretical description is highly desirable.
The authors thank H. W. Hammer and U. G. Meissner for providing them with their new parametrization of nucleonic form factors, H. Henning, E. Jans, J. Jourdan and I. Sick for fruitful discussions, and J. Golak and I. Nakagawa for helping them to obtain experimental data. A.D. and L.P.Y. are partially supported by the DFG grant Sa 247/25, J.A. by the grant GA CzR 202/03/0210 and by the projects ASCR AV0Z1048901 and K1048102. The numerical calculations were performed at Regionales Rechenzentrum für Niedersachsen.
\[app:RF\] Integral equation for current matrix element
=======================================================
In this appendix the current matrix elements for two- and three-body electro disintegration of the trinucleon bound state, i.e., $\langle\psi^{(-)}_{\alpha} ({{\mathbf{q}}}_f) \nu_{\alpha_f}|
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}})| B \rangle $ and $\langle\psi^{(-)}_{0} ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f}|
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle $, are calculated.
The antisymmetrized fully correlated three-nucleon scattering states of internal motion in nucleon-deuteron channels, i.e., $\langle\psi^{(-)}_{\alpha} ({{\mathbf{q}}}_f) \nu_{\alpha_f}|$, and in three-body breakup channels, i.e., $\langle\psi^{(-)}_{0} ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f}|$, are not obtained explicitly; they are calculated only implicitly when forming current matrix elements. We introduce the state $|X^{\mu}(Z) \rangle$, defined according to
\[eq:X\] $$\begin{aligned}
\label{eq:Xa}
|X^{\mu}(Z) \rangle = {} & \big( 1+P \big)
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle
\nonumber \\ &
+ P T(Z) G_0(Z) |X^{\mu}(Z) \rangle, \\ \label{eq:Xb}
|X^{\mu} (Z)\rangle = {} & \sum_{n=0}^{\infty} [P T(Z) G_0(Z)]^n
\nonumber \\ & \times
\big( 1+P \big) j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle,
\end{aligned}$$
as an intermediate quantity, with $Z=E_i+i0$ being the available three-particle energy, $T(Z)$ the two-baryon transition matrix and $P$ the sum of the cyclic and anticyclic permutation operators of three particles. Equation is an integral equation for $|X^{\mu}(Z)\rangle$, analogous to that for the multichannel transition matrix $U(Z)$ of [Ref.]{} [@deltuva:03a]: Both equations have the same kernel; only their driving terms differ. We therefore solve [Eq.]{} according to the technique of [Ref.]{} [@deltuva:03a], summing the Neumann series for $|X^{\mu} (Z)\rangle$ by the Padé method. Once $|X^{\mu} (Z)\rangle$ is calculated, the current matrix elements required for the description of two- and three-body electro disintegration of the trinucleon bound state are obtained according to
\[eq:X2J\] $$\begin{aligned}
\label{eq:X2Ja}
\langle \psi^{(-)}_{\alpha} &({{\mathbf{q}}}_f) \nu_{\alpha_f}|
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}})| B \rangle
\nonumber \\ = {} &
{ \frac{1}{\sqrt{3}} }
\langle \phi_{\alpha} ({{\mathbf{q}}}_f) \nu_{\alpha_f}| X^{\mu} (Z)\rangle,
\\ \nonumber
\langle \psi^{(-)}_{0} & ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f}|
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle \\ = {} &
{ \frac{1}{\sqrt{3}} }
\langle \phi_{0} ({{\mathbf{p}}}_f {{\mathbf{q}}}_f) \nu_{0_f}|
\big( 1+P \big)
\big[ j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle \nonumber \\ &
+ T(Z) G_0(Z) |X^{\mu} (Z)\rangle \big]. \label{eq:X2Jb}
\end{aligned}$$
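The Padé summation of the Neumann series (\[eq:Xb\]) can be illustrated on a scalar toy series. The sketch below (our own illustrative Python, not the actual multichannel code) uses Wynn's epsilon algorithm, whose even-order entries reproduce the Padé approximants built from the partial sums; the slowly converging alternating harmonic series stands in for the Neumann series:

```python
import math
import numpy as np

def wynn_epsilon(S):
    """Accelerate a sequence of partial sums S_0..S_{n-1} with Wynn's
    epsilon algorithm; the even-order epsilon entries reproduce Pade
    approximants of the underlying series."""
    n = len(S)
    e = np.zeros((n, n + 1))
    e[:, 1] = S                     # epsilon_0^{(j)} = S_j; column 0 is zero
    for k in range(2, n + 1):
        for j in range(n - k + 1):
            e[j, k] = e[j + 1, k - 2] + 1.0 / (e[j + 1, k - 1] - e[j, k - 1])
    # highest even-order epsilon reachable from n partial sums
    return e[0, n if n % 2 == 1 else n - 1]

# toy series: the alternating harmonic series converges slowly to ln 2
S = np.cumsum([(-1) ** (m + 1) / m for m in range(1, 11)])
print(abs(S[-1] - math.log(2)))             # plain partial sum: error ~5e-2
print(abs(wynn_epsilon(S) - math.log(2)))   # accelerated: orders of magnitude smaller
```

The same acceleration idea applies when the partial sums are the vector iterates $[P T G_0]^n (1+P) j^{\mu} |B\rangle$ rather than scalars.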
When calculating the inclusive response functions, the integration over all final hadronic states is performed implicitly, following the strategy of [Ref.]{} [@deltuva:04a] for calculating the total cross section of photodisintegration. We define the general spin-dependent response function as
\[eq:RFM\] $$\begin{aligned}
\label{eq:RFMa}
R_{\mathcal{M}_B' \mathcal{M}_B}^{\lambda' \lambda} (Q) = {} &
\epsilon_{\nu}^{\ast} ({Q} \lambda')
\langle B \mathcal{M}_B'|[j^{\nu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) ]^{\dagger}
\nonumber \\ & \times
\delta (E_i - H_0 - H_I)
\nonumber \\ & \times
j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) |B \mathcal{M}_B \rangle \,
\epsilon_{\mu} ({Q} \lambda), \\ \label{eq:RFMb}
R_{\mathcal{M}_B' \mathcal{M}_B}^{\lambda' \lambda} (Q) = {} &
- \frac{1}{\pi} \mathrm{Im} \big\{ \epsilon_{\nu}^{\ast} ({Q} \lambda')
\langle B \mathcal{M}_B'|[j^{\nu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) ]^{\dagger}
\nonumber \\ & \times
G({E_i \! + \!i0}) j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) |B \mathcal{M}_B \rangle \,
\epsilon_{\mu} ({Q} \lambda) \big\}
\end{aligned}$$ with the effective polarization vectors $\epsilon (Q \lambda=\pm 1) = \mp \frac{1}{\sqrt{2}}(0,1,\pm i,0)$ and $\epsilon (Q \lambda=0) = (1,0,0,0)$. The latter choice assumes current conservation, i.e., the longitudinal part of the spatial current is replaced by the charge density $j^{0} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) = j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) \epsilon_{\mu} (Q \lambda=0)$, resulting in an effective form of $\epsilon (Q \lambda=0)$ that differs from the standard one given, e.g., in [Ref.]{} [@yuan:02b]. In contrast to the rest of this paper, we indicate the dependence on the spin projection $\mathcal{M}_B$ of the trinucleon bound state in Eqs. (\[eq:RFM\]) explicitly. Note that all $R_{\mathcal{M}_B' \mathcal{M}_B}^{\lambda' \lambda} (Q)$ with $\mathcal{M}_B' + \lambda' \neq \mathcal{M}_B + \lambda$ vanish. The auxiliary state $G({E_i \! + \!i0}) j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle $ of Eq. (\[eq:RFMb\]) is related to $|X^{\mu}({E_i \! + \!i0}) \rangle$ according to $$\begin{aligned}
\label{eq:RFMc}
G({E_i \! + \!i0}) & j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle
\nonumber \\ = {} &
\frac13 (1+P) G_0({E_i \! + \!i0}) \big[ j^{\mu} ({{\mathbf{Q}}}, {{{\mathbf{K}}}_{+}}) | B \rangle \nonumber \\
& + T({E_i \! + \!i0}) G_0({E_i \! + \!i0}) |X^{\mu} ({E_i \! + \!i0})\rangle \big].
\end{aligned}$$
The spin-averaged longitudinal and transverse response functions $R_L(Q)$ and $R_T(Q)$ and the spin-dependent transverse and transverse-longitudinal response functions $R_{T'}(Q)$ and $R_{TL'}(Q)$ are calculated according to
\[eq:RF\] $$\begin{aligned}
\label{eq:RL}
R_L(Q) = {} & \frac12 \mathrm{Tr} \big[ R^{00} (Q) \big] , \\
R_T(Q) = {} & \frac12 \sum_{\lambda= \pm 1}
\mathrm{Tr} \big[ R^{\lambda \lambda} (Q) \big], \\
R_{T'}(Q) = {} & \frac12 \sum_{\lambda= \pm 1} \lambda
\mathrm{Tr} \big[ R^{\lambda \lambda} (Q) \sigma_{Bz} \big], \\
R_{TL'}(Q) = {} & \frac12 \sum_{\lambda' \lambda}
\mathrm{Tr} \big[ R^{\lambda' \lambda} (Q) \sigma_{Bx} \big] .
\end{aligned}$$
In Eqs. (\[eq:RF\]) traces are calculated with respect to the spin quantum numbers $\mathcal{M}_B$ of the trinucleon bound state; $\sigma_{Bj}$ are the ordinary spin operators of a spin-$\frac12$ particle, i.e., the Pauli matrices, which refer in this context to the three-nucleon target.
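The trace formulas above can be transcribed directly into a few lines of code. In the following sketch (illustrative Python; the toy diagonal matrices standing in for the computed $R^{\lambda' \lambda}_{\mathcal{M}_B' \mathcal{M}_B}$ are our own assumption), each $R^{\lambda' \lambda}$ is a $2\times 2$ matrix in the $\mathcal{M}_B$ basis:

```python
import numpy as np

sigma_z = np.diag([1.0, -1.0])                  # Pauli matrices acting on M_B
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

def response_functions(R):
    """R[(lp, l)] is the 2x2 matrix R^{lp l} in the bound-state spin basis;
    returns (R_L, R_T, R_T', R_TL') from the trace formulas."""
    lams = (-1, 0, 1)
    R_L = 0.5 * np.trace(R[(0, 0)])
    R_T = 0.5 * sum(np.trace(R[(l, l)]) for l in (-1, 1))
    R_Tp = 0.5 * sum(l * np.trace(R[(l, l)] @ sigma_z) for l in (-1, 1))
    R_TLp = 0.5 * sum(np.trace(R[(lp, l)] @ sigma_x)
                      for lp in lams for l in lams)
    return R_L, R_T, R_Tp, R_TLp

# toy input: every R^{lp l} diagonal and identical in structure,
# so the spin-dependent responses R_T' and R_TL' vanish
R = {(lp, l): np.diag([1.0, 3.0]) for lp in (-1, 0, 1) for l in (-1, 0, 1)}
print(response_functions(R))   # (2.0, 4.0, 0.0, 0.0)
```

The vanishing of $R_{T'}$ and $R_{TL'}$ for this diagonal, helicity-independent toy input mirrors the fact that these responses require a polarization asymmetry.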
, , , , , ****, ().
, , , , , ****, ().
, ****, ().
, , , ****, ().
, , , ****, ().
, , , , , ****, ().
, , , , , , ****, ().
, , , , , , ****, ().
, , , , , , ****, ().
, , , ****, ().
, , , ****, ().
, Ph.D. thesis, (), <http://edok01.tib.uni-hannover.de/edoks/e002/300225598.pdf>.
, Ph.D. thesis, ().
, , , , , , ****, ().
(), .
, ****, ().
, , , , , ****, ().
, ****, ().
, , , , ****, ().
, , , ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
, ****, ().
[^1]: On leave from Institute of Theoretical Physics and Astronomy, Vilnius University, Vilnius 2600, Lithuania
|
---
abstract: 'We show that in colloidal models of artificial kagome and modified square ice systems, a variety of ordering and disordering regimes occur as a function of biasing field, temperature, and colloid-colloid interaction strength, including ordered monopole crystals, biased ice rule states, thermally induced ice rule ground states, biased triple states, and disordered states. We describe the lattice geometries and biasing field protocols that create the different states and explain the formation of the states in terms of sublattice switching thresholds. For a system prepared in a monopole lattice state, we show that a sequence of different orderings occurs for increasing temperature. Our results also explain several features observed in nanomagnetic artificial ice systems under an applied field.'
address: |
$^1$Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545\
$^2$Faculty of Mathematics and Computer Science, Babes-Bolyai University, RO-400591 Cluj-Napoca, Romania
author:
- 'C.J. Olson Reichhardt$^{1}$, A. Lib[á]{}l$^{2}$, and C. Reichhardt$^{1}$'
title: 'Multi-Step Ordering in Kagome and Square Artificial Spin Ice '
---
Introduction
============
Spin ices have been extensively studied as ideal systems that exhibit geometric frustration effects since not all the pairwise spin interactions can be satisfied simultaneously [@Anderson; @Gingras; @Pauling; @Anderson2]. Such systems are termed spin ices due to their similarity with a water ice phase in which proton ordering into corner-sharing tetrahedra is frustrated but obeys the “ice rule” of two protons “in” (close to the O atom) and two protons “out” (far from the O atom) in every tetrahedron [@Pauling]. In spin ices, the corresponding spin ice rules determine how many spins point toward or away from each vertex in the system, and these rules vary depending on the geometry of the system. For example, in two-dimensional (2D) square ice, ice-rule-obeying states have two spins pointing toward each vertex and two spins pointing away, and it is possible for the spins to organize into an ordered ground state configuration [@Shiffer; @Moller]. For 2D kagome ice, ice-rule-obeying states have two spins in and one spin out, or two spins out and one spin in; in this case, the lowest energy state is not ordered but contains only vertices that obey these ice rules.
There has recently been growing interest in 2D artificial spin ice systems created using nanomagnetic arrays [@Shiffer; @Moller; @Nisoli; @Li; @GB; @EM; @Zabel; @OT; @Ladak; @Cumings; @Mengotti; @Tanaka; @Mo; @S; @P], 2D colloidal particle assemblies [@Libal; @Colloid], or vortices in nanostructured superconductors [@Vortex]. In the nanomagnetic arrays, the orientation of the magnetic moment of each nanoisland produces the effective spin ordering. The colloidal and vortex systems more closely resemble the water ice system in that the repulsive interactions of the colloids or vortices at vertices resemble the interactions between charged protons, and the ice rules indicate the number of colloids or vortices that sit close to or away from a vertex. Experimental versions of these systems allow for direct imaging of the microscopic spin configurations as well as for careful control of numerous parameters that are not accessible in real spin systems.
In the work of Wang [*et al.*]{} [@Shiffer] on 2D square ice, as the interactions between neighboring nanomagnets increased, the system was increasingly filled with ice-rule-obeying vertices; however, the predicted ground state was not observed due to quenched disorder effects in the nanomagnet array. Experiments on kagome artificial ice systems soon followed and showed that the ice rules predicted for that geometry were obeyed [@Cumings]. In more recent square ice studies using more advanced techniques, large regions of the sample adopted the square ice ground state while non-ice-rule obeying vertices created grain boundaries [@GB]. Such grain boundary formation was previously predicted in vortex simulations to occur for weak quenched disorder, while for stronger disorder isolated non-ice-rule defects begin to appear [@Vortex].
Lack of thermalization is an issue for the experimental nanomagnet systems since thermal activation of the spin configurations is weak or nonexistent. This can be partially mitigated by applying a changing external field [@Nisoli; @Li]. It was recently shown that biased states appear under applied external fields for both square and kagome ice systems, while cycled external fields can be used to generate hysteresis curves [@OT; @Zabel; @EM; @Ladak]. For kagome ice, Zabel [*et al.*]{} [@Zabel] observed an ordered monopole state where each vertex has either all spins pointing out or all spins pointing in, with the vertices alternating sign in a crystalline arrangement. Other studies have examined the creation and motion of monopole excitations in the presence of a biasing field for both square and kagome ice systems [@OT; @EM]. Recent theoretical studies have found that 2D kagome ice can undergo a two-stage ordering transition from a paramagnetic state to an ice-rule-obeying state followed by an ordered monopole state [@Chern]. The thermal disordering or melting of the ordered states can be readily studied in colloidal artificial ices, and recently materials have been created with a spin ordering temperature that is near room temperature, making it now possible to melt magnetic artificial spin ices [@K].
Here we show that a variety of different types of orderings are possible in both kagome and modified square ice systems as a function of external field, temperature, and interaction strength for a colloidal artificial spin ice system composed of arrays of elongated optical traps that each capture one colloid. Experimental studies have demonstrated frustration in buckled 2D colloidal layers [@Colloid] while numerical studies have shown that charged colloids in elongated trap arrays exhibit the same spin ice rules observed for nanomagnet arrays [@Libal]. Computational and experimental studies of colloidal ordering on 2D square and triangular substrates have revealed numerous ordered states and multi-step transitions between the states [@Korda; @Reichhardt; @Frey; @Trizac; @Tizac; @Brunner; @Mangold; @Mikael]. The elongated optical traps we consider here permit each colloid to sit in one of two locations at either end of the trap. Arrays of this type of trap have been created experimentally to study various types of ratchet effects, and preferential ordering of the colloids into a state that minimized the colloid-colloid interactions was observed [@Babic]. An advantage of the colloidal system over the nanomagnetic systems is that thermal activation of the effective spin degrees of freedom is possible. The relative importance of the thermal fluctuations can be varied either by adjusting the temperature or by holding the temperature fixed and modifying the strength of the colloid-colloid interactions. It is also possible to bias the colloidal artificial ice system using an electric field.
![ (a) Schematic of artificial square ice system consisting of elongated traps with two potential minima. An elementary unit or vertex consists of four traps that each capture one charged colloid. The effective spin direction is defined to be toward the end of the trap where the colloid is sitting. Ice-rule obeying states have two colloids close to the vertex and two colloids away from the vertex; one of the two possible ground state configurations is illustrated. (b) Schematic of an artificial kagome ice system. Ice-rule obeying states have either two colloids close to the vertex and one away from the vertex or two colloids away from the vertex and one close to the vertex, as shown. []{data-label="fig:1"}](Fig1.ps){width="3.5in"}
[*Simulation-*]{} We consider a 2D array of $N$ elongated traps that each contain a single colloid. Each trap has two potential minima where the colloid can sit, as shown in figure 1(a), where we illustrate a single vertex of the square ice system. The effective spin at each site is defined to point toward the end of the trap occupied by the colloid. The colloids interact with each other via a repulsive screened Coulomb or Yukawa potential given by $V(r_{ij}) = r_{ij}^{-1}\exp(-\kappa r_{ij})$, where $\kappa=4/a_0$ is the inverse screening length and $a_{0}$ is the average spacing between the vertices. Since the colloids are repulsive, it is very energetically costly for all the colloids around a vertex to sit close to the vertex in a positive monopole state; on the other hand, the lowest energy state for a vertex is the negative monopole configuration in which all the colloids sit away from the vertex. Due to geometrical constraints it is not possible for all of the vertices to adopt negative monopole configurations. The most cost-effective way to accommodate the colloid-colloid repulsion in the square ice is for each vertex to have two close colloids and two far colloids in a “two in-two out” state that obeys the square ice rules. These ice rule obeying states can be biased, with the close colloids occupying traps oriented at 90$^\circ$ to each other, or they can be the slightly lower energy ground state configuration shown in figure 1(a) [@Libal]. For kagome ice, shown schematically in figure 1(b), the ice rule obeying vertices have either two in and one out or one in and two out.
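The pair force that enters the dynamics follows from the gradient of this Yukawa potential. A minimal sketch (illustrative Python, in units where the prefactor $F_0 q^2 = 1$, which is our own simplifying choice):

```python
import numpy as np

def yukawa_force(r_i, r_j, kappa):
    """Force on colloid i due to colloid j for V(r) = exp(-kappa r)/r,
    with the prefactor F_0 q^2 set to 1 (illustrative units)."""
    d = r_i - r_j
    r = np.linalg.norm(d)
    # F = -dV/dr * r_hat, with -dV/dr = (1/r^2 + kappa/r) exp(-kappa r)
    return (1.0 / r**2 + kappa / r) * np.exp(-kappa * r) * d / r

# the interaction is repulsive: the force on i points directly away from j
f = yukawa_force(np.array([1.0, 0.0]), np.zeros(2), kappa=4.0)
print(f)   # points along +x
```

The exponential screening means only near neighbors contribute appreciably for $\kappa = 4/a_0$, which is what makes the vertex-level ice-rule picture a good description of the energetics.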
The colloid dynamics evolve according to the following overdamped equation of motion: $$\eta \frac{d{\bf R}_{i}}{dt} = {\bf F}^{cc}_{i} + {\bf F}^{ext} + {\bf F}^{T}_{i} +
{\bf F}^{s}_{i} .$$ Here $\eta$ is the damping constant and the colloid-colloid interaction force is ${\bf F}_{i}^{cc} = -F_0q^2\sum^{N}_{i\neq j}\nabla_i V(r_{ij})$ with $F_{0} = Z^{*2}/(4\pi\epsilon\epsilon_{0})$, $r_{ij}=|{\bf r}_{i} - {\bf r}_{j}|$, and ${\bf {\hat r}}_{ij}=({\bf r}_{i}-{\bf r}_{j})/r_{ij}$. ${\bf r}_{i(j)}$ is the position of particle $i$($j$), $Z^{*}$ is the unit of charge, $\epsilon$ is the dielectric constant of the solvent, and $q$ is the magnitude of the charge on a single colloid. The strength of the repulsion between the colloids can be controlled by varying $q$. The thermal force ${\bf F}^{T}$ arises from Langevin kicks with $\langle {\bf F}^{T}_{i}\rangle = 0$ and $\langle {\bf F}_i(t){\bf F}_j(t^{\prime})\rangle = 2\eta k_{B}T\delta_{ij}\delta(t - t^{\prime})$. The substrate force ${\bf F}^{s}_{i}$ arises from $N$ traps composed of two parabolic ends capping a cylindrical confining area of length $l=1.333a_0$ and width $d_p=0.4a_0$ with a maximum strength of $F_{p}$ and radius $r_{p}$; an additional parabolic barrier of height $f_r$ is placed at the center of the trap to produce two potential minima at each end of the trap [@Libal]. The external biasing force is given by ${\bf F}^{ext}=F_{ext}[\cos(\theta_{ext}){\bf \hat{x}} + \sin(\theta_{ext}){\bf \hat{y}}]$, with $\theta_{ext}=0$ for kagome ice and $\theta_{ext}=45^\circ$ for square ice. For the nanomagnetic system, an in-plane applied external field was used as a biasing force that could align the magnetic moments of the nanoislands. For the colloidal system, the sample can be biased by an in-plane electric field, while for a system of vortices in a type-II superconductor the bias would come from an in-plane applied current. In this work we consider external forces that are strong enough to induce hopping over the central barrier of the traps but not strong enough to cause the colloids to escape from the traps.
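A minimal Euler–Maruyama integration of this overdamped equation for a single coordinate along a trap axis can be sketched as follows (illustrative Python; the quartic double well standing in for the capped-parabola trap, and all parameter values, are our own assumptions rather than the simulation's actual trap model):

```python
import numpy as np

def langevin_step(x, f_ext, rng, eta=1.0, kT=0.0, dt=1e-3, f_r=1.0, x0=0.5):
    """One Euler-Maruyama step of eta dx/dt = F_trap + F_ext + F_T.
    The trap's two minima and central barrier of height f_r are modeled
    by the quartic well V(x) = f_r * (x**2 - x0**2)**2 / x0**4."""
    f_trap = -4.0 * f_r * x * (x**2 - x0**2) / x0**4      # -dV/dx
    f_thermal = np.sqrt(2.0 * eta * kT / dt) * rng.standard_normal()
    return x + (f_trap + f_ext + f_thermal) * dt / eta

# at zero temperature, a bias exceeding the maximum barrier force
# deterministically flips the effective spin to the other end of the trap
rng = np.random.default_rng(0)
x = -0.5                                 # colloid at the "against-bias" end
for _ in range(5000):
    x = langevin_step(x, f_ext=4.0, rng=rng)
print(x)   # colloid has hopped to the positive end
```

Raising `kT` instead of `f_ext` produces the thermally activated hopping used in the melting studies below; the $\sqrt{dt}$ scaling of the thermal kick implements the delta-correlated Langevin noise.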
![ A kagome ice system with no colloid-colloid interactions ($q^2=0$) and $f_r=4.0$ for $F_{ext}$ applied along the $x$ axis. (a) The effective magnetization $M$ for the entire sample (black filled circles) vs time in simulation steps, along with the sublattice magnetizations $M_1$ (green open squares) and $M_{23}$ (red x’s). (b) Simulation protocol: $F_{ext}$ vs time. The system is initially driven into an ice rule obeying state which transitions into a monopole state when $F_{ext}$ becomes negative. (c) $N_i/N$, the fraction of vertices of type $N_i$, vs time. Black filled circles: $N_0/N$; red open squares: $N_1/N$; green x’s: $N_2/N$; blue +’s: $N_3/N$. The monopole states are $N_{0}$ and $N_{3}$. []{data-label="fig:diverge"}](Fig2.ps){width="4.5in"}
![ Black circles: particle locations; open ellipses: trap locations for a $20a_0 \times 20a_0$ section of a kagome ice sample. Colored circles indicate vertex types: $N_0$ (blue), $N_1$ (green), $N_2$ (yellow), and $N_3$ (red). (a) The positive biased ice rule obeying state. (b) The monopole state consisting of an ordered lattice of $N_0$ and $N_3$ vertices. (c) A finite temperature ice-rule obeying non-biased state. (d) A high temperature paramagnetic state where the vertex types are uncorrelated. []{data-label="fig:vt"}](Fig3.ps){width="3.5in"}
Orderings for Kagome Ice
========================
We first consider a kagome ice array in an external field $F_{ext}$ applied along the $x$ axis, in alignment with the axes of 1/3 of the traps in the sample. In figure 2 we plot the fraction of vertex types $N_i/N$, the effective magnetization $M$, and the applied external force $F_{ext}$ as a function of time for a sample with noninteracting colloids at $q^2=0$. The vertex types are defined as follows: $N_0$ has no colloids near the vertex; $N_1$ has one colloid near the vertex and two far from the vertex; $N_2$ has two colloids near the vertex and one far; and $N_3$ has three colloids near the vertex. $N_1$ and $N_2$ vertices obey the kagome ice rules, while $N_0$ and $N_3$ vertices are negative and positive monopoles, respectively.
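The vertex census defined here is easy to make concrete. The sketch below (illustrative Python) classifies a kagome vertex from its three trap occupations and, by exhaustively enumerating the $2^3$ equally likely configurations, recovers the binomial fractions $\binom{3}{n}/8$ expected for an uncorrelated paramagnetic arrangement:

```python
from itertools import product
from math import comb

def kagome_vertex_type(near):
    """near: three booleans, one per trap meeting at the vertex, True when
    that trap's colloid sits at the end closest to the vertex.
    Returns n for vertex type N_n."""
    return sum(near)

# exhaustive census of all 2**3 uncorrelated configurations gives the
# paramagnetic vertex fractions N_n/N = C(3, n) / 8
counts = [0, 0, 0, 0]
for cfg in product([False, True], repeat=3):
    counts[kagome_vertex_type(cfg)] += 1
fractions = [c / 8 for c in counts]
print(fractions)   # [0.125, 0.375, 0.375, 0.125]
```

These are the values toward which the vertex populations relax in the high-temperature paramagnetic state.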
In figure 2(c), the noninteracting colloids begin in a randomized state with $N_0/N = N_3/N = 0.12$ and $N_1/N = N_2/N = 0.38$. As shown in figure 2(b), $F_{ext}$ is gradually increased from zero. Just above $F_{ext} = 1.0$, the system jumps into the biased ice-rule obeying state illustrated in figure 3(a). We define the contribution of each trap to the effective magnetization $M$ according to the component of the effective spin direction that is aligned with $F_{ext}$. Each trap is assigned a value $s_i=\pm 1$ depending on whether it is biased with or against the direction of $F_{ext}$, and this value is then weighted by $\cos(\theta_i)$. The sublattice of trap sites aligned with the $x$ axis has $\theta_i=0$, while the remaining trap sites, oriented at $\pm 60^\circ$ to the $x$ axis, have $\theta_i=60^\circ$. In the biased state the net magnetization is $M/N_{\rm norm} = N_{\rm norm}^{-1}\sum_{i=1}^{N}s_i\cos(\theta_i)= 1.0$, where $N_{\rm norm}=\cos(0^\circ)N/3 + \cos(60^\circ)2N/3$, and in figure 2(a) the sample reaches the fully biased state at $F_{ext}=3.7$.
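The normalized magnetization just defined can be sketched in a few lines (illustrative Python; the six-trap cell with two sublattice-$A$ and four sublattice-$B$ traps is a hypothetical minimal example):

```python
import numpy as np

def kagome_magnetization(s, theta):
    """s: +-1 per trap, positive when the effective spin is biased along
    F_ext; theta: angle of each trap axis to the x axis (radians).
    Normalized by N_norm = sum |cos(theta_i)|, so full bias gives M = 1."""
    w = np.cos(np.asarray(theta))
    return np.sum(np.asarray(s) * w) / np.sum(np.abs(w))

theta = [0.0, 0.0] + [np.pi / 3] * 4   # sublattice A (along x) + sublattice B
print(kagome_magnetization([1, 1, 1, 1, 1, 1], theta))    # fully biased: 1.0
print(kagome_magnetization([-1, -1, 1, 1, 1, 1], theta))  # monopole state: M vanishes
```

The second case, with only sublattice $A$ reversed, is the ordered monopole state; its vanishing net magnetization is why $M$ crosses zero when that state forms.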
![ Schematic of the formation of the ordered monopole state in a kagome ice sample under a biasing drive applied along the $x$ axis. Black circles are colloid positions and ellipses indicate trap locations. The traps can be broken into two sublattices: sublattice $A$ aligned with the $x$ axis (shaded ellipses) and sublattice $B$ oriented at $\pm 60^{\circ}$ from the $x$ axis (open ellipses). For noninteracting colloids ($q^2=0$) the switching or coercive field for sublattice $A$ is lower than for sublattice $B$. Colored circles indicate vertex types: $N_0$ (blue), $N_1$ (green), $N_2$ (yellow), and $N_3$ (red). (a) The system begins in the positively biased ice rule obeying state. (b) As $F_{ext}$ becomes increasingly negative, it crosses the coercive field of sublattice $A$ first. Colloids in sublattice $A$ switch while those in sublattice $B$ do not, creating the ordered monopole state. (c) As $F_{ext}$ continues to decrease, sublattice $B$ switches and the sample reaches the negatively biased ice rule obeying state. []{data-label="fig:vtb"}](Fig4.ps){width="4.5in"}
After the sample is fully biased, we decrease $F_{ext}$ as shown in figure 2(b). At $F_{ext}=-1.7$, the biased state switches into an ordered monopole state where $N_0$ and $N_3$ each jump from $0\%$ of the population to $50\%$ of the population while $N_1$ and $N_2$ simultaneously drop to zero as shown in figure 2(c). The ordered monopole state, illustrated in figure 3(b), is a lattice of positive and negative monopoles and is the same state observed in experiments on kagome nanomagnetic systems under external drives [@Zabel]. The ordered monopole state arises due to the fact that the kagome lattice is effectively composed of two sublattices that have different switching fields under an external $x$-direction drive. This is illustrated schematically in figure 4 where the darker ellipses indicate the sublattice with a lower switching field, which we term sublattice $A$. For the case of non-interacting colloids with $q^2=0$, the switching field for sublattice $A$ is simply the maximum force $f_r$ of the barrier in the center of the trap. Since the applied drive is not aligned in the direction of the traps of sublattice $B$, the switching force for sublattice $B$ has a higher value of $f_{r}/\cos(60^{\circ})$. When we apply a sufficiently large $F_{ext}$ in the positive $x$-direction to cause both the $A$ and $B$ sublattices to switch, the system enters the positively biased state illustrated in figure 4(a). After we reduce $F_{ext}$ back to zero, we begin applying $F_{ext}$ in the negative $x$ direction. Sublattice $A$ switches in the negative $x$ direction first due to its lower threshold, but since the colloids in sublattice $B$ have not yet switched, the ordered monopole state forms as shown in figure 4(b). 
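In this noninteracting limit the state reached on the downward sweep is fixed by the two coercive thresholds alone. A sketch (illustrative Python; the generic barrier force `f_r = 1.0` is our own placeholder rather than the simulation's trap parameters):

```python
import math

def kagome_sweep_state(F_ext, f_r=1.0):
    """State of a noninteracting (q^2 = 0) kagome array after sweeping the
    bias from full positive saturation down to F_ext.  Sublattice A (traps
    along x) switches when |F_ext| exceeds f_r; sublattice B (traps at
    +-60 deg) requires f_r / cos(60 deg) = 2 f_r."""
    thr_A = f_r
    thr_B = f_r / math.cos(math.radians(60.0))
    if F_ext <= -thr_B:
        return "negatively biased ice-rule state"
    if F_ext <= -thr_A:
        return "ordered monopole state"
    return "positively biased ice-rule state"

for F in (0.0, -1.5, -2.5):
    print(F, kagome_sweep_state(F))
```

The window $-2f_r < F_{ext} \le -f_r$, where only sublattice $A$ has reversed, is exactly the regime in which the ordered monopole lattice of figure 4(b) exists.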
For $q^2 \ne 0$ or finite colloid-colloid interactions, creation of monopoles becomes energetically unfavorable, so for sufficiently high $q^2$ the switching of sublattice $A$ simultaneously induces a switching of sublattice $B$, bringing the system directly into the negatively biased state illustrated in figure 4(c) and preventing the formation of the ordered monopole state. For a certain range of nonzero $q^2$, however, the ordered monopole state can still be stabilized when $F_{ext}$ passes through zero. Our results suggest that in the experiments of Zabel [*et al.*]{} [@Zabel] the magnetic islands are in the weakly interacting or noninteracting limit.
![ $N_i/N$ vs temperature $T$ in kagome ice samples with $f_r=1$ that have been initialized in the ordered monopole state; $F_{ext}$ is set to zero before the temperature is applied. Black filled circles: $N_0$; red open squares: $N_1$; green x’s: $N_2$; blue $+$’s: $N_3$. (a) A sample with zero pairwise interactions $q^2=0$. Here the ordered monopole state melts directly into a paramagnetic state. (b) A sample with $q^2=0.3$. As $T$ increases, the monopole density drops to zero while $N_1/N$ and $N_2/N$ pass through peak values of 0.5, indicating that the ordered monopole state melts into an ice-rule obeying state. At higher temperatures, monopoles reappear while $N_{1}/N$ and $N_{2}/N$ decrease and approach the value expected for the paramagnetic state. []{data-label="fig:vta"}](Fig5.ps){width="3.5in"}
We next examine the thermal disordering of the ordered monopole state for varied colloid-colloid interaction strengths. In figure 5(a) we plot the vertex populations $N_{i}/N$ versus temperature $T$ for a noninteracting sample with $q^2=0$, while figure 5(b) shows $N_i/N$ versus $T$ for an interacting sample with $q^2=0.3$. In each case we set $F_{ext}=0$ before applying the temperature. In the noninteracting system in figure 5(a), the ordered monopole state persists up to $T = 2.5$, at which point the $N_{i}/N$ cross over to the values expected for an uncorrelated random arrangement or paramagnetic state. In figure 5(b) for $q^2 = 0.3$, $N_{3}/N$ and $N_{0}/N$ drop to zero with increasing $T$ while $N_{1}/N$ and $N_{2}/N$ increase to $0.5$, indicating a crossover from the ordered monopole state into an ice-rule obeying state. This is not the biased ordered state illustrated in figure 3(a); instead, the ice-rule obeying state lacks true long range order, as shown in figure 3(c). As $T$ is further increased, $N_{3}/N$ and $N_{0}/N$ gradually increase while $N_{1}/N$ and $N_{2}/N$ decrease to values close to those expected for a random or paramagnetic state of the type illustrated in figure 3(d).
![ Phase diagram for $T$ vs pairwise interaction strength $q^2$ for the kagome ice system with $f_r=1$ indicating regions where the ordered monopole, ice-rule obeying, and random or paramagnetic states appear. For weak interactions (small $q^2$) the monopole state disorders directly into the random state, while the width of the ice-rule obeying region grows with increasing $q^2$. Red symbols: beyond the highest value of $q^2$ where the ordered monopole state can be created with a biased drive protocol, the system can be artificially placed into the ordered monopole state and then melted into the ice-rule obeying state. []{data-label="fig:diverge2"}](Fig6.ps){width="4.5in"}
By conducting a series of simulations, we map out the different orderings as a function of temperature and pairwise interactions as shown in figure 6. The line separating the ice rule obeying state from the random state is defined as the point at which $N_{1}/N$ and $N_{2}/N$ reach values that are within 10% of those expected for a random vertex arrangement. For weak interactions or small $q^2$ the ordered monopole state melts directly into the random state. As $q^2$ increases, the ice-rule obeying region increases in extent while the line marking the crossover between the ordered monopole state and the ice-rule obeying state drops to lower $T$. These results suggest that the ordered monopole states can only be observed for systems with effective spin-spin interactions that are weak compared to the strength of the switching or coercive field. For $q^2 > 0.4$ the ordered monopole state can no longer be prepared by means of the external field protocol since the repulsion between colloids becomes strong enough that switching of the $A$ sublattice immediately induces switching of the $B$ sublattice. Nevertheless, the ordered monopole state is still stable at lower $T$ for $q^2>0.4$ as can be seen by artificially preparing the lattice in the ordered monopole state and then increasing the temperature. Such artificial preparation could be achieved in colloidal systems by, for example, tuning the colloid-colloid interactions to a lower value in order to create the ordered monopole state and then increasing the interactions to a higher value; however, it is generally not possible to vary the interaction strength in magnetic systems. The line marking the melting into the ice-rule obeying state obtained by using artificially created ordered monopole states is marked with a dashed line in figure 6.
![ (a) Schematic of a modified square ice sample containing two trap sublattices $A$ (shaded ellipses) with central barrier $f_r^1$ and $B$ (open ellipses) with central barrier $f_r^2$. We take $f_r^2/f_r^1=2$ so that sublattice $A$ has a lower threshold for switching. $F_{ext}$ is applied along a line oriented at $45^\circ$ to the $x$ axis. (b) A positively biased ice rule obeying state is formed by initial driving in the positive direction, $\theta_{ext}=45^\circ$. Yellow circles: positively biased $N_2^b$ vertices. (c) When the driving direction is switched to the negative direction, sublattice $A$ flips first producing a checkerboard monopole state. Blue circles: $N_0$ vertices; red circles: $N_4$ vertices. The colloid-colloid interactions must be weak in order for the ordered monopole state to remain stable. []{data-label="fig:diverge3"}](Fig7.ps){width="4.5in"}
![ A modified square ice system as illustrated in figure 7 with $q^2=0.4$, $f_r^1=1.0$, and $f_r^2=2.0$ for $F_{ext}$ applied along a line tilted at $45^\circ$ to the $x$ axis. (a) $N_i/N$, the fraction of vertices of type $N_i$, vs time in simulation time steps. Black filled circles: $N_0$; red open squares: $N_1$; green $+$’s: biased $N_2^{b}$ vertices; purple open triangles: ground state $N_2^{gs}$ vertices; orange open diamonds: $N_3$; blue $x$’s: $N_4$. The monopole states are $N_0$ and $N_4$. (b) $F_{ext}$ vs time. The system switches into a positively biased ice rule obeying state with $N_2^{b}/N=1.0$ near $F_{ext}=1.3$. After the drive has been reversed, the system switches into an ordered monopole state with $N_0/N=N_4/N=0.5$ at $F_{ext}=-0.65$ and then orders into a negatively biased ice rule obeying state at $F_{ext}=-1.3$. []{data-label="fig:divergeO"}](Fig8.ps){width="4.5in"}
Orderings For Square Ice
========================
In order to create an easily accessible ordered monopole state for a square ice lattice, we propose a simple extension of the standard square ice geometry where we now have two sublattices as illustrated in figure 7. Each sublattice has a different strength of the central barrier, $f_r^1$ and $f_r^2$, and we take $f_r^2/f_r^1=2$. In figure 7(a), the darker traps have the weaker barriers. Such a geometry could be produced in the nanomagnetic system by using two different sizes or two different materials for the nanoislands, producing two different coercive field values. We apply a biasing force along a line oriented at $45^{\circ}$ to the $x$ axis. In the square ice array, the vertex types $N_0$, $N_1$, $N_2$, $N_3$, and $N_4$ have 0, 1, 2, 3, and 4 colloids close to the vertex, respectively. The $N_2$ vertices are further subdivided into biased vertices $N_2^b$ where the two close colloids are in adjacent traps as in figure 7(b), and ground state vertices $N_2^{gs}$ where the two close colloids are on opposite sides of the vertex as in figure 1(a). In figure 8 we plot the external drive $F_{ext}$ and fraction of vertex types $N_i/N$ versus time. The system begins in a random state with $N_{4}/N + N_{0}/N = N_2^{gs}/N = 0.125$ and $N_{1}/N = N_{2}^{b}/N = N_{3}/N = 0.25$. As $F_{ext}$ increases past $F_{ext}=1.3$, the sample enters the positively biased ice rule obeying state illustrated in figure 7(b), with $N_2^b/N=1$. When the external drive reaches $F_{ext}=2.0$, we begin decreasing $F_{ext}$ all the way to $F_{ext}=-2.0$. The biased state persists until $F_{ext} = -0.67$, at which point a checkerboard monopole state appears as illustrated in figure 7(c). In analogy to the kagome ice system, in the two-sublattice square ice sample the monopole state forms when sublattice $A$, with its weaker barrier, switches before sublattice $B$.
If the colloid-colloid interaction strength is small enough, the particles in sublattice $B$ remain in their original positions, producing the monopole state shown in figure 7(c). As $F_{ext}$ is further increased in the negative direction, sublattice $B$ switches at $F_{ext}=-1.3$ to form a negatively biased ice rule obeying state aligned with the drive. We note that in our previous work on square ice with only a single trap sublattice, monopole formation was rare due to its high energetic cost.
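The square ice vertex census, including the split of $N_2$ into biased and ground-state vertices, can be sketched as follows (illustrative Python); an exhaustive count over all $2^4$ uncorrelated configurations reproduces the random-state fractions quoted above:

```python
from itertools import product

def square_vertex_type(near):
    """near: four booleans in cyclic trap order (N, E, S, W) around a
    vertex, True when the colloid sits at the end nearest the vertex."""
    n = sum(near)
    if n != 2:
        return f"N_{n}"
    i, j = [k for k, flag in enumerate(near) if flag]
    # opposite traps (cyclic indices differing by 2) -> ground state vertex,
    # adjacent traps -> biased vertex
    return "N_2^gs" if j - i == 2 else "N_2^b"

counts = {}
for cfg in product([False, True], repeat=4):
    t = square_vertex_type(cfg)
    counts[t] = counts.get(t, 0) + 1
print({t: c / 16 for t, c in sorted(counts.items())})
# N_0 + N_4 = 0.125, N_2^gs = 0.125, N_1 = N_2^b = N_3 = 0.25
```

Only 2 of the 16 configurations are ground-state vertices, which is why the $N_2^{gs}$ fraction of a random state is so small and why reaching the true ground state requires thermal ordering rather than chance.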
![ The vertex populations $N_i/N$ vs temperature $T$ in a modified square ice system with $f_r^1=1.0$ and $f_r^2=2.0$ that has been prepared in a monopole state with $F_{ext}$ set to zero. Black filled circles: monopole states $N_0/N+N_4/N$; red open squares: $N_1/N + N_3/N$; green $x$’s: $N_2^{b}/N$ biased vertices; blue $+$’s: $N_2^{gs}/N$ ground state vertices. (a) A sample of noninteracting colloids with $q^2=0$ passes directly from a monopole state to a disordered state. (b) A sample with $q^2 = 0.2$ shows the formation of the biased triple state followed by the thermally induced biased ice rule obeying state and then a random or paramagnetic state. (c) In a sample with $q^2 = 0.4$, the monopole state is followed by the biased triple state, the thermally induced biased ice rule obeying state, the thermally induced ground state, and the paramagnetic state. []{data-label="fig:divergen"}](Fig9.ps){width="4.5in"}
![ Black circles: particle locations; open ellipses: trap locations for a $20a_0 \times 20a_0$ section of a modified square ice system with $f_r^1=1.0$, $f_r^2=2.0$, and $q^2=0.4$. The two trap sublattices are indicated by ellipses of different sizes. Colored circles indicate vertex types: $N_0$ (blue), $N_1$ (green), biased $N_2^{b}$ (yellow), ground state $N_2^{gs}$ (gray), $N_3$ (orange), and $N_4$ (red). (a) The ordered monopole state. (b) The biased triple state. (c) The thermally induced biased ice rule obeying state. (d) The thermally induced ground state. (e) The paramagnetic or uncorrelated state. []{data-label="fig:divergem"}](Fig10.ps){width="5.5in"}
We next consider thermal effects on the ordered monopole state. To prepare the sample, we sweep $F_{ext}$ to a value at which the monopole state appears and then switch off $F_{ext}$. If the pairwise interactions between colloids are not too strong, the monopole state remains stable even without a biasing drive. Figure 9(a) shows the vertex populations $N_i/N$ versus temperature $T$ for a system of noninteracting colloids with $q^2=0$. Here the monopole state illustrated in figure 10(a) disorders into the paramagnetic state illustrated in figure 10(e), with $N_0/N$ and $N_4/N$ monotonically dropping from $0.5$ to around $0.06$ but never falling below this value. In samples with interacting colloids, such as the system with $q^2=0.2$ shown in figure 9(b), $N_0/N$ and $N_4/N$ both drop from 0.5 to 0 with increasing $T$ indicating that all monopoles disappear from the system. As $T$ is increased further, $N_0/N$ and $N_4/N$ gradually increase back to the paramagnetic value of 0.0625. For $1.9 \leq T \leq 2.3$, where $N_0/N$ and $N_4/N$ are dropping to zero, the values of $N_1/N$ and $N_3/N$ pass through a peak. This is followed by a window $2.3 \leq T \leq 3.3$ where $N_1/N$ and $N_3/N$ drop again while the value of $N_2^{b}/N$ peaks. A weak peak in $N_2^{gs}/N$ appears in the vicinity of $T \sim 3.5$, while at higher temperatures the vertex populations approach the values expected in a completely random configuration.
The same trends are more clearly evident in figure 9(c) for a sample with larger pairwise interactions, $q^2=0.4$. Here the highest value reached by $N_1/N + N_3/N$ is nearly 0.9, indicating that large portions of the sample are filled with $N_1$ or $N_3$ vertices. The corresponding colloid configuration is illustrated in figure 10(b), where local ordering of the $N_1$ and $N_3$ vertices into a lattice can be seen. We term this state a biased triple state; it forms due to the preferential switching of colloids in the weaker sublattice $A$ traps. As $T$ increases, $N_1/N$ and $N_3/N$ drop while $N_2^{b}/N$ peaks at a value of $0.8$ corresponding to the thermally induced biased ice rule obeying state, illustrated in figure 10(c). At higher $T$, $N_2^b/N$ decreases again while $N_2^{gs}/N$ reaches its peak value of 0.7 corresponding to the thermally induced ground state shown in figure 10(d). For still higher $T$, the $N_i/N$ approach the values expected for a random configuration and the sample enters a paramagnetic state, illustrated in figure 10(e).
The sequence of ordered states we observe for finite pairwise interactions under finite temperatures can be understood by considering the interaction of the metastable monopole state with the two trap sublattices. Our protocol of sweeping $F_{ext}$ into the monopole regime and then switching off $F_{ext}$ at zero temperature causes the sample to be trapped in this metastable state for low temperatures. As $T$ increases from zero, the system first lowers its energy by moving one colloid out of each $N_4$ vertex into an $N_0$ vertex, creating $N_1$ and $N_3$ vertex pairs and generating the biased triple state. This produces the peak in $N_1/N$ and $N_3/N$ near $T=1.6$ in figure 9(c), while figure 10(b) indicates that due to the finite temperature, some non-$N_1$ or $N_3$ vertices still exist in the sample. In the biased triple state, the lower energetic cost of an $N_3$ vertex compared to an $N_4$ vertex means that a higher thermal activation energy is required to destroy $N_3$ vertices compared to $N_4$ vertices. The difference in thermal activation energies determines the size of the temperature window where the biased triple state remains metastable. Once the temperature is large enough, colloids jump out of $N_3$ vertices into neighboring $N_1$ vertices, resulting in the formation of $N_2$ vertices. This hopping is more likely to occur in the weaker traps of sublattice $A$, resulting in the preferential formation of biased $N_2^b$ vertices. As a result, $N_2^b/N$ peaks near $T=2.3$ in figure 9(c) and the sample enters the biased ice rule obeying configuration (with thermal defects) shown in figure 10(c). Since the $N_2^b$ vertices have a slightly higher energy than the ground state $N_2^{gs}$ vertices [@Libal], at even higher temperatures additional hopping produces an increased fraction of $N_2^{gs}$ vertices, placing the system in the thermally induced and thermally defected square ice ground state configuration illustrated in figure 10(d). 
For sufficiently high $T$, the correlations between neighboring traps are lost and the sample enters the paramagnetic state shown in figure 10(e).
![ Phase diagram for $T$ vs $q^2$ for the modified square ice sample with $f_r^1=1.0$ and $f_r^2=2.0$ indicating regions where the ordered monopole, biased triple, biased ice rule obeying, square ice ground state, and random states appear. As $q^2$ increases, the width of the ordered monopole state decreases while the transition to the random state shifts to higher $T$. []{data-label="fig:divergeo"}](Fig11.ps){width="4.5in"}
In figure 11 we plot a phase diagram of the regions in $T$ and $q^2$ space where the different ordered states appear. The lines indicate crossovers and are determined based on when the different $N_i/N$ cross threshold values. As $q^2$ increases, the destruction of the ordered monopole state drops to lower values of $T$. The biased triple state and biased ice rule obeying state maintain roughly constant widths in $T$ as $q^2$ increases, although both phases shift to lower ranges of $T$. In contrast, the square ice ground state, which appears only for larger values of $q^2$, increases in width as $q^2$ increases. Above $q^2=0.45$ we can no longer stabilize the ordered monopole state using our biasing protocol. Our results indicate that a simple modification of the square ice geometry can produce a wealth of orderings in the artificial spin ice system.
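The crossover lines in the phase diagram are located where the vertex fractions cross threshold values. A minimal sketch of such a threshold-crossing scan (function name, threshold and data purely illustrative):

```python
def crossover_temperature(temps, fractions, threshold):
    """First temperature at which a sampled vertex fraction drops
    below `threshold`, located by linear interpolation between the
    bracketing samples; None if the fraction never crosses."""
    pairs = list(zip(temps, fractions))
    for (t0, f0), (t1, f1) in zip(pairs, pairs[1:]):
        if f0 >= threshold > f1:
            return t0 + (f0 - threshold) * (t1 - t0) / (f0 - f1)
    return None

# Illustrative data: a monopole fraction decaying with temperature
temps = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
n4 = [0.50, 0.50, 0.45, 0.30, 0.10, 0.05]
tc = crossover_temperature(temps, n4, 0.25)  # crossing near T = 1.6
```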
We note that the phase diagrams for the kagome and modified square ice presented here do not exhibit true phases since we have driven the system into a metastable state. In many of the nanomagnetic systems that exhibit hysteresis, most of the observed states are also metastable unless a successful annealing protocol has been applied. None of our samples contained quenched disorder. We expect that many of the states we observe would be gradually washed out if increasing amounts of quenched disorder were added; however, there should still be observable correlations in the different states that can be detected by analyzing the vertex population densities.
Summary
=======
We have described the creation of different types of orderings in artificial kagome and modified square ice systems using external fields, temperature, and particle-particle interaction strength. We show that multiple-step ordering-disordering transitions occur that can be identified by counting the vertex populations. In the kagome ice, an ordered monopole state can be induced by applying a biasing field to samples with pairwise interactions that are not too strong. With increasing temperature, the system crosses into an ice rule obeying state and then into a high temperature paramagnetic state. For stronger pairwise interactions, the monopole state is unstable to the formation of the ice rule obeying state. Our results suggest that recent experimental observations of monopole ordering in kagome nanomagnetic systems were performed in a weakly interacting or noninteracting regime. For square ice, we make a simple modification to the geometry by introducing two sublattices of traps that have different switching fields and show that under the application of an external biasing field, this system can form a checkerboard ordered monopole state. The modified square ice exhibits multi-state ordering as a function of temperature starting from the monopole state, passing through a biased triple state, a thermally induced biased ice rule obeying state, a thermally induced square ice ground state, and a disordered or paramagnetic state. Although our results are obtained specifically for colloids on periodic trap arrays, the behavior should be generic to other artificial ice systems under external drives, including nanomagnetic array systems.
Acknowledgements
================
This work was carried out under the auspices of the NNSA of the U.S. Department of Energy at Los Alamos National Laboratory under contract number DE-AC52-06NA25396. The work of A. Lib[' a]{}l was supported by a grant of the Romanian National Authority for Scientific Research, CNCS–UEFISCDI, project number PN-II-RU-TE-2011-3-0114.
References {#references .unnumbered}
==========
[99]{}
Moessner R and Ramirez A P 2006 [*Phys. Today*]{} [**59**]{}(2) 24
Ramirez A P, Hayashi A, Cava R J, Siddharthan R and Shastry B S 1999 [*Nature (London)*]{} [**399**]{} 333; Bramwell S T and Gingras M J P 2001 [*Science*]{} [**294**]{} 1495
Pauling L 1935 [*J. Am. Chem. Soc.*]{} [**57**]{} 2680
Anderson P W 1956 [*Phys. Rev.*]{} [**102**]{} 1008
Wang R F, Nisoli C, Freitas R S, Li J, McConville W, Cooley B J, Lund M S, Samarth N, Leighton C, Crespi V H and Schiffer P 2006 [*Nature (London)*]{} [**439**]{}, 303
M[" o]{}ller G and Moessner R 2006 [*Phys. Rev. Lett.*]{} [**96**]{} 237202
Nisoli C, Li J, Ke X, Garand D, Schiffer P and Crespi V H 2010 [*Phys. Rev. Lett.*]{} [**105**]{} 047205; Lammert P E, Ke X, Li J, Nisoli C, Garand D M, Crespi V H and Schiffer P 2010 [*Nature Phys.*]{} [**6**]{} 786
Li J, Ke X, Zhang S, Garand D, Nisoli C, Lammert P, Crespi V H and Schiffer P 2010 [*Phys. Rev. B*]{} [**81**]{} 092406
Morgan J P, Stein A, Langridge S and Marrows C H 2011 [*Nature Phys.*]{} [**7**]{} 75
Mengotti E, Heyderman L J, Rodriguez A F, Nolting F, H[" u]{}gli R V and Braun H-B 2011 [*Nature Phys.*]{} [**7**]{} 68
Schumann A, Sothmann B, Szary P and Zabel H 2010 [*Appl. Phys. Lett.*]{} [**97**]{} 022509
Ladak S, Read D E, Perkins G K, Cohen L F and Branford W R 2010 [*Nature Phys.*]{} [**6**]{} 359
Ladak S, Read D E, Branford W R and Cohen L F 2011 [*New J. Phys.*]{} [**13**]{} 063032
Qi Y, Brintlinger T and Cumings J 2008 [*Phys. Rev. B*]{} [**77**]{} 094418
Mengotti E, Heyderman L J, Rodr[' i]{}guez A F, Bisig A, Le Guyader L, Nolting F and Braun H B 2008 [*Phys. Rev. B*]{} [**78**]{} 144402; Rougemaille N, Montaigne F, Canals B, Duluard A, Lacour D, Hehn M, Belkhou R, Fruchart O, El Moussaoui S, Bendounan A and Maccherozzi F 2011 [*Phys. Rev. Lett.*]{} [**106**]{} 057209; Phatak C, Petford-Long A K, Heinonen O, Tanase M and De Graef M 2011 [*Phys. Rev. B*]{} [**83**]{} 174431
Tanaka M, Saitoh E, Miyajima H, Yamaoka T and Iye Y 2006 [*Phys. Rev. B*]{} [**73**]{} 052411
M[' o]{}l L A S, Moura-Melo W A and Pereira A R 2010 [*Phys. Rev. B*]{} [**82**]{} 054434
Budrikis Z, Politi P and Stamps R L 2010 [*Phys. Rev. Lett.*]{} [**105**]{} 017201; Mellado P, Petrova O, Shen Y and Tchernyshyov O 2010 [*Phys. Rev. Lett.*]{} [**105**]{} 187206; Budrikis Z, Politi P and Stamps R L 2011 [*Phys. Rev. Lett.*]{} 107 217204
Kohli K K, Balk A L, Li J, Zhang S, Gilbert I, Lammert P E, Crespi V H, Schiffer P and Samarth N [*arXiv:1106.1394*]{}
Lib[' a]{}l A, Reichhardt C and Olson Reichhardt C J 2006 [*Phys. Rev. Lett.*]{} [**97**]{} 228302
Han Y, Shokef Y, Alsayed A M, Yunker P, Lubensky T C and Yodh A G 2008 [*Nature (London)*]{} [**456**]{} 898
Lib[' a]{}l A, Olson Reichhardt C J and Reichhardt C 2009 [*Phys. Rev. Lett.*]{} [**102**]{} 237004
Chern G-W, Mellado P and Tchernyshyov O 2011 [*Phys. Rev. Lett.*]{} [**106**]{} 207202
Kapaklis V, Arnalds U B, Harman-Clarke A, Papaioannou E Th, Karimipour M, Korelis P, Taroni A, Holdsworth P C W, Bramwell S T and Hj[" o]{}rvarsson B [*arXiv:1108.1092*]{}
Korda P T, Spalding G C and Grier D G 2002 [*Phys. Rev. B*]{} [**66**]{} 024504
Reichhardt C and Olson C J 2002 [*Phys. Rev. Lett.*]{} [**89**]{}, 248301
Sarlah A, Frey E and Franosch T 2007 [*Phys. Rev. E*]{} [**75**]{} 021402
Reichhardt C and Olson Reichhardt C J 2009 [*Phys. Rev. E*]{} [**80**]{} 022401
El Shawish S, Dobnikar J and Trizac E 2011 [*Phys. Rev. E*]{} [**83**]{} 041403
Brunner M and Bechinger C 2002 [*Phys. Rev. Lett.*]{} [**88**]{} 248302
Mangold K, Leiderer P and Bechinger C 2003 [*Phys. Rev. Lett.*]{} [**90**]{} 158302
Mikhael J, Roth J, Helden L and Bechinger C 2008 [*Nature (London)*]{} [**454**]{} 501
Babic D and Bechinger C 2005 [*Phys. Rev. Lett.*]{} [**94**]{} 148303
---
abstract: 'We present results from a study of optically emitting Supernova Remnants (SNRs) in six nearby galaxies (NGC2403, NGC3077, NGC4214, NGC4395, NGC4449 and NGC5204) based on deep narrow band [H$\alpha$]{} and [\[S [ii]{}\]]{} images as well as spectroscopic observations. The SNR classification was based on the detected sources that fulfill the well-established emission line flux criterion of [\[S [ii]{}\]]{}/[H$\alpha$]{}$>$ 0.4. This study revealed $\sim$400 photometric SNRs down to a limiting [H$\alpha$]{} flux of 10$^{-15}$ erg sec$^{-1}$ cm$^{-2}$. Spectroscopic observations confirmed the shock-excited nature of 56 out of the 96 sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}>$ 0.3 (our limit for an SNR classification) for which we obtained spectra. Eleven more sources were spectroscopically identified as SNRs although their photometric [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio was below 0.3. We discuss the properties of the optically-detected SNRs in our sample for different types of galaxies and hence different environments, in order to address their connection with the surrounding interstellar medium. We find a difference in the [\[N [ii]{}\]]{}/[H$\alpha$]{} line ratios of the SNR populations between different types of galaxies, which we attribute to metallicity differences. We cross-correlate parameters of the optically detected SNRs ([\[S [ii]{}\]]{}/[H$\alpha$]{} ratio, luminosity) with parameters of coincident X-ray emitting SNRs, derived in our previous studies of the same sample of galaxies, in order to understand their evolution and investigate possible selection effects. We do not find a correlation between their [H$\alpha$]{} and X-ray luminosities, which we attribute to the presence of material in a wide range of temperatures. We also find evidence for a linear relation between the number of luminous optical SNRs (10$^{37}$ erg sec$^{-1}$) and SFR in our sample of galaxies.'
author:
- |
I.Leonidaki,$^{1,2}$[^1] P. Boumis,$^{1}$A.Zezas$^{3,4,5}$\
$^{1}$Institute of Astronomy, Astrophysics, Space Applications $\&$ Remote Sensing, National Observatory of Athens, I.Metaxa and V.Pavlou,\
Lofos Koufou, Penteli, 15236, Athens, Greece\
$^{2}$Astronomical Laboratory, Physics Department, University of Patras, 26500, Rio-Patra, Greece\
$^{3}$Physics Department, University of Crete, P.O. Box 2208, GR-710 03, Heraklion, Crete, Greece\
$^{4}$Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA\
$^{5}$IESL/Foundation for Research and Technology-Hellas, P.O. Box 1527, GR-711 10, Heraklion,\
Crete, Greece
title: 'A multiwavelength study of Supernova Remnants in six nearby galaxies. II. New optically selected Supernova Remnants'
---
\[firstpage\]
ISM: supernova remnants - galaxies: star formation
Introduction
============
Supernova Remnants (SNRs) inject the predominant fraction of the mechanical energy that heats and shapes the Interstellar Medium (ISM), since the shock waves they generate are responsible for the compression, acceleration and excitation of their surrounding medium. At the same time, they replenish the ISM with heavy elements formed during the evolution of massive stars, while, when they are immersed in molecular clouds, the compression may trigger the formation of the next generation of stars. SNRs can yield significant information on the global properties of a galaxy’s ISM such as its density, temperature or composition [e.g. @Blair04]. Furthermore, being the endpoints of massive stars ($M > 8M_{\sun}$) that undergo core collapse, they can be used as proxies for measurements of the massive star formation rate (SFR) and studies of stellar evolution [@Condon90].
Detecting large samples of SNRs in a multi-wavelength context can provide several key aspects of the physical processes taking place during their evolution. For example, the blast waves of newly formed SNRs can heat the material behind the shock front to temperatures up to $10^{8}$ K, producing thermal X-rays. Synchrotron radiation in radio wavelengths is produced in the vicinity of the shock as well as in the cooling regions behind the shock front, owing to relativistic electrons gyrating in the magnetic field of the SNRs [e.g. @Dickel99; @CS95]. Optical filaments are signs of older SNRs since they form in the cooling regions behind the shock [e.g. @Stupar09], producing emission from shock-heated, collisionally ionized species (such as the [\[S [ii]{}\]]{} and [\[O [iii]{}\]]{} forbidden lines) along with [H$\alpha$]{} recombination emission. Therefore, multi-wavelength studies (optical, radio, X-rays, infrared) can surmount possible selection effects inherent in ’monochromatic’ samples of SNRs and provide a more complete picture of their nature and evolution as well as their interplay with the ISM and their correlation with star forming activity.
A total of 274 SNRs detected in different wavebands are known in our Galaxy, a comprehensive catalogue of which is presented in @Green09. A large number of them have been studied in detail in various wavebands (e.g. radio: @Green09; optical: @Boumis02 [@Boumis05; @Boumis09; @Fesen10]; X-rays: @Reynolds09 [@Slane02]; infrared: @Reach06), providing significant information on the properties of individual objects and SNR physics. However, these studies are impeded by distance uncertainties and Galactic absorption, hampering the investigation of SNRs in a wide variety of environments. On the other hand, extragalactic studies of SNRs offer several advantages: they can be performed at well-determined distances with far fewer observations, while they cover a broader range of metallicities and ISM parameters than our Galaxy, giving us a more complete picture of the SNR population parameters.
Numerous studies of extragalactic SNRs have been conducted since the pioneering work of @MC73 on the Magellanic Clouds. The availability of sensitive, high-resolution observations in the radio and X-ray bands have allowed the systematic investigation of extragalactic SNRs in these wavebands [e.g. @Leonidaki10; @Long10; @Pannuti07; @Ghavamian05]. However, since the first studies in a small sample of nearby galaxies [e.g. @MF97; @MFBL97] there have not been any systematic pursuits of the optical populations of extragalactic SNRs.
The availability of sensitive wide-field imagers and spectrographs allows us to greatly extend these initial efforts to a larger set of galaxies, while probing fainter SNR populations. We have embarked on an extensive multi-wavelength investigation of the SNR populations in six nearby galaxies (NGC2403, NGC3077, NGC4214, NGC4395, NGC4449 and NGC5204), involving optical and X-ray data. These galaxies were selected from the Third Catalog of Bright Galaxies (RC3; @deVaucouleurs95) to be (a) late type (T $>$ 4; Hubble type), (b) close ($\leq$5 Mpc) in order to minimize source confusion, (c) at low inclination ($\leq 60\degr$) in order to minimize internal extinction and projection effects, and (d) above the Galactic plane ($|b| > 20\degr$). The properties of the galaxies in our sample are presented in Table 1 while a list of previous multi-wavelength SNR surveys is presented in §2 of @Leonidaki10.
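The four selection cuts can be summarized as a simple filter. The sketch below is illustrative only: the dictionary keys and the approximate NGC4449 parameter values are our own stand-ins, not taken from RC3 or any real catalogue interface.

```python
def passes_selection(galaxy):
    """Apply the four sample-selection cuts described in the text:
    late type (T > 4), distance <= 5 Mpc, inclination <= 60 deg,
    and Galactic latitude |b| > 20 deg.  The dictionary keys are
    illustrative, not from any real catalogue interface."""
    return (galaxy["t_type"] > 4
            and galaxy["distance_mpc"] <= 5.0
            and galaxy["inclination_deg"] <= 60.0
            and abs(galaxy["gal_latitude_deg"]) > 20.0)

# NGC4449 with approximate, illustrative parameter values:
ngc4449 = {"t_type": 10, "distance_mpc": 4.2,
           "inclination_deg": 45.0, "gal_latitude_deg": 72.0}
selected = passes_selection(ngc4449)
```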
From the pool of objects drawn from these selection criteria, we selected galaxies that have Chandra archival data with exposure times long enough to achieve a uniform detection limit of $10^{36}$ erg s$^{-1}$. We opted to focus on Chandra data owing to its superior spatial resolution which allows the detection of faint sources in crowded environments and therefore can provide the most unbiased sample to be correlated with the optical data. This X-ray investigation revealed 37 thermal X-ray SNRs (based on their spectra or hardness ratio colours), 30 of which were new discoveries. In many cases, the X-ray classification was confirmed based on counterparts with SNRs identified in other wavelengths. We found that X-ray selected SNRs in irregular galaxies appear to be more luminous than those in spirals due to the lower metallicities and therefore more massive progenitor stars of irregular galaxies or the higher local densities of the interstellar medium. A comparison of the numbers of observed luminous X-ray-selected SNRs with those expected based on the luminosity functions of X-ray SNRs in the Magellanic Clouds and M33 suggested different luminosity distributions between the SNRs in spiral and irregular galaxies with the latter tending to have flatter distributions. These results are presented in the first paper of this series ([@Leonidaki10], hereafter Paper I).
In this paper we present a detailed optical spectro-photometric study of the SNR populations in this sample of galaxies. The optical identification of SNRs is based on the elevated [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio ($\geq$ 0.4), pioneered by @MC73. The outline of this paper is as follows: In §2, we describe the observations, including data reduction and techniques used for source detection and photometry. In §3, we describe the long-slit and multi-slit spectroscopic observations, while in §4 we give the SNR classification criteria as well as aggregate results of the detected SNRs in our sample of galaxies. In §5 we discuss the results of our spectro-photometric investigation. Finally, in §6 we present the conclusions of this work.
Imaging
=======
Observations
------------
Optical images were obtained with the 1.3m (f/7.7) Ritchey–Chrétien telescope at Skinakas Observatory on June 6-12, 2008 and Nov 16-18, 2009. A 2048 $\times$ 2048 ANDOR Tech CCD was used, which has a $9.6\arcmin \times 9.6\arcmin$ field of view and an image scale of 0.28$\arcsec$ per pixel. Apart from Nov 18, 2009, all the other observing nights were photometric with seeing conditions ranging between 1.3$\arcsec$-2.5$\arcsec$. The observations were performed with the narrow band [H$\alpha$]{} + [\[N [ii]{}\]]{}, [\[S [ii]{}\]]{} and [\[O [iii]{}\]]{} filters. Broadband continuum filters in red and blue were used to subtract the continuum from the [H$\alpha$]{}+ [\[N [ii]{}\]]{}, [\[S [ii]{}\]]{} and [\[O [iii]{}\]]{} images respectively. The continuum filters used for the observations are centered on line-free regions of the spectra in order to prevent strong SNR emission lines from passing through. The interference filter characteristics are listed in Table 2.
The exposure time was 3600 sec for each [H$\alpha$]{} + [\[N [ii]{}\]]{} filter exposure, 7200 sec for each [\[S [ii]{}\]]{} filter exposure and 300 sec for the exposures through the continuum filters. The airmass of the galaxies during observations ranged between 1.06 and 1.87. In the case of NGC2403, the $9.6\arcmin$ CCD field of view did not cover the whole D$_{25}$[^2] area of the galaxy, therefore we obtained a 2$\times$2 mosaic. We did not observe NGC2403 and NGC4395 through [\[O [iii]{}\]]{} or continuum blue filters owing to weather conditions. Bias frames and well-exposed twilight flats were taken on each run as well as spectro-photometric standard stars from the list of @Hamuy92.
Data Reduction
--------------
The data reduction was performed using the IRAF V2.14 package[^3]. All images were bias-subtracted and flat-field corrected while the data sets of each filter for each galaxy were median-combined in order to reject the cosmic rays. Star-free areas outside the body of the galaxies were selected in [H$\alpha$]{}+ [\[N [ii]{}\]]{}, [\[S [ii]{}\]]{}, [\[O [iii]{}\]]{} and continuum images in order to subtract the sky background and obtain just the light from each galaxy.
The sky-background subtracted images were aligned to a reference image for each galaxy (e.g. continuum red for [H$\alpha$]{}+ [\[N [ii]{}\]]{} and [\[S [ii]{}\]]{}, continuum blue for [\[O [iii]{}\]]{}) and astrometrically calibrated using the USNO-B1.0 Catalog or SDSS Data Release 7. For each galaxy’s [H$\alpha$]{}+ [\[N [ii]{}\]]{}, [\[S [ii]{}\]]{} and continuum red images, we selected an adequate number (8-10) of the same faint stars ($\sim$15-20 mag) for which we calculated their ([H$\alpha$]{}+ [\[N [ii]{}\]]{}/cont red) and ([\[S [ii]{}\]]{}/cont red) ratios. The mean values of those ratios were used to create normalized-continuum red images for the [H$\alpha$]{}+ [\[N [ii]{}\]]{} and [\[S [ii]{}\]]{} images, respectively. We then subtracted the corresponding normalized-continuum red images from the [H$\alpha$]{}+ [\[N [ii]{}\]]{}, [\[S [ii]{}\]]{} images in order to eliminate the star-light continuum. We did not follow the same procedure for the [\[O [iii]{}\]]{} images since they were used only for visual examination of each source’s [\[O [iii]{}\]]{} emission. The continuum-subtracted [H$\alpha$]{} + [\[N [ii]{}\]]{} and [\[S [ii]{}\]]{} images were flux calibrated, using several spectrophotometric standard stars observed each night and reduced the same way as the galaxy images. We note that the interference [H$\alpha$]{} + [\[N [ii]{}\]]{} filter used includes the [\[N [ii]{}\]]{} 6548 Å and 6584 Å lines. In order to estimate the net [H$\alpha$]{} flux-calibrated images of our galaxies, we corrected for the [\[N [ii]{}\]]{} contamination using the [\[N [ii]{}\]]{}($\lambda\lambda$ 6548, 6584)/[H$\alpha$]{} ratios from integrated spectroscopy of the galaxies from the work of @Kennicutt08.
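The two scaling steps described above, the continuum normalization from faint-star flux ratios and the removal of the [\[N [ii]{}\]]{} contribution using a known [\[N [ii]{}\]]{}/[H$\alpha$]{} ratio, can be sketched as follows (a pure-Python stand-in for the IRAF operations; all numbers are illustrative):

```python
from statistics import mean

def continuum_scale(narrow_fluxes, continuum_fluxes):
    """Normalization factor for the continuum image: the mean of the
    per-star (narrow band / continuum) flux ratios measured for the
    same set of faint stars in both images."""
    return mean(n / c for n, c in zip(narrow_fluxes, continuum_fluxes))

def net_halpha(f_ha_nii, nii_over_ha):
    """Remove the [N II] 6548+6584 contribution from a combined
    Halpha+[N II] flux, given the [N II]/Halpha line ratio:
    F(Halpha) = F(Halpha+[N II]) / (1 + [N II]/Halpha)."""
    return f_ha_nii / (1.0 + nii_over_ha)

# Illustrative numbers only:
scale = continuum_scale([2.0, 2.2, 1.8], [1.0, 1.1, 0.9])   # about 2.0
f_net = net_halpha(1.3e-14, 0.3)  # erg s^-1 cm^-2, about 1.0e-14
```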
Source Detection
----------------
Sources present considerably higher S/N in the [H$\alpha$]{} than in the [\[S [ii]{}\]]{} images; we therefore searched for sources in the continuum-subtracted, flux-calibrated [H$\alpha$]{} data sets of each galaxy, using the Sextractor V2.5.0 package [@Bertin96]. Our goal is to identify faint nebulae in relatively isolated regions as well as to separate possible SNRs from [H[ii]{}]{} or diffuse emission regions. Since Sextractor was used only for detection, the main parameters we adjusted are the following: a) detection threshold set to 1.3-3.5 $\sigma$ above background, b) minimum number of pixels for a detection to be triggered set between 3 and 7 for the different galaxies, c) a background mesh size of 6-10 pixels in order to detect faint sources and account for local variations of the background within the galaxies. All the above parameters were adjusted for each galaxy depending on its background and the detection efficiency of faint sources.
The non-uniform [H$\alpha$]{} background and diffuse emission within the galaxies did not allow us to base the source detection solely on the Sextractor output. For that reason, the results of the Sextractor run for each of the continuum-subtracted, flux-calibrated [H$\alpha$]{} images were visually inspected in order to discard local maxima of [H$\alpha$]{} background or spurious sources associated with bad pixels, and then were used to create source lists. In the case of NGC2403, the fourth frame of the mosaic was observed on a non-photometric night (November 18, 2009); we therefore excluded it from the data analysis procedure. The source detection for NGC2403 was performed individually on each of the three frames of the galaxy mosaic. The detection results from each mosaic frame were then combined in order to form the final list of sources in NGC2403. Each source list was overlaid on the relevant continuum red image of each galaxy in order to eliminate any obvious star-like objects. Since SNRs are identified on the basis of strong [\[S [ii]{}\]]{} emission, we visually inspected the significance of the sources on the relevant continuum-subtracted, flux-calibrated [\[S [ii]{}\]]{} images. The final source list was defined on the basis of clear detection of sources in both [H$\alpha$]{} and [\[S [ii]{}\]]{} images. We note that we opted to use the individual [H$\alpha$]{} and [\[S [ii]{}\]]{} images (rather than the [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio images) for source detection since they tend to be less noisy.
Photometry
----------
We used the [*apphot*]{} package in IRAF in order to perform photometry of the sources identified on the continuum-subtracted, flux-calibrated [H$\alpha$]{} and [\[S [ii]{}\]]{} images. In the case of NGC2403, we created a final list from each mosaic frame and performed photometry on each source in every frame in which it was observed. The final photometric parameters for each source in NGC2403 were derived from the mean value of the parameters in each frame. We used source apertures with diameter set to 8-10 pixels (which corresponds to $\sim$2$\arcsec$-3$\arcsec$) and physical scales of $\sim$32 to $\sim$66 pc for the closest and most distant galaxy, respectively. These apertures were chosen to cover most of the source’s flux in both the [H$\alpha$]{} and [\[S [ii]{}\]]{} images (taking into account the seeing conditions of 1.3$\arcsec$ - 2.5$\arcsec$) while taking care not to encompass other neighbouring sources or diffuse emission. The local background for each source was measured from appropriate annuli of typical sizes of 10 pixels ($\sim$3$\arcsec$). In some special cases with highly non-uniform background, the local background was measured from a neighbouring region. We note that accurate photometry depends strongly on the selected background area, especially in cases of sources embedded in large filaments or regions with enhanced diffuse emission. From measurements for different background regions we find a typical uncertainty on the flux of 40% stemming from the background selection. This uncertainty is minimized in the case of [\[S [ii]{}\]]{} images due to the very low background and the point-like nature of most sources. Extinction correction was not applied to the [H$\alpha$]{} and [\[S [ii]{}\]]{} fluxes since no [H$\beta$]{} observations were obtained for the galaxies in our sample.
We calculated variance maps by applying error propagation to the error map of the initial bias-subtracted, flat-fielded combined images (including readout noise, gain etc). In this calculation we accounted for all the analysis steps taken from the derivation of the fluxed images (continuum-subtraction, flux calibration). The [H$\alpha$]{} and [\[S [ii]{}\]]{} flux errors were estimated by calculating the square root of the measured sum of the variance map within the aperture of each source. The [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio errors were calculated through standard error propagation.
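The standard error propagation for the ratio of two fluxes with independent errors takes the familiar form $\sigma_r = r\sqrt{(\sigma_S/S)^2 + (\sigma_H/H)^2}$; a generic sketch of this formula (not the actual pipeline code):

```python
from math import sqrt

def ratio_with_error(f_sii, sig_sii, f_ha, sig_ha):
    """[S II]/Halpha ratio and its 1-sigma uncertainty, assuming
    independent flux errors:
        sigma_r = r * sqrt((sigma_S/S)**2 + (sigma_H/H)**2)."""
    r = f_sii / f_ha
    sigma_r = r * sqrt((sig_sii / f_sii) ** 2 + (sig_ha / f_ha) ** 2)
    return r, sigma_r

# Illustrative fluxes in arbitrary units:
r, sr = ratio_with_error(4.0, 0.4, 10.0, 0.5)  # r = 0.4, sr about 0.045
```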
Based on the photometric properties of the detected sources, we calculated the [\[S [ii]{}\]]{}/[H$\alpha$]{} flux ratio of the final source list in each galaxy. In order to further examine whether the correction we applied for the [\[N [ii]{}\]]{} contamination (based on the integrated [\[N [ii]{}\]]{}/[H$\alpha$]{} ratios of @Kennicutt08) in the flux-calibrated [H$\alpha$]{} images is appropriate and thus inspect the validity of the measured [H$\alpha$]{} fluxes and [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios, we used the spectroscopic [\[N [ii]{}\]]{}/[H$\alpha$]{} ratios of our spectroscopically-observed SNRs (see §3). From the histogram of the spectroscopic [\[N [ii]{}\]]{}/[H$\alpha$]{} ratios of our SNR sample (Fig. 1) we see that they form two distinct loci: irregular galaxies (apart from NGC3077) extend to lower [\[N [ii]{}\]]{}/[H$\alpha$]{} ratios than spiral galaxies, probably owing to differences in their metallicities (§5.2). This led us to redefine the [\[N [ii]{}\]]{}/[H$\alpha$]{} correction factor, adopting one median value for the SNRs in NGC2403 and NGC3077 and another for the rest of the irregular galaxies, and to correct the [H$\alpha$]{} fluxes and [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios of the detected sources accordingly.
Distinct sources with strong emission in both continuum-subtracted, flux-calibrated [H$\alpha$]{} and [\[S [ii]{}\]]{} images that present ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $>$ 0.4 (within their errorbars) are considered photometric SNRs. We also include sources with 0.3 $<$ ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $<$ 0.4, which may well belong to the SNR regime, since photometry can in many cases yield ambiguous [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios, especially for sources embedded in diffuse emission or near H[ii]{} regions.
In Tables 3-8 we present the photometric properties of the photometric SNRs in each galaxy, while in Table 9 (available in the electronic version) we present the photometric properties of all spectroscopically-observed sources which were not identified as SNRs (([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ $<$ 0.4). In Column 1 we give the source ID, in Columns 2 and 3 the Right Ascension and Declination (J2000) of each photometric SNR, in Column 4 the radius in pixels used for the source aperture on which the photometry was performed, in Columns 5 and 6 the inner and outer radius of the annulus used for background subtraction (sources for which the background was measured from a nearby region are indicated as ext), in Columns 7 and 8 their photometric, non-extinction corrected [H$\alpha$]{} and [\[S [ii]{}\]]{} fluxes respectively and in Column 9 their ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratio. In Column 10 we indicate whether a spectrum is available ([*M*]{} and [*S*]{} for the Mayall and Skinakas telescopes respectively) and in Column 11 their classification based on the criteria mentioned in §4. In some cases of large filaments the imaging resulted in multiple detections. In these cases it is not possible to distinguish between a single or multiple SNRs. These sources are indicated as LBZXX-Y in the relevant tables. Each Table is separated for clarity into three frames: the first frame with spectroscopically-verified SNRs, the second frame with sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $>$ 0.4 within their errorbars and the third frame with sources presenting 0.3 $<$ ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $<$ 0.4 (within their errorbars).
Spectroscopy
============
Spectroscopic observations are the only way to unambiguously verify the shock-heated nature of these sources and therefore classify them as SNRs (([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ $\geq$ 0.4). They can be used to obtain accurate emission line ratios that provide physical information (e.g. electron density, shock velocities) while inspecting the accuracy of the photometric parameters.
The spectroscopically-observed sources were selected based on: a) their ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratio, b) their high S/N and c) the physical parameters of the multi-slit masks. We also opted to obtain spectra for a few additional sources in each galaxy with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $\leq$ 0.3 in order to investigate any systematic effects in the [\[S [ii]{}\]]{}/[H$\alpha$]{} photometric ratios.
The spectroscopic observations were obtained during the course of 2 observing runs: 4 nights (long-slit spectra) at the 1.3m Skinakas telescope in Crete, Greece and 4 nights (multi-slit spectra) at the 4m Mayall telescope, Kitt Peak, Arizona, USA.
Multi-slit observations with the 4m Mayall telescope
----------------------------------------------------
Multi-slit spectra were obtained with the 4m Mayall telescope at Kitt Peak on May 3-6, 2010. We used the $5\arcmin\times5\arcmin$ T2KB CCD detector and the BL420 600 lines mm$^{-1}$ grating at the 1st order, centered at 6000 Å. This setup gives a spectral coverage of 2300 Å with a spectral resolution of 3.8 Å which allows the separation of the [H$\alpha$]{} from the [\[N [ii]{}\]]{} doublet and the measurements of the individual lines of the [\[S [ii]{}\]]{} doublet.
Each slitlet was 2.5$\arcsec$ wide and included most of the source light given the seeing conditions (1.2$\arcsec$-1.5$\arcsec$), without significantly degrading the spectral resolution. The slit length was between 4$\arcsec$-5$\arcsec$, which allowed the subtraction of the local diffuse background. The weather conditions were photometric on all nights, while the exposure time per mask varied between 2100 and 3600 sec, depending on the brightness of the targets and the time constraints. Bias-frames, comparison lamp exposures, projector flats and spectrophotometric standard stars were observed each night for CCD, wavelength and flux calibrations.
Long-slit observations with the 1.3m Skinakas telescope
-------------------------------------------------------
Long-slit spectra of individual objects were obtained with the 1.3m telescope at Skinakas Observatory on May 25-28, 2009. A 1302 line mm$^{-1}$ grating, blazed at 5500 Å, was used with the 2000 $\times$ 800 SITe CCD, giving a spectral coverage of 4700-6700 Å (dispersion of $\sim$ 1 Å/pixel) and a spectral resolution of $\sim$6 Å and $\sim$4 Å (FWHM) in the blue and red wavelengths respectively. The slit we used had a width of 6.3$\arcsec$, including most of the source light (given the seeing conditions of 1.3$\arcsec$-1.8$\arcsec$), while its length of 7.8$\arcmin$ allowed for background subtraction. In all cases the slit was oriented in the north-south direction. Since the faintness of our target nebulae makes the positioning of the slit a hard task, we positioned each slit on the required targets by offsetting from a field star. We opted to observe sources outside regions of strong diffuse emission or crowded fields in order to position them accurately in the slit and subtract their background more precisely. The slit centres and the exposure times for each slit are presented in Table 3. The nights were all photometric. Spectrophotometric flux standard stars were observed each night, as well as calibration frames consisting of biases, twilight flats and comparison lamp exposures.
Reduction of spectra
--------------------
The IRAF package was used for the standard data reduction as well as for the extraction of flux-calibrated spectra. The slit length in each case allowed the subtraction of the local diffuse background. In cases where the extraction of multiple spectra along the slit was difficult due to extended sources embedded in diffuse regions, we defined the spectrum extraction windows by comparing the spatial profile of the spectrum with the photometric images. In the case of long-slit spectroscopy, the centre of the slit for each observation was chosen so that more than one target would be included in each spectrum. Line measurements were performed by fitting Gaussians to the spectra.
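The Gaussian line-fitting step can be illustrated as follows; this is a sketch using `scipy.optimize.curve_fit` on a synthetic, noiseless Hα-like profile (the wavelengths and amplitudes are made up for illustration, and the actual measurements were done within IRAF):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# synthetic emission line near Halpha (6563 A), for illustration only
wave = np.linspace(6540.0, 6590.0, 400)
spec = gaussian(wave, 5.0, 6563.0, 2.0)

# initial guesses for amplitude, centre and width
popt, pcov = curve_fit(gaussian, wave, spec, p0=[4.0, 6560.0, 3.0])
amp, mu, sigma = popt
line_flux = amp * abs(sigma) * np.sqrt(2.0 * np.pi)  # integral of the Gaussian
```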
In Table 11 (available at the electronic version) we give the absorbed (F) and extinction-corrected (I) emission line fluxes of all spectroscopically-observed SNRs (([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ $\geq$ 0.4) in our sample of galaxies. The presented emission line fluxes are normalized to F([H$\alpha$]{})=100 and I([H$\alpha$]{})=100 respectively. We also give the signal-to-noise (S/N) ratio of the quoted fluxes which was estimated based on the spectral counts of the emission lines and their relevant background. Lines for which no values are given were not detected.
Tables 12-13 (Table 13 is available in the electronic version) present the emission line parameters of the spectroscopically observed sources of our sample with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ $\geq$ 0.4 and ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ $<$ 0.4 respectively. Columns 1 and 2 present the galaxy and the ID of the source respectively. Column 3 gives the absolute, extinction corrected F([H$\alpha$]{}) in units of 10$^{-14}$ erg s$^{-1}$ cm$^{-2}$. In Column 4 we give the extinction c([H$\beta$]{}), Column 5 gives the colour excess E(B-V) using the “standard” reddening law E(B-V)$\approx$0.77c with R=3.1 [@Osterbrock06], while Column 6 presents the unabsorbed [H$\alpha$]{}/[H$\beta$]{} ratios. The remaining columns present various emission line ratios derived from the relevant extinction-corrected emission line fluxes (when the [H$\beta$]{} line was detected), otherwise from the absorbed emission line fluxes. The extinction-corrected emission line fluxes were normalized to the [H$\alpha$]{} emission line and were estimated using the R=3.1 reddening curve [@Osterbrock06]. All errors were calculated through standard error propagation.
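The reddening quantities in Columns 4-5 follow from the Balmer decrement. A sketch of the conversion, assuming the intrinsic Case B ratio Hα/Hβ = 2.86 and f(Hα) ≈ −0.32 for the R=3.1 reddening curve (the exact curve value adopted in the paper may differ slightly; the observed ratio below is illustrative):

```python
import numpy as np

HA_HB_INTRINSIC = 2.86   # Case B recombination value for Halpha/Hbeta
F_HALPHA = -0.32         # reddening-curve value f(Halpha), with f(Hbeta) = 0

def c_hbeta(ha_over_hb_obs):
    """Logarithmic extinction c(Hbeta) from the observed Balmer decrement:
    F(Ha)/F(Hb) = 2.86 * 10^(c * f(Ha))  =>  c = log10(R_obs / 2.86) / (-f(Ha))
    """
    return np.log10(ha_over_hb_obs / HA_HB_INTRINSIC) / (-F_HALPHA)

c = c_hbeta(4.0)         # an illustrative observed Halpha/Hbeta ratio
ebv = 0.77 * c           # E(B-V) ~ 0.77 c, as quoted in the text
```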
In Fig. 2 we present the individual, zoomed-in display of the 67 spectroscopically-observed SNRs (SNRs in Tables 3-8) over the [H$\alpha$]{} image of each galaxy in order to show their distinct morphology where possible. The images cover an area of 30$\arcsec$$\times$30$\arcsec$ while the arrows point at the SNRs. In Fig. 3 we show representative spectra of three spectroscopically-verified SNRs (at low, medium and high resolution) while the electronic version presents the extracted spectra of all spectroscopically-observed SNRs.
Results and SNR classification
==============================
Overall, a large number of sources (269) were detected with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}>$0.4 while 138 more present 0.3$<$([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}<$0.4. 134 sources were spectroscopically observed with the 1.3m Skinakas and 4m Mayall telescopes, resulting in a total of 67 sources identified as SNRs (12 in NGC2403, 6 in NGC3077, 18 in NGC4214, 6 in NGC4395, 18 in NGC4449 and 7 in NGC5204). The spectroscopically observed sources include not only objects with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$$>$0.3 (our limit for an SNR classification) but also sources with ratios below that limit, in order to investigate any systematic effects in the [\[S [ii]{}\]]{}/[H$\alpha$]{} photometric ratios (for an aggregate view, see Table 14).
On the basis of these results, we divide our optically-selected SNRs into the following types: 1) SNRs, 2) candidate SNRs, and 3) probable candidate SNRs. As SNRs we consider all spectroscopically observed sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ flux ratio $\geq$ 0.4 within their error-bars. We consider as candidate SNRs all sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $\geq$ 0.4 (within their error-bars) but with no available spectra. As probable candidate SNRs we consider sources with 0.3 $<$ ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $<$ 0.4 within their error-bars (see §2.4 for details). This diagnostic tool ([\[S [ii]{}\]]{}/[H$\alpha$]{}$>$0.4) has been proven to differentiate the shock-excitation processes occurring in SNRs from photo-ionised nebulae ([H[ii]{}]{} regions or planetary nebulae). This is because in the case of SNRs, most of the sulfur in the cooling regions behind the shock front is in the form of S$^{+}$, and its collisional excitation yields an enhanced [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio. In typical [H[ii]{}]{} regions, S$^{++}$ ions are mainly present due to strong photo-ionisation and therefore the [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio is expected to be generally lower than 0.4. Additional forbidden lines (e.g. [\[O [i]{}\]]{} 6300 Å or [\[O [iii]{}\]]{} 4959, 5007 Å) or enhanced [\[N [ii]{}\]]{}/[H$\alpha$]{} ratios with respect to [H[ii]{}]{} regions can be used as evidence for shock-heating mechanisms and therefore verify the nature of sources as SNRs.
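The three-tier scheme above maps directly onto a small decision rule; a sketch (the function name and return labels are ours, for illustration):

```python
def classify_source(r_phot, r_err, r_spec=None):
    """Apply the [S II]/Halpha classification scheme described in the text.
    A spectroscopic ratio, when available, takes precedence over photometry."""
    if r_spec is not None:
        return "SNR" if r_spec >= 0.4 else "non-SNR"
    if r_phot + r_err >= 0.4:        # >= 0.4 within the error bars
        return "candidate SNR"
    if r_phot + r_err > 0.3:         # 0.3 < ratio < 0.4 within the error bars
        return "probable candidate SNR"
    return "unclassified"
```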
In Table 14 we present the census of the SNRs in our sample of galaxies and the success rates in the photometric SNR classification. In Column 1, we split the ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratios into three categories: $>$0.4 (candidate SNRs), 0.3 - 0.4 (probable candidate SNRs) and $<$0.3. In Column 2 we present the number of the photometric sources that were detected in each category (within the error-bars). In Column 3 we give the number of photometric SNRs presented in Tables 3-8. These numbers result from the detected SNRs (Column 2) if we subtract the spectroscopically observed non-SNRs (Column 4 minus Column 5). In the case of the $<$0.3 category, this column includes only the sources that were spectroscopically verified as SNRs. In Column 4 we give the number of spectroscopically observed sources, while in Column 5 we present the number of sources that were spectroscopically verified as SNRs. Finally, in Column 6 we give the percentage of the photometric SNRs that were spectroscopically confirmed as SNRs (success rate in SNRs).
Individual objects
------------------
Below we present notable cases of sources for each galaxy:\
[***NGC2403***]{}: [*LBZ6, LBZ95*]{}: These two sources are located within a larger complex of nebulosity (Fig. 4a, small circles) and present ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ of 0.63 and 0.42 respectively. LBZ6 was also spectroscopically observed to have ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ = 0.61. @MFBL97 photometrically identified the whole region as one SNR (SNR-15). This is most likely due to the fact that their SNR identification is based on the [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio images, which tend to be noisier than the individual [H$\alpha$]{} and [\[S [ii]{}\]]{} images. For comparison, we performed photometry on the whole area (large circle in Fig. 4a) and obtained ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}\approx$0.41. However, in our images this area is clearly split into several individual sources with large values of ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$, suggesting that it is most likely an SNR complex.\
[*LBZ1*]{}: We performed photometry on the whole region of this arc-like source (circle in Fig. 4b). However, the spectroscopy was performed on the edge of the arc as can be seen from the slit in Fig. 4b.\
[*LBZ12*]{}: This stellar-like source was photometrically identified as an SNR and stands beside an arc (Fig. 4c). The combination of the two objects was photometrically identified as SNR-32 by @MFBL97. However, the placement of the slit helped us to further investigate the nature of this region (see Fig. 4c). The left edge of the slit falls on our photometrically-detected LBZ12, which has ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ = 0.43. The right edge of the slit covers part of the arc with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ = 0.23, indicating that it is not a shock-excited region. We also calculated the integrated ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ for the entire object in order to compare our results with those of @MFBL97. We find a ratio of 0.30, possibly consistent with the classification of @MFBL97 for the entire region. Therefore, we suggest that only the stellar-like source (LBZ12) is an SNR while the arc is part of an [H[ii]{}]{} region.\
[***NGC4214***]{}:[*LBZ5*]{}: The photometry of this source was performed on a considerably smaller area than that used to extract the spectrum. However the latter does not show any peaks along the spatial direction that would allow us to extract spectra for individual regions.\
[*LBZ87*]{}: This source is located in the vicinity of a large [H[ii]{}]{} region. The area presents enhanced diffuse emission over the [H$\alpha$]{} image, preventing its detection by SExtractor as a discrete source. However, we opted to perform photometry because the [\[S [ii]{}\]]{} emission of this particular source is distinct and bright, while it is an already-known X-ray, radio and optical SNR (see Table 18). Its measured ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratio (=0.36) allows us to include it in the final list of photometric SNRs (Table 5).\
[***NGC4449***]{}:[*LBZ6*]{}: The slit was placed along the source and spectroscopy revealed the existence of two peaks in the overall spectrum of the source. We examined the detected source in the [H$\alpha$]{} image of the galaxy (Fig. 2) and indeed the presence of two lobes is unequivocal. We opted to present the properties and spectra for both regions (LBZ6a-LBZ6b) but nonetheless we consider it as one source.\
[***NGC5204***]{}: [*LBZ16*]{}: This source stands beside a very bright [H[ii]{}]{} region and for that reason it was not detected by SExtractor. However, we performed photometry on the source, which yielded ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ = 0.91. This ratio ($>$0.4), as well as the fact that it is an already-known optical SNR [@MF97], allows us to include the source in the final list of our photometric SNRs.
Physical parameters
-------------------
The photometric investigation revealed a large number of photometric SNRs (418; see Tables 3-8, Table 14) in our sample of galaxies, reaching [H$\alpha$]{} and [\[S [ii]{}\]]{} fluxes as low as $\sim$1.2$\times$10$^{-15}$ and $\sim$7$\times$10$^{-16}$ erg sec$^{-1}$ cm$^{-2}$ respectively. In Fig. 5 we plot the ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratio of all photometric SNRs ([\[S [ii]{}\]]{}/[H$\alpha$]{}$_{phot} >$0.3 within their error-bars) in our sample of galaxies against their photometric [H$\alpha$]{} flux. The vast majority of the SNRs, apart from those in NGC2403, have fluxes between 3$\times$10$^{-15}$ and 3$\times$10$^{-14}$ erg sec$^{-1}$ cm$^{-2}$. On the other hand, the majority of the SNRs in NGC2403 have fluxes between 1$\times$10$^{-14}$ and 6$\times$10$^{-14}$ erg sec$^{-1}$ cm$^{-2}$, almost half an order of magnitude brighter than the mean flux value of the SNRs in the other galaxies of our sample. This is consistent, within the photometric errors, with the sensitivity limit of the SNR survey of @MFBL97 performed with a similar telescope. The difference in the sensitivity limits between NGC2403 and the other galaxies is most likely due to the stronger and non-uniform diffuse emission in this galaxy. As pointed out by @Pannuti07, optical surveys are not very sensitive in identifying SNRs in such environments.\
We also derived the electron densities of the 67 spectroscopically observed SNRs based on their [\[S [ii]{}\]]{}(6716)/ [\[S [ii]{}\]]{}(6731) ratios (see Table 12), which are a good indicator of electron density [@Osterbrock06]. We used the [*temden*]{} task of the [*nebular*]{} package in IRAF[^4], assuming a temperature of 10$^{4}$ K. The [\[S [ii]{}\]]{}(6716)/ [\[S [ii]{}\]]{}(6731) ratios of our sample of SNRs indicate electron densities ranging between 170 and 580 cm$^{-3}$ for the galaxies in our sample.\
In Fig. 6 we plot the number of spectroscopically observed SNRs against their [\[S [ii]{}\]]{}(6716 Å)/[\[S [ii]{}\]]{}(6731 Å) ratios. The red histogram corresponds to SNRs in NGC2403 (the only spiral galaxy in our sample), the black histogram shows the SNRs in the remaining galaxies of our sample (irregulars), while we have also included (magenta) the spectroscopically observed SNRs of four spiral galaxies (NGC5585, NGC6946, M81, and M101) from the work of @MF97. One would expect SNRs in irregular galaxies to present lower [\[S [ii]{}\]]{}(6716)/[\[S [ii]{}\]]{}(6731) ratios (higher densities) than those in spirals, since local enhancements of the ISM are common in irregular galaxies. However, there is no trend in the sulfur-line ratios between SNRs in different types of galaxies. This indicates that there are no significant differences in the density of the ejecta or the circumstellar environment between spiral and irregular galaxies. On the other hand, the majority of the SNRs in Fig. 6 have [\[S [ii]{}\]]{}(6716)/[\[S [ii]{}\]]{}(6731) $>$ 1, which following @Stupar09 indicates old SNRs. The trend of detecting preferentially older SNRs in the optical band (e.g. @Rosado83), in combination with the age-dependence of their density, may explain the fact that we do not see any significant differences between the SNR populations of irregular and spiral galaxies.
Multiwavelength associations
----------------------------
We have compiled a catalogue of all known optical SNRs in our sample of galaxies from this study and the literature [@MFBL97; @Dopita10; @Blair83; @MF97] as well as SNRs from X-ray (Paper I and the literature), and radio [@Eck02; @Turner94; @Rosa05; @Chomiuk09; @Vukotic05] observations. We searched for possible associations between these three wavebands by cross-correlating the source catalogue with a search radius of 2$\arcsec$. This search radius was based on the absolute astrometric error of the individual catalogs (which in most cases was very small; e.g. USNO-B1.0 at 0.2$\arcsec$) and the typical error of our optical data ($\sim$1$\arcsec$-1.5$\arcsec$).\
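The positional cross-correlation with a 2$\arcsec$ search radius amounts to a small-angle separation test between catalogues; a naive O(N$\cdot$M) sketch (coordinates in decimal degrees; the catalogue contents in the usage example are illustrative, not real source positions):

```python
import numpy as np

def angsep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation (arcsec) between two J2000 positions in degrees."""
    dra = (ra1 - ra2) * np.cos(np.radians(0.5 * (dec1 + dec2)))
    return np.hypot(dra, dec1 - dec2) * 3600.0

def cross_match(cat1, cat2, radius=2.0):
    """Return (i, j) index pairs whose separation is below `radius` arcsec."""
    return [(i, j)
            for i, (ra1, dec1) in enumerate(cat1)
            for j, (ra2, dec2) in enumerate(cat2)
            if angsep_arcsec(ra1, dec1, ra2, dec2) < radius]
```

For the catalogue sizes involved here the brute-force double loop is adequate; larger catalogues would call for a spatial index.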
Sources identified as SNRs in the X-ray or radio band but present ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} < $ 0.3 in this study are denoted as SNR/[H[ii]{}]{}. For X-ray and radio SNRs not identified as such in our analysis but were correlated with distinct nebular features in the [H$\alpha$]{} images, we performed photometry at the location of the associated multi-wavelength source for comparison.\
The results of the cross-correlation are presented in Tables 16-21. Column 1 shows the source identification. Sources with a question mark have offsets from their multiwavelength associations somewhat larger than the defined search radius. In most cases, however, no other sources appear to be encompassed by this search radius, unless otherwise stated. Column 2 gives the source classification based on this study (see §4). Columns 3 and 4 give the RA and Dec (J2000) of the sources in this study. If the source is detected in this study we report the coordinates of the optical source, otherwise we give the coordinates of the multi-wavelength counterparts. Column 5 shows the optically associated SNR from other studies, while Column 6 gives the coordinate offset of the source between this study and the other optical studies. Column 7 shows the X-ray associated SNR, while Column 8 gives the coordinate offset of the source between this study and the X-ray associated SNR. Column 9 shows the radio associated SNR, while Column 10 gives the coordinate offset between the optical source and the radio associated SNR.
[*NGC3077*]{}: The X-ray (LZB18)/radio SNR is located between two detected SNR/[H[ii]{}]{} sources by this study (LBZ299 and LBZ300). For a possible interpretation see §5.3.3. The offsets of these sources with the X-ray/radio SNR association are similar, therefore we opted to present both of them.\
[*NGC4449*]{}: The CasA-like, oxygen-rich SNR in NGC4449 (e.g. @Blair83) presented ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} < $ 0.4 in this study. However, the identification of this source from previous optical studies was not based on the narrow lines of [H$\alpha$]{}, [H$\beta$]{}, [\[N [ii]{}\]]{}, and [\[S [ii]{}\]]{} but on broad lines of [\[O [i]{}\]]{}, [\[O [ii]{}\]]{} and [\[O [iii]{}\]]{} which associated it with ejecta of a young, O-rich SNR. We are aware that the photometric method used in this study (([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} > $ 0.4 criterion for identifying SNRs) is not helpful for identifying young oxygen-rich SNRs since we focus on different strong emission lines of SNRs.\
[*NGC2403*]{}: We have spectroscopically verified the shock-heating mechanism of three photometric SNRs of @MFBL97 (SNR-3, SNR-15 and SNR-32). On the other hand, two photometric SNRs of @MFBL97 (SNR-26, SNR-28) are denoted as SNR/[H[ii]{}]{} (([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} < $ 0.3) in this study. For these sources, a spectroscopic investigation is necessary in order to verify their nature.
Discussion
==========
Validity of the photometric method
----------------------------------
In order to examine the precision of the photometric [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios, we plot them against the ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ ratios of all spectroscopically observed sources in each galaxy (Fig. 7). The red points denote SNRs (see Tables 3-8, column 5 of Table 14) while the green points indicate sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $\geq$ 0.3 that were not spectroscopically verified as SNRs (([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ $\leq$ 0.4, see Tables 9, 13, 14). In order to further test the validity of the photometric method, as mentioned in §3, we randomly selected sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} \le$ 0.3 for spectroscopic follow-up. These sources present ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ $\leq$ 0.3 and are denoted as black points in Fig. 7 (see Tables 9, 13, 14). The solid line represents the 1:1 relation between photometric and spectroscopic [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios, while the dashed lines denote the borderline area for SNRs ([\[S [ii]{}\]]{}/[H$\alpha$]{} $\ge$ 0.4).
Based on the above plots or the success rates in Table 14, we can estimate the detection rate expected for the candidate SNRs/probable candidate SNRs presented in Tables 3-8. The number of candidate SNRs ([\[S [ii]{}\]]{}/[H$\alpha$]{}$_{phot}>$0.4 within their error-bars, without spectra) in all six galaxies of our sample is 229 sources, while the probable candidate SNRs (0.3$<$[\[S [ii]{}\]]{}/[H$\alpha$]{}$_{phot}<$0.4 within their error-bars, without spectra) number 122 (these numbers result from the [*All Galaxies*]{} section of Table 14 if we subtract Column 5 from Column 3). Taking into account the success percentage in SNRs from Table 14 for each galaxy, we expect $\sim$ 155 falsely identified SNRs.\
This is by no means a complete catalogue of SNRs in these galaxies, particularly at the faint end, where incompleteness becomes important and photometric errors dominate our measurements of the ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratio. Another limitation concerns sources embedded in diffuse emission or near [H[ii]{}]{} regions, for which the detection limit is higher due to the increased background.
Line Ratio Diagnostics
----------------------
We calculated the emission line ratios log([H$\alpha$]{}/([\[N [ii]{}\]]{} 6548, 6584 Å)), log([H$\alpha$]{}/([\[S [ii]{}\]]{} 6716, 6731 Å)) and [\[S [ii]{}\]]{}(6716 Å)/[\[S [ii]{}\]]{}(6731 Å) of the spectroscopically-detected SNRs in order to place them in the diagnostic plots of @Sabbadin77 and @Garcia91 and investigate the region they occupy (Figs. 8-10). The extinction-corrected emission lines were used when available, otherwise we used the uncorrected ones (Table 12). The loci of the different types of sources in these diagrams (dashed lines in Figs 8-10) have been defined using the emission line ratios of a large number of Galactic SNRs, [H[ii]{}]{} regions and planetary nebulae (PNe), and can help us distinguish the excitation mechanism of the emission lines (photoionization for [H[ii]{}]{} regions and PNe, or collisional excitation for SNRs). For comparison, we also included the spectroscopically-observed optical SNRs of four more spiral galaxies (M81, M101, NGC6946 and NGC5585) from the work of @MF97.
In Fig. 8 we plot the log([H$\alpha$]{}/[\[S [ii]{}\]]{}) against the log([H$\alpha$]{}/[\[N [ii]{}\]]{}) emission line ratios of the spectroscopically observed SNRs. All sources are within the range of [\[S [ii]{}\]]{}/[H$\alpha$]{} = 0.4 - 1, which is typical for SNRs. What is intriguing, though, is that along the [H$\alpha$]{}/[\[N [ii]{}\]]{} axis the vast majority of the SNRs in irregular galaxies extend outside the region of Galactic SNRs, in contrast to the SNRs of spiral galaxies, which occupy that specific region. The region of SNRs in irregular galaxies is shifted in the direction of higher [H$\alpha$]{}/[\[N [ii]{}\]]{} ratios, indicating weaker emission in the [\[N [ii]{}\]]{} lines. This could be due either to a difference in excitation or to a difference in metallicity. However, since there is no particular difference between the [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios (a powerful shock-excitation indicator for SNRs; Fig. 8) of the SNR populations in spiral and irregular galaxies, this suggests that the difference in the [H$\alpha$]{}/[\[N [ii]{}\]]{} line ratios of the SNR populations between different types of galaxies is due to metallicity. Indeed, irregular galaxies typically present lower metallicities than spiral galaxies (as can be seen from Table 15 or in the work of e.g. @Pagel81; @Garnett02). Similar behaviour is seen for most of the SNRs in the LMC (see @Meaburn10), which have significantly higher [H$\alpha$]{}/[\[N [ii]{}\]]{} ratios than those of Galactic SNRs. The nitrogen abundance of the LMC is lower by a factor of 2 compared with that of our Galaxy [@Russell92]. The effect of metallicity on the [H$\alpha$]{}/[\[N [ii]{}\]]{} line ratios is also observed in other nebulae such as [H[ii]{}]{} regions (e.g. @Viironen07).
In Figs 9 and 10 we plot the [\[S [ii]{}\]]{}(6716 Å)/[\[S [ii]{}\]]{}(6731 Å) line ratio against log([H$\alpha$]{}/[\[S [ii]{}\]]{}) and log([H$\alpha$]{}/[\[N [ii]{}\]]{}). The [\[S [ii]{}\]]{}(6716 Å)/[\[S [ii]{}\]]{}(6731 Å) line ratio is a good indicator of density in interstellar gas, therefore it can be used to probe the effects of e.g. a non-uniform ISM (which is often the case in irregular galaxies) on the properties of SNR populations in different types of galaxies. In this context, the majority of the SNRs in our sample present [\[S [ii]{}\]]{}(6716 Å)/[\[S [ii]{}\]]{}(6731 Å) line ratios between 1.06 - 1.43, indicating electron densities up to $\sim$470 cm$^{-3}$. These SNRs have low densities ([\[S [ii]{}\]]{}(6716 Å)/[\[S [ii]{}\]]{}(6731 Å) $>$ 1) and are expected to be old (c.f. @Stupar09). Nonetheless, we do not see any trend in electron densities between SNRs in spiral and irregular galaxies. This indicates that there are no significant differences in the ejecta or the circumstellar environment between spiral and irregular galaxies. Yet, the SNRs of the irregular galaxies in the [\[S [ii]{}\]]{}(6716 Å)/[\[S [ii]{}\]]{}(6731 Å) versus log([H$\alpha$]{}/([\[N [ii]{}\]]{} 6548 & 6584 Å)) diagram still extend outside the Galactic region of SNRs. Although they seem to overlap with the Galactic [H[ii]{}]{} region locus, their SNR nature is beyond doubt, based on their high [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio, which does not appear to be sensitive to metallicity. This is a result of the definition of the [\[S [ii]{}\]]{}/[H$\alpha$]{} $>$ 0.4 limit, which is based on the low-metallicity SNRs in the Magellanic Clouds [@MC73]. Their higher [H$\alpha$]{}/[\[N [ii]{}\]]{} ratio could instead be the result of lower metallicity, defining in this way an extended region for extragalactic SNRs in low-metallicity irregular galaxies. Similar arguments hold for the shifts of our SNRs in the other diagnostic diagrams.\
The [\[O [iii]{}\]]{}/[H$\beta$]{} ratio is a useful diagnostic tool for complete/incomplete recombination zones. Theoretical models of @Cox85 and @Hartigan87 suggest that [\[O [iii]{}\]]{}/[H$\beta$]{} ratios below 6 indicate shocks with complete recombination zones, while this value is easily exceeded for shocks with incomplete recombination zones [@Raymond88]. The measured values from our spectroscopically observed SNRs (Table 12) indicate shocks with complete recombination zones.
The temperature-sensitive [\[O [iii]{}\]]{} emission line is a good indicator of shock activity and velocity, as the faster the shock propagates, the stronger the [\[O [iii]{}\]]{} emission produced. The absence of [\[O [iii]{}\]]{} emission in many of our spectroscopically observed SNRs (Table 12) indicates slow shocks ($<$ 100 km s$^{-1}$; @Hartigan87). In an attempt to measure the shock velocities of sources with detectable [\[O [iii]{}\]]{} emission we used the plot of log([\[O [iii]{}\]]{} Å 5007/[H$\beta$]{}) versus log([\[N [ii]{}\]]{} Å 6584/[H$\alpha$]{}) by @Allen08 (the left panel of their Fig. 21). This plot is based on the commonly used BPT diagram [@Baldwin81], and uses theoretical shock model grids for different values of shock velocity, magnetic field parameters and chemical abundance. @Allen08, apart from solar abundances, also calculate grids for other chemical abundance sets such as those of the LMC and SMC (0.33Z$_{\sun}$ and 0.20Z$_{\sun}$ respectively). They also created grids for shock + precursor theoretical models in various chemical abundances. We note that in both models the [\[N [ii]{}\]]{} Å 6584/[H$\alpha$]{} ratio changes significantly with abundance, in contrast to the [\[O [iii]{}\]]{} Å 5007/[H$\beta$]{} ratio, since nitrogen shows larger abundance differences mainly due to its secondary nucleosynthesis.
We calculated the log([\[O [iii]{}\]]{}(5007)/[H$\beta$]{}) and log([\[N [ii]{}\]]{}(6584)/[H$\alpha$]{}) ratios of our spectroscopically identified SNRs and placed them on the two diagrams (Figs 11-12). For comparison, we also placed the spectroscopic SNRs of @MF97. The horizontal lines of each grid denote shock velocities of 200 - 1000 km s$^{-1}$ (from top to bottom for the shock-only grids and from bottom to top for the shock + precursor grids) with a step of 50 km s$^{-1}$. We notice that the SNRs in the irregular galaxies (apart from NGC3077) of the present study are located between the LMC and SMC grids. This is not surprising since our sample of galaxies presents metallicities between those of the LMC and SMC (see Table 15). However, in order to obtain accurate shock velocities for these sources, new grids should be constructed taking into account the metallicity of each galaxy. Nonetheless, there are galaxies in our sample that present metallicities similar to that of the LMC (e.g. NGC4395 and NGC4449). For SNRs in these galaxies that lie on the LMC grid (and not on its degenerate parts) we can give quite reliable shock velocities. For example, on the shock + precursor grid for the LMC, LBZ2 in NGC4449 presents a shock velocity of $>$ 500 km s$^{-1}$, while LBZ7 in the same galaxy presents a shock velocity of $\sim$ 330 km s$^{-1}$.
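As a concrete illustration, a source's position on these diagnostic diagrams is simply the base-10 logarithm of two line-flux ratios. A minimal Python sketch, where the flux values are purely illustrative assumptions (not measurements from this work):

```python
import math

# Hypothetical, illustrative line fluxes (arbitrary units); real values
# would come from measured spectra such as those in Table 12.
f_oiii_5007 = 1.8   # [O III] 5007
f_hbeta     = 1.0   # H-beta
f_nii_6584  = 0.9   # [N II] 6584
f_halpha    = 3.0   # H-alpha

# Coordinates on the BPT-style diagram used with the shock-model grids
x = math.log10(f_nii_6584 / f_halpha)    # abscissa: log([N II] 6584 / Halpha)
y = math.log10(f_oiii_5007 / f_hbeta)    # ordinate: log([O III] 5007 / Hbeta)
```

The point (x, y) is then compared against the grid lines of shock velocity and magnetic parameter for the appropriate abundance set to read off an approximate shock velocity.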
Cross-correlating SNRs in multiwavelength bands
-----------------------------------------------
### Venn diagrams
In Figure 13 we present the overlap between optical, X-ray and radio-selected SNRs (see §4.3), in the form of Venn diagrams for NGC2403 and for all galaxies in our sample. In the case of all galaxies we include the results of NGC5204, even though no X-ray or radio SNRs have been detected in it so far. The optical sources we consider in this comparison are all SNRs identified in the study, and all those previously reported in the literature. For completeness, we included the oxygen-rich SNR in NGC4449, as well as sources not detected by this study or classified as SNR/[H[ii]{}]{} but which are already known optical SNRs (SNR-21, SNR-25, SNR-26, SNR-27, SNR-28, SNR-34 and SNR-35 in NGC2403 from [@MFBL97], and SNR-3 from [@Dopita10]). All multi-wavelength comparisons were performed for the same area of each galaxy. For that reason we excluded the radio SNR in NGC4395 [@Vukotic05] as it is outside the field of the Chandra data used in Paper I. We also excluded the X-ray selected candidate SNR LZB10 in NGC4395 as it is outside the field of view used in the present study. In addition, we excluded the radio candidate SNRs $\alpha$ and $\beta$ in NGC4214 from the work of @Vukotic05, the nature of which is debated [@Chomiuk09], while from the work of @Chomiuk09 we consider only radio candidate SNRs, excluding SNR/[H[ii]{}]{} composite objects, which present a spectral index consistent with either an [H[ii]{}]{} region or an SNR.\
Of the 427 optically identified SNRs in Fig. 13a (mainly identified on the basis of narrow-band photometry), 19 possess X-ray counterparts (corresponding to a match rate of 4.4%), while 7 out of the 20 radio candidate SNRs have X-ray counterparts (corresponding to a match rate of 35%). There is little overlap between optical and radio SNRs (2.1%).
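Match rates of this kind come from cross-matching catalogues by sky position. A hedged sketch of such a positional cross-match; the 2-arcsec tolerance and the toy coordinates below are assumptions for illustration, not the values used in this work:

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcsec between two J2000 positions (degrees)."""
    d2r = math.pi / 180.0
    cos_sep = (math.sin(dec1 * d2r) * math.sin(dec2 * d2r)
               + math.cos(dec1 * d2r) * math.cos(dec2 * d2r)
               * math.cos((ra1 - ra2) * d2r))
    # clip to guard against floating-point values slightly above 1
    return math.degrees(math.acos(min(1.0, cos_sep))) * 3600.0

def match_rate(cat_a, cat_b, tol_arcsec=2.0):
    """Fraction of catalogue A sources with a B counterpart within tol_arcsec."""
    matched = sum(
        1 for ra_a, dec_a in cat_a
        if any(ang_sep_arcsec(ra_a, dec_a, ra_b, dec_b) <= tol_arcsec
               for ra_b, dec_b in cat_b)
    )
    return matched / len(cat_a)
```

For example, a source offset by 0.001 deg in RA at Dec +20 deg lies about 3.4 arcsec away, so it matches at a 4-arcsec tolerance but not at 2 arcsec.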
We note that oxygen-rich or Balmer-dominated SNRs detected in the X-ray band are excluded from our optical sample, since their detection method is based on the strong emission of [\[O [iii]{}\]]{} and of the [H$\alpha$]{}, [H$\beta$]{} Balmer lines, respectively, rather than on an enhanced [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio. However, their detection rate is expected to be low and their fraction hardly affects the match rates in the Venn diagrams. Another case of possibly unidentified SNRs is plerion-type SNRs. The known X-ray SNRs included in the Venn diagrams are selected based on their thermal, soft X-ray spectra (thermal X-ray SNRs). In this manner, plerion-type SNRs, which have hard X-ray spectra (e.g. @Safi-Harb01 [@Asaoka90]) but which nonetheless present optical properties consistent with those of SNRs (i.e. [\[S [ii]{}\]]{}/[H$\alpha$]{}$ >$ 0.4), are excluded from the X-ray SNR sample. In order to investigate to what extent plerion-type SNRs may be missing from our comparison, we used the CSRC[^5] (Chandra Supernova Remnant Catalog), a comprehensive catalog of X-ray emitting SNRs detected in our Galaxy and the MCs. Based on their hard spectra (fitted with a power-law model) and/or their compact emission core, we find $\sim$50 Galactic (out of 90), 3-4 LMC (out of 23), and no SMC (out of 6) X-ray SNRs of this type. The intriguing result that higher metallicity galaxies present higher fractions of plerions (e.g. $>$50$\%$ for our Galaxy, 17$\%$ for the LMC and 0$\%$ for the SMC) suggests that NGC2403 and NGC3077 (the galaxies in our sample with the highest metallicities, see Table 15) may host more plerion-type SNRs than the rest of the galaxies in our sample, resulting in an increased, but not substantially different, match rate between X-ray and optical SNRs. Additionally, as discussed in section 5.3.4, a number of wind-blown bubbles may be misclassified as optical SNRs ($\sim$10$\%$).
These sources present mainly thermal radio emission instead of the non-thermal synchrotron emission that is typical of SNRs. Therefore, a significant number of wind-blown bubbles may dilute an otherwise close correlation between non-thermal radio sources and shock-excited sources identified at optical wavelengths. However, given the relatively small percentage of wind-blown bubbles in our sample, we do not expect a dramatic change in the match rates between optical, radio and X-ray SNRs.
The number of optical SNRs exceeds by far the number of X-ray or radio SNRs. Even if many photometric SNRs (mostly probable candidate SNRs) are not spectroscopically verified as such (see §5.1), the match rates will still remain low. Poor match rates between optical and X-ray/radio SNRs have also been found in various other multi-wavelength SNR surveys (e.g. @Long10 for M33, @Pannuti07 for five nearby galaxies). This effect could be the result of various factors. The sample of radio SNRs is limited by the lack of deep radio surveys for half of our galaxies (as also discussed by @Long10 for M33). Sample issues aside, these differences could arise from physical effects, since the detection rate of SNRs in different wavebands strongly depends on the properties of the medium surrounding the source. For example, @Pannuti07 point out that optical searches are more likely to detect SNRs located in regions of low diffuse emission, while radio and X-ray searches are more likely to detect SNRs in regions of high optical confusion. The same is pointed out by @Long10, who find that environments that are confused in the optical do not influence the detectability of SNRs in the X-ray band. Furthermore, the tendency for older SNRs to be more easily detected in the optical band contributes to the large difference in the match rate between optical and X-ray SNRs. All these facts could contribute to the large differences in match rates between optical, X-ray and radio SNRs, and highlight the importance of multi-wavelength surveys for the study of extragalactic SNR populations.
### SNRs or X-ray Binaries?
Six optically detected candidate SNRs/probable candidate SNRs (LBZ6, LBZ102, LBZ108, LBZ127 in NGC2403; LBZ80 in NGC4214 and LBZ60 in NGC4449) are associated with XRBs (see Tables 16, 18, 20) on the basis of their hard X-ray emission and high X-ray luminosities (Paper I). Although these sources could be considered plerionic SNRs (due to their hard X-ray spectra and their enhanced [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio), their X-ray luminosities ($\sim$10$^{38}$ erg s$^{-1}$) and their variability ($>$15$\%$ in flux between observations, see Paper I) place them in the regime of XRBs rather than that of plerions (typical L$_{X}$ $\sim$10$^{35}$ erg s$^{-1}$; @Gaensler06). Therefore, one possible interpretation of these sources is that of an X-ray binary coincident with an SNR, possibly associated with the supernova (SN) that produced the compact object in the binary. In this case the SNR is responsible for the observed optical (and radio) emission while the binary system produces the X-ray emission. The X-ray luminosity of active XRBs (10$^{37}$ erg s$^{-1}$) is higher than that of SNRs (typically 10$^{35}$ - 10$^{37}$ erg s$^{-1}$; @Mathewson83) and therefore the former can overshadow the latter. The exemplar of this type of object is the SS 433/W50 SNR/XRB system (e.g. @Boumis07, @Safi99), while a few other candidates have been identified in other galaxies on the basis of hard and/or variable X-ray sources associated with optically or radio identified SNRs (@Pannuti07).
### Young SNRs or SNRs embedded in [H[ii]{}]{} regions ?
The photometric investigation of the detected sources in our sample of galaxies revealed 20 low-excitation sources (([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} <$ 0.3) which are not known optical SNRs from other studies and are associated with known X-ray or radio SNRs (see Tables 16-21). There are two possibilities for the nature of these sources: a) the SNR is in its first evolutionary stages, where the optical emission is considerably fainter than the X-ray/radio emission. In this case it is also possible that their expanding shock fronts form a precursor [H[ii]{}]{} region [@Allen08] which gives the observed optical emission, while the SNR produces the X-ray or radio emission; or b) we observe SNRs embedded in [H[ii]{}]{} regions. In that case the [H$\alpha$]{} emission we observe comes from the [H[ii]{}]{} region, which is enhanced relative to that radiated by the SNR, resulting in [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios below our 0.3 limit for the latter, which nonetheless emits in the X-ray or radio band. Follow-up spectroscopic observations will help us clarify the nature of these sources. These sources are considered SNR/[H[ii]{}]{} composite objects in this study.
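Interpretation (b) is simple flux arithmetic: a superposed H II region adds strongly to Halpha but comparatively little to [S II], pulling the blended ratio below the threshold. A toy example with assumed (not measured) fluxes:

```python
# Assumed, illustrative fluxes in arbitrary units.
snr_sii, snr_ha = 0.5, 1.0   # an SNR alone: [S II]/Halpha = 0.5
hii_sii, hii_ha = 0.3, 3.0   # a bright H II region: ratio = 0.1

# The blend is ratioed over the summed fluxes, not the SNR's own fluxes:
blended_ratio = (snr_sii + hii_sii) / (snr_ha + hii_ha)  # ~0.2, below 0.3
```

So a genuine SNR with an intrinsic ratio of 0.5 can photometrically fall below the 0.3 limit when a bright H II region dominates the aperture.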
### SNRs or wind blown bubbles?
Multiple supernovae and/or the stellar winds of massive stars in OB associations can create cavities of hot gas in the ISM, known as wind-blown bubbles (bubbles or superbubbles). The shock-excited structure of these objects can grant them moderate [\[S [ii]{}\]]{}/[H$\alpha$]{} values ($>$0.45; @Chen00, @Lasker77), since the expansion velocities of their radiative shocks are too low to produce enhanced [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios (e.g. @Long10), or the UV radiation of the OB associations in a superbubble photoionizes sulfur to higher ionization stages, which leads to weaker [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios (e.g. @Chen99). Nonetheless, there might be some superbubbles that have [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios within the range of SNRs. The only way to identify these objects is to use their typically larger sizes and lower luminosities compared to SNRs. Superbubbles present large sizes ($>$100 pc), which are rare among known SNRs [@Williams99], and slower expansion velocities than those of SNRs ($<$100 km s$^{-1}$; e.g. @Franchetti12). On the other hand, their low-density environment is responsible for their rather faint X-ray emission (below that of SNRs: 10$^{34}$ - 10$^{36}$ erg s$^{-1}$; e.g. @Chu90). In cases where a source with the above characteristics exhibits diffuse X-ray emission with luminosities similar to those of SNRs and its [H$\alpha$]{} expansion velocity is high, it is most probable that the superbubble encompasses a recently created SNR [@Chu90].\
Based on the above, we opted to investigate whether some of the SNRs we identify based on their [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios need to be reclassified as superbubbles. Since our low-resolution images do not allow us to reliably search for OB associations, and measurements of the [H$\alpha$]{} expansion velocities of the objects are not yet available (reduction of echelle spectra for $\sim$30 large SNRs obtained with the 2.1m telescope in SPM, Mexico is in progress), we relied solely on identifying SNRs with large diameters ($>$80 pc). The physical size of the sources was estimated by subtracting in quadrature the seeing in each exposure (typically 1.3$\arcsec$-2.5$\arcsec$) from the aperture used to perform the source photometry. The latter was defined to include most of a source’s flux, while avoiding any neighbouring sources and minimizing the encompassed diffuse emission. Following this approach, we set the following limits in order to flag a source as a possible wind-blown bubble: $\ge$8 pixels for NGC2403 (which corresponds to $\ga$2.24$\arcsec$ and a physical scale of $\ga$75 pc in diameter), $\ge$6 pixels for NGC5204 ($\ga$1.68$\arcsec$, $\ga$90 pc diameter), $\ge$12 pixels for NGC4395 ($\ga$3.36$\arcsec$, $\ga$90 pc diameter), $\ge$7 pixels for NGC4449 ($\ga$1.96$\arcsec$, $\ga$90 pc diameter), $\ge$8 pixels for NGC3077 ($\ga$2.24$\arcsec$, $\ga$85 pc diameter) and $\ge$6 pixels for NGC4214 ($\ga$1.68$\arcsec$, $\ga$90 pc diameter). We find 51 sources (39 in NGC2403, 2 in NGC5204 and 10 in NGC4214) that fulfill the above criteria. Four of these sources are also X-ray selected SNRs based on their X-ray colors and/or their soft spectra (see Tables 3-8 and Paper I). The large sizes of these objects, in combination with their relatively high X-ray luminosities, which are typical of SNRs, suggest the existence of an SNR inside a superbubble. Three more sources present X-ray properties consistent with those of XRBs, with even larger X-ray luminosities.
Therefore, if we remove these seven sources from the 51 initially selected ones, we are left with 44 superbubbles possibly misclassified as SNRs, which constitute $\sim$10$\%$ of our optical SNR sample. This percentage is expected to be somewhat larger if we take into account the lack of expansion-velocity measurements, which would provide a reliable discrimination between SNRs and wind-blown bubbles.\
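The seeing correction described above is a quadrature subtraction of angular sizes before conversion to parsecs. A minimal sketch; the function name and the example numbers are ours, for illustration, and the pc-per-arcsec scale follows from each galaxy's adopted distance (Table 1):

```python
import math

def physical_size_pc(theta_ap_arcsec, seeing_arcsec, pc_per_arcsec):
    """Convert a measured angular size to a physical size.

    The seeing is subtracted in quadrature from the aperture's angular
    size; apertures smaller than the seeing are clipped to zero.
    """
    theta2 = theta_ap_arcsec ** 2 - seeing_arcsec ** 2
    return math.sqrt(max(theta2, 0.0)) * pc_per_arcsec
```

A source whose seeing-corrected size exceeds the per-galaxy diameter limit (roughly 75-90 pc here) would be flagged as a possible wind-blown bubble.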
We note that the estimation of the sizes described above was based on the [H$\alpha$]{} morphology of the sources, rather than the [\[S [ii]{}\]]{} morphology, which better traces the higher excitation part of the nebula associated with the shock-blown bubble. This choice was driven by the much higher S/N of the [H$\alpha$]{} images, but it tends to overestimate the size of the SNRs or bubbles. Therefore, the estimated sizes are an upper bound on the true size of the shock-blown bubbles. Furthermore, the majority of these sources present [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios within the secure range of SNRs ($>$0.5), and X-ray luminosities much higher than the typical luminosities of superbubbles. Based on these two facts, we expect that the majority of these 44 objects are bona fide SNRs, and for this reason we do not exclude them from our analysis. However, we do indicate them as possible superbubbles in Tables 3-8.
Correlating X-ray selected SNRs with their optical properties
-------------------------------------------------------------
Mining for optical SNRs within the six nearby galaxies of our sample revealed 18 sources (SNRs/candidate SNRs/probable candidate SNRs) that are associated with X-ray selected SNRs from Paper I. In order to investigate the relation between the optical and X-ray properties of these sources, and to examine whether the optical properties of SNRs are good predictors of X-ray SNRs, we calculated their [H$\alpha$]{} luminosity and correlated it with their X-ray luminosity derived in Paper I. The [H$\alpha$]{} luminosities were calculated based on the non-extinction-corrected photometric fluxes F([H$\alpha$]{}) in Tables 3-8 and the distances from Table 1. We used the photometric F([H$\alpha$]{}) instead of the spectroscopic values since we did not have spectra for all 18 sources. Two X-ray SNRs (candidate SNRs, see Paper I) were excluded from the sample since, because of their small number of counts, we could not extract spectra and calculate accurate fluxes (their identification was based on their X-ray colours).
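The flux-to-luminosity conversion here is the standard L = 4 pi d^2 F, with no extinction correction. A sketch; the example flux and the helper name are illustrative assumptions:

```python
import math

CM_PER_MPC = 3.0857e24  # 1 Mpc in cm

def luminosity_erg_s(flux_erg_s_cm2, distance_mpc):
    """L = 4 pi d^2 F; no extinction correction applied, as in the text."""
    d_cm = distance_mpc * CM_PER_MPC
    return 4.0 * math.pi * d_cm ** 2 * flux_erg_s_cm2
```

For instance, a photometric flux of 1e-13 erg s^-1 cm^-2 at the adopted NGC2403 distance of 3.2 Mpc corresponds to L(Halpha) of order 10^38 erg s^-1.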
In Fig. 14 we plot the [H$\alpha$]{} luminosity against the non-extinction-corrected X-ray luminosity of the 16 optically selected, X-ray emitting SNRs. Different colours in the plot indicate SNRs in different galaxies, while the dashed line indicates the 1:1 relation between the two luminosities. The majority of SNRs tend to have higher [H$\alpha$]{} than X-ray luminosities, while the most luminous X-ray SNRs typically present the highest optical luminosities. However, we do not find a correlation at a statistically significant level (linear correlation coefficient: $-$0.12). The variation of the ratio of X-ray to optical luminosities indicates the existence of material over a wide range of temperatures: the X-ray emission originates from hot material behind the shock front (plasma temperature of $\sim$10$^{7}$ K) with long cooling timescales (typical values of a few hundred kyr), while the [H$\alpha$]{} emission comes from cooling regions (plasma temperatures of $\sim$10$^{5}$ K) of dense recombining gas around the edges of the remnant, with short cooling timescales (up to a few hundred years). With the same rationale we can interpret the lack of a significant correlation between the [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios of the 16 optically selected, X-ray emitting SNRs and their X-ray luminosities (Fig. 15). In a simple model one would expect that stronger shocks (higher [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios) would correlate with higher L$_{X}$. However, because of the long cooling time of the X-ray emitting material, the shock velocity we are measuring does not necessarily correspond to the shock that generated the bulk of the X-ray emitting material.
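The quoted linear correlation coefficient is the standard Pearson estimator, which is simple enough to write out in full; the sample values in the usage note are invented for illustration, not the survey luminosities:

```python
import math

def pearson_r(xs, ys):
    """Pearson linear correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)
```

Applied to the 16 (L_Halpha, L_X) pairs this returns a value near zero (here $-$0.12), i.e. no significant linear correlation.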
SNRs and SFR
------------
In order to investigate the optical properties of SNRs in different star-forming environments and derive robust conclusions on their connection with the SFR, we primarily need to examine down to what luminosity limit our sample of SNRs is complete. For that reason we plot the luminosity distributions of the photometric SNRs in each galaxy (Fig. 16). Previous studies of extragalactic SNRs showed that these populations follow power-law luminosity distributions [e.g. @Ghavamian05]. Therefore, the turnover in the histograms of [H$\alpha$]{} luminosities indicates the onset of incompleteness for each galaxy’s SNR population. Three galaxies in our sample (NGC3077, NGC4214 and NGC5204) present the same limiting luminosity of 1.6$\times$10$^{37}$ erg s$^{-1}$, while for NGC2403 it is 3.2$\times$10$^{37}$ erg s$^{-1}$, for NGC4395 it is 4$\times$10$^{36}$ erg s$^{-1}$ and for NGC4449 it is 2$\times$10$^{37}$ erg s$^{-1}$.
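Locating the turnover can be automated by histogramming the log luminosities and taking the most populated bin as the completeness limit. A crude sketch of the idea; the bin width and the sample values in the test are assumptions, and in practice the turnover is usually confirmed by eye:

```python
import math

def completeness_limit(log_lums, bin_width=0.2):
    """Centre of the most populated log-luminosity bin.

    For a power-law luminosity function the counts rise toward faint
    sources, so the peak bin approximates the turnover below which the
    sample becomes incomplete.
    """
    lo = min(log_lums)
    counts = {}
    for l in log_lums:
        # small epsilon guards against floating-point edge effects
        b = int(math.floor((l - lo) / bin_width + 1e-9))
        counts[b] = counts.get(b, 0) + 1
    peak = max(counts, key=counts.get)
    return lo + (peak + 0.5) * bin_width
```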
All galaxies in our sample have accurate measurements of their integrated [H$\alpha$]{} luminosity [@Kennicutt08], so we opted to use this as an SFR proxy. For consistency, we rederived the luminosities from @Kennicutt08 based on the distances in Table 1. No corrections for extinction internal to the galaxies themselves have been applied.
Since core-collapse SNe are the end points of the evolution of the most massive stars, their SNRs are good indicators of the current SFR. This work, by combining multi-wavelength samples of SNRs which often do not overlap, provides an excellent census of the SNR populations in different galaxies. Therefore, we would expect a linear relation between the number of optically-selected SNRs and the SFR (e.g., @Condon90). To verify this connection, we plot the number of photometric SNRs (Tables 3-8) above the completeness limit of each galaxy in our sample (see Fig. 16) against the integrated [H$\alpha$]{} luminosity of each galaxy (Fig. 17, top). We find a linear relation between them, but the small number of objects does not allow us to quantify their scaling relation. However, a linear correlation coefficient of 0.87 for the photometric SNRs shows that this is a significant correlation.
The non-thermal radio emission is another indicator of the SN rate, and hence of high-mass star formation (e.g., @Condon90), since it comes from electrons that are produced and accelerated in SNRs and radiate in the magnetic fields of the galaxies. Therefore, the radio emission should be correlated with the number of SNRs and the SFR. We investigate the relation between the 1.4 GHz radio emission of the galaxies in our sample and the detected number of photometric SNRs (Fig. 17, bottom). We use integrated radio fluxes from @Condon87 and find a correlation coefficient of 0.59. However, if we remove NGC4449, which appears to degrade the correlation, we find a linear correlation coefficient of 0.86. The weaker correlation between the number of SNRs and the 1.4 GHz radio luminosity could be due to a significant contribution of thermal radio emission to the 1.4 GHz luminosity.
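A simple way to check whether a single galaxy (here NGC4449) drives or degrades a correlation coefficient is to recompute it with each point removed in turn. A sketch with invented numbers, not our measured values:

```python
import math

def pearson_r(xs, ys):
    """Pearson linear correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def leave_one_out(xs, ys):
    """r recomputed with each point dropped; a large jump flags an outlier."""
    return [
        pearson_r(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        for i in range(len(xs))
    ]
```

If dropping one point raises r sharply (as removing NGC4449 raises 0.59 to 0.86 here), that point dominates the scatter and the correlation should be interpreted with care.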
Conclusions
===========
In this paper we presented a systematic spectrophotometric study of optically emitting SNRs in a sample of six nearby galaxies. The SNRs are initially selected on the basis of the [\[S [ii]{}\]]{}/[H$\alpha$]{}$\ge$0.4 criterion, revealing a total of $\sim$400 photometric SNRs (including sources with 0.3$<$([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}<$0.4). Spectroscopic observations verified the nature of 67 shock-excited sources. 23 optical SNRs in this study are also detected in other wavebands. From the analysis of the sample we find that $\sim$4$\%$ and $\sim$2$\%$ of the optically selected SNRs have X-ray and radio counterparts, respectively. The overlap between X-ray and radio classifications is 35$\%$. This small overlap between detection rates at different wavelengths could be due either to environmental effects (e.g. the properties of the surrounding medium, which strongly affect the detection rates) or to selection effects (such as the poor sensitivity of the existing radio surveys and/or the poor sensitivity of optical surveys in regions with strong star formation and hence significant [H$\alpha$]{} emission), as well as to the different evolutionary stages in the life of a remnant.\
Six sources identified as optically-selected SNRs in this study exhibit X-ray properties more consistent with XRBs. We propose that these sources are X-ray binaries coincident with an SNR.\
The present study revealed 20 SNR/[H[ii]{}]{} sources (based on their narrow-band photometry) that are associated with known X-ray or radio SNRs. Two possible interpretations are: young SNRs that have not yet entered their optically emitting phase, or SNRs embedded in [H[ii]{}]{} regions, where the SNR produces the X-ray/radio emission and the [H[ii]{}]{} region outshines the shock-excited optical emission, preventing its detection.\
There is a trend for irregular galaxies to have lower [\[N [ii]{}\]]{}/[H$\alpha$]{} ratios. This is due to the lower metallicities of these galaxies since [\[N [ii]{}\]]{}/[H$\alpha$]{} is a very sensitive metallicity indicator (more than [\[S [ii]{}\]]{}/[H$\alpha$]{}) mainly due to its secondary nucleosynthesis.\
For the optically-emitting SNRs with X-ray counterparts, we do not see a correlation between their [H$\alpha$]{} and X-ray luminosities, which is due to the presence of material in a wide range of temperatures. Additionally, we do not find any trend between the X-ray luminosity of SNRs and their [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios.\
We find evidence for a linear relation between the number of luminous optical SNRs ($\sim$10$^{37}$ erg s$^{-1}$) and SFR in our sample of galaxies.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors would like to thank the referee, John Danziger, for providing constructive comments that have improved the clarity of this manuscript. We also thank M. Allen for fruitful discussions and for providing us with different plots of the MAPPINGS III shock models. This program has been supported by NASA grant GO6-7086X and NASA LTSA grant G5-13056. AZ acknowledges support by the EU IRG grant 224878. Space Astrophysics at the University of Crete is supported by EU FP7-REGPOT grant 206469 (ASTROSPACE). Skinakas Observatory is a collaborative project of the University of Crete, the Foundation for Research and Technology-Hellas and the Max-Planck-Institute. The Kitt Peak National Observatory, National Optical Astronomy Observatory, is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under cooperative agreement with the National Science Foundation.
Allen, M. G., Groves, B. A., Dopita, M. A., Sutherland, R. S., Kewley, L. J. 2008, ApJS, 178, 20
Annibali, F., Aloisi, A., Mack, J., Tosi, M., van der Marel, R.P., Angeretti, L., Leitherer, C., Sirianni, M. 2008, AJ, 135, 1900
Asaoka, I. & Koyama, K. 1990, PASJ, 42, 625
Baldwin, J. A., Phillips, M. M., Terlevich, R. 1981, PASP, 93, 5
Bertin, E., Arnouts, S. 1996, A&AS, 117, 393
Blair, W. P., Kirshner, R. P., & Winkler, P. F. 1983, ApJ, 272, 84
Blair, W. P., & Long, K. S. 2004, ApJS, 155, 101
Boumis, P., Mavromatakis, F., Paleologou, E.V. 2002, A&A, 385, 1042
Boumis, P., Mavromatakis, F., Xilouris, E.M., Alikakos, J., Redman, M.P., Goudis, C.D. 2005, A&A, 443, 175
Boumis, P., Meaburn, J., Alikakos, J., Redman, M. P., Akras, S., Mavromatakis, F., Lopez, J. A., Caulet, A., Goudis, C. D. 2007, MNRAS, 381, 308
Boumis, P., Xilouris, E. M., Alikakos, J., Christopoulou, P. E., Mavromatakis, F., Katsiyannis, A. C., Goudis, C. D. 2009, A&A, 499, 789
Charles, P.A., & Seward, F. D. 1995, Exploring the X-ray Universe (Cambridge: Cambridge Univ. Press)
Chen, C.-H. R., Chu, Y.-H., Points, S. D. 1999, AAS, 194, 7207
Chen, C.-H. R., Chu, Y.-H., Gruendl, R. A., Points, S. D. 2000, AJ, 119, 131
Chomiuk, L., & Wilcots, E. M. 2009, AJ, 137, 3869
Chu, Y.-H., Mac Low, M.-M. 1990, ApJ, 365, 510
Condon, J. J. 1987, ApJS, 65, 485
Condon, J. J., & Yin, Q. F. 1990, ApJ, 357, 97
Cox, D. P., Raymond, J. C. 1985, ApJ, 298, 651
de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G., Buta, R. J., Paturel, G., & Fouque, P. 1995, Third Reference Catalog of Bright Galaxies (New York: Springer) (RC3)
Dickel, J. R. 1999, in IAU Symp. 190, New Views of the Magellanic Clouds, ed. Y. H. Chu, N. B. Suntzeff, J. E. Hesser, & D. A. Bohlender (San Francisco, CA: ASP), 139
Dopita, M.A., et al. 2010, Ap&SS, 330, 123
Eck, C. R., Cowan, J. J., & Branch, D. 2002, ApJ, 573, 306
Fesen, R. A., Milisavljevic, D. 2010, AJ, 140, 1163
Franchetti, N. A., Gruendl, R. A., Chu, Y.-H., Dunne, B. C., Pannuti, T. G., Kuntz, K. D., Chen, C.-H. R., Grimes, C. K., Aldridge, T. M. 2012, AJ, 143, 85
Freedman, W.L., & Madore, B.F. 1988, ApJ, 332, L63
Freedman, W.L., et al. 1994, ApJ, 427, 628
Gaensler, B.M., Slane, P.O. 2006, ARA&A, 44, 17
Garcia-Lario, P., Manchado, A., Riera, A., Mampaso, A. & Pottasch, S. R. 1991, A&A, 249, 223
Garnett, D.R. 2002, ApJ, 581, 1019
Ghavamian, P., Blair, W. P., Long, K. S., Sasaki, M., Gaetz, T. J., & Plucinsky, P. 2005, AJ, 130, 539
Green, D.A. 2009, Bulletin of the Astronomical Society of India, 37, 45 (see arXiv:0905.3699)
Hamuy, M., Walker, A.R., Suntzeff, N.B., et al. 1992, PASP, 104, 533
Hartigan, P., Raymond, J., Hartmann, L. 1987, ApJ, 316, 323
Kennicutt, R.C., Jr., Lee, J.C., Funes, J.G., S.J., Sakai, S., Akiyama, S. 2008, ApJS, 178, 247
Lasker, B.M. 1977, ApJ, 212, 390
Leonidaki, I., Zezas, A. & Boumis, P. 2010, ApJ, 725, 842
Lequeux, J., Peimbert, M., Rayo, J. F., Serrano, A., Torres-Peimbert, S. 1979, A&A, 80, 155
Long, K. S., Blair, W. P., Winkler, P. F., Becker, R. H., Gaetz, T. J., Ghavamian, P., Helfand, D. J., Hughes, J. P., Kirshner, R. P., Kuntz, K. D., McNeil, E. K., Pannuti, T. G., Plucinsky, P. P., Saul, D., Tüllmann, R., Williams, B. 2010, ApJS, 187, 495
Martin, C.L. 1997, ApJ, 491, 561
Mathewson, D. S. & Clarke, J.N. 1973, ApJ, 180, 725
Mathewson, D. S., Ford, V. L., Dopita, M. A., Tuohy, I. R., Long, K. S., Helfand, D. J. 1983, ApJS, 51, 345
Matonick, D. M., Fesen, R. A. 1997, ApJS, 112, 49
Matonick, D. M., et al. 1997, ApJS, 113, 333
Meaburn, J., Redman, M. P., Boumis, P. & Harvey, E. 2010, MNRAS, 408, 1249
Osterbrock, D. E., & Ferland, G. J. 2006, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei, 2nd edn (Sausalito, CA: University Science Books)
Ott, J., Martin, C. L., & Walter, F. 2003, ApJ, 594, 776
Pagel, B. E. J., & Edmunds, M. G. 1981, ARA&A, 19, 77
Pannuti, T. G., Schlegel, E. M., & Lacey, C. K. 2007, AJ, 133, 1361
Pilyugin, L. S., Vílchez, J. M., Contini, T. 2004, A&A, 425, 849
Raymond, J. C., Hester, J. J., Cox, D., Blair, W. P., Fesen, R. A., Gull, T. R. 1988, ApJ, 324, 869
Reach, W. T., Rho, J., Tappe, A., Pannuti, T. G., Brogan, C. L., Churchwell, E. B., Meade, M. R., Babler, B., Indebetouw, R., Whitney, B. A. 2006, AJ, 131, 1479
Reynolds, S. P., Borkowski, K. J., Green, D. A., Hwang, U., Harrus, I., Petre, R. 2009, ApJ, 695, 149
Richer, M. G., & McCall, M. L. 1995, ApJ, 445, 642
Rosa-Gonzalez, D. 2005, MNRAS, 364, 1304
Rosado, M., Georgelin, Y. M., Laval, A., Monnet, G. 1983, in Proc. IAU Symp. 101, Supernova Remnants and Their X-ray Emission, ed. P. Gorenstein & I. J. Danziger (Dordrecht: Reidel), 567
Russell, S. & Dopita, M. A. 1992, ApJ, 384, 508
Sabbadin, F., Minello, S. & Bianchini, A. 1977, A&A, 60, 147
Sabbadin, F., Ortolani, S., Bianchini, A. 1984, A&A, 131, 1
Safi-Harb, S., & Petre, R. 1999, ApJ, 512, 784
Safi-Harb, S., Harrus, I. M., Petre, R., Pavlov, G. G., Koptsevich, A. B., & Samwal, D. 2001, ApJ, 561, 308
Saha, A., Labhardt, L., Schwengeler, H., Macchetto, F.D., Panagia, N., Sandage, A. & Tammann, G.A. 1994, ApJ, 425, 14
Slane, P., Smith, R. K., Hughes, J. P., Petre, R. 2002, ApJ, 564, 284
Storchi-Bergmann, T., Calzetti, D., Kinney, A.L. 1994, ApJ, 429, 572
Stupar, M., & Parker, Q. A. 2009, MNRAS, 394, 1791
Summers, L.K., Stevens, I.R., Strickland, D.K. & Heckman, T.M. 2003, MNRAS, 342, 690
Tully, R. 1988, Nearby Galaxies Catalog (Cambridge: Cambridge University Press)
Turner, J. L., & Ho, P. T. P. 1994, ApJ, 421, 122
Viironen, K., Delgado-Inglada, G., Mampaso, A., Magrini, L., & Corradi, R.L.M. 2007, MNRAS, 381, 1719
Vukotic, B., Bojiicic, I., Pannuti, T. G., & Urosevic, D. 2005, Serb. Astron. J., 170, 101
Williams, R. M., Chu, Y.-H., Dickel, J. R., Petre, R., Smith, R. C., Tavarez, M. 1999, ApJS, 123, 467
--------- ------------ ---------- ---------- ---------------------- ------------- ------------------- ---------- -------------
Galaxy RA DEC Distance Major/Minor axis Inclination Galactic latitude Type Phys. scale
(J2000) (J2000) (Mpc) (arcmin) (degrees) (degrees) (pc)
NGC2403 07:36:51.4 65:36:09 3.2 21.9[$\times$]{}12.3 62 29 SAB(s)cd 21.7
NGC5204 13:29:36.5 58:25:07 4.8 5.0[$\times$]{}3.0 53 58 SA(s)m 32.6
NGC4395 12:25:48.9 33:32:48 2.6 13.2[$\times$]{}11.0 38 82 SA(s)m 17.6
NGC4449 12:28:11.9 44:05:40 4.2 6.2[$\times$]{}4.4 56 72 IBm 28.5
NGC3077 10:03:19.1 68:44:02 3.6 5.4[$\times$]{}4.5 42 I0 pec 24.4
NGC4214 12:15:39.2 36:19:37 4.7 8.5[$\times$]{}6.6 37 78 IAB(s)m 31.9
--------- ------------ ---------- ---------- ---------------------- ------------- ------------------- ---------- -------------
--------------------------------------------- --------------- ----------------- ------------
Filter $\lambda_{c}$ $\Delta\lambda$ T$_{peak}$
(Å) (Å) (%)
[H$\alpha$]{}+[\[N [ii]{}\]]{}6548 & 6584 Å 6570 75 80
[\[S [ii]{}\]]{}6716 & 6731 Å 6720 27 80
[\[O [iii]{}\]]{}5007 Å 5010 20 63
Continuum red 6096 134 -
Continuum blue 5470 230 -
--------------------------------------------- --------------- ----------------- ------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ---------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ1 07:36:30.4 65:35:43.4 8 10 10 107.0 $\pm$0.4 40.8 $\pm$0.3 0.38$\pm$0.01 yes (M) SNR [^6]
LBZ2 07:36:35.2 65:37:17.5 5 10 10 19.4 $\pm$0.5 12.9 $\pm$0.4 0.66$\pm$0.03 yes (M) SNR
LBZ3 07:36:41.1 65:36:18.0 6 10 10 12.8 $\pm$0.4 11.7 $\pm$0.3 0.91$\pm$0.05 yes (M) SNR
LBZ4 07:36:48.3 65:34:40.3 8 40 5 39.3 $\pm$0.5 14.1 $\pm$0.3 0.36$\pm$0.01 yes (M) SNR$^{g}$
LBZ5 07:36:50.6 65:35:35.8 6 10 10 101.0 $\pm$0.6 36.2 $\pm$0.4 0.36$\pm$0.01 yes (M) SNR
LBZ6 07:36:55.4 65:35:42.2 6 6 2 187.0 $\pm$0.7 117.0$\pm$0.3 0.63$\pm$0.03 yes (M) SNR[^7]
LBZ7 07:37:03.2 65:35:51.8 5 10 10 14.3 $\pm$0.4 4.4 $\pm$0.1 0.31$\pm$0.02 yes (M) SNR
LBZ8 07:37:03.2 65:37:13.7 6 6 2 6.0 $\pm$0.6 4.9 $\pm$0.2 0.81$\pm$0.09 yes (M) SNR[^8]
LBZ9 07:37:03.5 65:37:17.4 6 10 10 34.2 $\pm$0.5 10.4 $\pm$0.2 0.30$\pm$0.01 yes (M) SNR
LBZ10 07:37:04.9 65:36:10.7 5 5 2 5.5 $\pm$0.5 1.8 $\pm$0.2 0.33$\pm$0.05 yes (M) SNR
LBZ11 07:37:16.0 65:33:28.9 6 6 2 13.3 $\pm$0.7 4.5 $\pm$0.2 0.34$\pm$0.03 yes (M) SNR[^9]
LBZ12 07:37:21.4 65:33:06.9 6 6 2 9.0 $\pm$0.5 4.9 $\pm$0.2 0.54$\pm$0.03 yes (M) SNR[^10]
LBZ13 07:36:08.4 65:37:45.6 10 15 5 39.3 $\pm$0.5 25.1 $\pm$0.4 0.64 $\pm$0.01 no candidate SNR$^{g}$
LBZ14 07:36:09.4 65:37:45.1 6 10 10 22.0 $\pm$0.3 8.9 $\pm$0.2 0.41 $\pm$0.01 no candidate SNR
LBZ15 07:36:12.8 65:38:16.0 8 10 10 22.0 $\pm$0.3 12.2 $\pm$0.3 0.55 $\pm$0.01 no candidate SNR$^{g}$
LBZ16 07:36:17.6 65:37:11.8 8 10 10 15.8 $\pm$0.5 6.6 $\pm$0.4 0.42 $\pm$0.03 no candidate SNR$^{g}$
LBZ17 07:36:18.1 65:37:03.4 8 10 10 31.7 $\pm$0.4 22.5 $\pm$0.4 0.71 $\pm$0.02 no candidate SNR$^{g}$
LBZ18 07:36:19.0 65:37:29.6 8 10 10 24.0 $\pm$0.6 12.9 $\pm$0.5 0.54 $\pm$0.03 no candidate SNR$^{g}$
LBZ19 07:36:19.5 65:37:26.3 8 10 10 37.8 $\pm$0.6 16.2 $\pm$0.5 0.43 $\pm$0.02 no candidate SNR$^{g}$
LBZ20 07:36:19.5 65:37:37.2 6 10 10 13.8 $\pm$0.4 7.3 $\pm$0.3 0.53 $\pm$0.03 no candidate SNR
LBZ21 07:36:19.9 65:37:57.7 8 10 10 20.9 $\pm$0.4 10.9 $\pm$0.4 0.52 $\pm$0.02 no candidate SNR$^{g}$
LBZ22 07:36:24.1 65:36:07.2 5 5 2 2.35 $\pm$0.1 1.0 $\pm$0.1 0.43 $\pm$0.04 no candidate SNR[^11]
LBZ23 07:36:20.2 65:36:53.5 8 10 10 27.6 $\pm$0.5 19.8 $\pm$0.4 0.72 $\pm$0.02 no candidate SNR$^{g}$
LBZ24 07:36:20.5 07:36:20.6 6 10 10 23.5 $\pm$0.4 8.9 $\pm$0.3 0.38 $\pm$0.02 no candidate SNR
LBZ25 07:36:20.8 65:39:02.9 8 10 10 22.0 $\pm$0.5 11.6 $\pm$0.4 0.53 $\pm$0.02 no candidate SNR$^{g}$
LBZ26 07:36:21.5 65:37:36.8 10 20 30 50.1 $\pm$0.5 36.4 $\pm$0.5 0.73 $\pm$0.01 no candidate SNR$^{g}$
LBZ27 07:36:23.8 65:38:45.8 8 10 10 51.1 $\pm$0.6 19.8 $\pm$0.5 0.39 $\pm$0.01 no candidate SNR$^{g}$
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ---------------------
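The ([S II]/Hα)$_{phot}$ column is the ratio of the two flux columns that precede it. A minimal sketch with first-order error propagation (the tabulated uncertainties are somewhat larger than this purely statistical estimate, so they presumably include additional terms, e.g. from background subtraction):

```python
import math

def sii_ha_ratio(f_ha, df_ha, f_sii, df_sii):
    """[S II]/Halpha flux ratio with first-order error propagation.

    Fluxes and errors are in the tables' units (1e-15 erg s^-1 cm^-2);
    the units cancel in the ratio.
    """
    r = f_sii / f_ha
    dr = r * math.sqrt((df_sii / f_sii) ** 2 + (df_ha / f_ha) ** 2)
    return r, dr

# LBZ1 in NGC 2403 (first data row above): F(Ha) = 107.0 +/- 0.4,
# F([S II]) = 40.8 +/- 0.3, giving a ratio of ~0.38 as tabulated.
r, dr = sii_ha_ratio(107.0, 0.4, 40.8, 0.3)
```

A ratio near or above ~0.4 is the conventional optical signature separating shock-heated SNR emission from photoionized H II regions, consistent with the classifications in the last column.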
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ---------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ28 07:36:26.0 65:37:58.8 8 10 10 18.4 $\pm$0.5 18.2 $\pm$0.4 0.99 $\pm$0.03 no candidate SNR$^{b}$
LBZ29 07:36:27.0 65:37:04.8 6 10 10 23.0 $\pm$0.4 10.9 $\pm$0.3 0.47 $\pm$0.02 no candidate SNR
LBZ30 07:36:27.2 65:38:53.4 6 10 10 22.0 $\pm$0.4 15.5 $\pm$0.3 0.70 $\pm$0.02 no candidate SNR
LBZ31 07:36:27.4 65:38:48.2 6 10 10 17.4 $\pm$0.4 10.9 $\pm$0.3 0.63 $\pm$0.02 no candidate SNR
LBZ32 07:36:29.2 65:37:00.1 5 5 2 3.68 $\pm$0.3 4.3 $\pm$0.3 1.17 $\pm$0.12 no candidate SNR
LBZ33 07:36:29.9 65:40:28.8 8 10 10 12.3 $\pm$0.1 8.6 $\pm$0.1 0.70 $\pm$0.01 no candidate SNR$^{b}$
LBZ34 07:36:32.1 65:37:04.1 6 10 10 59.4 $\pm$0.3 23.7 $\pm$0.2 0.40 $\pm$0.01 no candidate SNR
LBZ35 07:36:34.6 65:37:21.9 5 5 2 5.62 $\pm$0.4 4.3 $\pm$0.4 0.77 $\pm$0.09 no candidate SNR
LBZ36 07:36:35.3 65:37:03.8 6 20 30 26.0 $\pm$0.3 21.8 $\pm$0.3 0.84 $\pm$0.02 no candidate SNR
LBZ37 07:36:35.4 65:36:45.8 6 10 10 42.3 $\pm$0.4 20.2 $\pm$0.3 0.48 $\pm$0.01 no candidate SNR
LBZ38 07:36:35.4 65:36:58.8 6 10 10 38.3 $\pm$0.4 15.6 $\pm$0.3 0.41 $\pm$0.02 no candidate SNR
LBZ39 07:36:35.5 65:37:49.5 8 10 10 28.1 $\pm$0.6 16.2 $\pm$0.5 0.58 $\pm$0.02 no candidate SNR$^{b}$
LBZ40 07:36:35.6 65:37:38.0 8 10 10 28.1 $\pm$0.6 16.9 $\pm$0.6 0.60 $\pm$0.02 no candidate SNR$^{b}$
LBZ41 07:36:35.7 65:37:59.5 6 10 10 14.3 $\pm$0.4 8.3 $\pm$0.4 0.58 $\pm$0.03 no candidate SNR
LBZ42 07:36:36.3 65:38:05.6 6 10 10 11.7 $\pm$0.4 7.9 $\pm$0.3 0.68 $\pm$0.04 no candidate SNR
LBZ43 07:36:36.4 65:37:11.5 6 6 2 9.8 $\pm$0.3 6.9 $\pm$0.3 0.70 $\pm$0.06 no candidate SNR
LBZ44 07:36:36.9 65:36:51.0 10 10 10 30.9 $\pm$0.7 34.7 $\pm$0.5 1.12 $\pm$0.03 no candidate SNR$^{b}$
LBZ45 07:36:37.5 65:36:31.5 6 10 10 23.8 $\pm$0.4 14.8 $\pm$0.3 0.62 $\pm$0.03 no candidate SNR
LBZ46 07:36:37.8 65:37:03.6 6 10 10 40.3 $\pm$0.3 18.8 $\pm$0.3 0.47 $\pm$0.01 no candidate SNR
LBZ47 07:36:37.9 65:37:52.8 8 10 10 46.5 $\pm$0.7 24.1 $\pm$0.6 0.52 $\pm$0.01 no candidate SNR$^{b}$
LBZ48 07:36:38.8 07:36:38.8 6 10 10 33.4 $\pm$0.3 19.4 $\pm$0.3 0.58 $\pm$0.02 no candidate SNR
LBZ49 07:36:40.8 65:36:20.6 6 10 10 27.6 $\pm$0.4 11.3 $\pm$0.3 0.41 $\pm$0.02 no candidate SNR
LBZ50 07:36:40.8 65:36:34.9 6 6 2 12.5 $\pm$0.3 12.8 $\pm$0.3 1.02 $\pm$0.05 no candidate SNR
LBZ51 07:36:41.0 65:36:48.6 6 10 10 36.1 $\pm$0.4 17.2 $\pm$0.3 0.48 $\pm$0.02 no candidate SNR
LBZ52 07:36:41.1 65:37:05.0 8 10 10 50.6 $\pm$0.4 25.8 $\pm$0.3 0.51 $\pm$0.01 no candidate SNR$^{b}$
LBZ53 07:36:41.2 65:36:52.7 6 10 10 48.2 $\pm$0.4 23.3 $\pm$0.3 0.48 $\pm$0.01 no candidate SNR
LBZ54 07:36:41.3 65:38:58.2 8 10 10 36.8 $\pm$0.4 16.2 $\pm$0.3 0.44 $\pm$0.01 no candidate SNR$^{b}$
LBZ55 07:36:41.5 65:36:50.5 6 10 10 35.3 $\pm$0.4 14.3 $\pm$0.3 0.40 $\pm$0.02 no candidate SNR
LBZ56 07:36:41.9 65:36:51.7 6 10 10 66.7 $\pm$0.4 29.5 $\pm$0.3 0.44 $\pm$0.01 no candidate SNR[^12]
LBZ57 07:36:42.0 65:37:10.6 6 10 10 13.7 $\pm$0.3 17.8 $\pm$0.2 1.31 $\pm$0.04 no candidate SNR
LBZ58 07:36:42.5 65:37:02.8 6 10 10 49.7 $\pm$0.4 20.4 $\pm$0.3 0.41 $\pm$0.01 no candidate SNR
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ---------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ---------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ59 07:36:42.5 65:39:03.7 6 10 10 18.9 $\pm$0.2 7.9 $\pm$0.2 0.42 $\pm$0.01 no candidate SNR
LBZ60 07:36:42.9 65:34:51.9 6 10 10 8.9 $\pm$0.3 5.9 $\pm$0.2 0.67 $\pm$0.03 no candidate SNR[^13]
LBZ61 07:36:44.1 65:39:10.9 6 10 10 16.9 $\pm$0.3 8.3 $\pm$0.2 0.49 $\pm$0.02 no candidate SNR
LBZ62 07:36:44.2 65:36:45.7 6 10 10 18.8 $\pm$0.4 18.2 $\pm$0.3 0.97 $\pm$0.04 no candidate SNR
LBZ63 07:36:44.2 65:37:20.5 6 10 10 18.9 $\pm$0.6 10.3 $\pm$0.5 0.54 $\pm$0.03 no candidate SNR
LBZ64 07:36:45.2 65:36:35.8 6 10 10 32.5 $\pm$0.4 15.0 $\pm$0.4 0.46 $\pm$0.02 no candidate SNR
LBZ65 07:36:45.3 65:36:42.0 6 10 10 64.3 $\pm$0.5 30.9 $\pm$0.4 0.48 $\pm$0.01 no candidate SNR
LBZ66 07:36:45.7 65:36:40.6 6 10 10 155.0$\pm$0.6 149.0$\pm$0.4 0.96 $\pm$0.01 no candidate SNR[^14]
LBZ67 07:36:45.8 65:36:36.0 5 10 10 84.4 $\pm$0.4 32.7 $\pm$0.3 0.39 $\pm$0.01 no candidate SNR[^15]
LBZ68 07:36:46.0 65:37:43.2 6 10 10 22.0 $\pm$0.5 12.2 $\pm$0.4 0.55 $\pm$0.02 no candidate SNR
LBZ69 07:36:46.0 65:39:06.0 12 20 30 86.8 $\pm$0.6 39.7 $\pm$0.4 0.46 $\pm$0.01 no candidate SNR$^{f}$
LBZ70 07:36:46.4 07:36:46.4 6 10 10 44.4 $\pm$0.6 20.8 $\pm$0.5 0.47 $\pm$0.01 no candidate SNR
LBZ71 07:36:47.0 65:39:44.6 8 10 10 11.2 $\pm$0.2 6.9 $\pm$0.1 0.62 $\pm$0.01 no candidate SNR$^{f}$
LBZ72 07:36:47.1 65:36:54.5 5 5 2 11.3 $\pm$0.4 6.3 $\pm$0.3 0.56 $\pm$0.05 no candidate SNR
LBZ73 07:36:47.7 65:36:07.1 6 6 2 3.2 $\pm$0.6 2.1 $\pm$0.4 0.65 $\pm$0.18 no candidate SNR
LBZ74 07:36:47.9 65:36:23.9 6 6 2 13.1 $\pm$0.6 10.1 $\pm$0.5 0.77 $\pm$0.07 no candidate SNR [^16]
LBZ75 07:36:48.0 65:37:56.4 5 10 10 19.4 $\pm$0.4 8.9 $\pm$0.3 0.46 $\pm$0.02 no candidate SNR
LBZ76 07:36:48.1 65:36:59.3 6 10 10 15.9 $\pm$0.4 12.3 $\pm$0.3 0.77 $\pm$0.04 no candidate SNR
LBZ77 07:36:48.3 65:34:58.5 6 10 10 6.1 $\pm$0.5 2.4 $\pm$0.2 0.40 $\pm$0.05 no candidate SNR
LBZ78 07:36:48.5 65:37:50.7 8 10 10 29.6 $\pm$0.6 16.2 $\pm$0.5 0.55 $\pm$0.02 no candidate SNR$^{f}$
LBZ79 07:36:48.9 65:35:30.3 6 10 10 7.3 $\pm$0.6 2.7 $\pm$0.3 0.37 $\pm$0.05 no candidate SNR
LBZ80 07:36:49.5 65:34:39.5 6 10 10 4.4 $\pm$0.4 2.1 $\pm$0.2 0.48 $\pm$0.06 no candidate SNR
LBZ81 07:36:50.9 65:36:24.5 10 15 5 62.0 $\pm$1.0 40.6 $\pm$0.8 0.65 $\pm$0.03 no candidate SNR$^{f}$
LBZ82 07:36:51.1 65:33:51.7 5 10 10 4.9 $\pm$0.1 3.6 $\pm$0.1 0.77 $\pm$0.02 no candidate SNR
LBZ83 07:36:51.1 65:36:36.6 6 10 10 26.4 $\pm$0.5 15.4 $\pm$0.4 0.58 $\pm$0.03 no candidate SNR
LBZ84 07:36:51.1 65:36:55.8 6 10 10 41.4 $\pm$0.7 18.5 $\pm$0.5 0.45 $\pm$0.01 no candidate SNR
LBZ85 07:36:51.5 65:35:36.4 6 10 10 10.1 $\pm$0.7 6.6 $\pm$0.4 0.65 $\pm$0.06 no candidate SNR
LBZ86 07:36:51.5 65:36:09.5 6 10 10 19.0 $\pm$0.7 10.1 $\pm$0.6 0.53 $\pm$0.06 no candidate SNR
LBZ87 07:36:52.2 65:33:41.9 6 6 2 3.8 $\pm$0.1 3.8 $\pm$0.1 0.99 $\pm$0.03 no candidate SNR[^17]
LBZ88 07:36:52.9 65:36:13.8 6 10 10 35.6 $\pm$0.6 16.6 $\pm$0.5 0.47 $\pm$0.03 no candidate SNR
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ---------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ89 07:36:53.4 65:35:59.8 6 10 10 58.4 $\pm$0.6 34.2 $\pm$0.5 0.59 $\pm$0.02 no candidate SNR[^18]
LBZ90 07:36:53.8 65:33:41.7 8 10 10 68.8 $\pm$0.2 30.3 $\pm$0.2 0.44 $\pm$0.01 no candidate SNR[^19]
LBZ91 07:36:54.3 65:34:04.0 5 10 10 3.2 $\pm$0.2 2.5 $\pm$0.1 0.80 $\pm$0.06 no candidate SNR
LBZ92 07:36:54.4 65:35:11.0 5 10 10 4.0 $\pm$0.4 2.5 $\pm$0.2 0.63 $\pm$0.07 no candidate SNR
LBZ93 07:36:55.1 65:35:38.1 10 15 5 78.1 $\pm$0.9 43.0 $\pm$0.5 0.55 $\pm$0.01 no candidate SNR[^20]
LBZ94 07:36:55.6 65:35:36.1 6 10 10 31.9 $\pm$0.6 13.7 $\pm$0.2 0.43 $\pm$0.02 no candidate SNR[^21]
LBZ95 07:36:55.8 65:35:38.6 6 10 10 33.8 $\pm$0.6 14.2 $\pm$0.2 0.42 $\pm$0.01 no candidate SNR[^22]
LBZ96 07:36:57.2 65:36:03.9 6 10 10 5.7 $\pm$0.6 6.9 $\pm$0.3 1.21 $\pm$0.13 no candidate SNR[^23]
LBZ97 07:36:58.1 65:36:28.7 10 10 5 20.6 $\pm$0.8 15.0 $\pm$0.4 0.73 $\pm$0.03 no candidate SNR$^{m}$
LBZ98 07:36:59.3 65:35:38.0 6 10 10 19.1 $\pm$0.5 9.8 $\pm$0.2 0.51 $\pm$0.02 no candidate SNR
LBZ99 07:37:00.0 65:37:28.9 6 6 2 10.2 $\pm$0.5 4.0 $\pm$0.4 0.39 $\pm$0.04 no candidate SNR
LBZ100 07:37:01.3 65:34:38.4 6 10 10 26.3 $\pm$0.4 10.4 $\pm$0.2 0.40 $\pm$0.01 no candidate SNR
LBZ101 07:37:01.4 65:34:35.7 6 10 10 15.5 $\pm$0.4 8.1 $\pm$0.2 0.52 $\pm$0.02 no candidate SNR
LBZ102 07:37:01.8 65:34:13.4 10 20 5 98.9 $\pm$0.6 50.3 $\pm$0.2 0.51 $\pm$0.01 no candidate SNR[^24]
LBZ103 07:37:02.4 65:36:01.7 6 6 2 18.7 $\pm$0.4 9.5 $\pm$0.2 0.51 $\pm$0.01 no candidate SNR [^25]
LBZ104 07:37:02.8 65:34:38.1 6 10 10 37.1 $\pm$0.4 17.9 $\pm$0.2 0.48 $\pm$0.01 no candidate SNR [^26]
LBZ105 07:37:03.5 65:37:25.2 6 10 10 14.8 $\pm$0.4 8.9 $\pm$0.3 0.60 $\pm$0.03 no candidate SNR
LBZ106 07:37:06.4 65:34:46.1 6 10 10 13.0 $\pm$0.5 5.6 $\pm$0.2 0.43 $\pm$0.02 no candidate SNR
LBZ107 07:37:10.7 65:33:11.0 5 5 5 9.85 $\pm$0.5 4.2 $\pm$0.2 0.42 $\pm$0.03 no candidate SNR[^27]
LBZ108 07:37:12.4 65:33:45.9 6 10 10 6.4 $\pm$0.6 4.2 $\pm$0.2 0.65 $\pm$0.07 no candidate SNR[^28]
LBZ109 07:37:21.6 65:33:14.4 6 6 1 5.57 $\pm$0.4 2.1 $\pm$0.1 0.38 $\pm$0.03 no candidate SNR[^29]
LBZ110 07:36:22.8 65:36:54.2 8 10 10 66.4 $\pm$0.5 22.5 $\pm$0.4 0.34 $\pm$0.01 no probable candidate SNR$^{m}$
LBZ111 07:36:25.6 65:36:12.4 6 10 10 43.8 $\pm$0.2 16.6 $\pm$0.2 0.38 $\pm$0.01 no probable candidate SNR
LBZ112 07:36:25.7 65:38:49.6 6 10 10 45.5 $\pm$0.6 16.9 $\pm$0.5 0.37 $\pm$0.01 no probable candidate SNR
LBZ113 07:36:26.9 65:37:01.8 6 10 10 29.1 $\pm$0.4 10.9 $\pm$0.3 0.37 $\pm$0.01 no probable candidate SNR
LBZ114 07:36:27.5 65:37:43.6 8 10 10 45.5 $\pm$0.5 16.9 $\pm$0.4 0.37 $\pm$0.01 no probable candidate SNR$^{m}$
LBZ115 07:36:30.5 65:35:24.4 8 10 10 25.9 $\pm$0.3 7.5 $\pm$0.2 0.29 $\pm$0.01 no probable candidate SNR$^{m}$
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ116 07:36:30.7 65:35:48.6 8 10 10 74.2 $\pm$0.3 24.4 $\pm$0.2 0.33 $\pm$0.01 no probable candidate SNR$^{j}$
LBZ117 07:36:37.0 65:36:35.4 6 10 10 110.0$\pm$0.4 35.2 $\pm$0.3 0.32 $\pm$0.01 no probable candidate SNR
LBZ118 07:36:37.0 65:36:39.1 8 10 10 311.0$\pm$0.6 114.0$\pm$0.5 0.37 $\pm$0.01 no probable candidate SNR[^30]
LBZ119 07:36:38.1 65:36:26.2 6 10 10 151.0$\pm$0.5 44.8 $\pm$0.4 0.30 $\pm$0.01 no probable candidate SNR
LBZ120 07:36:40.2 65:39:22.0 6 10 10 23.0 $\pm$0.2 8.60 $\pm$0.2 0.37 $\pm$0.01 no probable candidate SNR
LBZ121 07:36:42.5 65:36:11.6 6 10 10 70.8 $\pm$0.4 20.3 $\pm$0.3 0.29 $\pm$0.01 no probable candidate SNR
LBZ122 07:36:42.7 65:36:59.4 6 10 10 71.4 $\pm$0.4 23.6 $\pm$0.3 0.33 $\pm$0.01 no probable candidate SNR
LBZ123 07:36:44.2 65:35:02.6 6 6 2 71.2 $\pm$0.4 20.6 $\pm$0.3 0.29 $\pm$0.01 no probable candidate SNR
LBZ124 07:36:44.9 65:36:05.9 6 10 10 31.7 $\pm$0.4 10.2 $\pm$0.4 0.32 $\pm$0.02 no probable candidate SNR
LBZ125 07:36:45.9 65:39:39.6 8 10 10 19.9 $\pm$0.2 6.9 $\pm$0.1 0.35 $\pm$0.01 no probable candidate SNR$^{j}$
LBZ126 07:36:46.3 65:35:57.9 6 10 10 55.5 $\pm$0.5 17.3 $\pm$0.4 0.31 $\pm$0.01 no probable candidate SNR
LBZ127 07:36:46.5 65:36:10.8 6 10 10 175.0$\pm$0.6 55.3 $\pm$0.4 0.32 $\pm$0.01 no probable candidate SNR[^31]
LBZ128 07:36:47.5 65:36:19.9 6 10 10 47.7 $\pm$0.6 16.7 $\pm$0.5 0.35 $\pm$0.02 no probable candidate SNR
LBZ129 07:36:47.6 07:36:47.5 6 10 10 29.6 $\pm$0.5 10.6 $\pm$0.4 0.36 $\pm$0.01 no probable candidate SNR
LBZ130 07:36:47.9 65:36:26.3 6 10 10 52.4 $\pm$0.6 19.6 $\pm$0.5 0.37 $\pm$0.02 no probable candidate SNR
LBZ131 07:36:49.2 65:34:30.6 10 15 5 31.2 $\pm$0.6 9.7 $\pm$0.3 0.31 $\pm$0.01 no probable candidate SNR[^32]
LBZ132 07:36:49.3 65:36:21.6 6 10 10 89.5 $\pm$0.6 30.6 $\pm$0.5 0.34 $\pm$0.01 no probable candidate SNR
LBZ133 07:36:50.1 65:36:48.9 6 10 10 79.0 $\pm$0.6 25.6 $\pm$0.4 0.32 $\pm$0.01 no probable candidate SNR
LBZ134 07:36:51.0 65:36:14.3 6 10 10 41.2 $\pm$0.7 12.2 $\pm$0.5 0.30 $\pm$0.02 no probable candidate SNR
LBZ135 07:36:52.7 65:35:50.2 8 10 10 168.0$\pm$0.8 58.0 $\pm$0.6 0.35 $\pm$0.01 no probable candidate SNR [^33]
LBZ136 07:36:53.2 65:35:54.1 6 10 10 19.7 $\pm$0.8 6.2 $\pm$0.3 0.32 $\pm$0.03 no probable candidate SNR
LBZ137 07:36:53.7 65:35:11.5 5 10 10 10.9 $\pm$0.4 4.1 $\pm$0.2 0.37 $\pm$0.02 no probable candidate SNR[^34]
LBZ138 07:36:53.8 65:35:32.1 6 10 10 31.5 $\pm$0.5 9.2 $\pm$0.4 0.29 $\pm$0.02 no probable candidate SNR
LBZ139 07:36:56.3 65:34:05.6 8 10 10 48.6 $\pm$0.4 15.0 $\pm$0.2 0.31 $\pm$0.01 no probable candidate SNR [^35]
LBZ140 07:36:57.4 65:33:58.8 6 10 10 19.0 $\pm$0.3 6.5 $\pm$0.1 0.34 $\pm$0.01 no probable candidate SNR
LBZ141 07:36:57.9 65:37:31.6 6 10 10 29.6 $\pm$0.5 8.6 $\pm$0.4 0.29 $\pm$0.01 no probable candidate SNR
LBZ142 07:36:58.2 65:34:07.7 6 10 10 20.3 $\pm$0.3 7.1 $\pm$0.1 0.35 $\pm$0.01 no probable candidate SNR
LBZ143 07:37:01.3 65:34:59.5 5 5 2 13.4 $\pm$0.4 5.0 $\pm$0.1 0.37 $\pm$0.02 no probable candidate SNR
LBZ144 07:37:01.9 65:33:42.6 5 10 10 9.8 $\pm$0.5 2.7 $\pm$0.2 0.27 $\pm$0.03 no probable candidate SNR[^36]
LBZ145 07:37:02.1 65:34:36.6 6 10 10 36.9 $\pm$0.4 13.0 $\pm$0.2 0.35 $\pm$0.01 no probable candidate SNR[^37]
LBZ146 07:37:03.0 65:33:46.1 6 6 2 8.1 $\pm$0.5 2.7 $\pm$0.2 0.33 $\pm$0.03 no probable candidate SNR [^38]
LBZ147 07:37:04.6 65:36:38.2 6 10 10 31.7 $\pm$0.6 11.9 $\pm$0.4 0.38 $\pm$0.01 no probable candidate SNR
LBZ148 07:37:04.7 65:34:35.9 6 10 10 26.9 $\pm$0.4 9.2 $\pm$0.1 0.34 $\pm$0.01 no probable candidate SNR
LBZ149 07:37:05.8 65:34:32.2 10 20 5 34.0 $\pm$0.5 10.6 $\pm$0.2 0.31 $\pm$0.01 no probable candidate SNR$^{j}$
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- -----------------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ1 10:03:14.2 68:43:52.1 4 10 10 10.3$\pm$0.7 4.6$\pm$0.3 0.45$\pm$0.08 yes (M) SNR
LBZ2 10:03:16.5 68:44:41.8 4 10 10 7.0$\pm$0.4 3.6$\pm$0.2 0.51$\pm$0.08 yes (M) SNR
LBZ3 10:03:17.1 68:44:37.5 5 10 10 24.5$\pm$0.6 6.8$\pm$0.3 0.28$\pm$0.02 yes (M) SNR
LBZ4 10:03:18.4 68:43:19.3 5 5 10 13.7$\pm$0.6 3.8$\pm$0.3 0.28$\pm$0.04 yes (M) SNR
LBZ5 10:03:18.8 68:43:37.9 4 5 5 3.6$\pm$0.7 2.7$\pm$0.3 0.75$\pm$0.34 yes (M) SNR
LBZ6 10:03:23.4 68:44:14.8 4 10 10 9.9$\pm$0.8 3.4$\pm$0.4 0.34$\pm$0.08 yes (S) SNR
LBZ7 10:03:14.9 68:43:47.2 4 10 10 7.4$\pm$0.7 5.2$\pm$0.4 0.71$\pm$0.16 no candidate SNR
LBZ8 10:03:15.7 68:43:48.3 4 10 10 4.8$\pm$0.8 2.4$\pm$0.4 0.51$\pm$0.22 no candidate SNR
LBZ9 10:03:16.5 68:44:35.6 4 10 10 4.6$\pm$0.5 2.9$\pm$0.2 0.64$\pm$0.16 no candidate SNR
LBZ10 10:03:16.6 68:43:44.7 4 10 10 9.9$\pm$0.8 4.1$\pm$0.4 0.41$\pm$0.09 no candidate SNR
LBZ11 10:03:16.8 68:44:31.9 4 10 10 7.9$\pm$0.5 3.6$\pm$0.3 0.45$\pm$0.08 no candidate SNR
LBZ12 10:03:18.1 68:44:32.6 4 10 10 8.3$\pm$0.6 3.7$\pm$0.3 0.45$\pm$0.08 no candidate SNR
LBZ13 10:03:18.1 68:44:36.0 4 10 10 5.7$\pm$0.5 2.7$\pm$0.3 0.47$\pm$0.12 no candidate SNR
LBZ14 10:03:18.6 68:44:30.1 3 10 10 6.6$\pm$0.4 2.8$\pm$0.2 0.43$\pm$0.08 no candidate SNR
LBZ15 10:03:19.4 68:43:15.4 4 10 10 10.5$\pm$0.5 4.2$\pm$0.2 0.40$\pm$0.05 no candidate SNR
LBZ16 10:03:20.3 68:44:29.0 4 6 5 11.1$\pm$0.6 5.4$\pm$0.3 0.49$\pm$0.07 no candidate SNR
LBZ17 10:03:22.3 68:43:56.1 4 10 10 9.7$\pm$0.8 3.8$\pm$0.4 0.40$\pm$0.10 no candidate SNR
LBZ18 10:03:22.7 68:44:15.8 4 10 10 9.0$\pm$0.8 5.2$\pm$0.4 0.58$\pm$0.13 no candidate SNR
LBZ19 10:03:14.0 68:44:01.4 4 10 10 18.3$\pm$0.7 5.2$\pm$0.3 0.29$\pm$0.04 no probable candidate SNR
LBZ20 10:03:14.4 68:43:59.2 4 10 10 16.9$\pm$0.8 5.0$\pm$0.4 0.29$\pm$0.04 no probable candidate SNR
LBZ21 10:03:15.1 68:44:25.1 4 10 10 17.5$\pm$0.6 5.6$\pm$0.3 0.32$\pm$0.03 no probable candidate SNR
LBZ22 10:03:17.7 68:44:31.0 4 10 10 11.8$\pm$0.6 3.6$\pm$0.3 0.31$\pm$0.05 no probable candidate SNR
LBZ23 10:03:17.8 68:43:12.0 4 10 10 13.4$\pm$0.5 3.9$\pm$0.2 0.29$\pm$0.03 no probable candidate SNR
LBZ24 10:03:20.8 68:41:40.2 4 10 10 4.2$\pm$0.2 1.2$\pm$0.1 0.29$\pm$0.05 no probable candidate SNR[^39]
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- -----------------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- --------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ1 12:15:33.6 36:19:26.9 5 ext ext 18.8$\pm$0.2 4.7$\pm$0.3 0.25$\pm$0.03 yes (S) SNR
LBZ2 12:15:33.8 36:19:12.0 3 20 10 2.4$\pm$0.1 1.7$\pm$0.2 0.71$\pm$0.15 yes (M) SNR
LBZ3 12:15:33.8 36:19:30.9 5 ext ext 22.2$\pm$0.2 5.3$\pm$0.3 0.24$\pm$0.02 yes (S) SNR
LBZ4 12:15:35.7 36:17:41.6 5 10 10 16.8$\pm$0.2 9.3$\pm$0.2 0.55$\pm$0.03 yes (M) SNR
LBZ5 12:15:37.6 36:20:12.3 3 10 10 4.1$\pm$0.2 3.0$\pm$0.3 0.73$\pm$0.16 yes (S) SNR
LBZ6 12:15:37.6 36:20:33.3 6 45 10 8.2$\pm$0.3 6.5$\pm$0.4 0.79$\pm$0.12 yes (S) SNR$^{d}$
LBZ7 12:15:37.7 36:19:12.9 5 10 10 13.5$\pm$0.3 6.2$\pm$0.4 0.46$\pm$0.06 yes (S) SNR
LBZ8 12:15:37.8 36:16:12.7 6 20 30 29.1$\pm$0.1 23.0$\pm$0.2 0.79$\pm$0.02 yes (S) SNR$^{d}$
LBZ9 12:15:39.4 36:19:09.5 5 2 5 7.2$\pm$0.4 3.8$\pm$0.5 0.53$\pm$0.15 yes (M) SNR
LBZ10 12:15:39.4 36:20:06.5 5 10 10 8.2$\pm$0.4 4.8$\pm$0.5 0.58$\pm$0.13 yes (M) SNR
LBZ11 12:15:39.6 36:20:11.8 5 10 10 7.4$\pm$0.3 6.6$\pm$0.5 0.89$\pm$0.15 yes (M) SNR
LBZ12 12:15:39.9 36:20:03.5 5 6 5 5.3$\pm$0.4 3.4$\pm$0.5 0.64$\pm$0.22 yes (M) SNR
LBZ13 12:15:40.5 36:18:25.4 4 ext ext 8.9$\pm$0.2 4.1$\pm$0.2 0.46$\pm$0.05 yes (M) SNR
LBZ14 12:15:40.9 36:19:50.0 5 10 5 14.0$\pm$0.4 5.4$\pm$0.5 0.39$\pm$0.07 yes (M) SNR
LBZ15 12:15:41.7 36:18:40.5 5 5 5 3.8$\pm$0.2 3.8$\pm$0.4 1.01$\pm$0.25 yes (M) SNR
LBZ16 12:15:42.5 36:19:47.7 8 20 30 176.0$\pm$0.5 98.0$\pm$0.7 0.55$\pm$0.01 yes (M) SNR[^40]
LBZ17 12:15:44.7 36:18:31.9 3 22 3 2.6$\pm$0.9 2.0$\pm$0.1 0.78$\pm$0.12 yes (M) SNR
LBZ18 12:15:45.7 36:19:41.8 7 20 30 6.2$\pm$0.3 3.5$\pm$0.4 0.57$\pm$0.14 yes (M) SNR[^41]
LBZ19 12:15:21.8 36:19:25.0 5 10 10 4.1$\pm$0.1 2.6$\pm$0.2 0.63$\pm$0.08 no candidate SNR
LBZ20 12:15:23.1 36:21:45.0 5 10 10 1.4$\pm$0.1 1.4$\pm$0.1 1.03$\pm$0.24 no candidate SNR
LBZ21 12:15:23.6 36:17:00.5 5 10 10 3.2$\pm$0.1 2.0$\pm$0.2 0.61$\pm$0.09 no candidate SNR
LBZ22 12:15:23.8 36:20:37.4 5 10 10 8.9$\pm$0.1 4.2$\pm$0.1 0.47$\pm$0.03 no candidate SNR
LBZ23 12:15:25.9 36:22:04.6 5 10 10 3.1$\pm$0.1 1.4$\pm$0.2 0.46$\pm$0.09 no candidate SNR
LBZ24 12:15:30.2 36:16:50.1 5 10 10 12.7$\pm$0.1 5.0$\pm$0.2 0.40$\pm$0.02 no candidate SNR
LBZ25 12:15:31.9 36:22:24.3 5 10 10 2.7$\pm$0.1 1.4$\pm$0.2 0.51$\pm$0.12 no candidate SNR
LBZ26 12:15:32.1 36:22:05.6 5 10 10 11.1$\pm$0.2 6.1$\pm$0.2 0.55$\pm$0.04 no candidate SNR
LBZ27 12:15:32.4 36:22:20.7 5 10 10 8.6$\pm$0.2 3.2$\pm$0.2 0.37$\pm$0.05 no candidate SNR
LBZ28 12:15:32.6 36:21:58.5 3 4 10 2.7$\pm$0.1 1.5$\pm$0.1 0.55$\pm$0.08 no candidate SNR
LBZ29 12:15:32.7 36:21:50.1 5 10 10 2.9$\pm$0.1 2.5$\pm$0.2 0.86$\pm$0.14 no candidate SNR
LBZ30 12:15:32.9 36:22:13.2 5 10 10 10.6$\pm$0.1 7.0$\pm$0.2 0.66$\pm$0.04 no candidate SNR
LBZ31 12:15:33.3 36:19:04.4 5 10 10 5.5$\pm$0.2 3.5$\pm$0.3 0.64$\pm$0.10 no candidate SNR
LBZ32 12:15:33.3 36:19:25.6 5 10 10 6.8$\pm$0.2 3.5$\pm$0.3 0.51$\pm$0.09 no candidate SNR
LBZ33 12:15:33.3 36:21:07.9 5 10 10 7.9$\pm$0.1 3.0$\pm$0.2 0.38$\pm$0.05 no candidate SNR
LBZ34 12:15:33.3 36:21:56.4 5 10 10 12.1$\pm$0.2 6.9$\pm$0.2 0.57$\pm$0.04 no candidate SNR
LBZ35 12:15:33.4 36:19:01.0 5 10 10 5.5$\pm$0.2 4.9$\pm$0.3 0.89$\pm$0.11 no candidate SNR[^42]
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- --------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ---------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ36 12:15:34.0 36:21:53.5 5 10 10 5.3$\pm$0.1 2.3$\pm$0.2 0.43$\pm$0.07 no candidate SNR
LBZ37 12:15:34.5 36:18:23.9 5 10 10 6.2$\pm$0.2 4.1$\pm$0.2 0.67$\pm$0.08 no candidate SNR
LBZ38 12:15:35.2 36:21:18.4 5 10 10 5.8$\pm$0.2 2.7$\pm$0.2 0.46$\pm$0.07 no candidate SNR
LBZ39 12:15:35.2 36:22:16.2 7 15 5 8.7$\pm$0.2 8.5$\pm$0.3 0.98$\pm$0.08 no candidate SNR$^{d}$
LBZ40 12:15:35.4 36:21:12.2 5 10 10 4.6$\pm$0.2 2.0$\pm$0.2 0.43$\pm$0.09 no candidate SNR
LBZ41-1 12:15:35.8 36:20:52.8 5 10 10 6.3$\pm$0.2 4.0$\pm$0.3 0.63$\pm$0.09 no candidate SNR
LBZ42-2 12:15:35.9 36:20:51.1 5 10 10 9.4$\pm$0.2 5.8$\pm$0.3 0.61$\pm$0.06 no candidate SNR
LBZ43 12:15:36.2 36:20:48.6 5 10 10 9.9$\pm$0.2 7.3$\pm$0.3 0.74$\pm$0.07 no candidate SNR
LBZ44 12:15:36.7 36:20:55.7 5 10 10 16.9$\pm$0.2 9.0$\pm$0.3 0.53$\pm$0.04 no candidate SNR
LBZ45 12:15:37.9 36:19:55.6 5 10 10 39.3$\pm$0.4 16.0$\pm$0.7 0.41$\pm$0.03 no candidate SNR
LBZ46 12:15:38.0 36:21:16.4 5 10 10 4.3$\pm$0.2 2.7$\pm$0.3 0.63$\pm$0.13 no candidate SNR
LBZ47 12:15:38.0 36:22:22.4 10 12 5 66.7$\pm$0.4 42.0$\pm$0.8 0.63$\pm$0.02 no candidate SNR[^43]
LBZ48 12:15:38.3 36:19:08.6 5 10 10 8.5$\pm$0.3 4.8$\pm$0.5 0.56$\pm$0.11 no candidate SNR
LBZ49-1 12:15:38.3 36:20:09.3 5 10 10 20.5$\pm$0.4 8.2$\pm$0.5 0.40$\pm$0.05 no candidate SNR
LBZ50-2 12:15:38.6 36:20:09.5 5 10 10 14.0$\pm$0.4 11.0$\pm$0.5 0.78$\pm$0.08 no candidate SNR
LBZ51-3 12:15:38.8 36:20:09.7 5 10 10 12.8$\pm$0.4 5.0$\pm$0.5 0.39$\pm$0.08 no candidate SNR
LBZ52 12:15:38.9 36:19:16.1 5 10 10 14.0$\pm$0.4 6.6$\pm$0.5 0.47$\pm$0.07 no candidate SNR
LBZ53 12:15:39.0 36:19:05.4 5 10 10 7.4$\pm$0.3 6.6$\pm$0.4 0.89$\pm$0.16 no candidate SNR
LBZ54 12:15:39.0 36:19:08.6 5 5 5 7.2$\pm$0.4 3.9$\pm$0.5 0.54$\pm$0.14 no candidate SNR
LBZ55 12:15:39.2 36:20:12.1 5 10 10 6.5$\pm$0.3 2.4$\pm$0.5 0.37$\pm$0.14 no candidate SNR
LBZ56 12:15:39.4 36:20:54.1 4 20 5 2.2$\pm$0.1 2.0$\pm$0.2 0.89$\pm$0.21 no candidate SNR[^44]
LBZ57 12:15:40.0 36:18:39.4 5 10 10 9.6$\pm$0.2 8.9$\pm$0.4 0.92$\pm$0.09 no candidate SNR[^45]
LBZ58 12:15:40.2 36:18:41.1 5 10 10 10.8$\pm$0.2 8.0$\pm$0.4 0.74$\pm$0.08 no candidate SNR
LBZ59 12:15:40.8 36:19:58.8 5 10 10 15.2$\pm$0.3 8.3$\pm$0.5 0.54$\pm$0.07 no candidate SNR
LBZ60 12:15:41.8 36:18:50.5 5 10 10 5.6$\pm$0.3 3.1$\pm$0.4 0.55$\pm$0.14 no candidate SNR
LBZ61 12:15:41.9 36:19:35.9 5 10 10 7.4$\pm$0.3 5.1$\pm$0.4 0.69$\pm$0.13 no candidate SNR
LBZ62 12:15:42.0 36:19:43.1 5 10 10 8.4$\pm$0.3 7.0$\pm$0.4 0.83$\pm$0.12 no candidate SNR
LBZ63 12:15:42.7 36:18:34.1 5 10 10 36.0$\pm$0.3 23.0$\pm$0.4 0.64$\pm$0.02 no candidate SNR
LBZ64 12:15:42.8 36:16:58.6 5 10 10 4.3$\pm$0.1 1.7$\pm$0.2 0.40$\pm$0.07 no candidate SNR
LBZ65 12:15:44.7 36:18:03.6 5 10 10 6.3$\pm$0.2 3.5$\pm$0.2 0.55$\pm$0.07 no candidate SNR
LBZ66 12:15:46.1 36:17:02.0 5 10 10 10.1$\pm$0.1 3.9$\pm$0.2 0.39$\pm$0.03 no candidate SNR
LBZ67 12:15:46.2 36:17:39.4 5 10 10 6.7$\pm$0.1 5.2$\pm$0.2 0.78$\pm$0.05 no candidate SNR
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ---------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ68 12:15:47.1 36:17:14.8 5 10 10 4.8$\pm$0.1 2.7$\pm$0.2 0.56$\pm$0.06 no candidate SNR
LBZ69-1 12:15:47.6 36:17:37.2 5 10 10 8.2$\pm$0.1 5.4$\pm$0.2 0.65$\pm$0.05 no candidate SNR
LBZ70-2 12:15:47.7 36:17:35.5 5 10 10 10.1$\pm$0.1 7.5$\pm$0.2 0.74$\pm$0.04 no candidate SNR
LBZ71-3 12:15:47.8 36:17:37.1 5 10 10 7.9$\pm$0.1 5.6$\pm$0.2 0.71$\pm$0.05 no candidate SNR
LBZ72-4 12:15:47.9 36:17:36.7 5 10 10 8.6$\pm$0.1 5.9$\pm$0.2 0.69$\pm$0.05 no candidate SNR
LBZ73 12:15:48.8 36:17:02.3 6 70 5 8.9$\pm$0.1 4.8$\pm$0.2 0.54$\pm$0.05 no candidate SNR[^46]
LBZ74 12:15:33.2 36:16:45.3 5 10 10 5.8$\pm$0.1 1.9$\pm$0.1 0.33$\pm$0.04 no probable candidate SNR
LBZ75 12:15:34.9 36:22:48.5 5 10 10 4.6$\pm$0.1 1.5$\pm$0.1 0.33$\pm$0.05 no probable candidate SNR
LBZ76 12:15:35.4 36:19:44.6 5 10 10 14.0$\pm$0.3 4.6$\pm$0.5 0.33$\pm$0.06 no probable candidate SNR
LBZ77 12:15:35.8 36:21:02.0 6 10 10 27.4$\pm$0.2 9.1$\pm$0.3 0.33$\pm$0.02 no probable candidate SNR$^{f}$
LBZ78 12:15:36.3 36:20:02.6 5 10 10 47.9$\pm$0.4 15.0$\pm$0.5 0.32$\pm$0.02 no probable candidate SNR
LBZ79 12:15:36.6 36:22:44.6 5 10 10 14.2$\pm$0.2 4.1$\pm$0.2 0.29$\pm$0.02 no probable candidate SNR
LBZ80 12:15:38.2 36:19:45.2 10 10 5 94.1$\pm$0.6 33.0$\pm$1.1 0.35$\pm$0.02 no probable candidate SNR[^47]
LBZ81 12:15:38.6 36:20:04.6 5 10 10 42.8$\pm$0.4 14.0$\pm$0.6 0.33$\pm$0.02 no probable candidate SNR
LBZ82 12:15:38.9 36:18:58.9 4 ext ext 25.7$\pm$0.2 8.5$\pm$0.3 0.33$\pm$0.02 no probable candidate SNR[^48]
LBZ83 12:15:40.2 36:19:30.2 5 10 10 410.0$\pm$1.0 150.0$\pm$0.9 0.36$\pm$0.01 no probable candidate SNR[^49]
LBZ84 12:15:40.8 36:18:46.3 5 10 10 13.5$\pm$0.3 4.7$\pm$0.4 0.34$\pm$0.05 no probable candidate SNR
LBZ85 12:15:40.8 36:18:49.9 5 10 10 23.9$\pm$0.3 6.9$\pm$0.4 0.29$\pm$0.03 no probable candidate SNR
LBZ86 12:15:40.9 36:18:52.8 5 10 10 23.9$\pm$0.3 7.9$\pm$0.4 0.33$\pm$0.03 no probable candidate SNR
LBZ87 12:15:41.9 36:19:15.5 6 6 2 83.8$\pm$0.5 30.0$\pm$0.6 0.36$\pm$0.01 no probable candidate SNR[^50]
LBZ88 12:15:42.9 36:18:13.3 5 10 10 23.9$\pm$0.2 8.5$\pm$0.3 0.36$\pm$0.02 no probable candidate SNR
LBZ89 12:15:43.1 36:16:49.6 5 10 10 17.1$\pm$0.2 6.0$\pm$0.2 0.35$\pm$0.02 no probable candidate SNR
LBZ90 12:15:43.6 36:16:50.0 5 10 10 20.5$\pm$0.2 6.7$\pm$0.2 0.33$\pm$0.02 no probable candidate SNR
LBZ91 12:15:44.7 36:22:53.3 5 10 10 2.7$\pm$0.1 1.0$\pm$0.2 0.36$\pm$0.10 no probable candidate SNR
LBZ92 12:15:50.8 36:21:25.0 5 10 10 5.5$\pm$0.1 1.7$\pm$0.1 0.31$\pm$0.04 no probable candidate SNR
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ1       12:25:44.9   33:30:36.3   6       10      10      14.0$\pm$0.2                 3.5$\pm$0.1                  0.25$\pm$0.02                               yes (M)   SNR
LBZ2 12:25:46.1 33:30:47.2 4 10 10 4.3$\pm$0.1 1.2$\pm$0.1 0.27$\pm$0.04 yes (M) SNR
LBZ3 12:25:47.2 33:32:45.3 3 10 10 5.5$\pm$0.1 1.4$\pm$0.1 0.26$\pm$0.04 yes (M) SNR
LBZ4 12:25:53.4 33:31:03.9 5 20 30 14.0$\pm$0.2 3.3$\pm$0.1 0.24$\pm$0.02 yes (M) SNR
LBZ5 12:25:54.1 33:30:57.6 6 15 5 17.0$\pm$0.3 4.3$\pm$0.1 0.25$\pm$0.02 yes (S) SNR
LBZ6 12:25:54.3 33:30:50.0 4 10 5 1.5$\pm$0.2 1.3$\pm$0.2 0.88$\pm$0.43 yes (S) SNR
LBZ7 12:25:31.9 33:33:49.6 4 20 10 2.4$\pm$0.1 0.8$\pm$0.1 0.37$\pm$0.05 no candidate SNR
LBZ8 12:25:37.9 33:30:34.3 5 10 10 5.3$\pm$0.1 2.0$\pm$0.1 0.39$\pm$0.03 no candidate SNR
LBZ9 12:25:44.6 33:35:08.4 4 10 10 4.4$\pm$0.2 1.6$\pm$0.1 0.36$\pm$0.06 no candidate SNR
LBZ10 12:25:50.7 33:30:09.0 4 10 10 3.1$\pm$0.1 1.1$\pm$0.1 0.36$\pm$0.05 no candidate SNR
LBZ11 12:25:52.5 33:30:25.9 4 5 5 3.9$\pm$0.2 2.0$\pm$0.1 0.53$\pm$0.08 no candidate SNR
LBZ12 12:25:53.4 33:31:05.9 5 6 5 7.5$\pm$0.2 3.0$\pm$0.1 0.40$\pm$0.04 no candidate SNR
LBZ13 12:25:54.5 33:31:03.4 4 10 10 2.7$\pm$0.2 1.8$\pm$0.1 0.64$\pm$0.14 no candidate SNR
LBZ14 12:25:54.6 33:31:14.6 4 10 10 5.8$\pm$0.2 2.4$\pm$0.1 0.41$\pm$0.05 no candidate SNR
LBZ15 12:25:54.9 33:31:12.3 4 10 10 5.3$\pm$0.2 1.9$\pm$0.1 0.35$\pm$0.05 no candidate SNR
LBZ16 12:25:56.6 33:31:31.7 4 10 10 4.1$\pm$0.2 2.8$\pm$0.1 0.68$\pm$0.11 no candidate SNR
LBZ17 12:25:57.2 33:36:22.5 4 10 10 2.6$\pm$0.1 0.7$\pm$0.1 0.34$\pm$0.06 no candidate SNR
LBZ18 12:25:57.7 33:31:54.7 4 10 10 6.2$\pm$0.2 2.2$\pm$0.1 0.36$\pm$0.04 no candidate SNR
LBZ19 12:25:59.9 33:31:32.6 4 10 10 5.5$\pm$0.2 2.0$\pm$0.1 0.37$\pm$0.05 no candidate SNR
LBZ20 12:26:01.1 33:29:04.3 4 10 10 2.0$\pm$0.1 0.7$\pm$0.1 0.35$\pm$0.06 no candidate SNR
LBZ21 12:25:31.3 33:35:24.8 4 10 10 3.1$\pm$0.1 0.8$\pm$0.1 0.26$\pm$0.04 no probable candidate SNR
LBZ22 12:25:34.7 33:33:02.9 4 10 10 3.1$\pm$0.1 0.9$\pm$0.1 0.30$\pm$0.05 no probable candidate SNR
LBZ23 12:25:42.1 33:31:22.2 4 10 10 3.9$\pm$0.1 1.2$\pm$0.1 0.32$\pm$0.04 no probable candidate SNR
LBZ24 12:25:42.7 33:30:47.9 4 10 10 7.9$\pm$0.2 2.2$\pm$0.1 0.27$\pm$0.03 no probable candidate SNR
LBZ25 12:25:42.7 33:31:03.5 4 10 10 5.3$\pm$0.2 1.8$\pm$0.1 0.34$\pm$0.04 no probable candidate SNR
LBZ26 12:25:43.5 33:33:30.2 4 10 10 16.0$\pm$0.2 4.5$\pm$0.1 0.29$\pm$0.02 no probable candidate SNR
LBZ27 12:25:44.3 33:32:56.2 4 10 10 9.1$\pm$0.2 2.5$\pm$0.1 0.27$\pm$0.03 no probable candidate SNR
LBZ28 12:25:45.7 33:30:51.6 4 10 10 5.3$\pm$0.1 1.6$\pm$0.1 0.29$\pm$0.03 no probable candidate SNR
LBZ29 12:25:49.1 33:32:36.8 4 10 10 8.9$\pm$0.2 2.3$\pm$0.2 0.26$\pm$0.04 no probable candidate SNR
LBZ30 12:25:49.5 33:30:08.4 4 5 5 3.1$\pm$0.1 1.0$\pm$0.1 0.34$\pm$0.05 no probable candidate SNR
LBZ31 12:25:50.5 33:30:16.9 4 10 10 5.1$\pm$0.1 1.5$\pm$0.1 0.30$\pm$0.03 no probable candidate SNR
LBZ32 12:25:50.9 33:29:29.7 4 10 10 4.1$\pm$0.1 1.4$\pm$0.1 0.34$\pm$0.04 no probable candidate SNR
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ33 12:25:51.9 33:30:32.8 4 10 10 4.3$\pm$0.1 1.1$\pm$0.1 0.26$\pm$0.04 no probable candidate SNR
LBZ34 12:25:53.6 33:35:09.9 4 10 10 3.4$\pm$0.1 0.9$\pm$0.1 0.26$\pm$0.04 no probable candidate SNR
LBZ35 12:25:53.7 33:30:57.4 4 10 10 5.8$\pm$0.2 1.8$\pm$0.1 0.32$\pm$0.05 no probable candidate SNR
LBZ36 12:25:53.9 33:30:49.5 4 10 10 3.6$\pm$0.2 1.1$\pm$0.1 0.29$\pm$0.09 no probable candidate SNR
LBZ37 12:25:55.7 33:31:30.5 4 10 10 8.7$\pm$0.2 2.6$\pm$0.1 0.30$\pm$0.03 no probable candidate SNR
LBZ38 12:25:56.3 33:30:10.2 4 10 10 4.9$\pm$0.1 1.4$\pm$0.1 0.28$\pm$0.03 no probable candidate SNR
LBZ39 12:25:56.7 33:35:18.9 5 20 30 6.0$\pm$0.1 1.9$\pm$0.1 0.31$\pm$0.03 no probable candidate SNR
LBZ40 12:25:59.2 33:36:40.4 4 10 10 2.9$\pm$0.1 0.8$\pm$0.1 0.28$\pm$0.05 no probable candidate SNR
LBZ41 12:25:59.6 33:31:04.3 4 10 10 7.5$\pm$0.2 2.0$\pm$0.1 0.27$\pm$0.03 no probable candidate SNR
LBZ42 12:25:59.7 33:31:21.4 4 10 10 6.2$\pm$0.2 1.6$\pm$0.1 0.26$\pm$0.04 no probable candidate SNR
LBZ43 12:26:00.7 33:28:40.1 4 10 10 3.4$\pm$0.1 1.0$\pm$0.1 0.31$\pm$0.05 no probable candidate SNR
LBZ44 12:26:02.2 33:31:21.3 4 10 10 5.1$\pm$0.2 1.4$\pm$0.1 0.27$\pm$0.04 no probable candidate SNR
LBZ45 12:26:07.5 33:32:03.0 4 20 10 3.2$\pm$0.1 0.9$\pm$0.1 0.27$\pm$0.04 no probable candidate SNR
LBZ46 12:26:08.2 33:35:26.4 4 20 10 2.7$\pm$0.1 0.8$\pm$0.1 0.29$\pm$0.05 no probable candidate SNR
LBZ47 12:26:08.8 33:32:46.2 4 10 10 6.0$\pm$0.2 1.7$\pm$0.1 0.29$\pm$0.05 no probable candidate SNR
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ----------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ1 12:28:07.3 44:04:44.9 4 4 5 3.8$\pm$0.2 3.6$\pm$0.3 0.93$\pm$0.21 yes (M) SNR
LBZ2 12:28:07.4 44:05:51.6 5 10 10 8.0$\pm$0.3 4.4$\pm$0.3 0.54$\pm$0.08 yes (M) SNR
LBZ3 12:28:09.5 44:05:49.2 5 10 10 19.0$\pm$0.4 7.0$\pm$0.5 0.39$\pm$0.05 yes (M) SNR
LBZ4 12:28:11.8 44:05:13.9 5 10 10 17.0$\pm$0.5 5.4$\pm$0.6 0.30$\pm$0.06 yes (M) SNR
LBZ5 12:28:11.9 44:04:49.4 5 5 5 7.5$\pm$0.3 4.4$\pm$0.3 0.59$\pm$0.10 yes (M) SNR
LBZ6b 12:28:12.6 44:06:38.1 3 5 5 2.9$\pm$0.2 1.0$\pm$0.2 0.34$\pm$0.16 yes (M) SNR
LBZ6a 12:28:12.7 44:06:39.5 3 5 5 3.1$\pm$0.2 1.9$\pm$0.2 0.62$\pm$0.19 yes (M) SNR
LBZ7 12:28:13.0 44:06:06.1 5 10 10 77.0$\pm$0.6 18.0$\pm$0.6 0.24$\pm$0.01 yes (S) SNR
LBZ8 12:28:13.0 44:06:28.8 5 10 10 24.0$\pm$0.4 9.0$\pm$0.4 0.36$\pm$0.03 yes (S) SNR
LBZ9 12:28:13.1 44:06:34.9 5 10 10 41.0$\pm$0.5 11.0$\pm$0.5 0.27$\pm$0.02 yes (S) SNR
LBZ10 12:28:13.1 44:06:22.7 6 10 10 20.0$\pm$0.5 7.7$\pm$0.5 0.36$\pm$0.05 yes (S) SNR
LBZ11 12:28:13.2 44:06:41.9 4 5 5 4.1$\pm$0.3 3.7$\pm$0.3 0.91$\pm$0.23 yes (S) SNR
LBZ12 12:28:13.3 44:05:56.1 5 10 10 17.0$\pm$0.6 13.0$\pm$0.7 0.77$\pm$0.10 yes (M) SNR
LBZ13 12:28:14.4 44:05:45.4 5 10 10 7.9$\pm$0.5 4.9$\pm$0.5 0.63$\pm$0.16 yes (M) SNR
LBZ14 12:28:14.9 44:05:29.3 4 5 3 4.3$\pm$0.3 3.4$\pm$0.3 0.79$\pm$0.19 yes (M) SNR
LBZ15 12:28:14.9 44:06:56.3 5 5 5 8.4$\pm$0.3 3.0$\pm$0.3 0.35$\pm$0.07 yes (M) SNR
LBZ16 12:28:15.1 44:06:08.5 3 10 10 5.8$\pm$0.2 2.9$\pm$0.3 0.51$\pm$0.10 yes (M) SNR
LBZ17 12:28:16.2 44:06:15.6 3 4 5 3.4$\pm$0.2 2.3$\pm$0.2 0.68$\pm$0.16 yes (M) SNR
LBZ18 12:28:19.5 44:06:13.9 5 5 2 6.0$\pm$0.3 6.6$\pm$0.3 1.12$\pm$0.17 yes (M) SNR
LBZ19 12:27:55.8 44:05:35.7 5 6 10 11.0$\pm$0.2 4.0$\pm$0.2 0.37$\pm$0.03 no candidate SNR
LBZ20 12:28:04.1 44:05:45.1 5 10 10 8.7$\pm$0.2 4.6$\pm$0.2 0.52$\pm$0.05 no candidate SNR
LBZ21 12:28:06.9 44:03:26.4 5 10 10 24.0$\pm$0.2 6.2$\pm$0.2 0.46$\pm$0.05 no candidate SNR
LBZ22      12:28:07.0   44:04:29.4   5       5       5       8.1$\pm$0.3                  3.6$\pm$0.3                  0.45$\pm$0.05                               no        candidate SNR
LBZ23 12:28:07.8 44:05:43.7 5 10 10 9.6$\pm$0.3 7.7$\pm$0.4 0.80$\pm$0.10 no candidate SNR
LBZ24 12:28:08.5 44:05:42.8 5 10 10 5.1$\pm$0.4 2.3$\pm$0.4 0.53$\pm$0.07 no candidate SNR
LBZ25 12:28:08.6 44:04:45.0 5 10 10 14.0$\pm$0.3 5.3$\pm$0.4 0.43$\pm$0.05 no candidate SNR
LBZ26 12:28:08.6 44:05:56.3 5 5 5 10.0$\pm$0.3 7.0$\pm$0.4 0.67$\pm$0.11 no candidate SNR
LBZ27 12:28:08.8 44:04:48.4 5 10 10 15.0$\pm$0.4 6.2$\pm$0.4 0.37$\pm$0.04 no candidate SNR
LBZ28 12:28:08.9 44:05:35.1 5 10 10 19.0$\pm$0.5 7.6$\pm$0.5 0.41$\pm$0.06 no candidate SNR
LBZ29 12:28:09.4 44:06:00.2 5 10 10 11.0$\pm$0.3 6.1$\pm$0.4 0.54$\pm$0.07 no candidate SNR
LBZ30 12:28:09.4 44:06:47.6 5 5 5 13.0$\pm$0.3 6.0$\pm$0.2 0.43$\pm$0.10 no candidate SNR
LBZ31 12:28:09.5 44:05:34.9 5 10 10 6.3$\pm$0.6 7.1$\pm$0.6 1.13$\pm$0.35 no candidate SNR
LBZ32 12:28:09.5 44:05:57.6 5 10 10 19.0$\pm$0.4 7.2$\pm$0.4 0.40$\pm$0.04 no candidate SNR
LBZ33 12:28:09.6 44:06:21.2 5 5 5 5.5$\pm$0.3 3.0$\pm$0.3 0.54$\pm$0.13 no candidate SNR
LBZ34 12:28:09.8 44:07:00.9 5 10 10 26.0$\pm$0.2 5.8$\pm$0.2 0.51$\pm$0.08 no candidate SNR
LBZ35 12:28:10.5 44:05:08.1 5 6 6 18.0$\pm$0.6 8.1$\pm$0.6 0.52$\pm$0.09 no candidate SNR
LBZ36 12:28:10.6 44:06:06.9 5 10 10 10.0$\pm$0.4 3.5$\pm$0.4 0.34$\pm$0.07 no candidate SNR
LBZ37 12:28:11.0 44:06:14.1 5 10 10 13.0$\pm$0.4 4.5$\pm$0.4 0.36$\pm$0.06 no candidate SNR
LBZ38 12:28:11.2 44:04:53.7 5 10 10 14.0$\pm$0.4 5.2$\pm$0.4 0.37$\pm$0.06 no candidate SNR
LBZ39 12:28:12.0 44:05:18.8 5 10 10 22.0$\pm$0.6 3.3$\pm$0.6 0.58$\pm$0.24 no candidate SNR
LBZ40 12:28:12.0 44:06:52.6 5 10 10 3.9$\pm$0.4 3.6$\pm$0.3 0.94$\pm$0.29 no candidate SNR
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ----------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- -----------------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ41 12:28:12.2 44:06:47.4 5 10 10 8.2$\pm$0.4 4.4$\pm$0.4 0.54$\pm$0.10 no candidate SNR
LBZ42 12:28:12.2 44:06:50.9 5 10 10 7.5$\pm$0.4 2.7$\pm$0.3 0.36$\pm$0.09 no candidate SNR
LBZ43-1 12:28:13.8 44:06:35.2 5 10 10 6.5$\pm$0.4 6.2$\pm$0.4 0.85$\pm$0.31 no candidate SNR
LBZ44-2 12:28:14.0 44:06:33.7 5 10 10 10.0$\pm$0.4 3.4$\pm$0.4 0.44$\pm$0.07 no candidate SNR
LBZ45-3 12:28:14.0 44:06:35.4 5 5 2 11.0$\pm$0.4 3.1$\pm$0.4 0.51$\pm$0.13 no candidate SNR
LBZ46      12:28:14.1   44:05:59.0   5       6       5       8.2$\pm$0.5                  10.0$\pm$0.5                 0.79$\pm$0.16                               no        candidate SNR
LBZ47 12:28:14.1 44:06:31.0 5 5 3 15.0$\pm$0.4 3.2$\pm$0.4 0.53$\pm$0.16 no candidate SNR
LBZ48 12:28:14.2 44:05:07.1 5 10 10 14.0$\pm$0.3 7.0$\pm$0.3 0.59$\pm$0.16 no candidate SNR
LBZ49 12:28:14.2 44:05:10.1 5 10 10 5.8$\pm$0.3 3.0$\pm$0.3 0.41$\pm$0.05 no candidate SNR
LBZ50 12:28:14.4 44:06:01.7 5 10 10 11.0$\pm$0.5 6.6$\pm$0.5 0.46$\pm$0.09 no candidate SNR
LBZ51 12:28:14.4 44:06:24.6 6 6 5 15.0$\pm$0.5 7.6$\pm$0.5 0.50$\pm$0.07 no candidate SNR
LBZ52 12:28:14.5 44:06:04.4 5 10 10 12.0$\pm$0.5 2.9$\pm$0.5 0.44$\pm$0.07 no candidate SNR
LBZ53 12:28:14.6 44:06:02.2 5 10 10 8.2$\pm$0.5 5.5$\pm$0.5 0.48$\pm$0.10 no candidate SNR
LBZ54 12:28:15.2 44:06:04.0 5 10 10 8.7$\pm$0.4 5.9$\pm$0.4 0.43$\pm$0.04 no candidate SNR
LBZ55 12:28:15.3 44:05:57.8 5 10 10 17.0$\pm$0.4 5.2$\pm$0.4 0.75$\pm$0.14 no candidate SNR
LBZ56 12:28:15.4 44:06:56.3 5 5 2 14.0$\pm$0.3 4.6$\pm$0.3 0.81$\pm$0.23 no candidate SNR
LBZ57 12:28:19.2 44:06:55.7 5 10 10 3.0$\pm$0.2 6.7$\pm$0.2 0.55$\pm$0.07 no candidate SNR[^51]
LBZ58 12:28:07.9 44:05:20.5 5 10 10 7.0$\pm$0.4 3.3$\pm$0.4 0.28$\pm$0.04 no probable candidate SNR
LBZ59 12:28:08.5 44:04:43.4 5 10 10 16.0$\pm$0.3 6.5$\pm$0.3 0.35$\pm$0.04 no probable candidate SNR
LBZ60 12:28:09.7 44:05:54.8 5 10 10 22.0$\pm$0.4 6.0$\pm$0.4 0.27$\pm$0.03 no probable candidate SNR[^52]
LBZ61 12:28:11.8 44:05:16.7 5 10 10 24.0$\pm$0.6 6.9$\pm$0.6 0.25$\pm$0.08 no probable candidate SNR
LBZ62 12:28:11.9 44:05:08.4 5 10 10 5.1$\pm$0.5 6.3$\pm$0.5 0.32$\pm$0.04 no probable candidate SNR
LBZ63 12:28:11.9 44:05:10.2 5 10 10 22.0$\pm$0.5 3.4$\pm$0.5 0.27$\pm$0.04 no probable candidate SNR
LBZ64 12:28:14.2 44:06:18.7 5 10 10 26.0$\pm$0.4 7.1$\pm$0.4 0.27$\pm$0.03 no probable candidate SNR
LBZ65 12:28:14.3 44:04:33.4 5 10 10 12.0$\pm$0.2 4.4$\pm$0.2 0.35$\pm$0.03 no probable candidate SNR
LBZ66 12:28:14.6 44:05:16.9 5 5 5 24.0$\pm$0.4 7.5$\pm$0.4 0.30$\pm$0.03 no probable candidate SNR
LBZ67      12:28:14.9   44:04:45.9   5       10      10      25.0$\pm$0.2                 8.5$\pm$0.2                  0.34$\pm$0.02                               no        probable candidate SNR
LBZ68 12:28:18.8 44:06:54.3 5 10 10 13.0$\pm$0.2 4.1$\pm$0.2 0.32$\pm$0.03 no probable candidate SNR
LBZ69 12:28:18.9 44:06:09.2 5 5 5 20.0$\pm$0.3 6.6$\pm$0.3 0.33$\pm$0.03 no probable candidate SNR
LBZ70 12:28:18.9 44:06:18.7 5 10 10 5.6$\pm$0.3 5.5$\pm$0.3 0.31$\pm$0.05 no probable candidate SNR
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- -----------------------------
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------
SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ Spectra Classification
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
LBZ1 13:29:32.9 58:24:53.3 5 6 5 5.1$\pm$0.3 1.3$\pm$0.2 0.26$\pm$0.08 yes (S) SNR
LBZ2 13:29:33.2 58:24:48.4 5 10 10 4.6$\pm$0.4 1.4$\pm$0.2 0.29$\pm$0.10 yes (S) SNR
LBZ3 13:29:33.7 58:25:09.4 7 8 2 20.5$\pm$0.6 2.9$\pm$0.3 0.15$\pm$0.03 yes (S) SNR$^{d}$
LBZ4 13:29:34.5 58:24:23.8 5 10 10 10.5$\pm$0.3 2.9$\pm$0.2 0.28$\pm$0.04 yes (M) SNR[^53]
LBZ5 13:29:38.8 58:25:34.8 5 10 10 10.9$\pm$0.4 2.7$\pm$0.2 0.25$\pm$0.04 yes (M) SNR
LBZ6 13:29:39.0 58:26:12.0 5 10 10 19.5$\pm$0.2 4.5$\pm$0.1 0.23$\pm$0.01 yes (M) SNR
LBZ7 13:29:40.0 58:24:57.3 5 5 2 8.2$\pm$0.4 1.7$\pm$0.2 0.20$\pm$0.04 yes (S) SNR
LBZ8 13:29:28.6 58:25:16.9 4 10 10 2.4$\pm$0.2 0.9$\pm$0.1 0.37$\pm$0.08 no candidate SNR
LBZ9 13:29:30.3 58:25:20.6 5 10 10 5.2$\pm$0.2 1.8$\pm$0.1 0.34$\pm$0.06 no candidate SNR[^54]
LBZ10 13:29:31.0 58:25:33.5 5 10 10 4.5$\pm$0.2 1.7$\pm$0.1 0.38$\pm$0.08 no candidate SNR
LBZ11 13:29:32.3 58:23:37.0 5 10 10 2.3$\pm$0.2 1.1$\pm$0.1 0.48$\pm$0.12 no candidate SNR
LBZ12 13:29:32.4 58:25:16.2 5 10 10 6.8$\pm$0.3 2.6$\pm$0.1 0.38$\pm$0.06 no candidate SNR
LBZ13-1 13:29:36.7 58:26:25.6 5 10 10 7.3$\pm$0.3 2.8$\pm$0.1 0.37$\pm$0.05 no candidate SNR
LBZ14-2 13:29:36.7 58:26:23.4 5 10 10 7.1$\pm$0.3 2.8$\pm$0.1 0.40$\pm$0.05 no candidate SNR
LBZ15-3 13:29:36.8 58:26:20.5 5 10 10 4.4$\pm$0.3 1.6$\pm$0.1 0.35$\pm$0.08 no candidate SNR
LBZ16 13:29:36.9 58:24:26.9 4 4 2 2.0$\pm$0.3 1.8$\pm$0.2 0.91$\pm$0.42 no candidate SNR[^55]
LBZ17 13:29:37.2 58:23:41.8 5 10 10 5.4$\pm$0.2 2.4$\pm$0.1 0.44$\pm$0.07 no candidate SNR
LBZ18 13:29:37.7 58:26:04.5 5 10 10 5.2$\pm$0.3 1.9$\pm$0.2 0.37$\pm$0.08 no candidate SNR
LBZ19 13:29:38.3 58:26:01.6 7 10 10 13.3$\pm$0.4 4.9$\pm$0.2 0.37$\pm$0.04 no candidate SNR$^{d}$
LBZ20 13:29:28.2 58:25:14.6 5 10 10 5.2$\pm$0.2 1.5$\pm$0.1 0.29$\pm$0.04 no probable candidate SNR
LBZ21 13:29:30.7 58:24:48.2 5 10 10 5.4$\pm$0.2 1.5$\pm$0.1 0.27$\pm$0.05 no probable candidate SNR
LBZ22 13:29:30.8 58:23:38.5 5 10 10 2.3$\pm$0.2 0.9$\pm$0.1 0.38$\pm$0.10 no probable candidate SNR
LBZ23 13:29:30.8 58:25:49.1 5 10 10 4.99$\pm$0.2 1.6$\pm$0.1 0.33$\pm$0.05 no probable candidate SNR
LBZ24 13:29:31.5 58:23:18.2 5 10 10 3.3$\pm$0.1 0.9$\pm$0.1 0.27$\pm$0.05 no probable candidate SNR
LBZ25 13:29:31.9 58:24:12.1 5 10 10 7.7$\pm$0.3 2.2$\pm$0.1 0.29$\pm$0.04 no probable candidate SNR
LBZ26 13:29:36.4 58:26:27.5 5 10 10 7.1$\pm$0.2 2.4$\pm$0.1 0.33$\pm$0.04 no probable candidate SNR
LBZ27 13:29:36.6 58:23:31.0 5 10 10 5.0$\pm$0.2 1.6$\pm$0.1 0.32$\pm$0.05 no probable candidate SNR
LBZ28 13:29:37.1 58:26:24.0 5 10 10 5.9$\pm$0.3 1.9$\pm$0.1 0.33$\pm$0.06 no probable candidate SNR
LBZ29 13:29:38.5 58:26:30.2 5 10 10 4.7$\pm$0.2 1.4$\pm$0.1 0.30$\pm$0.05 no probable candidate SNR
LBZ30 13:29:39.1 58:26:20.2 5 10 10 6.8$\pm$0.2 2.1$\pm$0.1 0.31$\pm$0.04 no probable candidate SNR
LBZ31 13:29:40.3 58:24:03.1 5 10 10 5.6$\pm$0.2 1.6$\pm$0.1 0.29$\pm$0.04 no probable candidate SNR
LBZ32 13:29:41.2 58:25:08.9 5 10 10 8.3$\pm$0.3 2.3$\pm$0.1 0.27$\pm$0.04 no probable candidate SNR
LBZ33 13:29:41.4 58:25:06.0 5 10 10 7.8$\pm$0.3 2.4$\pm$0.1 0.31$\pm$0.04 no probable candidate SNR
LBZ34-1 13:29:47.7 58:23:56.6 5 10 10 2.3$\pm$0.1 0.6$\pm$0.1 0.25$\pm$0.06 no probable candidate SNR
LBZ35-2 13:29:48.2 58:23:53.8 5 10 10 2.1$\pm$0.1 0.6$\pm$0.1 0.30$\pm$0.08 no probable candidate SNR
LBZ36-3 13:29:48.3 58:23:55.1 5 10 10 2.0$\pm$0.2 0.6$\pm$0.1 0.29$\pm$0.09 no probable candidate SNR
---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- ------------------------------------------- --------- ------------------------
--------- ---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- -------------------------------------------
Galaxy SourceID RA Dec Rad An Dan F([H$\alpha$]{}) F([\[S [ii]{}\]]{}) ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$
(h:m:s) (d:m:s) (pix) (pix) (pix) (erg sec$^{-1}$ cm$^{-2}$) (erg sec$^{-1}$ cm$^{-2}$)
(J2000) (J2000) ($\times$ 10$^{-15}$) ($\times$ 10$^{-15}$)
NGC2403 LBZ1059 07:36:13.6 65:37:57.5 5 10 10 10.7 $\pm$0.2 9.6 $\pm$0.2 0.90$\pm$0.03
- LBZ731 07:36:27.1 65:36:30.2 5 10 10 41.5 $\pm$0.2 17.3$\pm$0.2 0.42$\pm$0.01
- LBZ561 07:36:28.4 65:37:02.6 5 10 10 3.3 $\pm$0.4 5.6 $\pm$0.3 1.72$\pm$0.22
- LBZ484 07:36:32.1 65:35:42.7 5 10 10 17.0 $\pm$0.4 4.1 $\pm$0.2 0.24$\pm$0.01
- LBZ982 07:36:35.1 65:37:35.7 5 10 10 61.3 $\pm$0.7 25.8$\pm$0.6 0.42$\pm$0.01
- LBZ1291 07:36:36.8 65:36:54.8 5 10 10 22.5 $\pm$0.4 14.9$\pm$0.3 0.66$\pm$0.03
- LBZ840 07:36:37.7 65:36:53.1 5 10 10 25.6 $\pm$0.3 13.9$\pm$0.3 0.54$\pm$0.02
- LBZ796 07:36:38.2 65:36:43.8 5 10 10 42.6 $\pm$0.4 18.8$\pm$0.3 0.44$\pm$0.01
- LBZ620 07:36:41.6 65:36:12.4 5 10 10 119.0$\pm$0.5 47.0$\pm$0.4 0.39$\pm$0.01
- LBZ514 07:36:45.5 65:36:06.9 5 10 10 11.3 $\pm$0.5 8.6 $\pm$0.4 0.76$\pm$0.07
- LBZ1180 07:36:50.4 65:34:36.4 5 10 10 7.7 $\pm$0.4 2.2 $\pm$0.2 0.28$\pm$0.03
- LBZ963 07:37:00.7 65:37:28.0 5 10 10 33.2 $\pm$0.5 3.1 $\pm$0.2 0.10$\pm$0.01
NGC3077 LBZ396 10:03:15.5 68:44:21.6 4 10 10 19.0$\pm$0.61 4.4$\pm$0.30 0.23$\pm$0.03
- LBZ363 10:03:20.0 68:43:21.3 4 10 10 17.5$\pm$0.54 3.7$\pm$0.27 0.22$\pm$0.03
NGC4214 LBZ1089 12:15:33.5 36:19:42.8 5 10 10 46.2$\pm$0.27 17.0$\pm$0.36 0.37$\pm$0.01
- LBZ597 12:15:34.2 36:19:57.7 5 10 10 70.1$\pm$0.30 16.9$\pm$0.42 0.24$\pm$0.01
- LBZ1091 12:15:34.5 36:19:45.8 5 10 10 46.2$\pm$0.32 13.2$\pm$0.43 0.29$\pm$0.02
- LBZ988 12:15:34.5 36:20:00.9 5 10 10 148.8$\pm$0.37 29.5$\pm$0.44 0.20$\pm$0.01
- LBZ917 12:15:35.0 36:19:53.6 5 10 10 188.0$\pm$0.48 32.2$\pm$0.54 0.16$\pm$0.01
- LBZ971 12:15:36.5 36:21:06.8 5 10 10 12.0$\pm$0.18 5.1$\pm$0.24 0.43$\pm$0.04
- LBZ928 12:15:37.6 36:19:00.7 5 10 10 30.8$\pm$0.26 6.2$\pm$0.38 0.20$\pm$0.02
- LBZ911 12:15:38.3 36:20:38.8 5 10 10 20.5$\pm$0.25 7.5$\pm$0.35 0.36$\pm$0.03
- LBZ889 12:15:39.3 36:17:40.9 5 10 10 27.4$\pm$0.16 5.5$\pm$0.22 0.20$\pm$0.01
- LBZ899 12:15:39.6 36:19:21.8 5 10 10 76.9$\pm$0.49 18.0$\pm$0.64 0.23$\pm$0.01
- LBZ362 12:15:40.1 36:21:57.9 5 10 10 6.2$\pm$0.10 2.6$\pm$0.16 0.42$\pm$0.05
- LBZ863 12:15:41.2 36:20:29.3 5 10 10 15.7$\pm$0.20 5.8$\pm$0.30 0.37$\pm$0.04
- LBZ740 12:15:42.4 36:18:51.0 5 10 10 7.5$\pm$0.24 6.0$\pm$0.35 0.81$\pm$0.11
- LBZ800 12:15:43.1 36:18:13.7 5 10 10 11.3$\pm$0.20 4.1$\pm$0.27 0.37$\pm$0.05
- LBZ845 12:15:43.3 36:18:52.3 5 10 10 15.6$\pm$0.24 6.6$\pm$0.35 0.43$\pm$0.04
- LBZ836 12:15:43.6 36:19:00.5 5 10 10 29.1$\pm$0.24 10.1$\pm$0.35 0.35$\pm$0.02
- LBZ690 12:15:44.9 36:18:17.5 5 10 10 4.3$\pm$0.14 4.1$\pm$0.20 0.95$\pm$0.12
NGC4395 LBZ1252 12:25:52.5 33:30:22.5 4 10 10 7.2$\pm$0.16 1.9$\pm$0.10 0.26$\pm$0.03
- LBZ391 12:25:56.7 33:30:20.7 4 10 10 6.3$\pm$0.12 2.1$\pm$0.08 0.33$\pm$0.03
- LBZ267 12:25:59.1 33:30:59.3 4 10 10 2.8$\pm$0.13 1.3$\pm$0.08 0.47$\pm$0.08
- LBZ151 12:26:00.2 33:31:37.7 4 10 10 7.2$\pm$0.16 2.1$\pm$0.11 0.29$\pm$0.03
NGC4449 LBZ581 12:28:05.6 44:05:33.0 5 10 10 44.5$\pm$0.28 10.6$\pm$0.29 0.23$\pm$0.01
- LBZ593 12:28:06.2 44:04:06.1 5 10 10 6.7$\pm$0.19 4.5$\pm$0.19 0.66$\pm$0.07
- LBZ567 12:28:08.7 44:04:08.1 5 10 10 6.0$\pm$0.19 3.9$\pm$0.20 0.96$\pm$0.09
- LBZ527 12:28:09.5 44:06:29.6 5 10 10 41.0$\pm$0.30 9.8$\pm$0.28 0.24$\pm$0.01
- LBZ503 12:28:10.2 44:04:52.6 5 10 10 49.6$\pm$0.47 10.5$\pm$0.48 0.21$\pm$0.02
- LBZ500 12:28:10.4 44:04:10.6 5 10 10 9.6 $\pm$0.21 4.4$\pm$0.22 0.46$\pm$0.05
- LBZ521 12:28:10.5 44:04:38.4 5 10 10 20.5$\pm$0.32 7.2$\pm$0.34 0.35$\pm$0.03
- LBZ266 12:28:10.6 44:05:20.0 5 10 10 10.3$\pm$0.76 10.5$\pm$0.72 0.10$\pm$0.01
- LBZ449 12:28:12.9 44:04:33.1 5 10 10 23.9$\pm$0.24 5.3$\pm$0.23 0.22$\pm$0.02
-         LBZ398     12:28:13.0   44:05:33.1   5       10      10      188.0$\pm$0.65               14.7$\pm$0.59                0.08$\pm$0.01
-         LBZ391     12:28:13.1   44:05:43.0   5       10      10      564.0$\pm$0.86               59.0$\pm$0.73                0.10$\pm$0.01
- LBZ432 12:28:13.4 44:04:39.5 5 10 10 27.4$\pm$0.23 6.9$\pm$0.22 0.26$\pm$0.02
- LBZ311 12:28:14.0 44:04:53.9 5 10 10 20.5$\pm$0.24 6.1$\pm$0.24 0.30$\pm$0.02
- LBZ401 12:28:14.5 44:06:06.2 5 10 10 5.5$\pm$0.45 5.1$\pm$0.49 0.92$\pm$0.27
- LBZ387 12:28:14.8 44:05:50.8 5 10 10 11.1$\pm$0.44 4.8$\pm$0.48 0.43$\pm$0.09
- LBZ394 12:28:14.8 44:07:00.7 5 10 10 18.8$\pm$0.30 2.6$\pm$0.27 0.13$\pm$0.02
- LBZ95 12:28:15.1 44:07:28.5 5 10 10 9.4 $\pm$0.17 3.7$\pm$0.15 0.39$\pm$0.03
- LBZ260 12:28:16.0 44:04:29.6 5 10 10 9.8$\pm$0.14 2.8$\pm$0.13 0.29$\pm$0.03
- LBZ318 12:28:19.4 44:06:20.9 5 10 10 23.9$\pm$0.28 8.3$\pm$0.24 0.35$\pm$0.02
NGC5204 LBZ154 13:29:32.4 58:26:13.1 5 10 10 11.1$\pm$0.26 2.4$\pm$0.13 0.22$\pm$0.02
- LBZ487 13:29:33.9 58:24:46.7 5 10 10 14.7$\pm$0.42 1.2$\pm$0.21 0.08$\pm$0.02
- LBZ458 13:29:34.9 58:25:13.9 5 10 10 16.8$\pm$0.53 1.1$\pm$0.26 0.06$\pm$0.03
- LBZ439 13:29:35.5 58:24:22.4 5 10 10 6.8$\pm$0.33 1.4$\pm$0.17 0.21$\pm$0.05
- LBZ412 13:29:36.6 58:26:01.6 5 10 10 10.4$\pm$0.31 1.9$\pm$0.16 0.18$\pm$0.03
- LBZ242 13:29:38.2 58:25:06.1 5 10 10 6.5$\pm$0.55 0.6$\pm$0.27 0.09$\pm$0.07
- LBZ299 13:29:42.4 58:25:48.2 5 10 10 22.2$\pm$0.29 3.8$\pm$0.14 0.17$\pm$0.01
--------- ---------- ------------ ------------ ------- ------- ------- ---------------------------- ---------------------------- -------------------------------------------
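The ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ column and its quoted uncertainty follow from the two tabulated fluxes. A minimal sketch of the quotient with first-order error propagation (assuming purely statistical, independent flux errors; the tabulated uncertainties may additionally include calibration or background terms, and the function name is illustrative, not from the original pipeline):

```python
import math

def line_ratio(f_ha, df_ha, f_sii, df_sii):
    """[S II]/H-alpha flux ratio with first-order error propagation
    for a quotient of two independent measurements."""
    r = f_sii / f_ha
    dr = r * math.sqrt((df_sii / f_sii) ** 2 + (df_ha / f_ha) ** 2)
    return r, dr

# Example with the NGC 4395 LBZ1 fluxes from the tables above
# (fluxes in units of 10^-15 erg sec^-1 cm^-2):
r, dr = line_ratio(14.0, 0.2, 3.5, 0.1)  # r = 0.25
```

Because both fluxes share the same 10$^{-15}$ scaling, the ratio is independent of the flux units.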
[cccc]{} &\
\
Galaxy & RA & DEC &Total Exposure Time\
& (h:m:s) &(d:m:s) &(sec)\
&(J2000)& (J2000) &\
NGC 3077 – Slit 1 &10:03:23.3 &+68:44:27.1 &3000 (1)\
NGC 4214 – Slit 1 &12:15:33.7 &+36:19:07.2 &5400 (3)\
NGC 4214 – Slit 2 &12:15:37.6 &+36:18:30.2 &3600 (2)\
NGC 4395 – Slit 1 &12:25:54.1 &+33:31:39.6 &5400 (3)\
NGC 4395 – Slit 2 &12:25:41.1 &+33:30:56.0 &3600 (2)\
NGC 4449 – Slit 1 &12:28:13.3 &+44:05:20.8 &5400 (3)\
NGC 5204 – Slit 1 &13:29:33.3 &+58:25:06.1 &3600 (2)\
NGC 5204 – Slit 2 &13:29:40.0 &+58:24:40.9 &3600 (2)\
[@cccccccccccccccc@]{} Line &[H$\beta$]{}&[\[O [iii]{}\]]{}&[\[O [iii]{}\]]{}&[\[O [i]{}\]]{}&[\[He [i]{}\]]{}&[\[He [ii]{}\]]{}&[\[O [i]{}\]]{}&[\[O [i]{}\]]{}&[\[N [ii]{}\]]{}&[H$\alpha$]{}&[\[N [ii]{}\]]{}&[\[He [i]{}\]]{}&[\[S [ii]{}\]]{}&[\[S [ii]{}\]]{}&[\[Ar [iii]{}\]]{}\
(Å)&(4861) &(4959) &(5007) &(5577) &(5876) &(6234) &(6300) &(6364) &(6548) &(6563) &(6584) &(6678) &(6716) &(6731) &(7136)\
\
F &31 &- &14 &- &- &- &- &- &7 &100 &26 &- &28 &22 &-\
I &35 &- &16 &- &- &- &- &- &7 &100 &26 &- &28 &21 &-\
S/N &8 &- &6 &- &- &- &- &- &7 &30 &14 &- &15 &12 &-\
\
F &14 &- &8 &- &- &- &3 &- &7 &100 &27 &- &24 &18 &-\
I &35 &- &17&- &- &- &4 &- &7 &100 &27 &- &22 &17 &-\
S/N &8 &- &7 &- &- &- &5 &- &11&49 &25 &- &22 &16 &-\
\
F &14 &3 &12 &10 &- &2 &8 &3 &9 &100 &28 &- &25 &18 &2\
I &35 &6 &28 &16 &- &2 &9 &3 &9 &100 &28 &- &24 &17 &1\
S/N &13 &6 &15 &22 &- &12 &21 &12 &20 &71 &36 &- &34 &29 &8\
\
F &30 &- &- &18 &- &- &10 &6 &9 &100 &28 &- &33 &24 &-\
I &35 &- &- &19 &- &- &10 &6 &9 &100 &28 &- &33 &24 &-\
S/N &13 &- &- &19 &- &- &17 &10 &13 &59 &28 &- &32 &25 &-\
\
F &17 &- &4 &- &2 &- &1 &- &11 &100 &36 &- &27 &19 &-\
I &35 &- &8 &- &3 &- &1 &- &11 &100 &36 &- &25 &18 &-\
S/N &15 &- &8 &- &9 &- &9 &- &25 &78 &47 &- &41 &33 &-\
\
F &16 &6 &9 &- &- &- &- &- &12 &100 &36 &- &38 &27 &-\
I &35 &12&18 &- &- &- &- &- &12 &100 &35 &- &36 &26 &-\
S/N &11 &6 &8 &- &- &- &- &- &18 &59 &34 &- &33 &27 &-\
\
F &- &- &- &- &- &- &- &- &14 &100 &32 &- &53 &41 &-\
I &- &- &- &- &- &- &- &- &- &- &- &- &- &- &-\
S/N &- &- &- &- &- &- &- &- &5 &14 &8 &- &10 &9 &-\
\
F &24 &- &- &- &- &- &- &- &5 &100 &23 &- &29 &29 &-\
I &35 &- &- &- &- &- &- &- &5 &100 &23 &- &28 &20 &-\
S/N &5 &- &- &- &- &- &- &- &3 &25 &11 &- &12 &10 &-\
[@cccccccccccccccc@]{} Line &[H$\beta$]{}&[\[O [iii]{}\]]{}&[\[O [iii]{}\]]{}&[\[O [i]{}\]]{}&[\[He [i]{}\]]{}&[\[He [ii]{}\]]{}&[\[O [i]{}\]]{}&[\[O [i]{}\]]{}&[\[N [ii]{}\]]{}&[H$\alpha$]{}&[\[N [ii]{}\]]{}&[\[He [i]{}\]]{}&[\[S [ii]{}\]]{}&[\[S [ii]{}\]]{}&[\[Ar [iii]{}\]]{}\
(Å)&(4861) &(4959) &(5007) &(5577) &(5876) &(6234) &(6300) &(6364) &(6548) &(6563) &(6584) &(6678) &(6716) &(6731) &(7136)\
\
F &34 &11 &32 &16 &5 &- &10 &4 &9 &100 &21 &2 &24 &17 &3\
I &35 &11 &32 &16 &5 &- &10 &4 &9 &100 &21 &2 &24 &17 &3\
S/N &10 &5 &13 &12 &5 &- &12 &8 &9 &37 &16 &3 &17 &14 &5\
\
F &16 &- &32 &- &- &- &- &- &9 &100 &24 &- &35 &26 &-\
I &35 &- &61 &- &- &- &- &- &9 &100 &24 &- &33 &24 &-\
S/N &6 &- &9 &- &- &- &- &- &7 &24 &10 &- &14 &8 &-\
\
F &22 &26 &86 &- &- &- &21 &4 &11 &100 &29 &- &43 &41 &-\
I &35 &38 &124 &- &- &- &22 &4 &12 &100 &29 &- &42 &39 &-\
S/N &6 &7 &15 &- &- &- &9 &5 &7 &22 &10 &- &14 &16 &-\
\
F &31 &54 &153 &- &- &- &- &- &7 &100 &21 &- &24 &19 &5\
I &35 &61 &171 &- &- &- &- &- &7 &100 &21 &- &24 &19 &5\
S/N &13 &21 &40 &- &- &- &- &- &12 &50 &22 &- &25 &21 &7\
\
F &26 &9 &15 &12 &- &- &5 &5 &11 &100 &31 &- &21 &18 &-\
I &35 &11&19 &14 &- &- &5 &5 &11 &100 &31 &- &21 &17 &-\
S/N &10 &6 &7 &7 &- &- &10 &9 &13 &46 &21 &- &15 &17 &-\
\
F &20 &-&- &13 &- &- &6 &- &12 &100 &19 &- &24 &18 &-\
I &35 &-&- &17 &- &- &17 &- &12 &100 &19 &- &23 &17 &-\
S/N &3 &-&- &10 &- &- &8 &- &10 &27 &11 &- &11 &9 &-\
\
F &29 &- &- &- &- &- &- &- &9 &100 &22 &- &25 &17 &-\
I &35 &- &- &- &- &- &- &- &9 &100 &22 &- &25 &17 &-\
S/N &7 &- &- &- &- &- &- &- &6 &24 &10 &- &10 &5 &-\
\
F &28 &- &4 &23 &- &- &5 &2 &7 &100 &25 &- &25 &17 &-\
I &35 &- &5 &26 &- &- &5 &2 &7 &100 &25 &- &24 &17 &-\
S/N &10 &- &5 &16 &- &- &9 &6 &9 &38 &18 &- &17 &15 &-\
[@cccccccccccccccc@]{} Line &[H$\beta$]{}&[\[O [iii]{}\]]{}&[\[O [iii]{}\]]{}&[\[O [i]{}\]]{}&[\[He [i]{}\]]{}&[\[He [ii]{}\]]{}&[\[O [i]{}\]]{}&[\[O [i]{}\]]{}&[\[N [ii]{}\]]{}&[H$\alpha$]{}&[\[N [ii]{}\]]{}&[\[He [i]{}\]]{}&[\[S [ii]{}\]]{}&[\[S [ii]{}\]]{}&[\[Ar [iii]{}\]]{}\
(Å)&(4861) &(4959) &(5007) &(5577) &(5876) &(6234) &(6300) &(6364) &(6548) &(6563) &(6584) &(6678) &(6716) &(6731) &(7136)\
\
F &16 &- &17 &22 &- &- &4 &- &10&100 &35&- &26 &20 &-\
I &35 &- &34 &32 &- &- &5 &- &11&100 &35&- &24 &19 &-\
S/N &6 &- &5 &10 &- &- &6 &- &6 &33 &16&- &15 &10 &-\
\
F &- &- &- &- &- &- &- &- &17 &100 &27 &- &27 &27 &-\
I &- &- &- &- &- &- &- &- &- &- &- &- &- &- &-\
S/N &- &- &- &- &- &- &- &- &10 &33 &14 &- &11 &12 &-\
\
F &- &- &- &- &- &- &- &- &- &100 &12 &- &33 &28 &-\
I &- &- &- &- &- &- &- &- &- &- &- &- &- &- &-\
S/N &- &- &- &- &- &- &- &- &- &34 &12 &- &12 &10 &-\
\
F &15 &- &- &- &- &- &5 &- &10 &100 &21 &- &30 &21 &-\
I &35 &- &- &- &- &- &6 &- &10 &100 &11 &- &28 &20 &-\
S/N &8 &- &- &- &- &- &13 &- &9 &39 &10 &- &19 &15 &-\
\
F &- &- &- &- &- &- &- &- &- &100 &- &- &28 &21 &-\
I &- &- &- &- &- &- &- &- &- &- &- &- &- &- &-\
S/N &- &- &- &- &- &- &- &- &- &24 &- &- &13 &10 &-\
\
F &27 &11 &31 &- &3 &- &7 &3 &4 &100 &12 &- &29 &21 &-\
I &35 &14 &39 &- &4 &- &7 &3 &4 &100 &12 &- &29 &20 &-\
S/N &19 &12 &24 &- &8 &- &15 &- &8 &60 &18 &- &30 &25 &-\
\
F &- &70 &- &- &- &- &- &- &- &100 &16 &- &26 &18 &-\
I &- &- &- &- &- &- &- &- &- &- &- &- &- &- &-\
S/N &- &18 &- &- &- &- &- &- &- &40 &11 &- &15 &11 &-\
\
F &- &- &- &- &- &- &- &- &- &100 &- &- &25 &24 &-\
I &- &- &- &- &- &- &- &- &- &- &- &- &- &- &-\
S/N &- &- &- &- &- &- &- &- &- &33 &- &- &13 &13 &-\
[@cccccccccccccccc@]{} Line &[H$\beta$]{}&[\[O [iii]{}\]]{}&[\[O [iii]{}\]]{}&[\[O [i]{}\]]{}&[\[He [i]{}\]]{}&[\[He [ii]{}\]]{}&[\[O [i]{}\]]{}&[\[O [i]{}\]]{}&[\[N [ii]{}\]]{}&[H$\alpha$]{}&[\[N [ii]{}\]]{}&[\[He [i]{}\]]{}&[\[S [ii]{}\]]{}&[\[S [ii]{}\]]{}&[\[Ar [iii]{}\]]{}\
(Å)&(4861) &(4959) &(5007) &(5577) &(5876) &(6234) &(6300) &(6364) &(6548) &(6563) &(6584) &(6678) &(6716) &(6731) &(7136)\
\
F &15 &- &27 &- &- &- &- &- &- &100 &13 &- &44 &34 &-\
I &35 &- &56 &- &- &- &- &- &- &100 &13 &- &41 &32 &-\
S/N &8 &- &14 &- &- &- &- &- &- &41 &12 &- &22 &16 &-\
\
F &- &- &- &- &- &- &18 &- &- &100 &- &- &34 &29 &-\
I &- &- &- &- &- &- &- &- &- &- &- &- &- &- &-\
S/N &- &- &- &- &- &- &17 &- &- &37 &- &- &22 &19 &-\
\
F &- &- &- &- &- &- &15 &- &- &100 &16 &- &40 &34 &-\
I &- &- &- &- &- &- &- &- &- &- &- &- &- &- &-\
S/N &- &- &- &- &- &- &4 &- &- &16 &3 &- &8 &7 &-\
\
F &25 &18 &53 &- &- &- &- &- &7 &100 &17 &- &24 &18 &-\
I &35 &24 &71 &- &- &- &- &- &7 &100 &17 &- &23 &18 &-\
S/N &14 &13 &25 &- &- &- &- &- &10 &47 &16 &- &21 &17 &-\
\
F &27 &- &25 &- &- &- &- &- &9 &100 &12 &- &24 &19 &-\
I &35 &- &31 &- &- &- &- &- &9 &100 &12 &- &23 &18 &-\
S/N &10 &- &13 &- &- &- &- &- &9 &42 &15 &- &17 &15 &-\
\
F &21 &- &23 &- &- &- &13 &- &19 &100 &24 &- &38 &32 &-\
I &35 &- &36 &- &- &- &14 &- &19 &100 &24 &- &37 &31 &-\
S/N &4 &- &4 &- &- &- &5 &- &6 &15 &6 &- &6 &5 &-\
\
F &22 &- &29 &- &- &- &5 &- &5 &100 &17 &- &26 &19 &-\
I &35 &- &44 &- &- &- &6 &- &5 &100 &17 &- &26 &18 &-\
S/N &7 &- &8 &- &- &- &6 &- &6 &25 &8 &- &12 &8 &-\
\
F &31 &16 &47 &- &3 &- &6 &3 &6 &100 &18 &2 &28 &20 &-\
I &35 &18 &52 &- &3 &- &6 &3 &6 &100 &18 &2 &28 &19 &-\
S/N &21 &16 &29 &- &7 &- &13 &7 &13 &60 &24 &7 &31 &25 &-\
[@cccccccccccccccc@]{} Line &[H$\beta$]{}&[\[O [iii]{}\]]{}&[\[O [iii]{}\]]{}&[\[O [i]{}\]]{}&[\[He [i]{}\]]{}&[\[He [ii]{}\]]{}&[\[O [i]{}\]]{}&[\[O [i]{}\]]{}&[\[N [ii]{}\]]{}&[H$\alpha$]{}&[\[N [ii]{}\]]{}&[\[He [i]{}\]]{}&[\[S [ii]{}\]]{}&[\[S [ii]{}\]]{}&[\[Ar [iii]{}\]]{}\
(Å)&(4861) &(4959) &(5007) &(5577) &(5876) &(6234) &(6300) &(6364) &(6548) &(6563) &(6584) &(6678) &(6716) &(6731) &(7136)\
\
F &28 &12 &47 &- &- &- &4 &- &3 &100 &16 &- &26 &18 &-\
I &35 &15 &57 &- &- &- &4 &- &3 &100 &16 &- &25 &18 &-\
S/N &11 &10 &19 &- &- &- &8 &- &6 &41 &14 &- &19 &15 &-\
\
F &24 &26 &82 &- &3 &- &14 &4 &4 &100 &13 &- &34 &24 &3\
I &35 &36 &111 &- &4 &- &14 &4 &4 &100 &13 &- &33 &24 &3\
S/N &24 &28 &53 &- &12 &- &30 &16 &15 &87 &29 &- &49 &41 &10\
\
F &- &- &- &- &- &- &- &- &- &100 &11 &- &24 &20 &-\
I &- &- &- &- &- &- &- &- &- &- &- &- &- &- &-\
S/N &- &- &- &- &- &- &- &- &- &25 &5 &- &10 &9 &-\
\
F &23 &22 &69 &- &3 &- &14 &4 &6 &100 &19 &1 &42 &31 &3\
I &35 &32 &102 &- &3 &- &14 &4 &6 &100 &18 &1 &41 &30 &2\
S/N &17 &19 &38 &- &8 &- &22 &11 &14 &63 &25 &6 &39 &34 &5\
\
F &30 &- &13 &- &- &- &- &- &4 &100 &7 &- &25 &18 &-\
I &35 &- &15 &- &- &- &- &- &4 &100 &7 &- &24 &17 &-\
S/N &13 &- &9 &- &- &- &- &- &7 &40 &10 &- &19 &15 &-\
\
F &18 &- &29 &- &- &- &- &- &- &100 &- &- &30 &31 &-\
I &35 &- &51 &- &- &- &- &- &- &100 &- &- &29 &30 &-\
S/N &2 &- &4 &- &- &- &- &- &- &11 &- &- &5 &4 &-\
\
F &11 &22 &10 &- &- &- &- &- &- &100 &9 &- &24 &19 &-\
I &35 &64 &29 &- &- &- &- &- &- &100 &9 &- &22 &18 &-\
S/N &3 &5 &5 &- &- &- &- &- &- &23 &7 &- &8 &8 &-\
\
F &25 &11 &32 &- &- &- &5 &- &5 &100 &14 &- &25 &17 &-\
I &35 &15 &42 &- &- &- &5 &- &5 &100 &14 &- &24 &17 &-\
S/N &9 &5 &12 &- &- &- &4 &- &7 &32 &12 &- &15 &13 &-\
[@cccccccccccccccc@]{} Line &[H$\beta$]{}&[\[O [iii]{}\]]{}&[\[O [iii]{}\]]{}&[\[O [i]{}\]]{}&[\[He [i]{}\]]{}&[\[He [ii]{}\]]{}&[\[O [i]{}\]]{}&[\[O [i]{}\]]{}&[\[N [ii]{}\]]{}&[H$\alpha$]{}&[\[N [ii]{}\]]{}&[\[He [i]{}\]]{}&[\[S [ii]{}\]]{}&[\[S [ii]{}\]]{}&[\[Ar [iii]{}\]]{}\
(Å)&(4861) &(4959) &(5007) &(5577) &(5876) &(6234) &(6300) &(6364) &(6548) &(6563) &(6584) &(6678) &(6716) &(6731) &(7136)\
\
F &22 &12 &53 &- &- &- &- &- &- &100 &15 &- &35 &28 &-\
I &35 &18 &78 &- &- &- &- &- &- &100 &14 &- &34 &26 &-\
S/N &9 &8 &15 &- &- &- &- &- &- &38 &13 &- &20 &16 &-\
\
F &17 &10 &49 &- &- &- &- &- &- &100 &16 &- &35 &27 &-\
I &35 &19 &93 &- &- &- &- &- &- &100 &16 &- &33 &26 &-\
S/N &9 &8 &14 &- &- &- &- &- &- &29 &11 &- &16 &14 &-\
\
F &19 &- &- &- &- &- &8 &- &5 &100 &19 &- &33 &27 &-\
I &35 &- &- &- &- &- &9 &- &5 &100 &19 &- &32 &26 &-\
S/N &7 &- &- &- &- &- &10 &- &7 &40 &16 &- &21 &17 &-\
\
F &29 &56 &173 &16 &- &- &10 &- &7 &100 &16 &- &23 &16 &6\
I &34 &70 &206 &17 &- &- &10 &- &7 &100 &14 &- &23 &16 &6\
S/N &14 & 21&40 &15 &- &- &9 &- &5 &44 &15 &- &19 &15 &6\
\
F &24 &6 &35 &- &- &- &10 &- &9 &100 &19 &- &28 &20 &-\
I &35 &8 &49 &- &- &- &11 &- &9 &100 &19 &- &27 &19 &-\
S/N &11 &4 &13 &- &- &- &8 &- &9 &36 &15 &- &18 &16 &-\
\
F &34 &11 &33 &- &4 &- &3 &- &6 &100 &16 &- &25 &17 &-\
I &35 &11 &34 &- &4 &- &3 &- &6 &100 &16 &- &25 &17 &-\
S/N &23 &16 &29 &- &11 &- &14 &- &16 &75 &28 &- &34 &28 &-\
\
F &28 &10 &34 &- &- &- &7 &- &6 &100 &19 &- &33 &24 &-\
I &35 &12 &41 &- &- &- &7 &- &6 &100 &19 &- &33 &24 &-\
S/N &11 &6 &15 &- &- &- &6 &- &8 &40 &15 &- &21 &18 &-\
\
F &30 &10 &42 &42 &- &- &11 &- &5 &100 &13 &- &27 &21 &-\
I &35 &12 &48 &45 &- &- &11 &- &5 &100 &13 &- &27 &21 &-\
S/N &16 &11 &21 &33 &- &- &11 &- &9 &51 &18 &- &26 &22 &-\
[@cccccccccccccccc@]{} Line &[H$\beta$]{}&[\[O [iii]{}\]]{}&[\[O [iii]{}\]]{}&[\[O [i]{}\]]{}&[\[He [i]{}\]]{}&[\[He [ii]{}\]]{}&[\[O [i]{}\]]{}&[\[O [i]{}\]]{}&[\[N [ii]{}\]]{}&[H$\alpha$]{}&[\[N [ii]{}\]]{}&[\[He [i]{}\]]{}&[\[S [ii]{}\]]{}&[\[S [ii]{}\]]{}&[\[Ar [iii]{}\]]{}\
(Å)&(4861) &(4959) &(5007) &(5577) &(5876) &(6234) &(6300) &(6364) &(6548) &(6563) &(6584) &(6678) &(6716) &(6731) &(7136)\
\
F &28 &14 &48 &21 &- &- &4 &- &3 &100 &15 &- &32 &23 &-\
I &35 &17 &59 &23 &- &- &4 &- &3 &100 &15 &- &32 &22 &-\
S/N &10 &6 &14 &16 &- &- &7 &- &7 &38 &12 &- &19 &16 &-\
\
F &11 &18 &49 &- &3 &- &3 &- &3 &100 &14 &- &26 &21 &-\
I &35 &52 &138 &- &4 &- &4 &- &3 &100 &13 &- &24 &19 &-\
S/N &12 &14 &36 &- &12 &- &7 &- &14 &98 &29 &- &36 &29 &-\
\
F &22 &16 &53 &- &- &- &4 &- &3 &100 &14 &- &28 &21 &-\
I &35 &24 &78 &- &- &- &5 &- &3 &100 &14 &- &28 &20 &-\
S/N &22 &31 &37 &- &- &- &15 &- &11 &91 &30 &- &41 &34 &-\
\
F &32 &24 &77 &- &3 &- &7 &- &3 &100 &14 &- &28 &20 &-\
I &35 &25 &83 &- &4 &- &7 &- &3 &100 &14 &- &28 &20 &-\
S/N &33 &29 &59 &- &14 &- &30 &- &19 &123 &41 &- &56 &46 &-\
\
F &19 &14 &53 &- &- &- &4 &- &7 &100 &16 &- &28 &19 &-\
I &35 &24 &89 &- &- &- &4 &- &7 &100 &16 &- &26 &19 &-\
S/N &33 &28 &49 &- &- &- &20 &- &24 &115 &45 &- &57 &49 &-\
\
F &32 &16 &46 &- &4 &- &7 &- &4 &100 &14 &- &34 &25 &-\
I &35 &17 &50 &- &4 &- &7 &- &4 &100 &14 &- &34 &25 &-\
S/N &27 &20 &37 &- &13 &- &22 &- &17 &109 &40 &- &56 &46 &-\
\
F &23 &11 &34 &- &4 &- &4 &- &6 &100 &19 &- &24 &17 &-\
I &35 &16 &50 &- &4 &- &4 &- &6 &100 &19 &- &23 &17 &-\
S/N &19 &17 &29 &- &12 &- &13 &- &17 &74 &31 &- &37 &30 &-\
\
F &25 &16 &32 &- &- &- &28 &- &- &100 &25 &- &62 &46 &18\
I &35 &22 &42 &- &- &- &29 &- &- &100 &25 &- &60 &45 &16\
S/N &4 &5 &8 &- &- &- &10 &- &- &22 &10 &- &14 &13 &6\
[@cccccccccccccccc@]{} Line &[H$\beta$]{}&[\[O [iii]{}\]]{}&[\[O [iii]{}\]]{}&[\[O [i]{}\]]{}&[\[He [i]{}\]]{}&[\[He [ii]{}\]]{}&[\[O [i]{}\]]{}&[\[O [i]{}\]]{}&[\[N [ii]{}\]]{}&[H$\alpha$]{}&[\[N [ii]{}\]]{}&[\[He [i]{}\]]{}&[\[S [ii]{}\]]{}&[\[S [ii]{}\]]{}&[\[Ar [iii]{}\]]{}\
(Å)&(4861) &(4959) &(5007) &(5577) &(5876) &(6234) &(6300) &(6364) &(6548) &(6563) &(6584) &(6678) &(6716) &(6731) &(7136)\
\
F &23 &- &- &- &- &- &- &- &4 &100 &12 &- &38 &29 &-\
I &35 &- &- &- &- &- &- &- &4 &100 &12 &- &37 &28 &-\
S/N &4 &- &- &- &- &- &- &- &3 &14 &5 &- &9 &6 &-\
\
F &13 &- &16 &- &- &- &- &- &2 &100 &15 &- &35 &25 &-\
I &35 &- &38 &- &- &- &- &- &2 &100 &14 &- &33 &23 &-\
S/N &7 &- &8 &- &- &- &- &- &6 &34 &11 &- &19 &16 &-\
\
F &22 &9 &20 &- &- &- &- &- &5 &100 &17 &- &24 &18 &-\
I &35 &14 &29 &- &- &- &- &- &5 &100 &17 &- &23 &17 &-\
S/N &12 &7 &14 &- &- &- &- &- &9 &51 &19 &- &23 &19 &-\
\
F &26 &11 &32 &13 &- &- &13 &6 &3 &100 &12 &- &30 &22 &-\
I &35 &14 &41 &15 &- &- &13 &6 &3 &100 &12 &- &29 &21 &-\
S/N &15 &10 &18 &21 &- &- &14 &8 &8 &53 &17 &- &28 &24 &-\
\
F &27 &13 &34 &- &- &- &4 &2 &4 &100 &12 &- &27 &20 &2\
I &35 &16 &42 &- &- &- &3 &4 &4 &100 &12 &- &27 &20 &1\
S/N &15 &11 &21 &- &- &- &7 &9 &9 &52 &17 &- &26 &22 &5\
\
F &- &- &- &- &- &- &- &- &3 &100 &15 &- &35 &33 &-\
I &- &- &- &- &- &- &- &- &- &- &- &- &- &- &-\
S/N &- &- &- &- &- &- &- &- &4 &17 &6 &- &8 &6 &-\
\
F &28 &21 &140 &- &- &- &- &- &- &100 &9 &- &25 &17 &-\
I &35 &26 &172 &- &- &- &- &- &- &100 &9 &- &24 &17 &-\
S/N &6 &13 &18 &- &- &- &- &- &- &21 &6 &- &7 &6 &-\
\
F &13 &20 &66 &- &- &- &- &- &2 &100 &2 &- &26 &19 &-\
I &35 &50 &162 &- &- &- &- &- &2 &100 &9 &- &24 &18 &-\
S/N &8 &9 &13 &- &- &- &- &- &7 &36 &11 &- &15 & 9 &-\
[@cccccccccccccccc@]{} Line &[H$\beta$]{}&[\[O [iii]{}\]]{}&[\[O [iii]{}\]]{}&[\[O [i]{}\]]{}&[\[He [i]{}\]]{}&[\[He [ii]{}\]]{}&[\[O [i]{}\]]{}&[\[O [i]{}\]]{}&[\[N [ii]{}\]]{}&[H$\alpha$]{}&[\[N [ii]{}\]]{}&[\[He [i]{}\]]{}&[\[S [ii]{}\]]{}&[\[S [ii]{}\]]{}&[\[Ar [iii]{}\]]{}\
(Å)&(4861) &(4959) &(5007) &(5577) &(5876) &(6234) &(6300) &(6364) &(6548) &(6563) &(6584) &(6678) &(6716) &(6731) &(7136)\
\
F &14 &- &- &- &- &- &12 &- &5 &100 &7 &- &38 &30 &-\
I &35 &- &- &- &- &- &13 &- &5 &100 &7 &- &36 &28 &-\
S/N &7 &- &- &- &- &- &11 &- &8 &32 &8 &- &19 &15 &-\
\
F &25 &- &16 &- &- &- &- &- &3 &100 &12 &- &23 &19 &-\
I &35 &- &21 &- &- &- &- &- &3 &100 &12 &- &22 &18 &-\
S/N &6 &- &5 &- &- &- &- &- &3 &24 &7 &- &9 & 8 &-\
\
F &25 &3 &7 &- &- &- &3 &- &4 &100 &11 &- &22 &17 &-\
I &35 &4 &9 &- &- &- &3 &- &4 &100 &11 &- &22 &16 &-\
S/N &12 &3 &8 &- &- &- &5 &- &7 &50 &14 &- &22 &18 &-\
\
F &20 &- &- &- &- &- &- &- &17 &100 &2 &- &24 &20 &-\
I &35 &- &- &- &- &- &- &- &17 &100 &2 &- &23 &19 &-\
S/N &16 &- &- &- &- &- &- &- &20 &43 &13 &- &21 &19 &-\
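In the line-ratio tables above, the F rows list observed fluxes and the I rows extinction-corrected intensities, both scaled to [H$\alpha$]{} = 100, with I([H$\beta$]{}) brought to 35 (the case-B Balmer decrement). A minimal sketch of that correction, assuming a reddening curve normalised to f([H$\beta$]{}) = 0 with f([H$\alpha$]{}) $\approx$ $-$0.35 (an illustrative value, not taken from the tables):

```python
def deredden_relative(f_rel, f_lam, c_hbeta, f_halpha=-0.35):
    """Correct a line flux, scaled to Halpha = 100, for reddening.

    f_lam: reddening-curve value at the line's wavelength, normalised
    so f(Hbeta) = 0; f(Halpha) = -0.35 is an assumed, illustrative value.
    c_hbeta: logarithmic extinction at Hbeta.
    """
    return f_rel * 10 ** (c_hbeta * (f_lam - f_halpha))

# An observed F(Hbeta) = 28 with c(Hbeta) ~ 0.28 dereddens to ~35,
# recovering the I(Hbeta) = 35 convention of these tables.
```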
--------- ---------- ------------------ ----------------- --------------- ----------------------------- ----------------------------- ----------------------------- ------------------------- ------------------------------
Galaxy SourceID F([H$\alpha$]{}) c([H$\beta$]{}) E$_{(B-V)}$ [H$\alpha$]{}/ [H$\beta$]{} [\[S [ii]{}\]]{}(6716+6731) [\[N [ii]{}\]]{}(6548+6584) [\[S [ii]{}\]]{}(6716) [\[O [iii]{}\]]{}(4959+5007)
/ [H$\alpha$]{} / [H$\alpha$]{} /[\[S [ii]{}\]]{}(6731) / [H$\beta$]{}
NGC2403 LBZ1 0.17 0.15$\pm$0.17 0.12$\pm$0.13 3.25$\pm$0.44 0.49$\pm$0.03 0.32$\pm$0.02 1.32$\pm$0.14 0.46$\pm$0.10[^56]
- LBZ2 0.29 1.19$\pm$0.17 0.92$\pm$0.13 7.42$\pm$0.99 0.39$\pm$0.02 0.33$\pm$0.01 1.34$\pm$0.10 0.50$\pm$0.10[^57]
- LBZ3 0.73 1.19$\pm$0.10 0.92$\pm$0.08 7.43$\pm$0.59 0.41$\pm$0.01 0.37$\pm$0.01 1.39$\pm$0.06 0.80$\pm$0.11
- LBZ4 0.48 0.19$\pm$0.10 0.15$\pm$0.07 3.35$\pm$0.26 0.56$\pm$0.02 0.37$\pm$0.01 1.39$\pm$0.07 -
- LBZ5 0.92 0.92$\pm$0.09 0.71$\pm$0.07 5.97$\pm$0.42 0.43$\pm$0.01 0.47$\pm$0.01 1.40$\pm$0.05 0.22$\pm$0.03[^58]
- LBZ6 0.53 0.95$\pm$0.11 0.73$\pm$0.09 6.14$\pm$0.56 0.61$\pm$0.02 0.47$\pm$0.02 1.40$\pm$0.07 0.51$\pm$0.13
- LBZ7 0.04 - - - 0.94$\pm$0.10 0.46$\pm$0.06 1.30$\pm$0.19 -
- LBZ8 0.07 0.46$\pm$0.26 0.35$\pm$0.20 4.13$\pm$0.87 0.48$\pm$0.04 0.27$\pm$0.03 1.39$\pm$0.18 -
- LBZ9 0.19 0.02$\pm$0.13 0.01$\pm$0.10 2.91$\pm$0.30 0.41$\pm$0.02 0.30$\pm$0.02 1.39$\pm$0.13 0.92$\pm$0.15
- LBZ10 0.11 0.95$\pm$0.21 0.73$\pm$0.16 6.12$\pm$1.00 0.57$\pm$0.05 0.36$\pm$0.06 1.35$\pm$0.20 1.76$\pm$0.35[^59]
- LBZ11 0.11 0.53$\pm$0.21 0.41$\pm$0.17 4.38$\pm$0.75 0.81$\pm$0.06 0.41$\pm$0.04 1.07$\pm$0.10 3.57$\pm$0.86
- LBZ12 0.36 0.15$\pm$0.10 0.12$\pm$0.08 3.25$\pm$0.26 0.43$\pm$0.02 0.28$\pm$0.01 1.30$\pm$0.08 4.90$\pm$0.54
NGC3077 LBZ1 0.25 0.36$\pm$0.13 0.28$\pm$0.10 3.83$\pm$0.40 0.38$\pm$0.02 0.42$\pm$0.02 1.20$\pm$0.11 0.55$\pm$0.13
- LBZ2 0.10 0.72$\pm$0.39 0.56$\pm$0.30 5.10$\pm$1.59 0.40$\pm$0.03 0.31$\pm$0.02 1.33$\pm$0.19 -
- LBZ3 0.06 0.24$\pm$0.19 0.19$\pm$0.15 3.48$\pm$0.54 0.42$\pm$0.05 0.32$\pm$0.03 1.43$\pm$0.32 -
- LBZ4 0.15 0.29$\pm$0.13 0.22$\pm$0.10 3.61$\pm$0.38 0.41$\pm$0.02 0.31$\pm$0.02 1.41$\pm$0.13 0.14$\pm$0.03[^60]
- LBZ5 0.22 0.97$\pm$0.21 0.74$\pm$0.16 6.20$\pm$1.02 0.43$\pm$0.03 0.45$\pm$0.03 1.31$\pm$0.16 0.98$\pm$0.23[^61]
- LBZ6 0.51 - - - 0.55$\pm$0.04 0.44$\pm$0.03 1.01$\pm$0.12 -
NGC4214 LBZ1 0.30 - - - 0.61$\pm$0.04 0.12$\pm$0.01[^62] 1.17$\pm$0.15 -
- LBZ2 0.15 1.03$\pm$0.16 0.80$\pm$0.12 6.54$\pm$0.83 0.48$\pm$0.02 0.21$\pm$0.02 1.40$\pm$0.12 -
- LBZ3 0.17 - - - 0.49$\pm$0.04 - 1.36$\pm$0.17 -
- LBZ4 0.49 0.30$\pm$0.07 0.23$\pm$0.05 3.65$\pm$0.20 0.49$\pm$0.02 0.16$\pm$0.10 1.42$\pm$0.08 1.11$\pm$0.10
- LBZ5 0.64 - - - 0.44$\pm$0.03 0.16$\pm$0.02[^63] 1.41$\pm$0.16 -
- LBZ6 0.36 - - - 0.49$\pm$0.03 - 1.06$\pm$0.11 -
- LBZ7 0.56 1.03$\pm$0.16 0.80$\pm$0.13 6.54$\pm$0.85 0.73$\pm$0.03 0.13$\pm$0.01[^64] 1.29$\pm$0.10 1.60$\pm$0.24[^65]
- LBZ8 0.51 - - - 0.62$\pm$0.03 - 1.18$\pm$0.08 -
- LBZ9 0.03 - - - 0.74$\pm$0.09 0.16$\pm$0.05[^66] 1.20$\pm$0.25 -
- LBZ10 0.06 0.42$\pm$0.10 0.32$\pm$0.07 4.00$\pm$0.30 0.41$\pm$0.02 0.23$\pm$0.01 1.29$\pm$0.10 2.04$\pm$0.23
- LBZ11 0.26 0.31$\pm$0.13 0.24$\pm$0.10 3.66$\pm$0.39 0.41$\pm$0.02 0.21$\pm$0.01 1.27$\pm$0.11 0.88$\pm$0.12[^67]
- LBZ12 0.02 0.64$\pm$0.37 0.49$\pm$0.29 4.79$\pm$1.42 0.68$\pm$0.10 0.43$\pm$0.06 1.20$\pm$0.31 1.03$\pm$0.41[^68]
- LBZ13 0.12 0.60$\pm$0.19 0.46$\pm$0.15 4.63$\pm$0.72 0.44$\pm$0.04 0.22$\pm$0.02 1.42$\pm$0.22 1.27$\pm$0.25[^69]
- LBZ14 0.38 0.17$\pm$0.06 0.13$\pm$0.05 3.28$\pm$0.17 0.47$\pm$0.01 0.23$\pm$0.01 1.44$\pm$0.07 1.50$\pm$0.12
- LBZ15 0.21 0.29$\pm$0.12 0.22$\pm$0.09 3.60$\pm$0.33 0.43$\pm$0.02 0.19$\pm$0.01 1.41$\pm$0.12 1.63$\pm$0.21
- LBZ16 0.24 0.44$\pm$0.05 0.34$\pm$0.04 4.09$\pm$0.18 0.57$\pm$0.01 0.17$\pm$0.01 1.41$\pm$0.04 3.19$\pm$0.20
- LBZ17 0.10 - - - 0.44$\pm$0.04 0.11$\pm$0.02[^70] 1.20$\pm$0.19 -
- LBZ18 0.14 0.55$\pm$0.08 0.42$\pm$0.06 4.45$\pm$0.27 0.71$\pm$0.02 0.24$\pm$0.01 1.34$\pm$0.05 2.92$\pm$0.25
--------- ---------- ------------------ ----------------- --------------- ----------------------------- ----------------------------- ----------------------------- ------------------------- ------------------------------
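The c([H$\beta$]{}) and E$_{(B-V)}$ columns follow from the observed [H$\alpha$]{}/[H$\beta$]{} Balmer decrement. A hedged sketch of the conversion, with illustrative constants (case-B intrinsic ratio 2.87, reddening-curve difference f([H$\beta$]{}) $-$ f([H$\alpha$]{}) $\approx$ 0.35, and E$_{(B-V)}$ $\approx$ 0.77 c([H$\beta$]{}); these are standard values assumed here, not quoted from the paper):

```python
import math

def balmer_extinction(halpha_hbeta, r_int=2.87, df=0.35, k=0.77):
    """Logarithmic extinction c(Hbeta) and colour excess E(B-V) from
    an observed Halpha/Hbeta ratio.

    Assumed, illustrative constants: case-B intrinsic ratio r_int = 2.87,
    reddening-curve difference f(Hbeta) - f(Halpha) ~ 0.35, and
    E(B-V) ~ 0.77 c(Hbeta).
    """
    c_hbeta = math.log10(halpha_hbeta / r_int) / df
    return c_hbeta, k * c_hbeta

# NGC2403 LBZ2 row: Halpha/Hbeta = 7.42 gives c ~ 1.19, E(B-V) ~ 0.92,
# consistent with the tabulated values to within rounding.
c, ebv = balmer_extinction(7.42)
```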
--------- ---------- ------------------ ----------------- --------------- ----------------------------- ----------------------------- ----------------------------- ------------------------- ------------------------------
Galaxy SourceID F([H$\alpha$]{}) c([H$\beta$]{}) E$_{(B-V)}$ [H$\alpha$]{}/ [H$\beta$]{} [\[S [ii]{}\]]{}(6716+6731) [\[N [ii]{}\]]{}(6548+6584) [\[S [ii]{}\]]{}(6716) [\[O [iii]{}\]]{}(4959+5007)
/ [H$\alpha$]{} / [H$\alpha$]{} /[\[S [ii]{}\]]{}(6731) / [H$\beta$]{}
NGC4395 LBZ1 0.22 0.20$\pm$0.10 0.15$\pm$0.08 3.37$\pm$0.28 0.42$\pm$0.02 0.11$\pm$0.01 1.41$\pm$0.12 0.42$\pm$0.06[^71]
- LBZ2 0.02 0.80$\pm$0.58 0.62$\pm$0.45 5.42$\pm$2.50 0.59$\pm$0.11 - 0.96$\pm$0.29 1.45$\pm$0.82[^72]
- LBZ3 0.07 1.48$\pm$0.37 1.14$\pm$0.29 9.35$\pm$2.78 0.40$\pm$0.04 0.09$\pm$0.02[^73] 1.26$\pm$0.22 0.82$\pm$1.01
- LBZ4 0.13 0.42$\pm$0.15 0.32$\pm$0.11 4.00$\pm$0.48 0.41$\pm$0.02 0.19$\pm$0.02 1.43$\pm$0.15 1.21$\pm$0.24
- LBZ5 0.33 0.57$\pm$0.14 0.44$\pm$0.11 4.50$\pm$0.50 0.61$\pm$0.03 0.14$\pm$0.01[^74] 1.29$\pm$0.10 2.24$\pm$0.36
- LBZ6 0.21 0.92$\pm$0.15 0.71$\pm$0.11 5.97$\pm$0.69 0.59$\pm$0.04 0.16$\pm$0.02[^75] 1.31$\pm$0.12 2.68$\pm$0.45
NGC4449 LBZ1 0.19 0.75$\pm$0.17 0.58$\pm$0.13 5.23$\pm$0.72 0.58$\pm$0.03 0.24$\pm$0.02 1.23$\pm$0.09 -
- LBZ2 0.20 0.25$\pm$0.10 0.19$\pm$0.07 3.51$\pm$0.27 0.39$\pm$0.02 0.21$\pm$0.02 1.44$\pm$0.12 5.90$\pm$0.61
- LBZ3 0.21 0.48$\pm$0.12 0.37$\pm$0.09 4.20$\pm$0.40 0.46$\pm$0.02 0.28$\pm$0.02 1.40$\pm$0.12 1.41$\pm$0.20
- LBZ4 0.82 0.02$\pm$0.06 0.01$\pm$0.04 2.91$\pm$0.13 0.42$\pm$0.01 0.22$\pm$0.01 1.42$\pm$0.07 0.96$\pm$0.07
- LBZ5 0.20 0.27$\pm$0.11 0.21$\pm$0.09 3.56$\pm$0.33 0.56$\pm$0.02 0.25$\pm$0.02 1.40$\pm$0.10 1.17$\pm$0.17
- LBZ6a 0.34 0.20$\pm$0.08 0.15$\pm$0.06 3.37$\pm$0.22 0.47$\pm$0.02 0.18$\pm$0.01 1.29$\pm$0.08 1.38$\pm$0.13
- LBZ6b 0.16 0.29$\pm$0.14 0.22$\pm$0.10 3.61$\pm$0.39 0.54$\pm$0.03 0.18$\pm$0.01 1.41$\pm$0.12 1.69$\pm$0.27
- LBZ7 2.70 1.48$\pm$0.11 1.14$\pm$0.08 9.35$\pm$0.82 0.43$\pm$0.01 0.16$\pm$0.01 1.29$\pm$0.06 3.96$\pm$0.57
- LBZ8 2.20 0.56$\pm$0.06 0.43$\pm$0.04 4.50$\pm$0.21 0.47$\pm$0.01 0.17$\pm$0.01 1.38$\pm$0.05 2.24$\pm$0.16
- LBZ9 4.00 0.11$\pm$0.04 0.08$\pm$0.03 3.12$\pm$0.10 0.48$\pm$0.01 0.17$\pm$0.01 1.40$\pm$0.04 2.37$\pm$0.11
- LBZ10 3.20 0.74$\pm$0.04 0.57$\pm$0.03 5.19$\pm$0.16 0.45$\pm$0.01 0.23$\pm$0.01 1.42$\pm$0.04 2.56$\pm$0.12
- LBZ11 2.80 0.13$\pm$0.05 0.10$\pm$0.04 3.17$\pm$0.12 0.59$\pm$0.01 0.18$\pm$0.01 1.36$\pm$0.04 1.44$\pm$0.08
- LBZ12 0.65 0.55$\pm$0.07 0.42$\pm$0.05 4.44$\pm$0.25 0.40$\pm$0.01 0.26$\pm$0.01 1.38$\pm$0.06 1.44$\pm$0.12
- LBZ13 0.06 0.40$\pm$0.29 0.31$\pm$0.23 3.94$\pm$0.92 1.06$\pm$0.07 0.25$\pm$0.03[^76] 1.33$\pm$0.14 1.19$\pm$0.48
- LBZ14 0.04 0.53$\pm$0.31 0.41$\pm$0.24 4.38$\pm$1.10 0.65$\pm$0.08 0.16$\pm$0.03 1.33$\pm$0.25 -
- LBZ15 0.18 1.25$\pm$0.19 0.96$\pm$0.14 7.78$\pm$1.16 0.56$\pm$0.03 0.16$\pm$0.01 1.43$\pm$0.12 1.08$\pm$0.22[^77]
- LBZ16 0.34 0.56$\pm$0.11 0.43$\pm$0.08 4.50$\pm$0.39 0.40$\pm$0.02 0.22$\pm$0.01 1.35$\pm$0.09 0.83$\pm$0.14
- LBZ17 0.33 0.35$\pm$0.09 0.27$\pm$0.07 3.79$\pm$0.26 0.51$\pm$0.02 0.15$\pm$0.01 1.39$\pm$0.08 1.16$\pm$0.13
- LBZ18 0.32 0.32$\pm$0.09 0.25$\pm$0.07 3.70$\pm$0.26 0.46$\pm$0.02 0.16$\pm$0.01 1.36$\pm$0.08 1.22$\pm$0.14
NGC5204 LBZ1 0.11 - - - 0.69$\pm$0.08 0.17$\pm$0.03 1.06$\pm$0.22 -
- LBZ2 0.15 0.29$\pm$0.21 0.23$\pm$0.16 3.62$\pm$0.61 0.41$\pm$0.05 0.09$\pm$0.02[^78] 1.43$\pm$0.31 4.94$\pm$0.99
- LBZ3 0.41 1.28$\pm$0.16 0.99$\pm$0.13 7.98$\pm$1.03 0.42$\pm$0.03 0.11$\pm$0.01 1.36$\pm$0.17 4.66$\pm$0.97
- LBZ4 0.17 1.16$\pm$0.18 0.89$\pm$0.14 7.20$\pm$1.01 0.64$\pm$0.04 0.12$\pm$0.01 1.30$\pm$0.11 -
- LBZ5 0.07 0.40$\pm$0.23 0.31$\pm$0.17 3.96$\pm$0.71 0.41$\pm$0.04 0.14$\pm$0.02 1.22$\pm$0.20 0.60$\pm$0.17[^79]
- LBZ6 0.32 0.40$\pm$0.11 0.31$\pm$0.08 3.94$\pm$0.33 0.38$\pm$0.02 0.15$\pm$0.01 1.33$\pm$0.10 0.27$\pm$0.06
- LBZ7 0.51 0.67$\pm$0.08 0.52$\pm$0.07 4.91$\pm$0.33 0.42$\pm$0.02 0.19$\pm$0.01 1.24$\pm$0.09 -
--------- ---------- ------------------ ----------------- --------------- ----------------------------- ----------------------------- ----------------------------- ------------------------- ------------------------------
--------- ---------- ------------------ ----------------- --------------- ----------------------------- ----------------------------- ----------------------------- ------------------------- ------------------------------
Galaxy SourceID F([H$\alpha$]{}) c([H$\beta$]{}) E$_{(B-V)}$ [H$\alpha$]{}/ [H$\beta$]{} [\[S [ii]{}\]]{}(6716+6731) [\[N [ii]{}\]]{}(6548+6584) [\[S [ii]{}\]]{}(6716) [\[O [iii]{}\]]{}(4959+5007)
/ [H$\alpha$]{} / [H$\alpha$]{} /[\[S [ii]{}\]]{}(6731) / [H$\beta$]{}
NGC2403 LBZ1059 0.15 - - - 0.35$\pm$0.02 - 1.17$\pm$0.14 -
- LBZ731 0.11 0.39$\pm$0.20 0.30$\pm$0.16 3.93$\pm$0.63 0.19$\pm$0.02 0.28$\pm$0.03 1.24$\pm$0.24 0.89$\pm$0.18[^80]
- LBZ561 0.22 0.30$\pm$0.10 0.23$\pm$0.08 3.64$\pm$0.30 0.23$\pm$0.02 0.25$\pm$0.02 1.32$\pm$0.17 1.16$\pm$0.15
- LBZ484 0.11 0.74$\pm$0.21 0.57$\pm$0.16 5.17$\pm$0.87 0.26$\pm$0.03 0.32$\pm$0.03 0.94$\pm$0.17 5.99$\pm$1.43
- LBZ982 0.64 0.43$\pm$0.09 0.33$\pm$0.07 4.03$\pm$0.27 0.21$\pm$0.01 0.28$\pm$0.01 1.36$\pm$0.12 0.49$\pm$0.07
- LBZ1291 0.34 0.82$\pm$0.16 0.63$\pm$0.13 5.49$\pm$0.71 0.30$\pm$0.02 0.32$\pm$0.02 1.20$\pm$0.13 0.56$\pm$0.10[^81]
- LBZ840 0.04 - - - 0.15$\pm$0.03 - 1.42$\pm$0.59 -
- LBZ796 0.31 0.37$\pm$0.13 0.28$\pm$0.10 3.85$\pm$0.38 0.14$\pm$0.01 0.33$\pm$0.02 0.97$\pm$0.15 0.43$\pm$0.09
- LBZ620 0.40 0.11$\pm$0.08 0.09$\pm$0.06 3.13$\pm$0.19 0.20$\pm$0.01 0.32$\pm$0.01 1.14$\pm$0.10 1.67$\pm$0.15
- LBZ514 0.35 0.59$\pm$0.13 0.45$\pm$0.10 4.58$\pm$0.48 0.32$\pm$0.02 0.37$\pm$0.02 1.38$\pm$0.13 0.46$\pm$0.12
- LBZ1180 0.17 0.20$\pm$0.11 0.15$\pm$0.09 3.37$\pm$0.30 0.15$\pm$0.01 0.26$\pm$0.02 0.95$\pm$0.16 1.06$\pm$0.20
- LBZ963 0.20 1.02$\pm$0.34 0.78$\pm$0.26 6.46$\pm$1.73 0.23$\pm$0.02 0.18$\pm$0.01 1.36$\pm$0.17 -
NGC3077 LBZ363 0.10 1.74$\pm$0.49 1.34$\pm$0.38 11.5$\pm$4.49 0.25$\pm$0.03 0.33$\pm$0.04 1.30$\pm$0.23 -
- LBZ396 0.19 0.40$\pm$0.13 0.31$\pm$0.10 3.94$\pm$0.42 0.20$\pm$0.02 0.36$\pm$0.02 1.22$\pm$0.15 -
NGC4214 LBZ1089 1.30 0.43$\pm$0.05 0.33$\pm$0.04 4.03$\pm$0.15 0.21$\pm$0.01 0.09$\pm$0.01 1.34$\pm$0.06 9.39$\pm$0.47
- LBZ597 0.85 0.74$\pm$0.07 0.57$\pm$0.05 5.18$\pm$0.27 0.20$\pm$0.01 0.15$\pm$0.01 1.30$\pm$0.07 0.95$\pm$0.09
- LBZ1091 1.20 0.70$\pm$0.07 0.54$\pm$0.05 5.02$\pm$0.26 0.12$\pm$0.01 0.08$\pm$0.01 1.27$\pm$0.09 2.59$\pm$0.20
- LBZ988 0.20 0.07$\pm$0.05 0.06$\pm$0.04 3.04$\pm$0.12 0.27$\pm$0.01 0.14$\pm$0.01 1.40$\pm$0.07 0.22$\pm$0.02
- LBZ917 4.70 0.52$\pm$0.02 0.40$\pm$0.02 4.34$\pm$0.08 0.14$\pm$0.01 0.11$\pm$0.01 1.35$\pm$0.04 2.16$\pm$0.06
- LBZ971 0.24 0.19$\pm$0.12 0.15$\pm$0.10 3.35$\pm$0.33 0.17$\pm$0.01 0.06$\pm$0.01[^82] 1.08$\pm$0.16 9.05$\pm$1.20
- LBZ928 1.10 0.76$\pm$0.10 0.58$\pm$0.08 5.24$\pm$0.42 0.15$\pm$0.01 - 1.43$\pm$0.16 6.22$\pm$0.72
- LBZ911 0.41 0.60$\pm$0.09 0.46$\pm$0.07 4.62$\pm$0.33 0.14$\pm$0.01 0.13$\pm$0.01 1.37$\pm$0.14 2.08$\pm$0.23
- LBZ889 0.56 0.28$\pm$0.06 0.21$\pm$0.05 3.58$\pm$0.18 0.13$\pm$0.01 0.11$\pm$0.01 1.33$\pm$0.13 2.21$\pm$0.16
- LBZ899 1.70 0.19$\pm$0.04 0.14$\pm$0.03 3.33$\pm$0.09 0.26$\pm$0.01 0.16$\pm$0.01 1.39$\pm$0.05 4.71$\pm$0.18
- LBZ362 0.10 1.02$\pm$0.20 0.79$\pm$0.15 6.48$\pm$1.02 0.27$\pm$0.03 0.19$\pm$0.03 1.30$\pm$0.23 -
- LBZ863 0.36 0.52$\pm$0.08 0.40$\pm$0.06 4.34$\pm$0.27 0.17$\pm$0.01 0.15$\pm$0.01 1.37$\pm$0.13 1.21$\pm$0.12
- LBZ740 0.15 0.18$\pm$0.11 0.14$\pm$0.09 3.32$\pm$0.30 0.19$\pm$0.02 0.10$\pm$0.01 1.17$\pm$0.22 8.09$\pm$0.97
- LBZ800 0.16 0.42$\pm$0.14 0.32$\pm$0.10 4.00$\pm$0.43 0.19$\pm$0.02 0.08$\pm$0.01 1.34$\pm$0.35 1.15$\pm$0.18
- LBZ845 0.27 0.41$\pm$0.11 0.32$\pm$0.08 3.99$\pm$0.35 0.11$\pm$0.01 0.13$\pm$0.01 1.14$\pm$0.17 0.70$\pm$0.10
- LBZ836 0.43 0.41$\pm$0.07 0.32$\pm$0.06 3.99$\pm$0.23 0.16$\pm$0.01 0.14$\pm$0.01 1.31$\pm$0.12 1.10$\pm$0.10
- LBZ690 0.16 0.84$\pm$0.22 0.64$\pm$0.17 5.59$\pm$0.97 0.37$\pm$0.02 0.14$\pm$0.01 1.28$\pm$0.16 1.05$\pm$0.29
NGC4395 LBZ1252 0.18 0.27$\pm$0.13 0.21$\pm$0.10 3.55$\pm$0.36 0.20$\pm$0.01 0.09$\pm$0.01 1.41$\pm$0.18 0.95$\pm$0.15
- LBZ391 0.14 0.66$\pm$0.22 0.51$\pm$0.17 4.87$\pm$0.85 0.24$\pm$0.02 0.09$\pm$0.02[^83] 1.36$\pm$0.23 -
- LBZ267 0.05 0.47$\pm$0.26 0.36$\pm$0.20 4.16$\pm$0.88 0.24$\pm$0.04 0.13$\pm$0.03[^84] 1.33$\pm$0.37 3.21$\pm$0.94
- LBZ151 0.20 0.24$\pm$0.10 0.19$\pm$0.08 3.48$\pm$0.29 0.22$\pm$0.01 0.07$\pm$0.01[^85] 1.35$\pm$0.01 6.81$\pm$0.75
--------- ---------- ------------------ ----------------- --------------- ----------------------------- ----------------------------- ----------------------------- ------------------------- ------------------------------
--------- ---------- ------------------ ----------------- --------------- ----------------------------- ----------------------------- ----------------------------- ------------------------- ------------------------------
Galaxy SourceID F([H$\alpha$]{}) c([H$\beta$]{}) E$_{(B-V)}$ [H$\alpha$]{}/ [H$\beta$]{} [\[S [ii]{}\]]{}(6716+6731) [\[N [ii]{}\]]{}(6548+6584) [\[S [ii]{}\]]{}(6716) [\[O [iii]{}\]]{}(4959+5007)
/ [H$\alpha$]{} / [H$\alpha$]{} /[\[S [ii]{}\]]{}(6731) / [H$\beta$]{}
NGC4449 LBZ581 0.82 0.39$\pm$0.06 0.30$\pm$0.04 3.90$\pm$0.18 0.22$\pm$0.01 0.18$\pm$0.01 1.40$\pm$0.08 2.77$\pm$0.18
- LBZ593 0.15 0.32$\pm$0.14 0.25$\pm$0.11 3.72$\pm$0.42 0.24$\pm$0.02 0.25$\pm$0.02 0.77$\pm$0.12 1.64$\pm$0.26
- LBZ567 0.20 0.62$\pm$0.17 0.48$\pm$0.13 4.71$\pm$0.63 0.20$\pm$0.02 0.11$\pm$0.01 1.40$\pm$0.23 0.41$\pm$0.08[^86]
- LBZ527 0.71 0.35$\pm$0.07 0.27$\pm$0.05 3.81$\pm$0.20 0.31$\pm$0.01 0.19$\pm$0.01 1.41$\pm$0.07 0.83$\pm$0.07
- LBZ503 0.27 0.64$\pm$0.12 0.49$\pm$0.09 4.78$\pm$0.45 0.36$\pm$0.02 0.23$\pm$0.01 1.29$\pm$0.10 1.82$\pm$0.25
- LBZ500 0.20 0.84$\pm$0.15 0.65$\pm$0.12 5.62$\pm$0.68 0.24$\pm$0.02 0.15$\pm$0.02 1.36$\pm$0.16 -
- LBZ521 0.25 0.36$\pm$0.11 0.28$\pm$0.08 3.83$\pm$0.33 0.21$\pm$0.01 0.25$\pm$0.01 1.11$\pm$0.14 0.38$\pm$0.08
- LBZ266 1.2 0.36$\pm$0.06 0.28$\pm$0.04 3.82$\pm$0.17 0.26$\pm$0.01 0.18$\pm$0.01 1.43$\pm$0.06 2.45$\pm$0.16
- LBZ449 0.33 1.34$\pm$0.09 1.03$\pm$0.07 8.35$\pm$0.63 0.27$\pm$0.02 - 1.43$\pm$0.18 6.42$\pm$0.55[^87]
- LBZ398 6.9 0.29$\pm$0.03 0.22$\pm$0.02 3.60$\pm$0.09 0.30$\pm$0.01 0.15$\pm$0.01 1.41$\pm$0.04 2.50$\pm$0.09
- LBZ391 21. 0.06$\pm$0.02 0.04$\pm$0.01 3.01$\pm$0.04 0.27$\pm$0.01 0.17$\pm$0.01 1.38$\pm$0.02 1.80$\pm$0.04
- LBZ432 1.0 0.14$\pm$0.08 0.11$\pm$0.06 3.21$\pm$0.20 0.28$\pm$0.01 0.11$\pm$0.01 1.40$\pm$0.08 0.81$\pm$0.06[^88]
- LBZ311 0.29 0.28$\pm$0.10 0.21$\pm$0.08 3.57$\pm$0.28 0.26$\pm$0.01 0.14$\pm$0.01 1.36$\pm$0.12 1.41$\pm$0.17
- LBZ401 0.57 0.63$\pm$0.09 0.48$\pm$0.07 4.74$\pm$0.33 0.32$\pm$0.01 0.25$\pm$0.01 1.19$\pm$0.08 2.83$\pm$0.29
- LBZ387 0.33 0.44$\pm$0.09 0.34$\pm$0.07 4.07$\pm$0.29 0.33$\pm$0.01 0.20$\pm$0.01 1.15$\pm$0.08 1.12$\pm$0.13
- LBZ394 0.29 0.25$\pm$0.10 0.19$\pm$0.08 3.49$\pm$0.28 0.33$\pm$0.01 0.19$\pm$0.01 1.41$\pm$0.10 2.69$\pm$0.30
- LBZ95 0.23 0.59$\pm$0.11 0.45$\pm$0.09 4.59$\pm$0.42 0.23$\pm$0.01 0.14$\pm$0.01 1.39$\pm$0.14 12.06$\pm$1.52
- LBZ260 0.15 0.75$\pm$0.24 0.58$\pm$0.18 5.22$\pm$0.99 0.18$\pm$0.02 0.12$\pm$0.01[^89] 1.43$\pm$0.27 1.21$\pm$0.34
- LBZ318 0.36 0.09$\pm$0.07 0.07$\pm$0.06 3.09$\pm$0.18 0.16$\pm$0.01 0.12$\pm$0.01 1.35$\pm$0.14 1.23$\pm$0.11
NGC5204 LBZ154 0.23 0.15$\pm$0.11 0.11$\pm$0.09 3.23$\pm$0.29 0.13$\pm$0.01 0.04$\pm$0.01[^90] 1.35$\pm$0.23 7.61$\pm$0.93
- LBZ487 0.17 0.77$\pm$0.33 0.59$\pm$0.25 5.30$\pm$1.40 0.30$\pm$0.02 0.06$\pm$0.01[^91] 0.38$\pm$0.16 1.87$\pm$0.67
- LBZ458 0.11 0.26$\pm$0.23 0.20$\pm$0.18 3.53$\pm$0.64 0.32$\pm$0.03 - 1.39$\pm$0.20 1.33$\pm$0.28[^92]
- LBZ439 0.59 0.03$\pm$0.06 0.02$\pm$0.05 2.94$\pm$0.14 0.22$\pm$0.10 0.13$\pm$0.01 1.27$\pm$0.10 3.52$\pm$0.25
- LBZ412 0.05 0.08$\pm$0.21 0.06$\pm$0.16 3.06$\pm$0.50 0.31$\pm$0.04 0.19$\pm$0.03 1.04$\pm$0.22 0.71$\pm$0.17[^93]
- LBZ242 0.95 0.94$\pm$0.09 0.73$\pm$0.07 6.09$\pm$0.42 0.12$\pm$0.01 0.11$\pm$0.01 1.10$\pm$0.08 1.39$\pm$0.16
- LBZ299 0.18 - - - 0.21$\pm$0.02 0.11$\pm$0.02 0.82$\pm$0.14 -
--------- ---------- ------------------ ----------------- --------------- ----------------------------- ----------------------------- ----------------------------- ------------------------- ------------------------------
Photometric Ratio Detected sources Photometric SNRs Obtained Spectra Spectroscopic Classification in SNRs Success in SNRs
------------------- ------------------ ------------------ ------------------ -------------------------------------- -----------------
NGC2403
$>$0.4 111 102 14 5 36$\%$
0.3 - 0.4 48 47 8 7 88$\%$
$<$0.3 ... 0 2 0 0$\%$
NGC3077
$>$0.4 16 16 4 4 100$\%$
0.3 - 0.4 8 8 2 2 100$\%$
$<$0.3 ... - 2 - -
NGC4214
$>$0.4 78 71 23 16 69.5$\%$
0.3 - 0.4 23 19 4 0 0$\%$
$<$0.3 ... 2 8 2 25$\%$
NGC4395
$>$0.4 16 15 2 1 50$\%$
0.3 - 0.4 36 29 9 2 22$\%$
$<$0.3 ... 3 4 3 75$\%$
NGC4449
$>$0.4 59 53 20 14 70$\%$
0.3 - 0.4 19 15 6 2 33$\%$
$<$0.3 ... 2 11 2 18$\%$
NGC5204
$>$0.4 12 12 - - -
0.3 - 0.4 21 20 4 3 75$\%$
$<$0.3 ... 4 11 4 36$\%$
All Galaxies
$>$0.4 292 269 63 40 64$\%$
0.3 - 0.4 155 138 33 16 48$\%$
$<$0.3 ... 11 38 11 30$\%$
Sum
447+... 418 134 67
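The success column in the table above is simply the number of spectroscopically classified SNRs over the number of obtained spectra in each photometric [S [ii]{}]{}/[H$\alpha$]{} bin. A trivial sketch (bin edges from the table; the treatment of a source exactly at an edge is an assumption):

```python
def ratio_bin(sii_ha):
    """Photometric [S ii]/Halpha bin, using the table's edges.

    How a source exactly at 0.3 or 0.4 is binned is assumed here,
    not specified by the table.
    """
    if sii_ha > 0.4:
        return ">0.4"
    return "0.3 - 0.4" if sii_ha >= 0.3 else "<0.3"

def success_rate(confirmed, spectra):
    """Spectroscopic success: confirmed SNRs / obtained spectra (%)."""
    return 100.0 * confirmed / spectra

# NGC2403, >0.4 bin: 5 confirmed out of 14 spectra, i.e. ~36 per cent.
```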
Galaxy 12 + log (N/H) 12 + log (O/H)
---------------- ---------------- ----------------
NGC2403 7.48 8.52
NGC3077 7.48 8.64
NGC4214 - 8.22
NGC4395 7.18 8.27
NGC4449 7.10 8.30
NGC5204 - -
NGC6946 8.15 8.70
NGC5585 - -
M81 (NGC3031) 7.96 8.69
M101 (NGC5457) 8.23 8.80
LMC 6.45 8.35
SMC - 8.03
Milky Way - 8.52
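The abundances above are given in the standard 12 + log(X/H) notation; inverting it back to a plain number ratio is a one-liner:

```python
def number_ratio(twelve_plus_log):
    """Invert the 12 + log(X/H) abundance notation to a number ratio."""
    return 10 ** (twelve_plus_log - 12)

# NGC2403: 12 + log(O/H) = 8.52 corresponds to O/H ~ 3.3e-4.
```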
---------------------- ------------------------ ------------ ------------ ----------- ------------- ---------------------- ------------- ------------- -------------
Source ID Classification RA Dec Optical Offset X-ray Offset Radio Offset
(h:m:s) (d:m:s) associate ($\arcsec$) associate ($\arcsec$) associate ($\arcsec$)
out of field of view - 07:36:21.2 65:40:56.4 SNR-1 - - - - -
LBZ22 candidate SNR 07:36:24.1 65:36:07.2 SNR-2 2.29 - - - -
LBZ1 SNR 07:36:30.4 65:35:43.4 SNR-3 3.44 - - - -
LBZ118 probable candidate SNR 07:36:37.0 65:36:39.1 SNR-4 1.24 - - - -
LBZ60 candidate SNR 07:36:42.9 65:34:51.9 SNR-5 1.01 - - - -
LBZ67 candidate SNR 07:36:45.8 65:36:36.0 SNR-6 0.69 - - - -
LBZ66 candidate SNR 07:36:45.7 65:36:40.6 SNR-7 0.10 probable SNR (LZB30) 2.18 SNR ($\mu$) 3.09
LBZ131 probable candidate SNR 07:36:49.2 65:34:30.6 SNR-8 1.06 - - - -
LBZ87 candidate SNR 07:36:52.2 65:33:41.9 SNR-9 1.57 - - - -
LBZ135 probable candidate SNR 07:36:52.7 65:35:50.2 SNR-10 0.30 - - - -
LBZ89 candidate SNR 07:36:53.4 65:35:59.8 SNR-11 0.80 - - - -
LBZ90 candidate SNR 07:36:53.8 65:33:41.7 SNR-12 1.89 - - - -
LBZ137 probable candidate SNR 07:36:53.7 65:35:11.5 SNR-13 0.63 - - - -
LBZ93 candidate SNR 07:36:55.1 65:35:38.1 SNR-14 0.72 - - - -
LBZ6 SNR 07:36:55.8 65:35:43.0 SNR-15 3.36 XRB (LZB104) 2.98
LBZ139 probable candidate SNR 07:36:56.3 65:34:05.6 SNR-16 0.63 - - - -
LBZ96 candidate SNR 07:36:57.2 65:36:03.9 SNR-17 2.11 SNR (LZB42) 2.51
LBZ102 candidate SNR 07:37:01.8 65:34:13.4 SNR-18 1.30 XRB (LZB93) 5.9 - -
LBZ145 probable candidate SNR 07:37:02.1 65:34:36.6 SNR-19 0.74 - - - -
LBZ144 probable candidate SNR 07:37:02.0 65:33:42.0 SNR-20 0.48 - - - -
LBZ1530 Frame-4 07:37:02.7 65:37:22.0 SNR-21 3.36 - - - -
LBZ103 candidate SNR 07:37:02.4 65:36:01.7 SNR-22 1.54 probable SNR (LZB86) 1.86 - -
LBZ146 probable candidate SNR 07:37:03.0 65:33:46.1 SNR-23 1.10 - - - -
LBZ104 candidate SNR 07:37:02.8 65:34:38.1 SNR-24 1.24 probable SNR (LZB80) 3.69 - -
LBZ560 Frame-4 07:37:06.1 65:36:04.1 SNR-25 1.12 - - - -
LBZ651 SNR/[H[ii]{}]{} 07:37:06.3 65:36:10.5 SNR-26 1.41 - - - -
LBZ1301 Frame-4 07:37:07.2 65:37:10.4 SNR-27 0.79 - - - -
LBZ1373 SNR/[H[ii]{}]{} 07:37:09.7 65:32:55.6 SNR-28 2.08 - - - -
LBZ107 candidate SNR 07:37:10.7 65:33:11.0 SNR-29 1.17 probable SNR (LZB2) 0.73 - -
LBZ108 candidate SNR 07:37:12.4 65:33:45.9 SNR-30 0.47 XRB (LZB99) 2.54 - -
LBZ11 SNR 07:37:16.0 65:33:28.9 SNR-31 0.51 probable SNR (LZB14) 2.06 - -
LBZ12 SNR 07:37:21.4 65:33:06.9 SNR-32 2.11 - - - -
LBZ109 candidate SNR 07:37:21.6 65:33:14.4 SNR-33 0.62 - - - -
LBZ622 Frame-4 07:37:23.0 65:35:46.8 SNR-34 0.97 - - - -
Mat35 Frame-4 07:37:29.5 65:36:57.7 SNR-35 1.14 - - - -
LBZ127 probable candidate SNR 07:36:46.5 65:36:10.8 - - XRB (LZB58) 3.28 - -
---------------------- ------------------------ ------------ ------------ ----------- ------------- ---------------------- ------------- ------------- -------------
---------------------- ------------------ ------------ ------------ ----------- ------------- ----------------------- ------------- ----------- -------------
Source ID Classification RA Dec Optical Offset X-ray Offset Radio Offset
(h:m:s) (d:m:s) associate ($\arcsec$) associate ($\arcsec$) associate ($\arcsec$)
LBZ8 SNR 07:37:03.2 65:37:13.7 - - SNR (LZB81) 2.82 - -
LBZ74 candidate SNR 07:36:47.9 65:36:23.9 - - probable SNR (LZB120) 2.09 - -
LBZ56 candidate SNR 07:36:41.9 65:36:51.7 - - SNR (LZB107) 3.05 SNR (TH2)
LBZ635.2 SNR/[H[ii]{}]{} 07:36:52.3 65:36:40.3 - - probable SNR (LZB5) 1.72 - -
out of field of view - 07:37:08.0 65:39:20.6 - - probable SNR (LZB78) - - -
not detected nothing 07:37:17.9 65:36:24.2 - - probable SNR (LZB76) - - -
LBZ902.4 SNR/[H[ii]{}]{} 07:37:13.3 65:35:59.2 - - probable SNR (LZB41) 1.42 - -
LBZ1562.1 SNR/[H[ii]{}]{} 07:37:14.8 65:32:04.1 - - probable SNR (LZB68) 2.20 - -
not detected nothing 07:37:22.2 65:33:18.5 - - probable SNR (LZB71) - - -
LBZ301.2 SNR/[H[ii]{}]{} 07:36:49.2 65:36:51.4 - - - - SNR (TH4) 0.53
---------------------- ------------------ ------------ ------------ ----------- ------------- ----------------------- ------------- ----------- -------------
--------------- ------------------------ ------------------- ------------------- ----------- ------------- ----------------- ------------- ----------- -------------
Source ID Classification RA Dec Optical Offset X-ray Offset Radio Offset
(h:m:s) (d:m:s) associate ($\arcsec$) associate ($\arcsec$) associate ($\arcsec$)
LBZ236 ? SNR/[H[ii]{}]{} 10:03:18.2 68:44:02.4 - - SNR (LZB6, S6) 1.86 - -
not detected nothing 10:03:21.8 68:45:03.3 - - SNR (LZB12) - - -
not detected nothing 10:03:12.1 68:43:19.1 - - SNR (LZB13) - - -
LBZ24 probable candidate SNR 10:03:20.8 68:41:40.1 - - SNR (LZB15) 1.22 - -
LBZ299/LBZ300 SNR/[H[ii]{}]{} 10:03:19/10:03:19 68:43:54/68:43:59 - - SNR (LZB18, S1) 2.85/2.35 SNR (S1) 1.97/2.62
LBZ303 ? SNR/[H[ii]{}]{} 10:03:18.1 68:43:57.0 - - SNR (S5) 1.33 - -
--------------- ------------------------ ------------------- ------------------- ----------- ------------- ----------------- ------------- ----------- -------------
-------------- ------------------------ ------------ ------------ -------------------- ------------- ----------------------- ------------- -------------------- -------------
Source ID Classification RA Dec Optical Offset X-ray Offset Radio Offset
(h:m:s) (d:m:s) associate ($\arcsec$) associate ($\arcsec$) associate ($\arcsec$)
LBZ35 ? candidate SNR 12:15:33.4 36:19:01.0 - - SNR (LZB7) 2.16 - -
not detected nothing 12:15:49.7 36:18:46.7 - - candidate SNR (LZB10) - - -
LBZ47 ? candidate SNR 12:15:38.0 36:22:22.4 - - candidate SNR (LZB11) 1.48 - -
not detected diffuse 12:15:40.2 36:19:25.2 - - candidate SNR (LZB16) - - -
LBZ73 candidate SNR 12:15:48.8 36:17:02.3 - - candidate SNR (LZB23) 0.95 - -
LBZ1073 SNR/[H[ii]{}]{} 12:15:34.7 36:20:17.2 [H[ii]{}]{} region - - - SNR-2 0.20
LBZ80 probable candidate SNR 12:15:38.2 36:19:45.2 - - XRB (LZB26) 1.44 SNR/[H[ii]{}]{}-3 0.29
LBZ82 probable candidate SNR 12:15:38.9 36:18:58.9 SNR-1 0.34 - - SNR-4 0.59
not detected diffuse 12:15:39.7 36:19:34.3 - - - - SNR/[H[ii]{}]{}-8 -
LBZ57 candidate SNR 12:15:40.0 36:18:39.4 SNR-2 0.47 SNR (LZB30) 0.32 SNR-9 0.85
LBZ56 candidate SNR 12:15:39.4 36:20:54.1 - - probable SNR (LZB31) 0.00 - -
LBZ1098 SNR/[H[ii]{}]{} 12:15:40.0 36:19:35.8 SNR-3 0.28 probable SNR (LZB34) 0.78 SNR-10 0.52
LBZ936 SNR/[H[ii]{}]{} 12:15:37.2 36:22:19.6 - - probable SNR (LZB35) 0.97 - -
LBZ83 probable candidate SNR 12:15:40.2 36:19:30.2 SNR-4 0.30 - - SNR-11 0.72
LBZ1099 SNR/[H[ii]{}]{} 12:15:40.5 36:19:31.5 - - - - SNR-12 0.00
not detected diffuse 12:15:41.6 36:19:09.7 - - - - SNR/[H[ii]{}]{}-18 -
LBZ87 probable candidate SNR 12:15:41.9 36:19:15.5 SNR-5 0.37 probable SNR (LZB28) 1.06 SNR-19, $\rho$ 0.26,1.43
LBZ16 SNR 12:15:42.5 36:19:47.7 SNR-6 0.08 - - [H[ii]{}]{} region -
LBZ18 SNR 12:15:45.7 36:19:41.8 SNR-7 0.30 probable SNR (LZB38) 0.43 [H[ii]{}]{} region -
LBZ832 SNR/[H[ii]{}]{} 12:15:41.0 36:19:03.8 - - - - $\alpha$ 1.91
LBZ857 SNR/[H[ii]{}]{} 12:15:40.7 36:19:11.9 - - - - $\beta$ 1.99
-------------- ------------------------ ------------ ------------ -------------------- ------------- ----------------------- ------------- -------------------- -------------
---------------------- ----------------- ------------ ------------ ----------- ------------- ----------------------- ------------- ---------------- -------------
Source ID Classification RA Dec Optical Offset X-ray Offset Radio Offset
(h:m:s) (d:m:s) associate ($\arcsec$) associate ($\arcsec$) associate ($\arcsec$)
out of field of view - 12:25:53.2 33:38:30.4 - - candidate SNR (LZB10) - - -
LBZ1503 ? SNR/[H[ii]{}]{} 12:25:39.6 33:32:04.2 - - SNR (LZB14) 2.28 - -
LBZ1099 SNR/[H[ii]{}]{} 12:25:58.1 33:31:38.3 - - - - SNR (source 3) 1.27
---------------------- ----------------- ------------ ------------ ----------- ------------- ----------------------- ------------- ---------------- -------------
-------------- ------------------------ ------------ ------------ ----------------- ------------- ---------------------- ------------- -------------------- -------------
Source ID Classification RA Dec Optical Offset X-ray Offset Radio Offset
(h:m:s) (d:m:s) associate ($\arcsec$) associate ($\arcsec$) associate ($\arcsec$)
LBZ201 ? SNR/[H[ii]{}]{} 12:28:12.1 44:05:58.4 - - SNR (LZB9) 1.29 - -
LBZ122 SNR/[H[ii]{}]{} 12:28:11.0 44:06:47.8 oxygen-rich SNR 0.57 SNR (LZB12) 0.57 SNR-12 1.43
LBZ241 SNR/[H[ii]{}]{} 12:28:11.2 44:05:37.7 - - probable SNR (LZB24) 1.08 [H[ii]{}]{} region 0.79
not detected nothing 12:28:15.6 44:05:36.3 - - probable SNR (LZB26) - - -
LBZ475 ? SNR/[H[ii]{}]{} 12:28:09.5 44:05:20.4 - - - - SNR-7 1.92
not detected diffuse 12:28:10.9 44:05:40.2 - - - - SNR-11 -
LBZ363 ? SNR/[H[ii]{}]{} 12:28:11.3 44:05:38.5 - - - - SNR-14 2.04
LBZ407 ? SNR/[H[ii]{}]{} 12:28:12.8 44:06:10.4 - - - - SNR-17 1.81
not detected nothing 12:28:13.1 44:05:37.8 - - - - SNR-19 -
LBZ323 SNR/[H[ii]{}]{} 12:28:16.2 44:06:42.8 - - - - SNR-24 0.96
LBZ57 candidate SNR 12:28:19.2 44:06:55.7 - - - - SNR-26 0.33
LBZ60 probable candidate SNR 12:28:09.7 44:05:54.8 - - XRB (LZB15) 1.98 - -
-------------- ------------------------ ------------ ------------ ----------------- ------------- ---------------------- ------------- -------------------- -------------
----------- ---------------- ------------ ------------ ----------- ------------- ----------- ------------- ----------- -------------
Source ID Classification RA Dec Optical Offset X-ray Offset Radio Offset
(h:m:s) (d:m:s) associate ($\arcsec$) associate ($\arcsec$) associate ($\arcsec$)
LBZ9 candidate SNR 13:29:30.3 58:25:20.6 SNR-1 0.70 - - - -
LBZ4 SNR 13:29:34.5 58:24:23.8 SNR-2 1.64 - - - -
LBZ16 candidate SNR 13:29:36.9 58:24:26.9 SNR-3 0.31 - - - -
----------- ---------------- ------------ ------------ ----------- ------------- ----------- ------------- ----------- -------------
![Number of spectroscopically-observed SNRs against their [\[N [ii]{}\]]{}/[H$\alpha$]{} ratios. SNRs in the irregular galaxies, apart from NGC3077, extend to lower [\[N [ii]{}\]]{}/[H$\alpha$]{} ratios indicating that the contamination of the [\[N [ii]{}\]]{} emission lines in the [H$\alpha$]{}+ [\[N [ii]{}\]]{} images is different in each galaxy.](histogram_NIIHa.ps){width="3.0in"}
![Zoomed-in display of the 67 spectroscopically observed SNRs in this study over the relevant [H$\alpha$]{} image of each galaxy. The arrows point at the SNRs’ position. North is at the top, East to the left. Each image covers 30$\arcsec$$\times$30$\arcsec$.](Fig3aN.eps){width="7in"}
![[\[S [ii]{}\]]{}/[H$\alpha$]{} ratio of the 418 photometric SNRs (see Table 14) in our sample of galaxies against their [H$\alpha$]{} flux (see Tables 3-8 and §4.2).](SNR_ratio_fluxHa_all_SNRs_in_paper.ps){width="3.0in"}
![Number of spectroscopically-observed SNRs against their [\[S [ii]{}\]]{}(6716 Å)/[\[S [ii]{}\]]{}(6731 Å) ratios. Red: SNRs in NGC2403 (the only spiral galaxy in our sample), black: SNRs in the remaining galaxies of our sample (irregulars), magenta: SNRs spectroscopically observed by @MFBL97 in four spiral galaxies (NGC5585, NGC6946, M81 and M101). As can be seen, there is no trend in the [\[S [ii]{}\]]{} ratios between different types of galaxies. However, the majority of the spectroscopically observed SNRs present [\[S [ii]{}\]]{}(6716 Å)/[\[S [ii]{}\]]{}(6731 Å) $>$ 1 (see §4.2 for details).](histogram_SII_ratio.ps){width="3.0in"}
![The ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ versus the ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratios of all spectroscopically observed sources. The red circles denote SNRs (see $\S$4, Tables 3-8, Table 14) while the green circles indicate sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $\geq$ 0.3 (within their error-bars) but were not spectroscopically verified as SNRs (see Tables 9 and 13). Black circles denote sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} \le$ 0.4 and ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec} \le$ 0.4 (see Tables 9 and 13). The solid line represents the 1:1 relation between photometric and spectroscopic [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios while the dashed lines denote the borderline area for SNRs ([\[S [ii]{}\]]{}/[H$\alpha$]{}$>$0.4).[]{data-label="eis1"}](NGC2403_SII_Ha_corrected_ratios_correlation.ps "fig:"){width="37.00000%"} ![The ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ versus the ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratios of all spectroscopically observed sources. The red circles denote SNRs (see $\S$4, Tables 3-8, Table 14) while the green circles indicate sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $\geq$ 0.3 (within their error-bars) but were not spectroscopically verified as SNRs (see Tables 9 and 13). Black circles denote sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} \le$ 0.4 and ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec} \le$ 0.4 (see Tables 9 and 13). The solid line represents the 1:1 relation between photometric and spectroscopic [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios while the dashed lines denote the borderline area for SNRs ([\[S [ii]{}\]]{}/[H$\alpha$]{}$>$0.4).[]{data-label="eis1"}](NGC3077_SII_Ha_corrected_ratios_correlation.ps "fig:"){width="37.00000%"}
![The ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ versus the ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratios of all spectroscopically observed sources. The red circles denote SNRs (see $\S$4, Tables 3-8, Table 14) while the green circles indicate sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $\geq$ 0.3 (within their error-bars) but were not spectroscopically verified as SNRs (see Tables 9 and 13). Black circles denote sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} \le$ 0.4 and ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec} \le$ 0.4 (see Tables 9 and 13). The solid line represents the 1:1 relation between photometric and spectroscopic [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios while the dashed lines denote the borderline area for SNRs ([\[S [ii]{}\]]{}/[H$\alpha$]{}$>$0.4).[]{data-label="eis1"}](NGC4214_SII_Ha_corrected_ratios_correlation.ps "fig:"){width="37.00000%"} ![The ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ versus the ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratios of all spectroscopically observed sources. The red circles denote SNRs (see $\S$4, Tables 3-8, Table 14) while the green circles indicate sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $\geq$ 0.3 (within their error-bars) but were not spectroscopically verified as SNRs (see Tables 9 and 13). Black circles denote sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} \le$ 0.4 and ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec} \le$ 0.4 (see Tables 9 and 13). The solid line represents the 1:1 relation between photometric and spectroscopic [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios while the dashed lines denote the borderline area for SNRs ([\[S [ii]{}\]]{}/[H$\alpha$]{}$>$0.4).[]{data-label="eis1"}](NGC4395_SII_Ha_corrected_ratios_correlation.ps "fig:"){width="37.00000%"}
![The ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ versus the ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratios of all spectroscopically observed sources. The red circles denote SNRs (see $\S$4, Tables 3-8, Table 14) while the green circles indicate sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $\geq$ 0.3 (within their error-bars) but were not spectroscopically verified as SNRs (see Tables 9 and 13). Black circles denote sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} \le$ 0.4 and ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec} \le$ 0.4 (see Tables 9 and 13). The solid line represents the 1:1 relation between photometric and spectroscopic [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios while the dashed lines denote the borderline area for SNRs ([\[S [ii]{}\]]{}/[H$\alpha$]{}$>$0.4).[]{data-label="eis1"}](NGC4449_SII_Ha_corrected_ratios_correlation.ps "fig:"){width="37.00000%"} ![The ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec}$ versus the ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ ratios of all spectroscopically observed sources. The red circles denote SNRs (see $\S$4, Tables 3-8, Table 14) while the green circles indicate sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot}$ $\geq$ 0.3 (within their error-bars) but were not spectroscopically verified as SNRs (see Tables 9 and 13). Black circles denote sources with ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{phot} \le$ 0.4 and ([\[S [ii]{}\]]{}/[H$\alpha$]{})$_{spec} \le$ 0.4 (see Tables 9 and 13). The solid line represents the 1:1 relation between photometric and spectroscopic [\[S [ii]{}\]]{}/[H$\alpha$]{} ratios while the dashed lines denote the borderline area for SNRs ([\[S [ii]{}\]]{}/[H$\alpha$]{}$>$0.4).[]{data-label="eis1"}](NGC5204_SII_Ha_corrected_ratios_correlation.ps "fig:"){width="37.00000%"}
![log([H$\alpha$]{}/[\[S [ii]{}\]]{}Å (6716 & 6731)) against the log([H$\alpha$]{}/[\[N [ii]{}\]]{}Å (6548 & 6584)) emission line ratios of the spectroscopically observed SNRs. The dashed lines have been defined using the emission line ratios of an adequate number of Galactic SNRs, [H[ii]{}]{} regions and planetary nebulae (PNe).](Fig_ratioSIINIIf3.eps){width="4in"}
![[\[S [ii]{}\]]{}Å (6716)/[\[S [ii]{}\]]{}Å (6731) line ratio versus log([H$\alpha$]{}/[\[S [ii]{}\]]{}Å (6716 & 6731)).](Fig_ratioSIIHaSIIf3.eps){width="4.5in"}
![[\[S [ii]{}\]]{}Å (6716)/[\[S [ii]{}\]]{}Å (6731) line ratio versus log([\[N [ii]{}\]]{}/[\[S [ii]{}\]]{}Å (6716 & 6731)).](Fig_ratioNIIHaSIIf3.eps){width="4.5in"}
![Diagnostic diagram of [\[O [iii]{}\]]{}(Å 5007)/[H$\beta$]{} versus [\[N [ii]{}\]]{}(Å 6584)/[H$\alpha$]{} for shock-only models and for five different abundance sets with n=1 cm$^{-3}$ by @Allen08. Each grid is labeled with the abundance set that was used, moving from left to right with increasing metallicity. Each grid comprises lines of constant magnetic parameter shown with thick lines and lines of constant shock velocity shown with thin lines. The shock velocities range between 200 and 1000 km s$^{-1}$, from top to bottom with a step of 50 km s$^{-1}$.](Fig_ratioO3Hb_N2HaAbf.eps){width="4.5in"}
![Diagnostic diagram of [\[O [iii]{}\]]{}(Å 5007)/[H$\beta$]{} versus [\[N [ii]{}\]]{}(Å 6584)/[H$\alpha$]{} for shock+precursor models and for five different abundance sets with n=1 cm$^{-3}$ by @Allen08. Each grid is labeled with the abundance set that was used, moving from left to right with increasing metallicity. Each grid comprises lines of constant magnetic parameter shown with thick lines and lines of constant shock velocity shown with thin lines. The shock velocities range between 200 and 1000 km s$^{-1}$, from bottom to top with a step of 50 km s$^{-1}$.](Fig_ratioO3Hb_N2HaAbprf.eps){width="4.5in"}
![[H$\alpha$]{} against the X-ray luminosity of the 16 optically selected, X-ray emitting SNRs. The dashed line indicates the 1:1 relation between the two luminosities.](SNRs_Lx_LHa.ps){width="2.5in"}
![X-ray luminosity against the [\[S [ii]{}\]]{}/[H$\alpha$]{} ratio of the 16 optically selected, X-ray emitting SNRs.](SNRs_Lx_SIIHa.ps){width="2.5in"}
![Histograms of the SNR [H$\alpha$]{} luminosities in each galaxy of our sample. The peak of each histogram denotes the completeness limit of that galaxy.](histogram_LHa_number_phot_SNRs.ps){width="5in"}
![[*Top*]{}: Number of photometric SNRs above the completeness limit of each galaxy against the integrated [H$\alpha$]{} luminosity of the host galaxy. [*Bottom*]{}: Number of photometric SNRs above the completeness limit of each galaxy against the radio luminosity.](Number_SNRs_SFR.ps){width="4in"}
[^1]: E-mail: ileonid@astro.noa.gr; ptb@astro.noa.gr; azezas@physics.uoc.gr
[^2]: The D$_{25}$ area is defined as the optical isophote at the B-band surface brightness of 25 mag arcsec$^{-2}$
[^3]: http://iraf.net/irafdocs/ccduser3/
[^4]: http://stsdas.stsci.edu/nebular/temden.html
[^5]: http://hea-www.cfa.harvard.edu/ChandraSNR/
[^6]: Photometrically detected optical SNR by @MF97 denoted as SNR-3. This source is also a possible superbubble (see §5.3.4)
[^7]: Part of the photometrically detected, optical SNR-15 detected by @MF97. Also X-ray detected XRB in Paper I
[^8]: X-ray detected SNR in Paper I, denoted as source LZB81
[^9]: X-ray detected SNR (LZB14) in Paper I and spectroscopically detected optical SNR by @MF97 denoted as SNR-31
[^10]: Part of the optical photometric SNR-32 detected by @MF97
[^11]: Photometrically detected, optical SNR by @MF97 (SNR-2)
[^12]: X-ray detected SNR in Paper I, quoted as LZB107 and radio selected SNR (TH2) by @Turner94
[^13]: Optically detected, photometric SNR (SNR-5) by @MF97
[^14]: Spectroscopically detected, optical SNR by @MF97 (SNR-7), X-ray detected SNR in Paper I, quoted as LZB30, and radio selected SNR ($\mu$) by @Eck02
[^15]: Spectroscopically detected, optical SNR by @MF97 (SNR-6)
[^16]: X-ray detected SNR (LZB120) in Paper I
[^17]: Optically detected, photometric SNR (SNR-9) by @MF97
[^18]: Optically detected, photometric SNR (SNR-11) by @MF97
[^19]: Spectroscopically observed, optical SNR (SNR-12) by @MF97. Also possible superbubble (see §5.3.4)
[^20]: Optically detected, photometric SNR (SNR-14) by @MF97. Also possible superbubble (see §5.3.4)
[^21]: Part of optically detected SNR (SNR-14) by @MF97
[^22]: Part of the photometrically-detected, optical SNR-15 detected by @MF97
[^23]: Spectroscopically detected, optical SNR by @MF97 (SNR-17) and X-ray detected SNR in Paper I, quoted as LZB42
[^24]: Optically detected, photometric SNR (SNR-18) by @MF97 and X-ray detected XRB in Paper I.
[^25]: Spectroscopically detected, optical SNR (SNR-22) by @MF97 and X-ray detected SNR (LZB86) in Paper I
[^26]: Spectroscopically detected, optical SNR by @MF97, denoted as SNR-24 and X-ray detected SNR (LZB80) in Paper I
[^27]: X-ray detected SNR in Paper I, denoted as LZB2 and photometrically detected, optical SNR by @MF97 (SNR-29)
[^28]: Optically detected, photometric SNR (SNR-30) by @MF97 and X-ray detected XRB in Paper I
[^29]: Spectroscopically detected, optical SNR by @MF97 (SNR-33)
[^30]: Spectroscopically observed, optical SNR (SNR-4) by @MF97. Also possible superbubble (see §5.3.4)
[^31]: X-ray detected XRB in Paper I
[^32]: Optically detected, photometric SNR (SNR-8) by @MF97. Also possible superbubble (see §5.3.4)
[^33]: Spectroscopically detected, optical SNR (SNR-10) by @MF97. Also possible superbubble (see §5.3.4)
[^34]: Spectroscopically detected, optical SNR by @MF97 (SNR-13)
[^35]: Spectroscopically detected, optical SNR (SNR-16) by @MF97. Also possible superbubble (see §5.3.4)
[^36]: Optically detected, photometric SNR (SNR-20) by @MF97
[^37]: Spectroscopically detected, optical SNR (SNR-19) by @MF97
[^38]: Optically detected, photometric SNR (SNR-23) by @MF97
[^39]: X-ray detected SNR in Paper I, denoted as source LZB15
[^40]: Optically detected SNR (SNR-6) by @Dopita10. Also possible superbubble (see §5.3.4)
[^41]: X-ray (LZB38) and optically (SNR-7) detected SNR in Paper I and @Dopita10 respectively.
[^42]: Probably associated with SNR LZB7 in Paper I
[^43]: This source probably coincides with X-ray detected SNR (source LZB11) in Paper I.
[^44]: X-ray selected SNR (denoted as source LZB31) in Paper I
[^45]: X-ray detected SNR (denoted as LZB30) in Paper I, optically detected SNR (denoted as SNR-2) by @Dopita10 and radio selected SNR (denoted as SNR-9) by @Chomiuk09
[^46]: X-ray detected SNR (denoted as LZB23) in Paper I.
[^47]: @Chomiuk09 classify this source as a radio SNR/[H[ii]{}]{}, while in X-rays it is spectroscopically identified as an XRB in Paper I
[^48]: Optically detected source by @Dopita10, denoted as SNR-1 and radio detected source by @Chomiuk09, denoted as SNR-4
[^49]: Optical SNR (SNR-4) by @Dopita10 and radio SNR (SNR-11) by @Chomiuk09.
[^50]: This source is also identified as an SNR in various wavebands; optical: @Dopita10, denoted as SNR5; radio: @Vukotic05 and @Chomiuk09, denoted as SNR$\rho$ and SNR19 respectively; X-rays: Paper I, denoted as LZB28.
[^51]: Radio detected SNR by @Chomiuk09 denoted as source 26.
[^52]: This source is classified as XRB (LZB15) in Paper I
[^53]: Optically, spectroscopically-verified SNR by @MF97, denoted as SNR 2
[^54]: Optically detected SNR (based on imaging photometry) by @MF97, denoted as SNR 1
[^55]: Optically, spectroscopically-verified SNR by @MF97, denoted as SNR 3
[^56]: \*
[^57]: \*
[^58]: \*
[^59]: \*
[^60]: \*
[^61]: \*
[^62]: \*
[^63]: \*
[^64]: \*
[^65]: \*
[^66]: \*
[^67]: \*
[^68]: \*
[^69]: \*
[^70]: \*
[^71]: \*
[^72]: \*
[^73]: \*
[^74]: \*
[^75]: \*
[^76]: \*
[^77]: \*
[^78]: \*
[^79]: \*
[^80]: \*
[^81]: \*
[^82]: \*
[^83]: \*
[^84]: \*
[^85]: \*
[^86]: \*
[^87]: \*
[^88]: \*
[^89]: \*
[^90]: \*
[^91]: \*
[^92]: \*
[^93]: \*
---
author:
- |
[^1]\
Institut de Physique Nucléaire d’Orsay, CNRS-IN2P3, Université Paris-Sud & Paris-Saclay, 91406 Orsay, France\
E-mail:
title: 'Studying nucleon structure via Double Deeply Virtual Compton Scattering (DDVCS)'
---
Introduction
============
There are essentially three experimental golden channels for direct measurements of GPDs: the electroproduction of a photon $eN\rightarrow eN\gamma$, which is sensitive to the deeply virtual Compton scattering (DVCS) amplitude; the photoproduction of a lepton pair $\gamma N\rightarrow l\bar{l}N$, which is sensitive to the timelike Compton scattering (TCS) amplitude; and the electroproduction of a lepton pair $eN\rightarrow eNl\bar{l}$, which is sensitive to the double deeply virtual Compton scattering (DDVCS) amplitude. Only the latter provides the framework necessary for an uncorrelated measurement of a GPD ($\xi',\xi,t$) as a function of both scaling variables $\xi'$ and $\xi$ [@ref1; @ref2]. The former two reactions cannot entirely serve the purpose of testing the angular momentum sum rule [@ref3] because the reality of the final- or initial-state photon leads to the restriction $\xi'=\pm \xi$. For instance, the Compton form factor (CFF) $\mathcal{H}$, associated with the GPD $H$ and accessible in DVCS cross-section or beam-spin-asymmetry experiments, can be written $$\begin{aligned}
\relax\mathcal{H}(\xi'=\xi,\xi,t)=\sum_{q}e_q^2&\bigg\{&\mathcal{P}\int_{-1}^1dx~H^q(x,\xi,t)\bigg[\frac{1}{x-\xi}+\frac{1}{x+\xi}\bigg]
\nonumber\\
&&-i\pi\big[H^q(\xi,\xi,t)-H^q(-\xi,\xi,t)\big]\bigg\}
\label{eq1}\end{aligned}$$ where the sum runs over all parton flavors with elementary electric charge $e_q$, and $\mathcal{P}$ indicates the Cauchy principal value of the integral. While the imaginary part of the CFF accesses the GPD values at $\xi'=\pm \xi$, it is clear from Eq. \[eq1\] that the real part of the CFF is a more complex quantity involving the convolution of parton propagators with the GPD values away from the diagonals $\xi'=\pm \xi$, a domain that cannot be resolved unambiguously with DVCS experiments. Because of the virtuality of the final-state photon, DDVCS circumvents this limitation, allowing $\xi'$ and $\xi$ to be varied independently. Considering the same GPD $H$, the corresponding CFF for the DDVCS process reads $$\begin{aligned}
\mathcal{H}(\xi',\xi,t)=\sum_{q}e_q^2&\bigg\{&\mathcal{P}\int_{-1}^1dx~H^q(x,\xi,t)\bigg[\frac{1}{x-\xi'}+\frac{1}{x+\xi'}\bigg]
\nonumber\\
&&-i\pi\big[H^q(\xi',\xi,t)-H^q(-\xi',\xi,t)\big]\bigg\}
\label{eq2}\end{aligned}$$ providing access to the off-diagonal domain $\xi' \neq \xi$.
The DDVCS process is experimentally very challenging because of the small magnitude of its cross section, and requires high luminosity and full exclusivity of the final state. Moreover, the difficult theoretical interpretation of electron-induced lepton-pair production when detecting $e^+ e^-$ pairs from the decay of the final virtual photon hampers any reliable experimental study. Taking advantage of the energy upgrade of the CEBAF accelerator, it is proposed to investigate the electroproduction of $\mu^+ \mu^-$ pairs and to measure the beam-spin asymmetry of the exclusive $ep\rightarrow e'p'\gamma^* \rightarrow e'p'\mu^+\mu^-$ reaction in the hard-scattering regime [@intent; @intent2; @ref6].
At sufficiently high virtuality of the initial space-like virtual photon, and for four-momentum transfer to the nucleon small with respect to the photon virtuality ($-t \ll Q^2$), DDVCS can be seen as the absorption of a space-like virtual photon by a parton of the nucleon, followed by the quasi-instantaneous emission of a time-like virtual photon by the same parton, which finally decays into a di-muon pair (Fig. \[fig1\]). $Q^2$ and $Q'^2$ represent the virtualities of the incoming space-like and outgoing time-like photons. The scaling variables $\xi'$ and $\xi$ read $$\begin{aligned}
\xi' = \frac{Q^2-Q'^2+t/2}{2Q^2/x_\text{B}-Q^2-Q'^2+t}~~\text{and}~~
\xi = \frac{Q^2+Q'^2}{2Q^2/x_\text{B}-Q^2-Q'^2+t}
\label{xipxi}\end{aligned}$$ from which one obtains $\xi' = \xi \frac{Q^2-Q'^2+t/2}{Q^2+Q'^2} .$ This relation indicates that $\xi'$, and consequently the imaginary part of the CFF, changes sign around $Q^2=Q'^2$, which provides a strong testing ground for the universality of the GPD formalism.
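As a numerical illustration, Eq. \[xipxi\] can be evaluated directly. The following minimal sketch (the kinematic values are illustrative, not from the proposal) checks the sign change of $\xi'$ around $Q'^2=Q^2$:

```python
# Hedged sketch: evaluate Eq. (xipxi) for the DDVCS scaling variables.
def xi_prime_and_xi(Q2, Qp2, t, xB):
    """Return (xi', xi) for given Q^2 and Q'^2 in (GeV/c^2)^2, t, and x_B."""
    denom = 2.0 * Q2 / xB - Q2 - Qp2 + t
    xi_p = (Q2 - Qp2 + t / 2.0) / denom
    xi = (Q2 + Qp2) / denom
    return xi_p, xi

# xi' changes sign around Q'^2 = Q^2 (up to the small t/2 term):
lo, _ = xi_prime_and_xi(Q2=2.0, Qp2=1.0, t=-0.15, xB=0.2)  # Q'^2 < Q^2
hi, _ = xi_prime_and_xi(Q2=2.0, Qp2=3.0, t=-0.15, xB=0.2)  # Q'^2 > Q^2
assert lo > 0 > hi
```

The DVCS ($Q'^2=0$) and TCS ($Q^2=0$) limits of this function reproduce the diagonals $\xi'=\pm\xi$ shown in Fig. \[fig5\].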
![The handbag diagram symbolizing the DDVCS direct term with di-muon final states.[]{data-label="fig1"}](DDVCS.pdf){width=".4\textwidth"}
In these proceedings, the feasibility of a DDVCS experiment at JLab 12 GeV is discussed. Section \[sec2\] describes the DDVCS kinematics and experimental observables. Section \[sec3\] reports model-predicted experimental projections at a given luminosity with ideal detectors. Preliminary conclusions of this study are drawn in the last section.
Kinematics and experimental observables {#sec2}
=======================================
The following kinematic cuts have been applied to ensure the applicability of the GPD formalism: a center-of-mass energy $W>2$ GeV to ensure the deep-inelastic-scattering regime; $Q^2>1~($GeV$/c^2)^2$ to ensure a reaction at the parton level; $t >-1~($GeV$/c^2)^2$ to support the factorization regime; and $Q'^2>(2m_\mu)^2$ to allow the production of a di-muon pair. Fig. \[fig2\] shows the allowed DDVCS $(Q^2,~x_\text{B})$ phase space with these cuts, and one $(t,~Q'^{2})$ phase space at a specific $(Q^2,~x_\text{B})$ point. In the allowed region, represented by the shaded area, kinematic bins of uniform widths have been chosen: $\Delta Q^2=0.5~($GeV$/c^2)^2$, $\Delta x_\text{B}=0.05$, $\Delta t=0.2~($GeV$/c^2)^2$ and $\Delta Q'^2=0.5~($GeV$/c^2)^2$, which is sufficiently small to allow a first-order estimate of the integrated cross section. As a preliminary study of DDVCS, only the bins at $Q^2<5~($GeV$/c^2)^2$ have been studied, where the cross section is expected to be larger than at high $Q^2$. As a consequence, 664 four-dimensional bins have been considered. The bin boundaries are shown in Fig. \[fig2\] as dashed lines.
![Kinematics phase spaces: left panel shows the $(Q^2,~x_\text{B})$ phase space with physics inspired cuts. The shaded area represents the physics region of interest, and the point represents $(Q^2=1.25~($GeV$/c^2)^2,~x_\text{B}=0.1)$ whose correlated $(t,~Q'^{2})$ phase space is shown in the right panel together with the region of interest (shaded area).[]{data-label="fig2"}](bin1.pdf "fig:"){width=".49\textwidth"} ![Kinematics phase spaces: left panel shows the $(Q^2,~x_\text{B})$ phase space with physics inspired cuts. The shaded area represents the physics region of interest, and the point represents $(Q^2=1.25~($GeV$/c^2)^2,~x_\text{B}=0.1)$ whose correlated $(t,~Q'^{2})$ phase space is shown in the right panel together with the region of interest (shaded area).[]{data-label="fig2"}](bin2.pdf "fig:"){width=".49\textwidth"}
The lepton-pair electroproduction process consists of three interfering elementary mechanisms, depicted in Fig. \[fig3\], with implied crossed contributions. The 7-fold differential cross section is proportional to the square of the total amplitude, which is the coherent sum of the three processes, i.e. $d^7\sigma/(dQ^2dx_BdtdQ'^2d\phi d\Omega_\mu)$ $\propto |\mathcal{T}_\text{DDVCS}+\mathcal{T}_{\text{BH}_1}+\mathcal{T}_{\text{BH}_2}|^2$. We consider in this study the 5-fold cross section, integrated over the muon solid angle. The integration leads to the vanishing of the interference contributions originating from the BH$_2$ amplitude: $d^5\sigma/(dQ^2dx_BdtdQ'^2d\phi) \propto |\mathcal{T}_\text{DDVCS}+\mathcal{T}_{\text{BH}_1}|^2+|\mathcal{T}_{\text{BH}_2}|^2$ [@ref2; @ref7]. Though partial information is sacrificed, this simplification offers an easier understanding of this totally unexplored reaction. The cross section without target polarization can be described in terms of its different contributions: $$\sigma_{P}^{e}=\sigma_{\text{BH}_1}+\sigma_{\text{BH}_2}
+\sigma_\text{DDVCS}+P\widetilde{\sigma}_\text{DDVCS}+(-e)\left( \sigma_{\text{INT}_1}+P\widetilde{\sigma}_{\text{INT}_1} \right)
\label{eq3}$$ where, to simplify the notation, $\sigma$ stands for the 5-fold differential cross section, $e$ is the lepton beam electric charge, and $P$ is the polarization of the beam. The subscript INT$_1$ represents the interference terms between the DDVCS and the BH$_1$ amplitudes. BH terms are calculable since the nucleon form factors are well-known at small $t$. DDVCS terms are bi-linear in CFFs, while interference terms are linear. $\sigma_\text{DDVCS}$ and $\sigma_{\text{INT}_1}$ are sensitive to the real part of CFFs, while $\widetilde{\sigma}_\text{DDVCS}$ and $\widetilde{\sigma}_{\text{INT}_1}$ are sensitive to the imaginary part.
![Subprocesses contributing to electroproduction of a di-muon pair including DDVCS (left) and two kinds of Bethe-Heitler processes, i.e. BH$_1$ (middle) and BH$_2$ (right).[]{data-label="fig3"}](dd.pdf "fig:"){height=".2\textwidth"} ![Subprocesses contributing to electroproduction of a di-muon pair including DDVCS (left) and two kinds of Bethe-Heitler processes, i.e. BH$_1$ (middle) and BH$_2$ (right).[]{data-label="fig3"}](dbh1.pdf "fig:"){height=".2\textwidth"} ![Subprocesses contributing to electroproduction of a di-muon pair including DDVCS (left) and two kinds of Bethe-Heitler processes, i.e. BH$_1$ (middle) and BH$_2$ (right).[]{data-label="fig3"}](dbh2.pdf "fig:"){height=".2\textwidth"}
Considering polarized positron and electron beams, single contributions can be separated from the three experimental observables: unpolarized cross section with electron beam ($\sigma_\text{UU}$), beam spin cross section difference with polarized electron and positron beam ($\Delta\sigma_\text{LU}$), and beam charge cross section difference ($\Delta\sigma^\text{C}$). From Eq. \[eq3\], $$\left\{
\begin{aligned}
&\sigma_\text{UU}=\frac{1}{2}\left(\sigma_{+}^-+\sigma_{-}^-\right)
&&=\sigma_{\text{BH}_1}+\sigma_{\text{BH}_2}
+\sigma_{\text{DDVCS}}
+\sigma_{\text{INT}_1},
\\
&\Delta\sigma_\text{LU}=\frac{1}{4}\left[\left(\sigma_{+}^--\sigma_{-}^-\right)-\left(\sigma_{+}^+-\sigma_{-}^+\right)\right]
&&=\widetilde{\sigma}_{\text{INT}_1},
\\
&\Delta\sigma^\text{C}=\frac{1}{4}\left[\left(\sigma_{+}^-+\sigma_{-}^-\right)-\left(\sigma_{+}^++\sigma_{-}^+\right)\right]
&&=\sigma_{\text{INT}_1}.
\end{aligned}
\right.
\label{eq4}$$ Since extracting CFFs from the bi-linear DDVCS terms is difficult, $\Delta\sigma^\text{C}$ is particularly valuable, as it provides a pure interference term. The imaginary part of the CFFs can be extracted from $\Delta\sigma_\text{LU}$, which gives direct access to the GPDs. In addition, one can obtain the pure $\sigma_\text{DDVCS}$ by combining $\sigma_\text{UU}$ and $\Delta\sigma^\text{C}$, and $\widetilde\sigma_\text{DDVCS}$ by combining $\Delta\sigma_\text{LU}$ and the electron beam-spin cross-section difference. The experimental projections of these observables have been performed and are discussed in the next section.
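The separations above follow directly from Eq. \[eq3\]. As a minimal numerical cross-check (the ingredient values below are made up for illustration, not from any physics model), one can build the four measurable cross sections and verify that the combinations of Eq. \[eq4\] isolate the claimed terms:

```python
# Hedged sketch: Eq. (eq3) with e = beam charge sign, P = beam polarization.
def sigma(e, P, bh1, bh2, d, d_tilde, i1, i1_tilde):
    """5-fold cross section decomposition of Eq. (eq3)."""
    return bh1 + bh2 + d + P * d_tilde + (-e) * (i1 + P * i1_tilde)

# Illustrative (hypothetical) ingredient values:
parts = dict(bh1=3.0, bh2=1.5, d=0.4, d_tilde=0.2, i1=0.8, i1_tilde=0.3)

s_ep = sigma(-1, +1, **parts)  # electron beam, P = +1  (sigma_+^-)
s_em = sigma(-1, -1, **parts)  # electron beam, P = -1  (sigma_-^-)
s_pp = sigma(+1, +1, **parts)  # positron beam, P = +1  (sigma_+^+)
s_pm = sigma(+1, -1, **parts)  # positron beam, P = -1  (sigma_-^+)

sigma_UU = 0.5 * (s_ep + s_em)
delta_LU = 0.25 * ((s_ep - s_em) - (s_pp - s_pm))
delta_C  = 0.25 * ((s_ep + s_em) - (s_pp + s_pm))

assert abs(sigma_UU - (3.0 + 1.5 + 0.4 + 0.8)) < 1e-12  # BH1+BH2+DDVCS+INT1
assert abs(delta_LU - 0.3) < 1e-12                      # = i1_tilde
assert abs(delta_C - 0.8) < 1e-12                       # = i1
```

The same algebra shows why the electron beam-spin difference $\frac{1}{2}(\sigma_+^- - \sigma_-^-)$ contains $\widetilde{\sigma}_\text{DDVCS}+\widetilde{\sigma}_{\text{INT}_1}$, so subtracting $\Delta\sigma_\text{LU}$ isolates $\widetilde{\sigma}_\text{DDVCS}$.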
Projections {#sec3}
===========
The projections have been performed under the ideal assumption that all final-state particles are detected with 100% efficiency. The count-rate calculation was done for a luminosity $\mathrsfso{L}=10^{37}~\text{cm}^{-2}\text{s}^{-1}$, considering 100 days of running time equally distributed between the two lepton beam charges. The number of events in each five-dimensional bin ($Q^2,~x_\text{B},~t,~Q'^{2}~\text{and}~\phi$) was determined following $$\begin{aligned}
N=\frac{d^5\sigma}{dQ^2dx_BdtdQ'^2d\phi} \cdot\Delta Q^2 \cdot\Delta x_B \cdot\Delta t \cdot\Delta Q'^2 \cdot\Delta\phi \cdot\mathrsfso{L} \cdot T,
\label{eq5}\end{aligned}$$ where the differential cross section has been calculated with the VGG model [@refVGG] set at the central values of each four-dimensional bin at the beam energy of 11 GeV. Besides, 24 bins in $\phi$ 15$^\circ$-wide have been considered.
Fig. \[fig4\] (upper half) shows the observables with statistical errors as a function of $\phi$, at several values of $Q'^2$ and a fixed set of ($Q^2,~x_\text{B},~t$). The cross section generally decreases as $Q'^2$ increases, since the process at $Q'^2=0$ is equivalent to the DVCS process, which has one less electromagnetic vertex. $\Delta\sigma_\text{LU}$ has opposite signs in the $Q^2>Q'^2$ and $Q^2<Q'^2$ regions due to the antisymmetry property of the GPDs [@ref7]. The bottom half shows the same observables at several values of $t$ and a fixed set of ($Q^2,~x_\text{B},~Q'^2$). Both the cross section and the precision decrease as $-t$ increases.
![The upper half of the figure shows the VGG projections at $Q^2 =1.25~(\text{GeV}/c^2)^2$, $x_\text{B}=0.1$, $-t=0.15~(\text{GeV}/c^2)^2$ and $Q'^2=0.3,~0.8,~1.3~\text{and}~1.8~(\text{GeV}/c^2)^2$ (left to right). The panels of the top row show the unpolarized cross section, the middle panels show the beam-spin cross-section difference, and the bottom panels show the beam-charge cross-section difference. Note that each panel has its own y-axis scale. The bottom half shows the same observables at $Q^2 =1.75~(\text{GeV}/c^2)^2$, $x_\text{B}=0.125$, $Q'^2=0.8~(\text{GeV}/c^2)^2$ and $-t=0.15,~0.35,~0.55~\text{and}~0.75~(\text{GeV}/c^2)^2$ (left to right).[]{data-label="fig4"}](zcombin.pdf){width="100.00000%"}
Fig. \[fig5\] shows the correlated location of all the four-dimensional bins in the CFF phase space $(\xi',~\xi)$. Among the 664 bins, 82% have $\sigma_\text{UU}$ with a relative error smaller than 10%, 22% have $\Delta\sigma^\text{C}$ of the same quality, and only 7%, lying in the vicinity of the DVCS diagonal, have $\Delta\sigma_\text{LU}$ of that quality.
![CFFs phase space: the solid line indicates $\xi'=\xi$ or $Q'^2=0$, the DVCS correlation; the dashed line indicates $\xi'=-\xi$ or $Q^2=0$, the TCS correlation; and the colored markers represent the successful four-dimensional bins of the DDVCS process (Eq. 1.2). The blue open circles indicate the bins where $\sigma_\text{UU}$ has a relative error smaller than 10%. Some of them, represented by red open squares, have $\Delta\sigma^\text{C}$ with a relative error smaller than 10%, and a few, represented by green open triangles, have $\Delta\sigma_\text{LU}$ at the same level of accuracy. The black crosses indicate the failed bins where all three observables have relative errors greater than 10%.[]{data-label="fig5"}](cff_uu_2d.pdf "fig:"){width=".32\textwidth"} ![](cff_uu_c_2d.pdf "fig:"){width=".32\textwidth"} ![](cff_uu_lu_2d.pdf "fig:"){width=".32\textwidth"}
Conclusion {#sec4}
==========
The model-predicted projections of a DDVCS experiment indicate a high degree of feasibility at a challenging luminosity, with the exclusive final state completely detected. The unpolarized cross section can be obtained with very small statistical errors. Although the beam-charge cross-section difference is less precise, it enables a better extraction of the real part of the CFFs. The beam-spin cross-section difference can be obtained accurately only at a few specific kinematics, but it is the most powerful tool to directly access the otherwise unexplored region of the GPD phase space. An additional feature supporting the importance of this observable is the sign change of the beam-spin cross-section difference as $Q'^2$ becomes larger than $Q^2$. This behaviour is a strong prediction of the GPD formalism [@ref7] and consequently provides a stringent experimental test.
Because of the strong sensitivity to $Q'^2$ and $t$, together with the small cross section and limited statistics at large values of these variables, the binning strategy will be adapted in the next phase of the DDVCS exploration, with the aim of covering the whole kinematic phase space and ultimately extracting the CFFs.
Acknowledgement {#acknowledgement .unnumbered}
===============
I would like to express my appreciation to my thesis supervisor, E. Voutier, for his guidance of this work, and to M. Guidal, S. Niccolai, and M. Vanderhaeghen for helpful discussions.
[99]{}
M. Guidal and M. Vanderhaeghen, *Double Deeply Virtual Compton Scattering off the Nucleon*, *Phys. Rev. Lett.* [**90**]{} (2003) 012001.
A. V. Belitsky and D. Müller, *Exclusive electroproduction of lepton pairs as a probe of nucleon structure*, *Phys. Rev. Lett.* [**90**]{} (2003) 022001.

X. Ji, *Gauge-Invariant Decomposition of Nucleon Spin*, *Phys. Rev. Lett.* [**78**]{} (1997) 610.

M. Boer, A. Camsonne, K. Gnanvo, E. Voutier, Z. Zhao, [*et al.*]{}, Jefferson Lab Experiment [**LOI12-15-005**]{} (2015).

S. Stepanyan, [*et al.*]{} (CLAS Collaboration), Jefferson Lab Experiment [**LOI12-16-004**]{} (2016).

I. V. Anikin, [*et al.*]{}, *Nucleon and nuclear structure through dilepton production*, *Acta Phys. Pol. B* [**49**]{} (2018) 741.

A. V. Belitsky and D. Müller, *Probing generalized parton distributions with electroproduction of lepton pairs off the nucleon*, *Phys. Rev. D* [**68**]{} (2003) 116005.

M. Vanderhaeghen, P. A. M. Guichon and M. Guidal, *Deeply virtual electroproduction of photons and mesons on the nucleon: Leading order amplitudes and power corrections*, *Phys. Rev. D* [**60**]{} (1999) 094017.
[^1]: Supported by the China Scholarship Council (CSC) and the French Centre National de la Recherche Scientifique (CNRS).